Facebook Will Use A.I. to Spot Suicidal Tendencies in Posts


Facebook plans to use artificial intelligence to help the platform detect posts, videos, and Facebook Live streams that may express suicidal thoughts.

The "proactive detection" software will deploy globally after a trial on text-based posts centered around United States users. However, the European Union has strict data privacy laws that are causing some issues.

"We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live," Guy Rosen, Facebook's VP of product management, said in a blog post. "This will eventually be available worldwide, except the EU."

"This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide. We continue to work on this technology to increase accuracy and avoid false positives before our team reviews."

According to Facebook, comments from friends such as “Are you ok?” and “Can I help?” can be an indicator of suicidal thoughts.
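Facebook has not published the model or the features it uses, but the signals described above lend themselves to a simple illustration. The toy Python scorer below (all phrase lists and weights are hypothetical) combines language in a post with concerned replies from friends into a single risk score; a production system would rely on trained classifiers rather than keyword lists.

```python
# Illustrative sketch only: Facebook has not disclosed its model or features.
# This toy scorer mimics the two signals described above -- language in the
# post itself and concerned replies from friends -- using hand-picked,
# hypothetical phrase lists.

POST_PHRASES = ("want to die", "end it all", "no reason to live")
COMMENT_PHRASES = ("are you ok", "can i help", "thinking of you")

def risk_score(post_text: str, comments: list[str]) -> float:
    """Combine a post-text signal and a friend-comment signal into one score."""
    text = post_text.lower()
    post_hits = sum(phrase in text for phrase in POST_PHRASES)

    # Count comments that contain any phrase associated with concern.
    comment_hits = sum(
        any(phrase in c.lower() for phrase in COMMENT_PHRASES) for c in comments
    )

    # Weight the post's own language more heavily than reactions to it.
    return 2.0 * post_hits + 1.0 * comment_hits

if __name__ == "__main__":
    post = "I feel like there's no reason to live anymore."
    comments = ["Are you ok?", "Can I help?", "Nice weather today"]
    print(risk_score(post, comments))  # 2*1 + 1*2 = 4.0
```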

If the team reviews a post and determines that an immediate intervention is necessary, Facebook may work with first responders to send help, Business Insider reports.

The platform may also reach out to users via Facebook Messenger with resources, such as links to the Crisis Text Line, National Eating Disorder Association, and National Suicide Prevention Lifeline.

Facebook will also use the artificial intelligence to determine which users appear to be in the most distress at a given time, so that the most urgent cases can be prioritized.
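Facebook has not said how that triage works. One plausible sketch, under the assumption that each flagged post carries a model-assigned risk score, is a priority queue that always hands reviewers the highest-scoring case first:

```python
# Illustrative sketch: Facebook has not disclosed its triage mechanism.
# A max-priority queue of flagged posts, keyed on the model's risk score,
# would let human reviewers always pull the most urgent case first.
# heapq is a min-heap, so scores are negated.

import heapq

class ReviewQueue:
    def __init__(self):
        self._heap = []    # entries: (-score, sequence, post_id)
        self._counter = 0  # tie-breaker keeps insertion order stable

    def add(self, post_id: str, score: float) -> None:
        heapq.heappush(self._heap, (-score, self._counter, post_id))
        self._counter += 1

    def next_case(self) -> str:
        """Return the flagged post with the highest risk score."""
        _, _, post_id = heapq.heappop(self._heap)
        return post_id

queue = ReviewQueue()
queue.add("post_123", score=0.42)
queue.add("post_456", score=0.91)
queue.add("post_789", score=0.17)
print(queue.next_case())  # post_456 -- highest score reviewed first
```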

The move is part of an ongoing effort to support users who are in need or distress. Facebook has previously faced criticism over its Facebook Live feature, which some users have used to live stream graphic events, including suicides.

The automated effort’s test began earlier this year and appears to be a success so far, according to Rosen.

“Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts,” said Rosen.

That number reflects only reports generated by the A.I. software, not reports from people in the Facebook community. Users are still able to report potential self-harm to Facebook directly, and those posts go through the same human review as the ones flagged by the A.I. tool.

