Facebook, the world’s largest social media company, sees every word posted on its platform, and it has been trying to use that vantage point to help people who may be suicidal. But the hidden costs might be too high.
Facebook’s tool “uses signals to identify posts from people who might be at risk, such as phrases in posts and concerned comments from friends and family,” Catherine Card, Facebook’s director of product management, wrote in a blog post.
And when Facebook determines someone is suicidal, the company contacts local law enforcement, according to a recent New York Times report.
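Facebook has not published the internals of this screening system, but the description above suggests a simple scoring pipeline: text signals from the post and social signals from friends’ comments are combined, and anything above some threshold is escalated for review. The sketch below is a hypothetical illustration of that idea only; the phrase lists, weights, and threshold are invented here, not Facebook’s.

```python
# Hypothetical sketch of how post-text and comment "signals" might be combined
# into a single flag for human review. Facebook has not published its actual
# model; the phrases, weights, and threshold below are invented for illustration.

RISKY_PHRASES = {"want to die", "kill myself", "end it all"}
CONCERNED_COMMENT_PHRASES = {"are you ok", "please call me", "i'm worried about you"}

def phrase_signal(text: str, phrases: set[str]) -> float:
    """Return 1.0 if any watched phrase appears in the text, else 0.0."""
    lowered = text.lower()
    return 1.0 if any(p in lowered for p in phrases) else 0.0

def risk_score(post_text: str, comments: list[str]) -> float:
    """Weighted combination of the post signal and the friends'-comments signal."""
    post = phrase_signal(post_text, RISKY_PHRASES)
    concern = max((phrase_signal(c, CONCERNED_COMMENT_PHRASES) for c in comments), default=0.0)
    return 0.7 * post + 0.3 * concern

def should_escalate(post_text: str, comments: list[str], threshold: float = 0.5) -> bool:
    """Flag the post for review by a human moderator if the score crosses the threshold."""
    return risk_score(post_text, comments) >= threshold
```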
In 2017, Facebook started relying more heavily on artificial intelligence, or AI, to identify users who might be at risk of hurting themselves or attempting to end their own lives.
This technology became a priority for Facebook developers after several users, including a 12-year-old girl, broadcast their suicides on Facebook Live.
Here are four reasons you should care about Facebook’s suicide prevention feature:
The Potential Upside
Facebook is uniquely positioned to have a meaningful impact on global suicide rates, given that it has access to the posts and correspondence of more than one billion users, or roughly 13 percent of the people on Earth. The New York Times described the social platform’s tech as “most likely the world’s largest suicide threat screening and alert program.” If Facebook’s tools work as intended, countless lives could be saved every year.
The Downside: Computers Are Flawed
Unfortunately, technology isn’t perfect, and detecting intent, particularly intent as complex and multi-faceted as the intent to end one’s own life, is far more complicated than setting up a series of keyword-detection rules. In a separate blog post, Card acknowledged that it can be difficult to teach a computer to pick up on all the nuances of human language.
“A human being might recognize that ‘I have so much homework I want to kill myself’ is not a genuine cry of distress, but how do you teach a computer that kind of contextual understanding?” Card asked. That’s why human beings are still part of the process, she explained. If Facebook’s algorithm flags a post, “a trained member of Facebook’s Community Operations team reviews it to determine if the person is at risk,” Card said.
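To see why human review remains necessary, consider what a purely keyword-based check looks like. The toy example below is a hypothetical sketch, not Facebook’s code: a context-free matcher flags Card’s homework hyperbole exactly the same way it flags a genuine cry of distress.

```python
# A toy keyword matcher, to illustrate the limitation Card describes. Purely a
# hypothetical sketch: it treats a genuine cry of distress and the homework
# hyperbole identically, which is why flagged posts go to a human reviewer.

KEYWORDS = ("kill myself", "want to die")

def naive_flag(post: str) -> bool:
    """True if the post contains any watched keyword, with no sense of context."""
    lowered = post.lower()
    return any(k in lowered for k in KEYWORDS)

print(naive_flag("I can't go on. I want to die."))                   # True
print(naive_flag("I have so much homework I want to kill myself."))  # True, same flag
```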
The Danger of False Positives
Card’s blog post, along with much of the reporting that followed, points out that the technology is prone to false alarms. These false positives could have disastrous unintended consequences: people who are not at risk for suicide being hospitalized, undergoing psychological evaluation, or facing unnecessary, high-stress encounters with law enforcement. The AI may not even be able to distinguish between a person struggling with suicidal thoughts and a person looking to discuss mental illness candidly with their Facebook friends, said Mason Marks, a medical doctor and research fellow at Yale and NYU law schools, in an interview with NPR.
“People … might fear a visit from police, so they might pull back and not engage in an open and honest dialogue. … And I’m not sure that’s a good thing,” Marks said.
The Cost in Privacy
Lastly, Facebook’s suicide prevention feature raises a few uncomfortable questions for users concerned about the network’s reputation for mishandling personal data. After two years of privacy scandals, Facebook users might reasonably wonder whether the state of their mental health could be used to advertise to them, or leaked to nefarious third-party analytics companies.
“I think this should be considered sensitive health information,” said Natasha Duarte, a policy analyst at the Center for Democracy and Technology, in an interview with Business Insider. “Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such.”
Unlike other aspects of Facebook, this isn’t something a user can opt out of. Whenever a user or an algorithm flags a possible suicide threat, a trained member of Facebook’s Community Operations team reviews the post to determine whether the person is at risk, according to another post by Card on the subject.
A reminder that, for better or worse, what gets said online has consequences.