Suicide is the third leading cause of death among people ages 10 to 14 and the second among people ages 15 to 24, according to the Centers for Disease Control and Prevention. Suicide and depression are clearly serious problems facing society. People who are contemplating suicide often feel helpless and reach out, but hearing and acting on cries for help doesn’t always happen in time. Tragically, many people struggling with depression and/or suicidal thoughts have used social media to post notes about their intentions to take their own lives, or even to live stream their suicides.
In response, Facebook announced a few months ago that it would take more initiative in using its platform for social good. One of Facebook’s tools to aid in suicide prevention is artificial intelligence.
Facebook has developed algorithms that recognize patterns in users’ posts and flag those who may be at risk of suicide. Critics have argued that social media platforms need to do more in these instances, including at times shutting off a live stream of suicidal acts. However, some platforms, including Facebook, believe that by leaving the streams up they can buy time for the poster to get help. The new algorithms look for posts that express feelings of despair or that draw comments from friends asking if the poster is “okay.” The algorithms then trigger a response on the live stream or post urging the poster to seek help; the poster has the option to contact a helpline or connect with a friend.
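Facebook has not published the details of its model, so the following is only a simplified sketch of the general idea described above: scan a post and its comments for distress signals, and if any match, surface help options rather than removing the post. A real system would rely on trained machine-learning classifiers rather than a hand-written keyword list; the patterns and function names here are purely illustrative.

```python
import re

# Illustrative only: a production system would use a trained classifier,
# not a fixed list of phrases. These patterns are hypothetical examples.
DISTRESS_PATTERNS = [
    r"\bhopeless\b",
    r"\bcan'?t go on\b",
    r"\bend it all\b",
]
CONCERNED_COMMENT_PATTERNS = [
    r"\bare you ok(ay)?\b",
    r"\bworried about you\b",
]

def flag_for_review(post_text, comments):
    """Return True if the post or its comments match any risk pattern."""
    if any(re.search(p, post_text, re.IGNORECASE) for p in DISTRESS_PATTERNS):
        return True
    return any(
        re.search(p, comment, re.IGNORECASE)
        for comment in comments
        for p in CONCERNED_COMMENT_PATTERNS
    )

def respond(post_text, comments):
    """If flagged, return help options to display; leave the post up."""
    if flag_for_review(post_text, comments):
        return ["Contact a crisis helpline", "Connect with a friend"]
    return []
```

Note that `respond` returns help options instead of taking the content down, mirroring the rationale above: keeping the stream or post visible buys time for friends and responders to intervene.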
Consider this: a study by Vanderbilt University Medical Center data scientists found that the machine-learning algorithms they developed were 80–90 percent accurate in predicting whether someone would attempt suicide in the next two years. Accuracy rose to 92 percent when predicting whether someone would attempt suicide within the next week. These rates are striking compared with doctors’ accuracy at the same task: research shows that doctors’ predictions are only slightly better than 50 percent. A separate study by researchers at Florida State University trained algorithms to learn which combinations of factors could predict whether someone would attempt suicide. With this level of accuracy, AI clearly has serious potential to reduce the number of suicides.
Facebook’s algorithms cannot currently seek help on their own; they still rely on some form of user input. The company is grappling with how to let the algorithms act autonomously and contact help directly in a way that is sensitive to the interpersonal dynamics of users and not invasive of their privacy. This is not an easy balance. Facebook also still offers static tools that let friends of a poster who appears likely to make an attempt on their life flag posts for Facebook to respond to rapidly. These tools remain useful, but AI could greatly increase their effectiveness.
Suicide prevention is another of the many areas where AI is desperately needed. As the research shows, by the time a human, even a doctor, recognizes that a person is about to make an attempt on their own life, it may be too late. Algorithms make this type of prediction far more accurate and could save countless lives. As SIIA wrote in previous blogs and issue briefs, this type of AI does not displace human work or the work of doctors. Rather, it enhances the ability of doctors and ordinary users to take note of red flags raised by some users. Humans must work in tandem with AI tools to achieve the best results and the most accurate predictions, as there is still the chance that the AI could be incorrect. While not a cure-all for depression and suicide, Facebook’s AI tools give users real help in seeking, and receiving, the support they need.