With its recent introduction of artificial intelligence technology, Facebook can now detect suicidal posts from its users and intervene before it is too late. The new “proactive detection” technology looks for patterns of suicidal thoughts in a user’s posts and, if it finds them, lets Facebook contact local authorities and/or send mental health resources to the user and the user’s friends. In doing so, Facebook could make a startling impact, sharply reducing the time it takes for anyone to notice potentially dangerous thoughts expressed in a post.
This new technology will be used on Facebook accounts throughout the world, excluding the European Union, where privacy laws prohibit this kind of scanning. In response to questions about the invasive implications of using A.I. this way, a Facebook spokesperson has noted that the A.I. will scan the posts of all users and that no individual can opt out. By making the scans all-inclusive, Facebook can detect suicidal thoughts among its more than 2 billion users and push back against the romanticizing of suicide on social media. Because Facebook Live shows users’ actions in real time, this technology is especially important for enabling immediate responses to danger.
Takeaway: Depending on the outcome of Facebook’s new scans, this may be only the beginning for A.I. If the technology yields positive results, artificial intelligence may be integrated into similar initiatives, changing the way platforms monitor and respond to their users online.