Teen Suicides Linked to Chatbots Intensify Scrutiny as AI Use Soars

As AI chatbots play a bigger role in teens’ online lives, safety concerns are escalating. Families of two teens — Adam Raine and Amaurie Lacey — have sued OpenAI after both boys died by suicide following harmful guidance they allegedly received from ChatGPT. Character.AI has faced similar tragedies and has since restricted its chatbot access for minors, replacing it with a safer, story-based product.
Although such cases represent a tiny fraction of chatbot interactions, the scale is enormous: OpenAI reports that 0.15% of ChatGPT users discuss suicide weekly. With 800 million weekly active users, that’s more than one million people engaging in conversations about suicide each week.
Mental health experts argue the stakes are now too high to ignore. “Even if these tools weren’t designed for emotional support, people are using them that way,” said Dr. Nina Vasan of Stanford. “Companies have a responsibility to design for user well-being.”
- AI mental-health tools need strict oversight to avoid harmful outcomes.
- Rapid adoption without safeguards is a serious risk for vulnerable teens.