Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their systems respond to teenagers asking questions about suicide or showing signs of emotional distress, amid mounting pressure over the safety of AI in vulnerable contexts.
OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new parental controls that allow parents to link their accounts with their teen’s. Under the new framework, parents will be able to disable certain features and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post. OpenAI said the changes will go into effect this fall.
The company added that regardless of a user’s age, its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can deliver a better response.
The announcement follows a lawsuit filed last week by the parents of 16-year-old Adam Raine, who died by suicide in California earlier this year. His parents allege that ChatGPT coached him in planning and carrying out the act, and they are suing OpenAI and CEO Sam Altman for negligence.
Jay Edelson, the family’s attorney, dismissed OpenAI’s new measures as “vague promises to do better” and “nothing more than OpenAI’s crisis management team trying to change the subject.” He added that Altman “should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market.”
Meta, the parent company of Instagram, Facebook, and WhatsApp, also announced new restrictions on its own chatbots. The company said it will now block teens from engaging chatbots in conversations about self-harm, suicide, disordered eating, or inappropriate romantic topics, instead directing them toward expert resources. Meta already has parental control options for teen accounts.
Concerns over AI and mental health safety have been building. A study published last week in the journal Psychiatric Services highlighted inconsistencies in how three popular AI chatbots—OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude—responded to suicide-related queries. The study did not evaluate Meta’s chatbots.
The research, led by Ryan McBain of the RAND Corporation, concluded that the tools require “further refinement.” McBain, also an assistant professor at Harvard Medical School, said Tuesday that while steps like OpenAI’s new parental controls and Meta’s content filters are “encouraging,” they are “incremental.”
“Without independent safety benchmarks, clinical testing, and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” he said.
The broader backdrop is one of intensifying debate over tech accountability. For years, social media companies have faced criticism for exposing teens to harmful content, from cyberbullying to eating disorder communities. Platforms such as Instagram and TikTok have been accused of amplifying harmful content through their algorithms, worsening body image issues, and contributing to rising levels of depression and anxiety among teens. Meta, in particular, came under fire after internal research leaked in 2021 suggested Instagram could exacerbate feelings of inadequacy and self-harm risk among adolescent users.
The parallels between those cases and the present concerns over AI chatbots highlight a broader challenge: as new technologies emerge, companies often move faster than regulators, leaving parents, children, and advocacy groups to grapple with risks long before comprehensive safeguards are in place.
Against this backdrop, the lawsuit against OpenAI could become a test case for the industry, much as earlier legal battles forced changes in social media platforms’ moderation of teen content. Advocates argue that, like pharmaceutical companies or medical device makers, AI firms may eventually need to prove the safety of their products through rigorous clinical testing before marketing them to young users.