OpenAI on Tuesday announced it will launch a dedicated ChatGPT experience with parental controls for users under 18 years old, as the artificial intelligence company works to enhance safety protections for teenagers.
The new version will automatically redirect minors to an age-appropriate ChatGPT experience that blocks graphic and sexual content and, in rare cases of acute distress, can involve law enforcement, the company said. OpenAI is also building technology to better predict a user’s age. If the system is uncertain or lacks sufficient information, it will default to the under-18 experience.
The safety updates come after the Federal Trade Commission recently launched an inquiry into several tech companies, including OpenAI, over how AI chatbots like ChatGPT potentially affect children and teenagers. The agency said it wants to understand what steps companies have taken to “evaluate the safety of these chatbots when acting as companions,” according to a release.
OpenAI has faced mounting scrutiny over this issue, particularly after a lawsuit from a family blamed the chatbot for their teenage son’s death by suicide. In response, the company last month outlined how ChatGPT will handle “sensitive situations.”
“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” OpenAI CEO Sam Altman wrote in a blog post on Tuesday.
The company has been preparing these controls for months. In August, OpenAI said it would release parental tools to help guardians understand and shape how their teens use ChatGPT. On Tuesday, it shared more details, saying the parental controls will roll out at the end of the month.
Parents will be able to link their ChatGPT account with their teen’s via email, set blackout hours during which the teen cannot use the chatbot, manage which features are disabled, guide how the chatbot responds, and receive alerts if the system flags their teen as being in acute distress. ChatGPT remains intended for users ages 13 and up, OpenAI said.
“These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions,” Altman wrote.
U.S. versus Europe: A growing safety policy gap
The U.S. response to AI safety for minors has so far leaned on investigations and voluntary compliance. The FTC inquiry into OpenAI and other chatbot makers signals heightened oversight but stops short of binding rules. Much of the American approach mirrors broader debates about tech regulation, where agencies intervene after harms are alleged rather than imposing preemptive safeguards.
Europe, by contrast, is already advancing concrete legislation. The European Union’s landmark AI Act, which entered into force in 2024, classifies systems like chatbots as “high risk” when used by children and imposes stricter obligations on companies. That includes mandatory transparency around training data, opt-out options, and strong age-verification protocols. Some EU countries, such as Italy, have even temporarily banned ChatGPT in the past over concerns about inadequate safeguards for minors.
This divergence could put OpenAI and its peers in a tricky position. Measures that satisfy U.S. regulators may fall short of Europe’s legally binding requirements, forcing AI firms to build dual compliance systems.
Looking forward, OpenAI’s rollout of parental controls is expected to set a precedent in the U.S., where safety features are often introduced after lawsuits or scandals. If regulators deem the new system sufficient, ChatGPT may continue to expand into classrooms and teen use cases with limited additional oversight.
But in Europe, OpenAI may face a stricter test. Regulators there are unlikely to accept company-designed safeguards alone; they want verifiable compliance with the AI Act’s standards. Analysts say this could lead to “regionalized ChatGPTs,” where the product available to teenagers in the EU differs meaningfully from what is offered in the U.S.
Meanwhile, privacy advocates warn that prioritizing safety “ahead of privacy,” as Altman described, may open new debates about surveillance and data collection on minors. They argue that embedding parental controls and distress alerts risks building an infrastructure that could be misused for broader monitoring.
Still, OpenAI’s parental controls represent a major shift in how AI is being tailored for minors. Whether the move proves a genuine safeguard or merely a defensive measure against lawsuits and regulators will depend on how effectively it prevents harm in practice.