OpenAI has formed a new Expert Council on Well-Being and AI, a group of eight specialists who will advise the company on how its artificial intelligence tools, including ChatGPT and the short-form video app Sora, affect users’ mental health, emotions, and motivation.
The council will help OpenAI define what healthy interactions with AI should look like as the company integrates emotionally aware technologies more deeply across its platforms. According to OpenAI, the group will meet regularly and conduct ongoing evaluations to ensure that future product developments align with mental health safety standards.
The formation of the council comes at a critical time for OpenAI, which has faced increasing scrutiny over the psychological and social implications of generative AI systems. Regulators, child safety advocates, and mental health professionals have raised concerns that chatbots could exacerbate anxiety, loneliness, or depressive behaviors—especially among young users who spend long hours engaging with conversational AI.
In September, the U.S. Federal Trade Commission (FTC) opened an inquiry into several technology firms, including OpenAI, Google, and Meta, over the potential mental health risks posed by AI chatbots. The investigation is examining how these systems collect user data, influence emotions, and potentially manipulate behavior through tailored responses.
OpenAI is also facing a wrongful death lawsuit filed by the family of a teenager who died by suicide. The lawsuit alleges that ChatGPT generated harmful content and failed to identify or mitigate signs of distress during the teenager’s interactions with the chatbot. Legal experts have said the case could set an important precedent for AI accountability and content moderation standards in emotionally sensitive contexts.
In response to such concerns, OpenAI has ramped up its AI safety and well-being initiatives. The company has introduced an age prediction system that automatically activates teen-appropriate settings for users under 18 years old. These settings restrict access to certain features and moderate chatbot tone to reduce the risk of emotionally triggering responses.
OpenAI has also implemented new parental control features, allowing parents to monitor AI interactions and receive alerts if their child exhibits signs of emotional distress or unsafe behavior. These tools, introduced in September, were developed with early input from members who now sit on the Expert Council.
OpenAI emphasized that the council will play a key role in establishing measurable standards for AI safety and emotional intelligence in human-machine interactions.
The company said it will also collaborate with the Global Physician Network, a consortium of mental health clinicians and behavioral researchers who will test ChatGPT for signs of emotional bias, manipulation, or mental health risks. Their findings will inform internal policy frameworks and guide updates to AI behavior models.
The Expert Council held its first in-person meeting last week, where members began discussing what constitutes “healthy AI engagement.” Early priorities reportedly include emotional safeguards for AI responses, prevention of dependency behavior, and guidelines for the responsible use of AI in educational and therapeutic settings.
The eight members of OpenAI’s Expert Council on Well-Being and AI include leading figures in psychiatry, psychology, and digital interaction research:
- Andrew Przybylski, Professor of Human Behavior and Technology at the University of Oxford, known for his research on the psychological impact of digital technologies.
- David Bickham, Research Scientist at Boston Children’s Hospital’s Digital Wellness Lab, who studies the influence of media and technology on youth mental health.
- David Mohr, Director of the Center for Behavioral Intervention Technologies at Northwestern University, specializing in digital mental health solutions.
- Mathilde Cerioli, Chief Scientist at Everyone.AI, a nonprofit exploring how AI impacts child development.
- Munmun De Choudhury, Professor at Georgia Tech’s School of Interactive Computing, who has published extensively on mental health analytics and online behavior.
- Dr. Robert Ross, pediatrician and former CEO of The California Endowment, a nonprofit focused on health equity.
- Dr. Sara Johansen, Clinical Assistant Professor at Stanford University and founder of the Digital Mental Health Clinic, which researches technology’s role in therapy.
- Tracy Dennis-Tiwary, Professor of Psychology at Hunter College and author of “Future Tense,” which explores how anxiety can be reframed as a positive force.
The council’s formation underlines OpenAI’s intent to position itself as a leader in responsible AI development, particularly as the use of emotionally intelligent chatbots continues to expand globally.
ChatGPT currently has more than 700 million active users, according to independent analytics, and serves as a daily companion for many students and professionals. Meanwhile, Sora, OpenAI’s experimental video-generation platform, is being tested for educational and storytelling use cases that could deeply engage users’ emotions.
By establishing this council, OpenAI is signaling that the next phase of AI innovation will focus on empathy, ethics, and emotional intelligence, balancing rapid growth with the responsibility to safeguard human well-being.