On Wednesday, OpenAI researcher Zoë Hitzig published a powerful guest essay in The New York Times announcing that she had resigned on Monday, the same day OpenAI began testing advertisements inside ChatGPT for free and low-tier users.
Hitzig, an economist, published poet, and junior fellow at the Harvard Society of Fellows, spent two years at OpenAI helping shape model development, pricing, and deployment strategy. Her departure adds to a striking wave of high-profile exits from leading AI labs this week, reflecting growing unease among researchers as companies accelerate the commercialization of AI.
Hitzig did not condemn advertising outright. Instead, she argued that the intimate, confessional nature of the data users share with ChatGPT, from medical fears and relationship struggles to religious beliefs and personal vulnerabilities, creates an unprecedented “archive of human candor” that makes advertising especially dangerous. She warned that OpenAI risks repeating Facebook’s trajectory: early promises of user control and transparency gradually eroded, culminating in privacy scandals and Federal Trade Commission findings that changes marketed as giving users more control actually reduced it.
“I once believed I could help the people building A.I. get ahead of the problems it would create,” Hitzig wrote. “This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”
She highlighted existing tensions: OpenAI claims it does not optimize for user activity solely to generate ad revenue, yet reporting suggests the company already optimizes for daily active users, often by making models more flattering and sycophantic, behavior that can deepen emotional dependence. Hitzig pointed to documented cases of “chatbot psychosis” and to wrongful death lawsuits alleging that ChatGPT reinforced suicidal ideation or validated paranoid delusions that led to violence.
She proposed structural alternatives: cross-subsidies (for example, high-value AI labor paid for by businesses subsidizing free access), independent oversight boards with binding authority over data use, and data trusts or cooperatives that give users control over their information, citing precedents such as Switzerland’s MIDATA cooperative and Germany’s co-determination laws. She closed by naming the two outcomes she fears most: “a technology that manipulates the people who use it at no cost, and one that exclusively benefits the few who can afford to use it.”
The essay landed amid a week of significant departures across AI labs:
- Anthropic: On Sunday, Mrinank Sharma, who led Anthropic’s Safeguards Research Team and co-authored a widely cited 2023 study on AI sycophancy, announced his resignation. In an open letter, Sharma warned that “the world is in peril” and said he had “repeatedly seen how hard it is to truly let our values govern our actions” inside the organization. He plans to pursue a poetry degree, a coincidental echo of Hitzig’s own background as a published poet.
- xAI: At least nine employees, including co-founders Yuhuai “Tony” Wu (who resigned Monday) and Jimmy Ba (Tuesday), publicly announced departures over the past week, according to TechCrunch. Six of xAI’s original 12 co-founders have now left. The exits follow Elon Musk’s announcement of an all-stock merger with SpaceX ahead of a planned IPO, valuing xAI at $1.25 trillion, though whether the timing reflects vesting schedules or other factors remains unclear.
The departures appear unrelated in their specifics, but they coincide with a period of rapid commercialization across the AI industry that has tested internal cultures. Researchers who joined to pursue fundamental questions about AI safety, alignment, and societal impact increasingly face pressure to prioritize productization, revenue, and scaling, often at odds with cautious, long-term research agendas.
The timing of Hitzig’s resignation, on the day OpenAI launched ads in ChatGPT, amplified its impact. OpenAI began testing clearly labeled ads at the bottom of responses for free users and $8/month “Go” tier subscribers in the U.S., while Plus, Pro, Business, Enterprise, and Education subscribers remain ad-free. The company insists that ads will not influence model answers and that personalization is opt-in, drawing on chat history and past ad interactions without sharing raw chats with advertisers.
The rollout followed a week of public sparring with rival Anthropic, which ran Super Bowl ads declaring that Claude would remain ad-free and depicting other AI chatbots awkwardly inserting product placements. OpenAI CEO Sam Altman called the ads “funny” but “clearly dishonest,” insisting OpenAI would never run ads in the manner depicted and framing the ad-supported model as a way to bring AI to users who cannot afford subscriptions. Anthropic countered that including ads in Claude conversations “would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking,” noting that over 80% of its revenue comes from enterprise customers.
Hitzig’s essay and the surrounding departures point to a broader reckoning in the AI research community. As companies shift from research-focused origins toward revenue-generating products, tensions between safety and alignment priorities on one side and commercial imperatives on the other have intensified. The wave of exits, spanning OpenAI, Anthropic, and xAI, suggests that many researchers who joined to shape AI’s future are questioning whether their values can still be realized inside these organizations.
As Hitzig warned, the path forward risks producing either manipulative free tools or exclusive premium services, outcomes that could shape public trust in AI for years to come.



