OpenAI Offers $555,000 Role to Tackle AI’s Growing Risks as Safety Concerns Resurface

OpenAI is offering to pay more than half a million dollars a year to recruit a senior executive tasked with confronting the darker side of artificial intelligence, a move that indicates how seriously the company now views the risks emerging alongside rapid advances in its models.

The company is hiring a new “head of preparedness,” a role that will sit within OpenAI’s Safety Systems team and carry a base salary of $555,000 a year, plus equity. The position is designed to lead efforts to identify, assess, and mitigate the risks posed by increasingly capable AI systems, ranging from cybersecurity threats and misinformation to mental health harms and the erosion of human agency.

OpenAI chief executive Sam Altman flagged the role publicly over the weekend, describing it as demanding and urgent. In a post on X on Saturday, Altman said the job would be “stressful” and warned that whoever takes it on would be “jump[ing] into the deep end pretty much immediately.”

Altman framed the role as essential at a moment when AI capabilities are accelerating faster than many anticipated.

“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges,” he wrote.

He pointed to early warning signs already seen in 2025, including the impact of AI systems on mental health and the growing ability of models to identify serious vulnerabilities in computer security systems.

The hiring push comes as OpenAI’s products, particularly ChatGPT, have become embedded in everyday life for millions of users. The chatbot is widely used for research, writing emails, planning trips, and completing routine tasks. But as adoption has spread, so too have concerns about unintended consequences.

Some users now engage with chatbots as a substitute for therapy or emotional support. Mental health experts have warned that, in certain cases, this can worsen underlying conditions. There have been instances where interactions with AI systems appeared to reinforce delusions or encourage other harmful behaviors.

OpenAI acknowledged those risks last year. In October, the company said it had begun working with mental health professionals to improve how ChatGPT responds to users exhibiting signs of psychosis, self-harm, or other concerning behavior. Those efforts form part of a broader attempt to adapt safety systems as AI becomes more persuasive, emotionally responsive, and context-aware.

The new head of preparedness will be directly responsible for building and coordinating the frameworks designed to anticipate such risks before they scale. According to the job listing, the role involves leading “capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”

The position also carries symbolic weight inside OpenAI, a company founded on the mission of developing artificial intelligence that benefits humanity as a whole. From its early days, safety and alignment were central to its identity. However, as OpenAI transitioned from a research-focused lab into a commercial powerhouse under pressure to release products and generate revenue, internal tensions over priorities have increasingly spilled into public view.

Several former staff members have openly questioned whether safety has taken a back seat to growth. Jan Leike, who led OpenAI’s now-dissolved safety team, resigned in May 2024 and used his departure to issue a blunt warning. In a post on X, he said the company had drifted away from its founding mission.

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote at the time. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Less than a week later, another OpenAI staffer announced their resignation, also citing safety concerns. Daniel Kokotajlo, a researcher who had focused on risks associated with artificial general intelligence, said in a May 2024 blog post that he left because he was “losing confidence that [OpenAI] would behave responsibly around the time of AGI.”

Kokotajlo later told Fortune that OpenAI initially had roughly 30 people dedicated to researching AGI-related safety issues. After a wave of departures, that number fell by nearly half, raising questions about whether the company had sufficient internal capacity to match the pace of its technological ambitions.

The head of preparedness role was previously held by Aleksander Madry, who moved into a different position within the company in July 2024. Filling the post now, with a highly publicized compensation package, suggests OpenAI is trying to reinforce its safety infrastructure at a time when scrutiny from regulators, researchers, and the public is intensifying.

But beyond mental health and cybersecurity, critics warn that advanced AI could accelerate job displacement, supercharge misinformation campaigns, empower malicious actors, and increase environmental costs through energy-hungry data centers. More broadly, there is growing unease about how much decision-making power humans may cede to systems they do not fully understand.

By putting a $555,000 price tag on preparedness, OpenAI is saying that managing those risks is no longer a peripheral concern but a core operational priority. Whether the role will be enough to rebuild confidence in the company’s safety culture, especially among skeptical former insiders, remains an open question.
