OpenAI Hardware Lead Caitlin Kalinowski Resigns Over Controversial Pentagon AI Deal

OpenAI hardware lead Caitlin Kalinowski has resigned from the company following concerns over its recently announced agreement with the United States Department of Defense to supply artificial intelligence systems for classified use cases.

The deal, which was disclosed last week, quickly sparked public debate about the role of private AI firms in military operations. According to reports, the partnership triggered a wave of backlash among users of ChatGPT, including a surge in app uninstalls and negative reviews.

The controversy centers on fears that advanced AI technologies could be used for surveillance or autonomous military applications without sufficient oversight.


Announcing her resignation in a post on LinkedIn, Kalinowski emphasized that her decision was based on principle rather than personal disagreements within the company.

She wrote, “I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people.”

She added that she maintains deep respect for OpenAI CEO Sam Altman and the broader team, noting that she remains proud of the work accomplished during her tenure.

Kalinowski’s departure comes amid a growing debate over how far AI companies should go in supporting military uses of artificial intelligence. Critics have questioned how such technologies might be deployed in defense contexts and warned about the increasing influence of private technology firms in government security operations.

Data cited by TechCrunch shows that ChatGPT uninstalls surged by 295% over the weekend following news of the Pentagon partnership. At the same time, downloads of Anthropic’s rival chatbot, Claude, rose by 51%.

Analytics firm Sensor Tower also reported a sharp spike in negative feedback for ChatGPT. One-star reviews jumped by 775% on Saturday and continued to rise by another 100% the following day, while five-star reviews dropped by roughly 50%. During the same period, Claude climbed to the top of the U.S. Apple App Store rankings.

Capitalizing on the moment, Anthropic introduced new features to Claude’s free tier, including context recall across conversations and a tool allowing users to import chat histories from competing chatbots such as ChatGPT.

Responding to the backlash, Altman acknowledged the criticism and admitted the company “shouldn’t have rushed” the defense agreement.

In a post on X, he shared what he described as an internal memo outlining planned revisions to the contract and reaffirming OpenAI’s principles regarding safety and surveillance.

According to Altman, the updated language will explicitly state that OpenAI’s AI systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

The memo also noted that the Defense Department has acknowledged this limitation, which prohibits deliberate tracking, monitoring, or surveillance of U.S. citizens through the procurement or use of commercially obtained personal data.

Altman further stated that the Pentagon confirmed OpenAI’s tools would not be used by intelligence agencies such as the National Security Agency.

“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” he said, adding that OpenAI plans to work closely with the Defense Department to implement stronger safeguards.

Outlook

The controversy highlights the growing tension between innovation and ethics as artificial intelligence becomes increasingly intertwined with national security.

As governments seek advanced technologies to strengthen defense capabilities, AI companies face mounting pressure to balance commercial opportunities with public trust and ethical responsibilities.

Moving forward, the debate over AI’s role in military applications is likely to intensify. For OpenAI, rebuilding user confidence while maintaining strategic government partnerships will be critical.

At the same time, rival firms like Anthropic may continue to capitalize on concerns around transparency and safety, potentially reshaping competition in the rapidly evolving AI industry.
