OpenAI announced on Friday that it had identified a security issue related to a compromised third-party developer tool, prompting the company to rotate its macOS application signing certificates and require users to update their desktop apps to the latest versions.
The company said there is no evidence that user data was accessed, internal systems or intellectual property were compromised, or any OpenAI software was altered, framing the incident as a contained software supply-chain scare rather than a customer data breach.
The issue stems from Axios, a widely used open-source JavaScript HTTP library, which was compromised on March 31 as part of a broader supply-chain attack that cybersecurity researchers and media reports have linked to actors believed to be associated with North Korea.
According to OpenAI, one of its GitHub Actions workflows used in the macOS app-signing pipeline downloaded and executed the malicious Axios version, specifically version 1.14.1.
That workflow had access to highly sensitive materials used for Apple code signing and notarization, including the certificate that verifies OpenAI’s macOS applications as authentic software.
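The failure mode is easy to reproduce in miniature: a build job that resolves a floating version range at install time pulls whatever the registry serves that day, including a freshly published malicious release. Below is a minimal sketch of a lockfile audit that flags a known-bad version in an npm `package-lock.json`; the `check_lockfile` helper and the indicator list are illustrative, not OpenAI's actual tooling.

```python
import json

# Illustrative indicator list for the compromise window; not an
# authoritative feed of malicious versions.
BAD_VERSIONS = {"axios": {"1.14.1"}}

def check_lockfile(lock: dict) -> list[str]:
    """Flag entries in an npm v2/v3 package-lock.json 'packages' map
    that resolve to a known-compromised version."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/axios"; strip the prefix.
        name = path.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in BAD_VERSIONS.get(name, set()):
            hits.append(f"{name}@{meta['version']}")
    return hits

lock = json.loads("""{
  "packages": {
    "node_modules/axios": {"version": "1.14.1"},
    "node_modules/left-pad": {"version": "1.3.0"}
  }
}""")
print(check_lockfile(lock))  # -> ['axios@1.14.1']
```

A check like this only helps, of course, if the pipeline installs strictly from the committed lockfile rather than re-resolving version ranges at build time.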
The affected products include ChatGPT Desktop, Codex, Codex CLI, and Atlas. The most significant aspect of the incident, however, was not any immediate theft of user chats, passwords, or API credentials. The greater concern was that if the signing certificate had been successfully exfiltrated, attackers could have used it to distribute fake macOS applications that appear to be legitimate OpenAI software.
Such apps could pass Apple’s trust checks and appear authentic to users, making them far more dangerous than ordinary phishing downloads. That is why OpenAI moved quickly to revoke and rotate the certificate, even though its forensic review concluded that the malicious payload likely did not succeed in stealing the signing credentials.
The company said this conclusion was based on the timing of the malicious code’s execution, the sequencing of the CI job, and the way the certificate was injected into the workflow environment. Still, OpenAI is treating the certificate as potentially exposed “out of an abundance of caution,” a standard incident-response practice in software security.
Effective May 8, older versions of OpenAI’s macOS desktop applications will no longer receive updates or support and may stop functioning, the company said.
That move is designed to ensure users migrate to builds signed with the newly rotated certificate.
For users, the practical instruction is to update the macOS ChatGPT app immediately through the in-app updater or the official OpenAI download page. OpenAI also said that passwords and API keys were not affected, and that the root cause has been traced to a misconfiguration in the GitHub Actions workflow, which has since been fixed.
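After updating, cautious users can confirm which certificate chain their installed app is signed with. macOS’s built-in `codesign` tool reports the chain as `Authority=` lines (written to stderr); the sketch below shells out to it and parses the result. The script is macOS-only, and the certificate names in the example are illustrative.

```python
import re
import subprocess

def parse_authorities(codesign_output: str) -> list[str]:
    """Extract the certificate chain from `codesign -dvv` output,
    which reports one `Authority=` line per certificate."""
    return re.findall(r"^Authority=(.+)$", codesign_output, flags=re.M)

def signing_authorities(app_path: str) -> list[str]:
    """Run `codesign -dvv` on an app bundle (macOS only).
    Note that codesign writes its details to stderr, not stdout."""
    proc = subprocess.run(
        ["codesign", "-dvv", app_path],
        capture_output=True, text=True, check=True,
    )
    return parse_authorities(proc.stderr)

# Example (on macOS): signing_authorities("/Applications/ChatGPT.app")
```

A build signed with a revoked certificate should also fail Gatekeeper’s assessment, which can be checked on the same bundle with `spctl -a -vv`.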
The broader impact of this incident goes well beyond OpenAI. It is a textbook example of a software supply-chain attack, in which attackers compromise a trusted third-party dependency rather than attacking the target company directly. Because Axios is one of the most widely used JavaScript HTTP libraries in the world, with tens of millions of downloads weekly, the breach had industry-wide implications.
Security researchers said the malicious versions were live only briefly before being removed, but even a short exposure window can be enough to compromise automated build pipelines across major organizations. What makes this especially notable is that the attack appears to have targeted developer infrastructure rather than end users directly.
That mirrors a growing trend in cyber operations: attackers increasingly seek access to CI/CD pipelines, code-signing systems, and package registries, where a single compromise can cascade across multiple products and companies. The incident also highlights the rising cyber risks facing AI firms as they expand beyond models and APIs into full software ecosystems.
While much public attention around AI safety focuses on misuse of models, this incident is a reminder that traditional software security risks, including dependency poisoning and certificate compromise, remain just as critical.
In market and trust terms, OpenAI’s quick disclosure and certificate rotation are likely intended to reassure enterprise users and developers that the company’s response process is mature. So far, the evidence suggests this was a preventive containment exercise rather than a breach of customer systems.