
OpenAI has launched a research preview of Codex, an advanced AI coding tool designed to assist developers with various software engineering tasks. The launch signals OpenAI’s deeper push into the competitive AI coding space, currently dominated by players like Amazon, Anthropic, and Google.
Trained with reinforcement learning on real-world coding tasks, the tool generates code that reads like a human wrote it, adheres closely to instructions, and iteratively runs tests until they pass. Operating in cloud-based sandbox environments preloaded with a user’s repository, Codex supports parallel task execution.
Announcing the launch of the coding tool, OpenAI wrote,
“Today we’re launching a research preview of Codex: a cloud-based software engineering agent that can work on many tasks in parallel. Codex can perform tasks for you such as writing features, answering questions about your codebase, fixing bugs, and proposing pull requests for review; each task runs in its own cloud sandbox environment, preloaded with your repository.”
Codex is powered by codex-1, a version of OpenAI o3 optimized for software engineering. It was trained using reinforcement learning on real-world coding tasks in a variety of environments to generate code that closely mirrors human style and PR preferences, adheres precisely to instructions, and can iteratively run tests until it receives a passing result.
Codex can read and edit files, as well as run commands including test harnesses, linters, and type checkers. Task completion typically takes between 1 and 30 minutes, depending on complexity, and users can monitor Codex’s progress in real time. Once Codex completes a task, it commits its changes in its environment. It also provides verifiable evidence of its actions through citations of terminal logs and test outputs, allowing users to trace each step taken during task completion.
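The edit-then-test loop described above can be pictured with a minimal sketch. The Python snippet below is not OpenAI’s implementation: the propose_patch() helper is a hypothetical stand-in for the model call, the pytest command and the MAX_ATTEMPTS budget are assumptions the article does not specify, and the code only illustrates the shape of the loop, that is, apply an edit, run the test suite in the repository checkout, feed failures back, and commit once the suite passes.

# Conceptual sketch of an iterate-until-tests-pass agent loop (not OpenAI's code).
# Assumptions: a pytest test harness, a git checkout, and a hypothetical
# propose_patch() that stands in for the model producing an edit.
import subprocess
from pathlib import Path

MAX_ATTEMPTS = 5  # assumed budget; the article only says Codex iterates until tests pass


def run_tests(repo: Path) -> subprocess.CompletedProcess:
    # Run the repository's test harness and capture output for later citation.
    return subprocess.run(
        ["python", "-m", "pytest", "-q"],
        cwd=repo,
        capture_output=True,
        text=True,
    )


def propose_patch(repo: Path, task: str, feedback: str) -> None:
    # Hypothetical placeholder: ask the model for an edit given the task and
    # the failure output from the previous attempt.
    raise NotImplementedError("model call goes here")


def solve(repo: Path, task: str) -> list[str]:
    # Iterate edit -> test until the suite is green, collecting terminal logs.
    logs: list[str] = []
    feedback = ""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        propose_patch(repo, task, feedback)
        result = run_tests(repo)
        logs.append(f"attempt {attempt}:\n{result.stdout}{result.stderr}")
        if result.returncode == 0:
            # Tests pass: commit the change inside the sandboxed checkout.
            subprocess.run(["git", "commit", "-am", task], cwd=repo, check=True)
            return logs
        feedback = result.stdout + result.stderr  # feed failures into the next edit
    raise RuntimeError("test suite still failing after budget exhausted")

In the product the article describes, a loop of this general kind runs inside a cloud sandbox preloaded with the user’s repository, and the collected terminal and test logs are what Codex cites as evidence of its actions.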
Lauding the tool, OpenAI CEO Sam Altman said, “it is amazing and exciting how much software one person is going to be able to create with tools like this. ‘you can just do things’ is one of my favorite memes; i didn’t think it would apply to AI itself, and its users, in such an important way so soon.”