OpenAI has introduced a new $100-per-month ChatGPT Pro subscription tier, sharply escalating the competition in the fast-growing AI coding assistant market as it moves to capture professional developers and power users who have outgrown standard usage limits.
The new plan, announced Wednesday, is designed primarily around heavier use of Codex, OpenAI’s software engineering assistant, and comes as the company faces mounting competition from Anthropic’s increasingly popular Claude Code.
According to the company, the new tier offers five times more Codex usage than the $20-per-month Plus plan, targeting developers engaged in what it described as “longer, high-effort Codex sessions.”
“The Plus plan will continue to be the best offer at $20 for steady, day-to-day usage of Codex, and the new $100 Pro tier offers a more accessible upgrade path for heavier daily use,” the company wrote in a post on X.
This creates a new midpoint in OpenAI’s consumer pricing stack. Until now, the jump from the $20 Plus plan to the existing $200 Pro tier was substantial, leaving advanced users with limited upgrade flexibility. The new $100 option effectively plugs that pricing gap and broadens the monetization ladder for developer-centric workloads.
That matters because coding assistants have become one of the most commercially significant segments in generative AI. No longer an experimental category, the market is rapidly turning into one of the most fiercely contested battlegrounds in enterprise and consumer AI.
Codex can automate code generation, debugging, test execution, feature building, and bug fixes, dramatically compressing software development cycles. Since its rollout, adoption has accelerated at a pace that suggests AI-assisted coding is moving into mainstream developer workflows.
OpenAI chief executive Sam Altman said this week that Codex has reached three million weekly users, underscoring the scale of demand.
Revenue growth points in the same direction. Codex’s annualized revenue run rate surpassed $2.5 billion in February, more than doubling since the start of 2026, according to a CNBC report. That kind of expansion signals that coding agents are quickly becoming a major revenue engine for AI firms.
This is precisely where the competitive pressure from Anthropic becomes most visible. Anthropic’s Claude Code has emerged as one of the strongest rivals in the AI development tools space, particularly among engineers who value longer context windows, repository-scale reasoning, and sustained coding sessions. Its subscription structure already includes $100 and $200 premium tiers, branded as Max 5x and Max 20x, which offer elevated usage limits for heavy development workloads.
OpenAI’s move mirrors that framework almost directly. The AI coding market is beginning to resemble the early cloud-computing era, where vendors compete not only on model quality but on compute quotas, workflow integration, and pricing flexibility.
In effect, usage limits are becoming a commercial weapon. The introduction of the $100 tier suggests OpenAI is responding to a growing class of users whose consumption patterns fall between casual usage and enterprise-grade heavy deployment. The $20 Plus tier remains positioned for routine coding assistance and everyday prompts. The $100 plan is aimed at serious developers running extended debugging sessions, codebase-wide refactors, and parallel task flows. The $200 plan remains the premium option for the heaviest users.
The tiering strategy also reflects the economics of inference. Coding assistants tend to be computationally expensive because they require sustained reasoning over long contexts, multiple files, testing environments, and iterative revisions.
Unlike simple chatbot interactions, software engineering tasks often involve multi-step execution loops that can run for minutes or longer. OpenAI itself has highlighted that Codex tasks can take anywhere from one minute to 30 minutes, depending on complexity.
That makes granular pricing almost inevitable. The company has also been expanding Codex beyond subscriptions. Last week, OpenAI introduced pay-as-you-go Codex-only seats for ChatGPT Business and Enterprise customers, moving toward token-based pricing for teams and organizations.
This signals a broader shift in business model. Rather than relying solely on fixed subscription fees, OpenAI is increasingly aligning pricing with actual compute usage, similar to cloud infrastructure providers such as AWS and Microsoft Azure.
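The trade-off between a flat subscription and usage-aligned pricing comes down to a simple break-even calculation. The sketch below illustrates the arithmetic with hypothetical numbers; the $10-per-million-token rate is an illustrative assumption, not OpenAI’s published pricing:

```python
# Hypothetical comparison of flat-subscription vs. pay-as-you-go pricing.
# All rates are illustrative assumptions, not actual OpenAI prices.

FLAT_MONTHLY_FEE = 100.0          # assumed $100/month subscription
PRICE_PER_MILLION_TOKENS = 10.0   # hypothetical blended pay-as-you-go rate


def monthly_cost_payg(tokens_used: int) -> float:
    """Monthly cost under a pure pay-as-you-go model."""
    return tokens_used / 1_000_000 * PRICE_PER_MILLION_TOKENS


def breakeven_tokens() -> int:
    """Token volume at which the flat fee equals pay-as-you-go spend."""
    return int(FLAT_MONTHLY_FEE / PRICE_PER_MILLION_TOKENS * 1_000_000)


if __name__ == "__main__":
    print(breakeven_tokens())            # 10,000,000 tokens under these assumptions
    print(monthly_cost_payg(4_000_000))  # $40.0 — pay-as-you-go is cheaper here
```

Under these assumed rates, a developer consuming more than about 10 million tokens a month comes out ahead on the flat fee, while lighter users would be better served by metered billing, which is why vendors offer both.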
The logic is to monetize AI as infrastructure, which will likely intensify the competition with Anthropic.
Claude Code has built strong traction among developers who value deep code comprehension and long-form reasoning. OpenAI, meanwhile, is leveraging ChatGPT’s much larger installed user base and ecosystem familiarity.
The result is an emerging two-horse race in AI-assisted software engineering. What began as a feature inside chatbots is evolving into a distinct software layer for coding productivity.
More broadly, AI coding tools are moving beyond autocomplete and into agentic software engineering, where systems can independently interpret technical documents, write code, run tests, and suggest pull requests. This shifts AI from a passive assistant into an active participant in the development lifecycle.
For OpenAI, the new $100 tier is a direct response to rising demand, rising compute intensity, and rising pressure from Anthropic. In short, the battle for the future of AI-assisted software development is increasingly being fought through quotas, workflow depth, and developer retention rather than headline model launches alone.



