An Anthropic engineer says AI agents that can operate computers are advancing quickly and could disrupt nearly every internet-based job in the U.S., with software engineering roles potentially changing as early as 2026.
A senior engineer at Anthropic says a new class of artificial intelligence systems capable of operating computers like humans is developing fast enough to reshape nearly every internet-based job in the United States.
Boris Cherny, creator of Claude Code at Anthropic, made the remarks during an appearance on Lenny’s Podcast, hosted by Lenny Rachitsky. He argued that AI systems that can take direct action across workplace tools — rather than merely generate text — are improving at a pace that could significantly alter responsibilities for software engineers, product managers, designers and other knowledge workers.
“It’s going to expand to pretty much any kind of work that you can do on a computer,” Cherny said. “In the meantime, it’s going to be very disruptive. It’s going to be painful for a lot of people.”
Anthropic is widely known for its Claude chatbot, but Claude Code represents a strategic pivot toward what developers call “agentic AI.” Built on the company’s Claude models, Claude Code is designed as a coding agent capable of running terminal commands, editing files, navigating repositories, analyzing documents and executing multi-step tasks across applications.
The company released an updated flagship model, Opus 4.6, in early February, further enhancing Claude Code’s capabilities.
Unlike traditional chatbots that respond to prompts with text or images, AI agents can interact with digital systems directly. They can open software, manipulate files, generate reports, message collaborators, and deploy code — effectively functioning as a junior digital operator inside enterprise workflows.
Anthropic has said Claude Code has not yet reached the skill level of an experienced human engineer. However, Cherny described it as a breakthrough in accessibility, bringing agent-based AI into practical use for a broader audience.
“It’s the thing that I think brings agentic AI to people that haven’t really used it before,” he said. “People are starting to just get a sense of it for the first time.”
Productivity Gains and Role Redefinition
Cherny said his own team has already integrated AI tools deeply into its workflow and that productivity per engineer has increased sharply since Claude Code’s launch. While Anthropic has commercial incentives to promote its tools — the company sells access to enterprise customers — similar productivity claims have surfaced across the technology sector.
The core shift is from AI as a passive assistant to AI as an active executor. In software development, this could reduce the need for engineers to manually write and refactor large volumes of code. Instead, engineers may move toward defining architecture, validating outputs, designing systems, and supervising AI-generated work.
Cherny previously suggested on Y Combinator’s “Lightcone” podcast that the traditional job title “software engineer” could begin to “go away” in 2026. The implication is not necessarily that programming will vanish, but that its nature will change. Coding may become more about intent specification and system oversight than syntax-level craftsmanship.
The impact extends beyond engineering. Product managers could use agents to analyze user data and generate feature roadmaps. Designers might deploy AI to produce prototypes and conduct automated usability testing. Operations teams could rely on agents to reconcile data, generate compliance reports, and manage routine workflows.
If agents can navigate productivity suites, code repositories, customer support dashboards, and analytics platforms, the automation envelope expands across most internet-connected professions.
An Uncertain Transition
The broader economic consequences remain uncertain. Cherny acknowledged that society has yet to determine how to manage the transition.
“As a society, this is a conversation we have to figure out together,” he said. “Anyone can just build software anytime.”
The prospect that “anyone” could generate functional software products through natural language prompts challenges long-standing labor market structures. Barriers to entry in software development may fall, enabling more entrepreneurs and small teams to launch products. At the same time, companies may reduce headcount if AI systems absorb a growing share of routine tasks.
Historically, automation has displaced certain roles while creating new categories of work. The difference with agentic AI is that it targets cognitive, digital, and creative tasks traditionally associated with white-collar employment. That could compress mid-level roles while increasing demand for high-level oversight, AI system governance, and interdisciplinary coordination.
Regulatory frameworks may also come under scrutiny. Enterprises deploying autonomous agents will need safeguards around data privacy, cybersecurity, audit trails, and accountability. Questions about liability — particularly if an AI agent executes flawed instructions or introduces security vulnerabilities — remain unresolved.
Technology firms are likely to adopt agentic systems rapidly to maintain efficiency and competitive advantage. Early adopters may see cost savings and faster product iteration cycles, pushing rivals to follow suit.
However, large-scale deployment will depend on reliability. AI agents that make mistakes in production environments — especially in financial, healthcare, or infrastructure systems — could expose companies to significant risk. Trust, therefore, becomes as important as capability.
Anthropic competes in a crowded field of AI developers racing to build more autonomous systems. The pace of model improvement, combined with falling compute costs and expanding enterprise integrations, suggests that experimentation with AI agents will intensify over the next year.
Preparing for the Transition
Cherny’s advice to workers is pragmatic: experiment with AI tools and understand how they function. Familiarity with agentic systems — including their limitations and failure modes — may become a foundational skill across industries.
Rather than eliminating work outright, agentic AI is likely to redefine it. The shift could mirror earlier technological inflection points in which tools first augment workers before altering organizational structures altogether.