As AI continues to redefine how economies function, Silicon Valley has largely focused on the upside—new markets, supercharged productivity, and the rise of solo entrepreneurs running billion-dollar startups. But with that optimism has come mounting anxiety over a future where machines could replace swathes of the global workforce, especially in entry-level and white-collar sectors. Now, one of AI’s key players is trying to meet that tension head-on.
Anthropic, the San Francisco-based AI company behind Claude, on Friday launched its Economic Futures Program, a new initiative that aims to understand and prepare for the seismic labor and economic shifts that generative AI is expected to trigger. Unlike many industry efforts that focus on showcasing innovation, Anthropic’s program is explicitly focused on researching the real-world impact of AI on jobs, productivity, fiscal policy, and inequality.
“Everybody’s asking questions about what are the economic impacts [of AI], both positive and negative,” Sarah Heck, Anthropic’s Head of Policy Programs and Partnerships, told TechCrunch. “It’s really important to root these conversations in evidence and not have predetermined outcomes or views on what’s going to [happen].”
The program follows remarks by Anthropic CEO Dario Amodei, who warned in May that AI could eliminate up to 50% of entry-level white-collar jobs within the next five years, potentially pushing unemployment rates as high as 20%. That projection, more candid than most industry leaders are willing to be, casts a shadow over the glowing productivity statistics that AI developers are touting to investors and governments.
Anthropic’s new initiative is part of a small but growing movement among AI firms to position themselves not just as builders of disruptive technologies but as stewards of social stability in the aftermath of the disruption they create. The Economic Futures Program will:
- Fund empirical research on AI’s effect on labor markets and value creation.
- Convene policy symposiums in Washington, D.C. and Europe.
- Partner with academic institutions and nonprofits to build datasets tracking AI’s impact on economic systems.
Anthropic is kicking off the program with rapid grants of up to $50,000, available to individuals and teams who can deliver data-driven insights within six months. Unlike traditional research funding, peer review isn’t a requirement; what matters is speed and relevance. The company will also offer Claude API credits to support researchers’ analysis and prototyping work.
“We want to understand the transitions,” Heck said. “How are new jobs being created that nobody ever contemplated before? How are certain skills remaining valuable while others are not?”
While many policy efforts today focus narrowly on job displacement, Anthropic wants to broaden the aperture, including investigations into:
- Workflow redesign in AI-enhanced industries.
- Shifts in fiscal policy and tax structure as companies adopt AI.
- How governments should invest in education and training to build new value pipelines.
Heck emphasized that this is not a lobbying effort. Instead, the company is trying to inject real data into policy conversations that are currently filled with speculation and polarization.
Contrasts with OpenAI’s Blueprint
The move comes months after Anthropic’s chief competitor, OpenAI, released its Economic Blueprint in January. That document largely promotes adoption, infrastructure, and regional development—outlining frameworks for AI literacy, building AI economic zones, and scaling access to cloud computing. But it sidesteps the direct question of job loss, focusing instead on skilling up the workforce and creating new hubs for AI investment.
In contrast, Anthropic’s approach is more grounded in economic risk assessment. Its Economic Index, launched earlier this year, aggregates anonymized data to study labor shifts in real time—offering a level of transparency that few tech companies currently match.
While OpenAI’s Stargate project—a $100 billion plan to build cutting-edge data centers in partnership with Oracle and SoftBank—promises tens of thousands of construction jobs, it doesn’t directly address the fact that many of the same systems may soon automate entire departments across industries ranging from finance to marketing.
Anthropic’s Economic Futures Program also comes as part of a larger reputational recalibration happening across the tech sector. With governments from the U.S. to the EU scrutinizing AI’s social consequences, companies are increasingly trying to show that they’re not just disruptors, but also responsible stakeholders.
Earlier this week, Lyft launched a forum to gather input from drivers as it begins to roll out robotaxis—a rare case of a platform actually engaging workers likely to be displaced by automation.
Anthropic appears to be taking a similar stance, acknowledging that while AI might boost GDP, those gains won’t be equally distributed unless the consequences are managed intentionally and transparently.
“If there is job loss, then we should convene a collective group of thinkers to talk about mitigation,” Heck said. “If there will be huge GDP expansion, great. We should also convene policymakers to figure out what to do with that.”
With this initiative, Anthropic is betting that responsible AI will not just be an ethical imperative, but also a long-term strategic advantage—especially in an industry whose economic impact could rival the scale of the Industrial Revolution.