
A last-minute addition by House Republicans to a proposed federal budget bill could dramatically reshape the landscape of artificial intelligence (AI) governance in the United States. The controversial provision, buried within the reconciliation package, would prohibit any state or local government from enacting or enforcing laws that regulate AI systems or automated decision-making technologies for the next ten years, unless the law specifically facilitates their deployment or operation.
If passed, the legislation would not only invalidate state-level laws currently aimed at curbing algorithmic bias and discrimination, but it would also prevent states from introducing new rules until at least 2035. That would create a regulatory void at a time when AI is increasingly embedded in housing, hiring, healthcare, and policing decisions, often without transparency or public accountability.
The provision’s insertion—just 48 hours before the bill was marked up by the House Energy and Commerce Committee—has drawn sharp rebukes from consumer advocates, legal experts, and a bipartisan group of state attorneys general. But beyond the legal ramifications, the move underscores a deepening ideological rift in Washington over how to manage a rapidly advancing technology that is transforming everything from job applications to public benefits distribution.
A Republican Vision: Deregulate, Deploy, Dominate
Supporters of the provision, mostly Republicans, argue that a patchwork of state-level AI regulations will stifle innovation, burden businesses, and threaten U.S. competitiveness in the global AI race. To them, federal preemption is a way to ensure a consistent, business-friendly national approach that allows tech firms to innovate without being slowed down by what they see as overzealous or ideologically driven state legislation.
The deregulatory stance is reminiscent of the Trump administration's early actions to dismantle guardrails placed on AI. Upon returning to office, President Donald Trump immediately revoked a Biden-era executive order that had established preliminary federal safety guidelines for AI development and deployment. The administration has made it clear that it sees AI regulation, particularly at the state level, as a roadblock, not a necessity.
That viewpoint is heavily influenced by corporate interests and key figures in the tech industry who have either backed Trump or aligned with his agenda. One of the most prominent is Elon Musk, the billionaire entrepreneur and a cofounder of OpenAI, who has publicly railed against what he calls "left-wing bias" in AI models.
Musk, now owner of X (formerly Twitter) and head of several companies heavily invested in AI, including xAI, has accused OpenAI’s ChatGPT and Google’s Gemini of being “woke” and serving a “leftist agenda.” He has repeatedly warned that these AI systems are being trained to push progressive values while censoring conservative viewpoints.
“The most important thing in training AI is that it is rigorously truthful. This is very, very important, essential,” Musk has said, claiming that the leading generative models reflect Silicon Valley’s liberal orthodoxy rather than political neutrality.
Musk’s criticisms have found a receptive audience among Republicans who are increasingly framing AI as another front in the broader culture war, arguing that without limits, AI tools risk becoming vehicles for “leftist indoctrination” rather than neutral technologies.
A Democratic Vision: Regulation, Transparency, Accountability
Democrats, by contrast, have pushed for stronger consumer protections, transparency, and bias mitigation mechanisms as AI systems proliferate. From algorithmic rent-setting tools to hiring software that filters applicants based on unverifiable data, Democrats argue that these systems can replicate and even amplify societal biases—unless regulators step in.
Several states governed by Democrats have already moved to impose guardrails. In California, laws now require companies to inform patients when generative AI is used in healthcare communications. In New York, employers using automated hiring tools must conduct annual bias audits. Illinois has legislation governing how facial recognition can be used in workplace surveillance.
But if the House reconciliation bill passes with the AI preemption clause intact, these state-level laws would become unenforceable, raising alarms among civil rights advocates and state officials.
“This bill is a sweeping and reckless attempt to shield some of the largest and most powerful corporations in the world—from big tech monopolies to RealPage, UnitedHealth Group and others—from any sort of accountability,” said Lee Hepner, senior legal counsel at the American Economic Liberties Project.
AI Harms Are Not Hypothetical
The real-world impact of unregulated AI systems is already being felt. A coalition of state attorneys general recently filed suit against RealPage, a property tech firm accused of colluding with landlords to artificially raise rents using an algorithmic pricing tool. Another company, SafeRent, recently settled a class-action suit filed by Black and Hispanic renters who say they were denied apartments based on secretive AI-generated scores.
Despite these concerns, House Republicans appear determined to block what they see as overreach by liberal states. The bill would not only block future regulations but erase many that are already in place, effectively freezing AI oversight at a time when the technology’s use is exploding across industries.
A Looming Senate Battle
The battle is far from over, as the measure could run into procedural resistance in the Senate. Because the provision was added to a reconciliation bill, a legislative vehicle reserved for fiscal matters, Senate rules may prohibit its inclusion. Under the Byrd Rule, non-budgetary provisions can be struck from reconciliation bills if they are deemed extraneous.
“I don’t know whether it will pass the Byrd Rule,” said Sen. John Cornyn, R-Texas, referring to the rule requiring that every part of a budget reconciliation bill, like the GOP plan, focus primarily on budgetary matters rather than general policy aims.
“That sounds to me like a policy change. I’m not going to speculate on what the parliamentarian is going to do, but I think it is unlikely to make it,” Cornyn said.
If the bill becomes law as written, it would create a federal freeze on AI regulation, just as experts say the next decade will determine whether AI serves the public interest or entrenches inequality. Many see the fight over this provision as a litmus test for how the U.S. government intends to manage technological change and who it is willing to protect in the process.