President Trump on Thursday signed an executive order that seeks to sharply curtail the power of U.S. states to regulate artificial intelligence, marking one of the most aggressive federal interventions yet in the rapidly expanding AI sector.
The order authorizes the U.S. attorney general to challenge and potentially overturn state laws deemed inconsistent with what the administration calls “the United States’ global A.I. dominance,” placing dozens of existing safety, consumer protection, and transparency measures in legal jeopardy.
Under the order, states that refuse to roll back targeted AI laws could also face financial pressure. Trump directed federal agencies to withhold funds tied to broadband expansion and other infrastructure programs from states that maintain regulations viewed as obstructive. The threat adds a fiscal lever to what is already shaping up as a major constitutional confrontation between federal authority and state police powers.
Trump framed the move as a necessary step to eliminate what he described as a confusing and burdensome regulatory landscape. Speaking in the Oval Office alongside senior officials, including David Sacks, the administration’s AI and crypto czar, Trump argued that innovation could not thrive under a fragmented system of state rules.
“It’s got to be one source,” he said. “You can’t go to 50 different sources.”
He also tied the order directly to geopolitical competition, repeatedly citing the need for the United States to stay ahead of China in artificial intelligence.
The executive action reflects Trump’s broader realignment toward Silicon Valley and the AI industry. Over the past year, his administration has issued multiple orders designed to ease regulatory scrutiny, expand private-sector access to federal data, streamline permitting for data centers and power infrastructure, and loosen restrictions on exporting advanced AI chips. Trump has also publicly praised leading technology executives and elevated Sacks — a venture capitalist with deep ties to the tech sector — into a central policy role with significant influence over AI governance.
However, the order has already sparked widespread bipartisan resistance, with legal experts warning that it may exceed the president’s constitutional authority. States and consumer advocacy groups are expected to challenge the measure in court, arguing that only Congress can preempt state laws on this scale.
Several legal scholars have noted that while federal agencies can set standards in specific domains, a blanket attempt to invalidate state statutes through executive action is likely to face serious judicial scrutiny.
Even some voices aligned with Trump’s ideological camp expressed concern. Wes Hodges, acting director of the Center for Technology and the Human Person at the Heritage Foundation, said that if the administration succeeds in undermining state rules, it has a responsibility to replace them with a robust national framework.
“Doing so before establishing commensurate national protections is a carve-out for Big Tech,” Hodges said, underscoring fears that the order prioritizes speed and scale over public safeguards.
The stakes are high because generative AI systems have moved rapidly from experimental tools to mass-market products. Technologies capable of generating realistic text, voices, images, and video are now embedded across finance, education, healthcare, marketing, and social media. At the same time, documented harms have multiplied, including deepfake political content, financial scams, data misuse, and cases in which chatbots have provided harmful advice to minors.
In the absence of comprehensive federal legislation, states have stepped in aggressively. According to the National Conference of State Legislatures, all 50 states and U.S. territories introduced AI-related bills this year, and 38 states enacted roughly 100 new laws. These measures vary widely but generally aim to impose transparency requirements, restrict certain uses of AI, and hold companies accountable for foreseeable harms.
California adopted one of the most consequential laws, requiring developers of the largest AI models — including OpenAI’s ChatGPT and Google’s Gemini — to conduct safety testing and disclose the results. South Dakota moved to curb election-related manipulation by banning AI-generated deepfake videos in political ads within months of an election. Utah, Illinois, and Nevada passed laws governing AI chatbots used in mental health contexts, mandating user disclosures and limiting how sensitive data can be collected and used.
Child safety has emerged as a particularly active area of state regulation. Several states have passed laws aimed at protecting minors from AI-powered chatbots and algorithm-driven platforms, especially where AI tools simulate emotional support or companionship. Trump’s executive order states that it will not preempt child-safety laws, but it does not define how that exemption will be applied, leaving advocates concerned that protections could still be weakened through litigation or narrow interpretations.
“Blocking state laws regulating A.I. is an unacceptable nightmare for parents and anyone who cares about protecting children online,” said Sarah Gardner, chief executive of Heat Initiative, a child-safety advocacy group.
She warned that states have become the primary line of defense as federal action has lagged.
The AI industry, for its part, has mounted an intense lobbying campaign against state-level regulation. Companies argue that complying with dozens of different legal regimes raises costs, slows product development, and discourages startups. Earlier this year, lawmakers attempted to include a ten-year moratorium on state AI laws in a major domestic policy bill, but the proposal was abandoned after strong bipartisan opposition. Venture capitalist Marc Andreessen captured industry sentiment in a social media post last month, calling the state-by-state approach “a startup killer.”
Trump’s order effectively revives that fight through executive authority, raising the prospect of prolonged legal battles that could inject uncertainty into the AI market. While the administration argues that centralization will accelerate innovation and strengthen U.S. competitiveness, critics counter that the absence of binding federal standards leaves consumers, workers, and children exposed at a time when AI systems are becoming more powerful and less transparent.
Beyond domestic policy, the order also signals how Trump views AI as a strategic asset. By tying deregulation to competition with China, the administration is framing AI governance not as a consumer-protection issue but as a national power contest. That framing may resonate with parts of Congress, but it also heightens tensions with states that see immediate local risks from unchecked AI deployment.
As the order moves toward inevitable court challenges, the outcome could reshape the balance of power over technology regulation in the United States. If Trump prevails, states may find their ability to respond quickly to emerging AI harms sharply reduced. If the courts strike the order down, pressure will mount on Congress to finally craft a national AI framework that balances innovation with enforceable safeguards.
Either way, the executive order marks a turning point. It crystallizes a growing divide between federal ambitions to dominate AI globally and state-level efforts to manage its risks locally.