Illinois has officially become the first U.S. state to restrict the use of artificial intelligence in mental healthcare, passing a landmark law that prohibits AI from serving as a stand-alone therapist and tightly regulates how licensed professionals can deploy the technology in practice.
Governor JB Pritzker signed the legislation — officially named the Wellness and Oversight for Psychological Resources Act — into law on August 1. Introduced by Representative Bob Morgan, the law seeks to draw a clear boundary: only human professionals licensed in mental health care are permitted to deliver psychotherapeutic services.
The move comes amid growing public concern about AI’s unchecked intrusion into sensitive areas of human life, particularly mental health. In a statement, Rep. Morgan said the law was a proactive response to troubling reports of individuals turning to AI-powered chatbots in moments of crisis, only to be met with dangerous — and in some cases, life-threatening — responses.
“We have already heard the horror stories when artificial intelligence pretends to be a licensed therapist,” Morgan told Mashable. “Individuals in crisis unknowingly turned to AI for help and were pushed toward dangerous, even lethal, behaviors.”
The law defies a broader push by GOP lawmakers in Washington to block state and local governments from independently regulating AI technologies. Earlier this year, Republicans proposed a sweeping federal bill that would have barred any state or municipality from enacting or enforcing laws aimed at regulating AI systems or automated decision-making technologies for a full ten years.
That original proposal ultimately failed, but a compromise was later negotiated between Senator Marsha Blackburn (R-Tenn.) and Senate Commerce Chair Ted Cruz (R-Texas).
The revised version ties restrictions on state-level AI regulation to a $500 million federal incentive fund earmarked for telecom infrastructure and deployment. Under the new arrangement, any state that wishes to access the fund must agree to a five-year moratorium on implementing new AI-specific regulations — a notable reduction from the original ten-year proposal. However, exceptions are allowed for state laws regulating unfair or deceptive practices, child sexual abuse material, children’s online safety, and publicity rights.
Illinois’ decision to move ahead with its own AI guardrails despite the looming pressure from Washington signals a growing willingness among states to challenge federal efforts to centralize AI governance.
The new law bars mental health providers from using AI to independently make therapeutic decisions, interact with clients without supervision, or develop treatment plans unless reviewed and approved by a licensed human professional. It also closes loopholes that previously allowed unqualified individuals to advertise themselves as “therapists,” a growing trend driven by unregulated digital platforms.
Violations carry a financial penalty of up to $10,000 per offense, with fines scaled to the gravity of the breach. The law takes immediate effect.
A Historic First in U.S. AI Oversight
While several countries have begun exploring ethical frameworks for the use of AI in healthcare, Illinois is the first U.S. state to place direct legal limitations on AI’s role in therapeutic settings. It adds to Illinois’ growing portfolio of AI-related legislation, including recent updates to the state’s Human Rights Act. Those amendments make it unlawful to use AI in discriminatory employment practices — such as relying on zip codes as proxies for race or class — or to deploy AI screening tools without employee notification.
This latest legislation marks a turning point in AI oversight at the state level, signaling a growing discomfort with Silicon Valley’s rapid push to automate core human functions. The decision also serves as a rebuke to tech companies that have marketed AI chatbots as cost-effective substitutes for therapists, despite mounting evidence that such tools lack the emotional nuance, ethical grounding, and diagnostic precision needed for real care.
The Rise and Risk of AI Therapy
Over the past decade, as mental health care costs skyrocketed and access remained limited, a wave of apps and online platforms has emerged offering AI-powered chatbots as scalable solutions. Tools like Woebot, Replika, and even OpenAI’s own ChatGPT have been used by millions seeking mental health support.
Yet mental health professionals have long warned that these platforms are not equipped to handle complex or high-risk psychological needs. In 2023, researchers flagged that some users were taking AI advice as clinical guidance, blurring the line between chatbot conversation and genuine therapy. Experts warn that without clear oversight, these systems could not only breach patient confidentiality but also give harmful advice to vulnerable individuals — something even OpenAI CEO Sam Altman has acknowledged.
In one disturbing case, a Belgian man reportedly died by suicide after weeks of intensive, unsupervised interactions with an AI chatbot, which encouraged harmful ideation under the guise of companionship. The incident reignited calls globally for more rigorous AI regulation, particularly in mental health.
While the Illinois law is being praised by public health advocates and ethics watchdogs, it has also drawn criticism from parts of the tech industry that argue AI has the potential to widen access to care. Critics point to chronic underfunding of mental health systems, shortages of licensed therapists, and rural populations with no access to in-person support.
However, the state’s bold move could set a precedent for other states, many of which are grappling with similar concerns. Lawmakers in California have already introduced legislation related to AI and healthcare, and observers say it’s only a matter of time before the issue reaches the federal level.
“By clearly defining how AI can and cannot be used in mental health care,” said Rep. Morgan, “we’re protecting patients, supporting ethical providers, and keeping treatment in the hands of trained, licensed professionals.”