Florida Attorney General James Uthmeier on Thursday announced a formal investigation into OpenAI, escalating regulatory pressure on the artificial intelligence company over concerns that span alleged harm to minors, child safety, national security risks, and a possible connection to the deadly shooting at Florida State University last year.
The move marks one of the most aggressive state-level actions yet against a leading AI developer, and comes as OpenAI weighs a potential initial public offering that some reports suggest could value the company at as much as $1 trillion.
In a video posted to social media, Uthmeier said his office is examining whether ChatGPT may have played a role in assisting the suspect in the April 2025 campus shooting that left two people dead.
“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” the attorney general said.
According to details cited by state officials and court records tied to the case, the suspect allegedly asked ChatGPT on the day of the shooting how the country would react to an attack at FSU and what time the student union would be busiest. Those exchanges are expected to form part of the evidentiary record in an October trial.
But the investigation extends beyond the FSU case. Uthmeier said his office is also scrutinizing allegations that ChatGPT has, in certain instances, encouraged suicide and self-harm — concerns that have already surfaced in lawsuits filed by families against OpenAI.
He also raised national security concerns, specifically the possibility that hostile foreign actors, including the Chinese government, could exploit OpenAI’s systems or underlying data.
“As big tech rolls out these technologies, they should not — they cannot — put our safety and security at risk,” he said. “We support innovation. But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”
The attorney general added that subpoenas to OpenAI would be issued shortly as part of the probe.
The announcement places Florida at the center of a widening national debate over how generative AI systems should be governed, particularly as lawmakers and regulators grapple with the technology’s role in harmful content, youth exposure, and real-world criminal misuse.
OpenAI, in a statement to TechCrunch, said it would cooperate with the investigation and defended the broader benefits of its products.
“Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems,” a company spokesperson said.
“Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.”
The company added that it continues to refine ChatGPT’s ability to interpret user intent and provide safe, context-appropriate responses.
Just a day before Florida’s announcement, OpenAI unveiled a Child Safety Blueprint, a policy framework that includes legislative recommendations and product safeguards aimed at reducing risks to children from AI systems. Among the proposals are stronger laws against AI-generated child sexual abuse material, clearer reporting pathways to law enforcement, and more robust preventative controls to block abusive uses of AI tools.
The blueprint appears to be part of a broader industry response to rising concern over AI-generated harmful content. Those concerns have intensified following a recent report by the Internet Watch Foundation, which found more than 8,000 reports of AI-generated child sexual abuse material in the first half of 2025, representing a 14% year-over-year increase.
That data has added urgency to calls for stricter oversight of generative AI systems, particularly tools capable of producing realistic synthetic images and text at scale. The Florida probe may now become an early legal test of how far state authorities can go in holding AI developers accountable for downstream misuse of their platforms.
The central legal question is likely to revolve around causation and foreseeability: whether a platform can be held responsible when a user allegedly employs its outputs in planning violent acts, and whether existing safeguards were adequate. That issue is especially sensitive in the FSU case, where more than 270 alleged ChatGPT interactions are reportedly part of the court record.
The case also comes at a moment when regulators globally are shifting from abstract AI principles to enforcement. The Florida investigation introduces fresh legal and reputational risk for OpenAI, just as scrutiny of AI safety, child protection, and platform liability intensifies across jurisdictions.