A new bill introduced on Wednesday by Sen. John Curtis of Utah is set to reopen one of Washington’s most contentious debates: whether social media giants should finally face legal consequences for algorithm-driven recommendations that push users toward harmful content.
The proposal, called the Algorithm Accountability Act, would deliver the most consequential rewrite of Section 230 in decades and could expose the platforms to lawsuits if their recommendation systems help radicalize users or contribute to real-world violence.
Curtis said the law is long overdue. He argued that Section 230, created nearly 30 years ago, was written for a small and untested internet — not the sprawling world of algorithmic feeds designed to keep billions of users online.
“Section 230 was written nearly 30 years ago for a very different internet,” he said in a statement. “What began as a commonsense protection for a fledgling industry has grown into a blanket immunity shield for some of the most powerful companies on the planet — companies that intentionally design algorithms that exploit user behavior, amplify dangerous content, and keep people online at any cost. Our bill will hold them accountable.”
At the core of the bill is a simple idea: if a platform knowingly uses an algorithm to push harmful content that leads to injury or death, it must “own” the consequences. Under the proposal, platforms could be sued directly by individuals who can prove the algorithm played a role. The measure would also impose a duty of care, requiring companies to design, test, and operate their recommendation systems with safety in mind.
Curtis introduced the bill with Sen. Mark Kelly of Arizona, who said families have endured too much harm from addictive algorithmic systems designed to maximize revenue, not safety.
“Too many families have been hurt by social media algorithms designed with one goal: make money by getting people hooked,” Kelly said. “Over and over again, these companies refuse to take responsibility when their platforms contribute to violence, crime, or self-harm. We’re going to change that and finally allow Americans to hold companies accountable.”
The legislation makes clear that ordinary speech is not the target. It bars enforcement based on viewpoint, seeking to avoid any perception that the bill polices political expression. It also grants states the authority to pass similar or stronger laws, giving them flexibility to confront harms emerging from platforms within their borders.
Curtis said the issue has taken on new urgency following the assassination of conservative activist Charlie Kirk in September during an event at Utah Valley University. According to early FBI findings, the gunman spent extensive time in fringe online forums and had been drawn into extremist ideology.
Utah Governor Spencer Cox said the suspect had been “engulfed” by a radical left worldview. Cox welcomed the new bill and said national action is essential.
“Utah has led the nation in passing laws to protect children from the harms of social media, but these challenges don’t stop at state lines,” he said. “We need a national standard for accountability.”
Curtis has been building toward a measure like this for months, pressing tech executives in hearings and insisting they acknowledge how their products shape public behavior. In a Senate hearing last month, he told executives they must “own” the choices they make in their recommendation engines. He has also compared the current moment to the 1990s, when tobacco executives denied nicotine’s dangers until the evidence became overwhelming.
The proposal marks one of the most aggressive attempts yet to narrow the legal shield that has defined social media’s rise. Passed in 1996, Section 230 prevented platforms from being sued over user-generated posts, a protection that digital rights advocates say enabled the early internet to flourish. But Curtis and Kelly argue that the architecture of today’s technology — with personalized feeds, targeted engagement loops, and algorithmic steering — bears no resemblance to the online world Congress sought to protect nearly three decades ago.
Curtis put it bluntly during an interview with the Deseret News, noting: “If they’re responsible for something going out that caused harm, they are responsible. So think twice before you magnify. Why do these things need to be magnified at all?”
The bill arrives at a moment when lawmakers from both parties are increasingly skeptical of social media companies and are searching for ways to curb the influence of their algorithmic systems. The question now is whether Congress is ready to take on a reform that has eluded lawmakers for years — and whether the tech industry is prepared for what could be the most significant shift in internet liability since the 1990s.