U.S. Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) have introduced a bipartisan bill aimed at confronting what they call a “historic theft” of intellectual property and personal data by artificial intelligence firms.
The AI Accountability and Personal Data Protection Act, introduced this week, would make it illegal for AI companies to train their models on copyrighted content or personal information without explicit consent, and grant individuals the right to sue for unauthorized use.
“AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse,” Hawley said. “It’s time for Congress to give the American worker their day in court.”
The proposed law would significantly alter how generative AI firms such as OpenAI, Meta, Google, and Anthropic operate, requiring full disclosure of data usage, strict consent protocols, and legal pathways for creators and individuals to claim damages or block further misuse. The bill also mandates that firms identify which third parties receive data, if any, and allows for financial penalties and injunctive relief.
Blumenthal, co-sponsor of the legislation, said the law is urgently needed to halt the unchecked collection and monetization of people’s private and creative data.
“Tech companies must be held accountable — and liable legally — when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent,” he said.
Court Rulings Favoring AI Companies Intensify Demand for Legislation
The bipartisan proposal comes amid a growing wave of lawsuits against AI companies, and a pattern of court rulings that have largely sided with the tech firms on fair use grounds. Legal experts say the legislation responds to widening frustration among authors, musicians, publishers, and other content creators who argue that the courts have been too lenient in interpreting copyright law in the age of machine learning.
In June 2025, a federal judge in San Francisco ruled that Anthropic’s use of copyrighted books to train its Claude AI models was “highly transformative,” meaning the firm could claim protection under the fair use doctrine. While the court acknowledged concerns about “direct infringement” in Anthropic’s storage of full copies of copyrighted books, it stopped short of penalizing the company for the training process itself. The final judgment on damages and potential remedies is still pending.
Meta has found similar support in court. Authors including Richard Kadrey and Christopher Golden sued the company, alleging their books were used without consent to train Meta’s LLaMA models. There, too, the court deemed the training process transformative and therefore likely to fall under fair use, though the judge left open the possibility that retaining full copies of copyrighted texts in a training dataset might still incur liability, depending on how they are stored and used.
These rulings have raised concerns in creative industries. Many believe that the fair use doctrine, as currently interpreted, was never intended to cover the mass ingestion of copyrighted materials to build commercial AI products — and that the courts are allowing tech companies to circumvent copyright protections that would apply in any other context.
Landmark Legal Precedents Fueling Debate
In one of the few legal victories for rights holders, Thomson Reuters sued Ross Intelligence, alleging that Ross used its proprietary Westlaw legal headnotes to build an AI legal research assistant. The federal court agreed, ruling that Ross had infringed on Thomson Reuters’ copyrighted material, a decision widely viewed as a watershed moment for legal AI accountability. That case is currently in the damages phase.
The New York Times, too, filed a landmark lawsuit in December 2023 against OpenAI and Microsoft, accusing the companies of using its archived journalism — including paywalled content — to train GPT-4 and other models. While the case is ongoing, early filings suggest OpenAI will also argue that its use of content is transformative and protected under fair use, continuing a trend that lawmakers say shows the urgent need for new rules.
Meanwhile, musicians, screenwriters, and visual artists have echoed similar concerns in suits and congressional hearings, pointing to the wholesale scraping of social media posts, lyrics, and images as raw material for AI model training — all without compensation.
Legislation as the Next Battlefield
Given the legal momentum favoring AI developers, some lawmakers from both sides of the aisle argue that legislation is now the only reliable path forward to protect American creators.
“This bipartisan legislation would finally empower working Americans who now find their livelihoods in the crosshairs of Big Tech’s lawlessness,” Hawley said.
The bill is expected to face stiff opposition from the tech industry, which has long maintained that scraping public data for AI training is not only legal but essential for innovation and competitiveness. Companies are also likely to point to existing rulings as validation of their data practices.
Whether the Hawley-Blumenthal bill becomes law or serves as a catalyst for broader AI regulation, it signals a sharp turn in Washington’s posture toward Silicon Valley — one that places authors, journalists, artists, and everyday citizens back at the center of the conversation over ownership and control in the AI era.