A federal judge has slowed approval of Anthropic’s proposed $1.5 billion copyright settlement with authors who accused the artificial intelligence company of illegally using pirated books to train its chatbot, Claude.
At a hearing in San Francisco on Thursday, U.S. District Judge Araceli Martinez-Olguin stopped short of granting final approval to the deal and instead demanded additional information on several aspects of the proposed settlement, including attorneys’ fees and compensation for lead plaintiffs.
The agreement, which was granted preliminary approval last September by now-retired Judge William Alsup, is considered the largest known copyright settlement tied to generative artificial intelligence in the United States.
The case has become one of the most closely watched legal battles in the AI industry because it sits at the center of a growing conflict between technology companies racing to build large language models and copyright owners who argue their work has been exploited without permission or compensation.
Anthropic, backed by Amazon and Alphabet, is among several major AI firms facing lawsuits from authors, publishers, news organizations, and artists over how training data for AI systems is collected and used.
A Landmark AI Copyright Battle
The lawsuit against Anthropic was filed in 2024 by a group of authors who alleged the company used pirated versions of their books without authorization to train Claude, Anthropic’s flagship AI chatbot.
The plaintiffs argued the company copied and stored millions of copyrighted works in violation of U.S. copyright law as part of efforts to build increasingly sophisticated AI systems capable of generating human-like responses.
The litigation quickly evolved into one of the most consequential copyright cases confronting the AI sector.
Last June, Judge Alsup delivered a mixed ruling that was widely interpreted as an important early legal victory for AI developers, while still exposing Anthropic to potentially massive liability. Alsup ruled that Anthropic’s use of copyrighted books for AI model training qualified as “fair use” under U.S. copyright law, a finding that could have broad implications for the entire generative AI industry.
The fair-use doctrine allows limited use of copyrighted material without permission under certain circumstances, particularly when the use is deemed transformative.
The judge concluded that using books to train large language models was sufficiently transformative because the models were learning patterns and language relationships rather than reproducing the original works directly. However, Alsup simultaneously ruled that Anthropic may have violated copyright law by storing more than seven million pirated books inside what the court described as a “central library,” regardless of whether all the books were ultimately used in AI training.
That distinction became critical.
While the fair-use ruling reduced some legal risks for AI developers, the piracy-related claims still exposed Anthropic to potentially enormous financial damages.
A trial had been scheduled for December to determine liability and damages connected to the alleged storage and acquisition of pirated materials, with potential exposure reportedly reaching into the hundreds of billions of dollars.
The proposed settlement was intended to resolve those claims before trial.
Settlement Faces Growing Opposition
Although the agreement covers more than 480,000 works, opposition to the settlement has intensified among segments of the writing and publishing community. During Thursday’s hearing, attorneys representing the authors said claims had been filed covering more than 92% of the works included in the settlement class.
Still, several authors have objected to the deal, arguing the payout is inadequate given the scale of the alleged infringement and the enormous commercial value now being generated by AI companies. Others contend the settlement disproportionately benefits attorneys while offering insufficient compensation to writers whose works were allegedly used without consent.
Some critics have also argued that the settlement structure improperly excludes certain copyright holders or limits future legal recourse.
The judge’s request for additional information on legal fees and lead-plaintiff payments suggests the court is taking those objections seriously before granting final approval.
The rapid expansion of generative AI has stirred tensions across the creative industries. Many authors, artists, and publishers fear that AI companies are building highly profitable products using copyrighted material gathered from the internet, digital libraries, and pirate repositories without licensing agreements or meaningful compensation.
Technology firms, meanwhile, argue that broad access to data is essential for developing competitive AI systems and that training models on copyrighted material constitutes lawful fair use. That clash has produced a wave of lawsuits like the one against Anthropic.
However, the proposed Anthropic settlement does not resolve all legal disputes surrounding the company. Several other lawsuits filed by authors and publishers remain active, with plaintiffs continuing to challenge Anthropic’s data practices and AI training methods.
In another sign of growing resistance to the settlement, more than 25 writers who opted out of the agreement filed a separate complaint against Anthropic in California on Wednesday. The group includes prominent authors such as Dave Eggers and Vendela Vida.
Their decision to pursue independent litigation indicates some copyright owners believe they may secure better outcomes through continued court battles rather than participating in the class settlement. The opt-out lawsuits also increase pressure on Anthropic because they preserve the possibility of additional claims even if the broader settlement is ultimately approved.
AI Industry Watches Closely
The legal battle is being watched closely across Silicon Valley because its implications extend far beyond Anthropic alone.
Virtually every major generative AI company faces similar allegations regarding the use of copyrighted material in model training.
OpenAI, Meta Platforms, Microsoft, and other technology firms are all confronting lawsuits from authors, publishers, musicians, visual artists, and news organizations.
The cases collectively could define the legal foundations of the AI economy.
If courts broadly uphold fair-use protections for AI training, technology companies may continue developing models using vast quantities of publicly available data with relatively limited licensing obligations. If courts ultimately narrow those protections or impose major financial penalties tied to copyrighted content acquisition, the economics of AI development could change dramatically.
The Anthropic case has become particularly important because it produced one of the first major judicial rulings distinguishing between AI training itself and the acquisition or storage of copyrighted materials.
That distinction may become increasingly central in future AI litigation.