Meta scored a significant legal victory on Wednesday after a federal judge ruled that its use of copyrighted books to train AI systems falls under the fair use doctrine, delivering a second major win for AI developers in less than 48 hours.
The ruling comes on the heels of a similar decision in favor of Anthropic on Monday, and together, the two judgments are shaping what could become a foundational legal precedent for how U.S. copyright law applies to artificial intelligence training.
In both cases, federal judges emphasized that the use of copyrighted material to train large language models (LLMs) was transformative, allowing the AI systems to generate new, original content without directly reproducing or commercially displacing the authors’ works. These rulings are now being viewed as pivotal moments in the ongoing legal battle between content creators and tech firms racing to dominate the AI space.
Meta Case: Authors Lost on Arguments and Evidence
In the lawsuit against Meta, 13 authors, including comedian Sarah Silverman, alleged that the company had illegally used their books to train its AI models. But U.S. District Judge Vince Chhabria granted summary judgment to Meta, finding that the authors had failed to make the right legal arguments or to provide sufficient evidence.
“The plaintiffs presented no meaningful evidence on market dilution at all,” Chhabria wrote. He added that Meta’s use of the books was transformative because the AI systems did not reproduce the authors’ styles or core creative elements, nor did they undermine the commercial value of the books in the marketplace.
Importantly, Judge Chhabria emphasized that his ruling should not be seen as a blanket endorsement of all AI training practices.
“This does not mean that Meta’s use of copyrighted materials is lawful in every case,” he said, leaving the door open for future legal challenges where plaintiffs provide better-developed evidence on how AI models may harm content markets.
Anthropic Ruling: AI Learning Like a Human
Just two days earlier, Judge William Alsup of the Northern District of California issued a similarly detailed ruling in favor of Anthropic, the Amazon-backed AI company behind the Claude chatbot. Alsup held that the company’s use of copyrighted books to train Claude was “quintessentially transformative” and akin to how human writers read books to develop their own style and ideas.
“The purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative,” Alsup wrote. “Like any reader aspiring to be a writer.”
While Alsup acknowledged that Anthropic had once used pirated digital copies of books as part of a larger corpus—referred to as a “central library”—he determined that training Claude itself was fair use. However, he ordered a trial to determine whether damages should be awarded for Anthropic’s initial unauthorized copying of the books, including in cases where pirated versions were downloaded and later replaced with purchased copies.
Alsup’s nuanced approach signaled that fair use may protect the training process, but not necessarily the means through which the materials were obtained.
Wider Legal Implications for AI and Copyright
These two rulings, though limited in scope, represent early but important legal benchmarks in what is expected to be a long and complex judicial reckoning over AI training practices. Courts are increasingly being asked to balance the rights of creators with the innovation imperatives of AI developers.
The outcomes are particularly relevant as other high-profile lawsuits loom. The New York Times is currently suing OpenAI and Microsoft, alleging that their AI models used the newspaper’s articles without permission. Similarly, Disney, Universal, and other media companies have filed suits over the unauthorized use of TV shows and films to train generative models built by companies such as Midjourney and Stability AI.
In his opinion, Judge Chhabria explicitly noted that future rulings may differ depending on the type of content involved.
“Markets for certain types of works (like news articles) might be even more vulnerable to indirect competition from AI outputs,” he wrote, hinting that the outcome in the Times case could swing differently.
Tech Industry Applauds, But Debate Far From Over
Anthropic and Meta both welcomed the rulings. A spokesperson for Anthropic said the decision “is consistent with copyright’s purpose in enabling creativity and fostering scientific progress.” Meta did not comment, but legal analysts say the back-to-back wins bolster the tech industry’s stance that training AI models on publicly available content can be legally permissible under U.S. law—at least when framed properly.
However, despite the momentum, legal experts and industry observers caution that these decisions are far from the final word. Fair use, by its nature, is a context-specific legal doctrine. Future lawsuits that better demonstrate market harm, unauthorized copying, or direct commercial substitution may well produce different outcomes.