Anthropic Announces Plans To Train AI Chatbot Claude With User Data, Reaches Tentative Settlement in AI Copyright Case

Anthropic has announced that it is changing its Consumer Terms and Privacy Policy, with plans to begin training its AI chatbot Claude with user data.

Under the updated terms, new users will be presented with the option to opt out at signup, while existing users will encounter a pop-up notification titled “Updates to Consumer Terms and Policies.” The notification includes a toggle labeled “You can help improve Claude.” Leaving it checked permits Anthropic to use conversations for training, while unchecking it prevents chats from being included.

For users who accept the new terms, all new or resumed chats become eligible for use by Anthropic in training. Users must make their choice, either opting in or opting out, by September 28, 2025, to continue accessing Claude.

For those wishing to review or change their decision later, the control is also available in Claude’s Settings under the Privacy option, where users can toggle off “Help improve Claude.”

Anthropic explained that the policy shift is designed to help it deliver “even more capable, useful AI models” while simultaneously reinforcing safeguards against misuse such as scams, disinformation, and abusive content. The updated terms will apply across all consumer-facing plans, including Claude Free, Pro, and Max. However, they will not extend to services governed by separate commercial contracts, such as Claude for Work or Claude for Education.

The new policy also includes a significant change in data retention. If users opt in to sharing conversations for training, Anthropic will now keep that data for five years. Deleted conversations will not be used for model training. For those who opt out, the company will maintain the current policy of storing data for 30 days for security and abuse-monitoring purposes.

Anthropic says that a “combination of tools and automated processes” will be deployed to filter sensitive information before it is used in training, and it emphasized that no user data will be sold or provided to third parties.

“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” Anthropic wrote in the blog post. “We do not sell users’ data to third parties.”
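Anthropic has not published the details of those tools. Purely as an illustration of what such filtering can involve, the Python sketch below applies regex-based redaction of obvious identifiers; the patterns and placeholder tags are hypothetical, not Anthropic's actual pipeline, and production systems typically layer machine-learning entity detection on top of rules like these.

import re

# Hypothetical patterns for illustration only; not Anthropic's actual pipeline.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive-data pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane@example.com or +1 (555) 123-4567."))
# -> Reach me at [EMAIL] or [PHONE].

Rule-based redaction like this is cheap to run at the scale of chat logs, which is why it usually sits in front of more expensive classifiers rather than replacing them.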

Until now, Anthropic had not used user conversations for training Claude, except in cases where individuals explicitly submitted feedback. The new approach marks a major shift in strategy as the company positions Claude to better compete in the rapidly evolving AI marketplace.

At the same time, the move comes as Anthropic faces mounting legal pressure over copyright infringement. According to a court filing on Tuesday, the company has reached a preliminary settlement with a group of U.S. authors who accused it of unlawfully using their copyrighted works to train Claude. While the terms of the settlement have not been made public, analysts believe the decision reflects Anthropic’s effort to resolve disputes quietly and avoid protracted courtroom battles that could stall its growth.

The timing of the settlement alongside the policy change suggests a dual-track strategy: strengthening Claude with more robust training data while preemptively reducing exposure to further legal entanglements.

Anthropic Reaches Tentative Settlement in Landmark AI Copyright Case With Authors

Anthropic, the Amazon-backed artificial intelligence company behind the Claude chatbot, has agreed to a preliminary settlement with a group of U.S. authors, averting what legal experts say could have been one of the most financially catastrophic copyright trials in history.

According to a court filing published Tuesday, the deal is expected to be finalized on September 3, though its terms remain confidential. Anthropic declined to comment, while the authors’ legal team described the resolution as “historic.”

“This settlement will benefit all class members,” said Justin Nelson, a lawyer representing the plaintiffs. “We look forward to announcing details in the coming weeks.”

The lawsuit, filed in 2024 by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, accused Anthropic of illegally training its AI models on their copyrighted books. Initially, Anthropic appeared to be on solid footing. In June, Judge William Alsup of the U.S. District Court in California ruled that the company’s use of the books for training was protected under the doctrine of fair use.

But Alsup also determined that Anthropic had acquired many of those works through “shadow libraries” like LibGen, widely known as hubs for pirated materials. That finding opened the door to a class-action trial on copyright infringement. With roughly 7 million works in question and statutory damages starting at $750 per title, Anthropic faced theoretical liabilities stretching into the trillions — a doomsday scenario for the startup.
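The order of magnitude is straightforward to check: U.S. copyright law sets statutory damages at $750 to $30,000 per infringed work, rising to as much as $150,000 per work where the infringement is willful.

7,000,000 works × $750 = $5.25 billion (statutory floor)
7,000,000 works × $150,000 = $1.05 trillion (willful-infringement ceiling)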

“They had few defenses at trial, given how Judge Alsup ruled,” said Edward Lee, a Santa Clara University law professor. “So Anthropic was starting at the risk of statutory damages in ‘doomsday’ amounts.”

Why Anthropic Blinked

Observers say the settlement reflects both the scale of the legal risk and the broader uncertainty surrounding how U.S. courts will handle AI and copyright. Chris Buccafusco, a law professor at Duke University, told Reuters he was surprised by the move, given that Alsup’s fair use ruling gave Anthropic a foothold to defend itself.

“Given their willingness to settle, you have to imagine the dollar signs are flashing in the eyes of plaintiffs’ lawyers around the country,” Buccafusco said.

James Grimmelmann, a digital law professor at Cornell, added that Anthropic’s unique situation — a looming December trial and potentially astronomical damages — likely pushed the company toward compromise.

“It’s possible that this settlement could be a model for other cases, but it really depends on the details,” he said.

Authors Left Waiting

The settlement came as many authors were only just learning they could be part of the lawsuit. Earlier this month, the Authors Guild issued a notice alerting writers that they might qualify as claimants, with lawyers scheduled to submit a list of affected works to the court by September 1. That meant many writers had little visibility into the negotiations.

“The big question is whether there is a significant revolt from within the author class after the settlement terms are unveiled,” Grimmelmann said, calling author reactions a “barometer” of wider copyright sentiment.

Ripple Effects Across AI Copyright Battles

The settlement does not end Anthropic’s copyright troubles. The company faces separate lawsuits from record labels, including Universal Music Group, which allege it pirated millions of song lyrics to train Claude. Plaintiffs in that case recently claimed Anthropic used BitTorrent to download music illegally.

Meanwhile, OpenAI, Microsoft, and Meta are fighting their own copyright battles, with courts just beginning to address how fair use applies in the age of generative AI. Legal experts say the Anthropic deal delays, rather than settles, the biggest unresolved question: whether large-scale ingestion of copyrighted materials without permission can be deemed lawful.

By settling, Anthropic avoids being the first test case on appeal. “This removes an early opportunity for a federal appeals court to weigh in on fair use,” said Grimmelmann. “That decision would have been binding on other cases and could have fast-tracked the issue to the Supreme Court.”

The confidential deal, for now, spares Anthropic a potentially ruinous verdict, but it also leaves other AI companies to fight the next round of legal battles without the clarity of precedent.
