AI company Anthropic agreed on Friday to a $1.5 billion settlement to resolve a sweeping class-action lawsuit brought by authors who alleged the company used pirated copies of their books to train its chatbot, Claude.
According to the proposed agreement, which is subject to judicial approval, the settlement provides roughly $3,000 per work for the estimated 500,000 books covered. If approved, it would be “the largest publicly reported copyright recovery in history.”
The Authors Guild, which represents thousands of writers, welcomed the outcome. On Friday, its CEO, Mary Rasenberger, called the settlement “an excellent result for authors, publishers, and rights-holders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”
The lawsuit was filed last year by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who later came to represent a broader class of writers and publishers after Anthropic was accused of downloading millions of pirated books to train its AI models.
In June, U.S. District Judge William Alsup ruled that while training AI chatbots on copyrighted books was not in itself illegal, Anthropic had wrongfully obtained more than 7 million digitized works from piracy websites, including Books3, Library Genesis, and the Pirate Library Mirror.
Had the company gone to trial in December and lost, analysts said the financial blow could have been devastating. “We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst with Wolters Kluwer.
The deal comes amid heightened scrutiny of AI companies. Last month, X Corp and X.AI filed an antitrust lawsuit against Apple and OpenAI, accusing them of monopolistic practices in smartphones and generative AI chatbots. And in Texas, Attorney General Ken Paxton recently launched an investigation into Meta and Character.ai over whether their chatbots misled children with deceptive claims of providing therapeutic support, raising concerns about privacy and data exploitation.