Meta’s Llama has memorized huge portions of Harry Potter

by Alan North


Meta’s Llama model has memorized Harry Potter and the Sorcerer’s Stone so well that it can reproduce verbatim excerpts from 42 percent of the book, according to a new study.

Researchers from Stanford, Cornell, and West Virginia University analyzed dozens of books from the now-infamous Books3 dataset, a collection of pirated books used to train Meta’s Llama models. Books3 is also at the center of a copyright infringement lawsuit against Meta, Kadrey v. Meta Platforms, Inc. The study’s authors say their findings could have major implications for AI companies facing similar lawsuits.

According to the research paper, the Llama 3.1 model “memorizes some books, like Harry Potter and 1984, almost entirely.” Specifically, the study found that Llama 3.1 has memorized 42 percent of the first Harry Potter book so well that it can reproduce verbatim excerpts at least 50 percent of the time. Overall, Llama 3.1 could reproduce excerpts from 91 percent of the book, though not as consistently.

“The extent of verbatim memorization of books from the Books3 dataset is more significant than previously described,” the paper said. But the researchers also discovered that “memorization varies widely from model to model and from book to book within each model, as well as varying in different parts of individual books.” For example, the study estimated that Llama 3.1 memorized only 0.13 percent of Sandman Slim by Richard Kadrey, one of the lead plaintiffs in the class-action copyright suit against Meta.

So, while some of the paper’s findings seem damning, don’t call it a smoking gun for plaintiffs in AI copyright infringement cases.


“These results give everyone in the AI copyright debate something to latch on to,” wrote journalist Timothy B. Lee in his Understanding AI newsletter. “Divergent results like these could cast doubt on whether it makes sense to lump J.K. Rowling, Richard Kadrey, and thousands of other authors together in a single mass lawsuit. And that could work in Meta’s favor, since most authors lack the resources to file individual lawsuits.”

Why is Llama able to reproduce some books more than others? “I suspect that the difference is because Harry Potter is a much more famous book. It’s widely quoted and I’m sure that substantial excerpts from it on third-party websites found their way into the training data on the web,” said James Grimmelmann, a professor of digital and information law at Cornell University, who was cited in the paper.

What this also shows, Grimmelmann said, is that “AI companies can make choices that increase or reduce memorization. It’s not an inevitable feature of AI; they have control over it.”

Meta and other AI companies have argued that using copyrighted works to train their models is protected under fair use, a complex legal doctrine. However, the extent of memorization could complicate those arguments.

“Yes, I do think that the likelihood that LLMs are memorizing more than previously thought changes the copyright analysis,” Robert Brauneis, a professor at the George Washington University Law School, said in an email to Mashable. He concluded that the study’s findings could ultimately weaken Meta’s fair use argument.

We asked Meta for comment on the study’s findings, and we’ll update this article if we receive a response.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

