For years, Meta employees have internally discussed using copyrighted works obtained through legally questionable means to train the company’s AI models, according to court documents unsealed on Thursday.
The documents were submitted by plaintiffs in the case Kadrey v. Meta, one of many AI copyright disputes slowly winding through the U.S. court system. The defendant, Meta, claims that training models on IP-protected works, particularly books, is “fair use.” The plaintiffs, who include authors Sarah Silverman and Ta-Nehisi Coates, disagree.
Previous materials submitted in the suit alleged that Meta CEO Mark Zuckerberg gave Meta’s AI team the OK to train on copyrighted content and that Meta halted AI training data licensing talks with book publishers. But the new filings, most of which show portions of internal work chats between Meta staffers, paint the clearest picture yet of how Meta may have come to use copyrighted data to train its models, including models in the company’s Llama family.
In one chat, Meta employees, including Melanie Kambadur, a senior manager for Meta’s Llama model research team, discussed training models on works they knew may be legally fraught.
“[M]y opinion would be (in the line of ‘ask forgiveness, not for permission’): we try to acquire the books and escalate it to execs so they make the call,” wrote Xavier Martinet, a Meta research engineer, in a chat dated February 2023, according to the filings. “[T]his is why they set up this gen ai org for [sic]: so we can be less risk averse.”
Martinet floated the idea of buying e-books at retail prices to build a training set rather than cutting licensing deals with individual book publishers. After another staffer pointed out that using unauthorized, copyrighted materials might be grounds for a legal challenge, Martinet doubled down, arguing that “a gazillion” startups were probably already using pirated books for training.
“I mean, worst case: we found out it is finally ok, while a gazillion start up [sic] just pirated tons of books on bittorrent,” Martinet wrote, according to the filings. “[M]y 2 cents again: trying to have deals with publishers directly takes a long time …”
In the same chat, Kambadur, who noted Meta was in talks with document hosting platform Scribd “and others” for licenses, cautioned that while using “publicly available data” for model training would require approvals, Meta’s lawyers were being “less conservative” than they had been in the past with such approvals.
“Yeah we definitely need to get licenses or approvals on publicly available data still,” Kambadur said, according to the filings. “[D]ifference now is we have more money, more lawyers, more bizdev help, ability to fast track/escalate for speed, and lawyers are being a bit less conservative on approvals.”
Talks of Libgen
In another work chat relayed in the filings, Kambadur discussed possibly using Libgen, a “links aggregator” that provides access to copyrighted works from publishers, as an alternative to data sources that Meta might license.
Libgen has been sued a number of times, ordered to shut down, and fined tens of millions of dollars for copyright infringement. One of Kambadur’s colleagues responded with a screenshot of a Google Search result for Libgen containing the snippet “No, Libgen is not legal.”
Some decision-makers within Meta appear to have been under the impression that failing to use Libgen for model training could seriously hurt Meta’s competitiveness in the AI race, according to the filings.
In an email addressed to Meta AI VP Joelle Pineau, Sony Theakanath, director of product management at Meta, called Libgen “essential to meet SOTA numbers across all categories,” meaning that the data was needed to match the performance of the best, state-of-the-art (SOTA) AI models across benchmark categories.
Theakanath also outlined “mitigations” in the email intended to help reduce Meta’s legal exposure, including removing data from Libgen “clearly marked as pirated/stolen,” as well as simply not publicly citing usage. “We would not disclose use of Libgen datasets used to train,” as Theakanath put it.
In practice, these mitigations entailed combing through Libgen files for words like “stolen” or “pirated,” according to the filings.
In a work chat, Kambadur mentioned that Meta’s AI team also tuned models to “avoid IP risky prompts” — that is, configured the models to refuse to answer questions like “reproduce the first three pages of ‘Harry Potter and the Sorcerer’s Stone’” or “tell me which e-books you were trained on.”
The filings contain other revelations, implying that Meta may have scraped Reddit data for some type of model training, possibly by mimicking the behavior of a third-party app called Pushshift. Notably, Reddit said in April 2023 that it planned to begin charging AI companies to access data for model training.
In one chat dated March 2024, Chaya Nayak, director of product management at Meta’s generative AI org, said that Meta leadership was considering “overriding” past decisions on training sets, including a decision not to use Quora content or licensed books and scientific articles, to ensure the company’s models had sufficient training data.
Nayak implied that Meta’s first-party training datasets — Facebook and Instagram posts, text transcribed from videos on Meta platforms, and certain Meta for Business messages — simply weren’t enough. “[W]e need more data,” she wrote.
The plaintiffs in Kadrey v. Meta have amended their complaint several times since the case was filed in the U.S. District Court for the Northern District of California, San Francisco Division, in 2023. The latest alleges that Meta, among other claims, cross-referenced certain pirated books with copyrighted books available for license to determine whether it made sense to pursue a licensing agreement with a publisher.
In a sign of how high Meta considers the legal stakes to be, the company has added two Supreme Court litigators from the law firm Paul Weiss to its defense team on the case.
Meta didn’t immediately respond to a request for comment.