A closely watched lawsuit brought against OpenAI by several authors—including Sarah Silverman, Paul Tremblay, Christopher Golden, and Richard Kadrey—had almost all of its allegations dismissed by a federal judge.

Judge Araceli Martinez-Olguin of the Northern District of California, in court documents filed Monday, dismissed the plaintiffs’ claims of “vicarious copyright infringement,” saying there was not enough evidence to support them. The ruling was first reported by Bloomberg Law.

“Plaintiffs’ allegation that ‘every output of the OpenAI Language Models is an infringing derivative work’ is insufficient,” Judge Martinez-Olguin said. “Plaintiffs fail to explain what the outputs entail or allege that any particular output is substantially similar — or similar at all — to their books.”

“Accordingly, the Court dismissed the vicarious copyright infringement claim with leave to amend,” the court concluded.


Since the launch of GPT-4 in March 2023, OpenAI has faced repeated accusations that its AI models were trained on copyrighted material. The New York Times sued OpenAI in December 2023, claiming that the AI developer trained ChatGPT on its articles.

In September, Game of Thrones creator George R.R. Martin joined a lawsuit launched by the Authors Guild against OpenAI. Other authors joining Martin in the lawsuit include John Grisham, Jonathan Franzen, Jodi Picoult, and Michael Connelly. That lawsuit was followed by another in October filed by writers Ta-Nehisi Coates, Junot Díaz, and Andrew Sean Greer.

“While parts of that case were dismissed, one of the two remaining claims—the one about copyright infringement, which is the same as the claim in our suit—was not,” an Authors Guild spokesperson told Decrypt. “It’s a legally very solid argument.”

Nonetheless, Judge Martinez-Olguin said the lawsuit was short on details.


“Even if Plaintiffs provided facts showing Defendants’ knowing removal of [copyright management information] from the books during the training process,” Martinez-Olguin wrote, “Plaintiffs have not shown how omitting CMI in the copies used in the training set gave Defendants reasonable grounds to know that ChatGPT’s output would induce, enable, facilitate, or conceal infringement.”

According to the non-profit Copyright Alliance, copyright management information (CMI) is information about a copyrighted work, including the creator, owner, or “use of the work that is conveyed in connection with a copyrighted work.”

Judge Martinez-Olguin additionally dismissed the plaintiffs’ claim that OpenAI broke copyright law by generating ChatGPT responses without attribution. The court said omitting this information isn't a Digital Millennium Copyright Act (DMCA) violation by itself and that the claim again lacks details on how outputs use copyrighted material.

The court significantly pruned the authors’ case on a variety of other points, including claims of negligence, fraud, unjust enrichment, and unlawful business practices.

Judge Martinez-Olguin did allow the claim that OpenAI trained ChatGPT on copyrighted material without permission to move forward. The court also said plaintiffs can amend and refile the claims if they wish.

“Assuming the truth of Plaintiffs' allegations—that Defendants used Plaintiffs' copyrighted works to train their language models for commercial profit—the Court concludes that Defendants' conduct may constitute an unfair practice,” Judge Martinez-Olguin said. “Therefore, this portion of the UCL claim may proceed.”

Representatives for Silverman, Coates, and the Authors Guild did not immediately respond to Decrypt’s request for comment.

Authors are not the only artists pursuing lawsuits against OpenAI and other AI developers for copyright infringement. In October, a federal judge handed artists suing Midjourney and DeviantArt—including illustrator Sarah Andersen—a significant defeat after ruling that the plaintiffs did not provide enough evidence to support their claim of copyright infringement.


“Plaintiffs fail to allege specific plausible facts that DeviantArt played any affirmative role in the scraping and using of Andersen’s and others’ registered works to create the training images,” Judge William Orrick wrote at the time. “The complaint, instead, admits that the scraping and creation of training images was done by LAION at the direction of Stability, and that Stability used the training images to train Stable Diffusion.”

Edited by Ryan Ozawa.
