By Vismaya V
Three Tennessee minors have sued Elon Musk's xAI in a federal class action, alleging Grok generated child sexual abuse material using their real photographs and that the company knowingly designed its AI chatbot without industry-standard safeguards, then profited from the result.
The lawsuit, filed Monday in the Northern District of California, claims Grok was used to create and distribute AI-generated child sexual abuse material (CSAM) using their real images.
The minors, identified as Jane Doe 1, 2, and 3, said the altered content was shared across platforms, including Discord, Telegram, and file-sharing sites, causing lasting emotional distress and reputational harm.
"xAI—and its founder Elon Musk—saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children," the lawsuit reads. "Knowing the type of harmful, illegal content that could—and would—be produced, xAI released Grok, a generative artificial intelligence model with image and video-making features that would respond to prompts to create sexual content with a person's real image or video.”
The alleged victims describe incidents between mid-2025 and early 2026, when their real photos were altered into explicit images and circulated online.
In one instance, a victim was alerted by an anonymous user who had found folders of AI-generated content being traded among hundreds of users.
They allege a perpetrator accessed Grok through a third-party application that had licensed xAI's technology, a structure the filing says xAI deliberately used to distance itself from liability while continuing to profit from the underlying model.
At the height of public backlash in January, Musk wrote on X that he was "not aware of any naked underage images," adding that "when asked to generate images, it will refuse to produce anything illegal."
According to a finding by the Center for Countering Digital Hate, cited in the lawsuit, Grok produced an estimated 23,338 sexualized images of children between December 29, 2025, and January 9 of this year, roughly one every 41 seconds.
The alleged victims are seeking damages of at least $150,000 per violation under Masha’s Law, along with disgorgement of revenues, punitive damages, attorneys’ fees, and a permanent injunction, as well as restitution of profits under California’s Unfair Competition Law.
The lawsuit is one of the first to seek to hold an AI company directly liable for the alleged production and distribution of AI-generated CSAM depicting identifiable minors, and it arrives as Grok faces simultaneous investigations in the U.S., EU, UK, France, Ireland, and Australia.
"When a system is intentionally designed to manipulate real images into sexualized content, the downstream abuse is not an anomaly—it is a foreseeable outcome,” Even Alex Chandra, a partner at IGNOS Law Alliance, told Decrypt.
Chandra said courts may not accept a simple platform defense, noting a generative AI system could be “treated as a platform in terms of user interaction” but “evaluated as a product” when assessing safety design, with “particularly strict scrutiny” applied in CSAM cases due to heightened child protection obligations.
He also said courts will likely focus on safeguards, noting the company may be expected to show “risk assessments and safety-by-design measures before deployment,” along with guardrails that actively block harmful outputs.
Decrypt has reached out to Musk via xAI and SpaceX for comment.