While Meta works to align its fortunes with the AI gold rush—most recently introducing celebrity-inspired chatbots with whom users can interact—it remains on the hook for a more fundamental problem with its business model: kids love its platforms.

A coalition of 34 U.S. states is now suing the social media giant, alleging that Facebook and Instagram are inappropriately manipulating the children who use them.

Attorneys general from states including California, New York, Ohio, South Dakota, Virginia, and Louisiana accuse Meta of using its algorithms to encourage compulsive use and harm children’s mental health with features like the “Like” button, according to a report in Deadline.

The lawsuit, which runs more than 200 pages, calls Meta’s claims of a safe experience for young users false and misleading.

“Over the past decade, Meta—itself and through its flagship Social Media Platforms Facebook and Instagram (its Social Media Platforms or Platforms)—has profoundly altered the psychological and social realities of a generation of young Americans,” the legal filing reads. “Meta has harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens.”

Meta has already harnessed AI to address trust and safety issues on its platforms, including moderating harmful content, preventing adults from harassing minors via private messages, and detecting and removing child exploitation content.

In a response provided to Decrypt, a Meta spokesperson said the company has the same concerns as the state attorneys general relating to safe, positive experiences for young people, saying it has “already introduced over 30 tools to support teens and their families.”

“We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path,” the spokesperson added.

Meta further asserted that research on social media's negative impact is inconclusive and does not support the suggestion that social media use causes teen mental health issues. The company also noted that users can hide like counts so they aren’t visible to others on posts.

Meta said it had made several changes so “teens can express themselves in a safe environment,” including age verification, automatically setting teens' accounts to private, automated interventions that prevent suspicious adults from engaging with teens, and not allowing content that promotes suicide, self-harm, and eating disorders.

Nonetheless, the government plaintiffs are moving forward with litigation.

“This action is in the public interest of the filing states,” the attorneys general write. “Meta’s unlawful acts and practices affect a significant number of consumers in the filing states [and] these acts and practices have caused and will continue to cause adverse effects to consumers in the filing states.”

The states’ attorneys demand damages, restitution, and compensation in varying amounts for each state listed in the filing, ranging from $5,000 to $25,000 per alleged incident.

Meta says the best path forward is to develop broad and universal policies that apply to all social platforms.

“The issues identified by the attorneys general lend themselves to cross-industry standards for young people and the need to work with companies across the industry in addressing these topics,” Meta said in an email.

Meta is one of several tech behemoths, including Microsoft and Google, investing heavily in generative artificial intelligence.

In September, during its annual Meta Connect conference, CEO Mark Zuckerberg announced several new AI-driven applications being added to its suite of social media platforms, including 28 AI characters for users to interact with, played by celebrities such as Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka.
