By Jason Nelson
A surge in new users to social media platform BlueSky has also brought a rise in “harmful content,” leading to a mass moderation campaign to purge images from the network, the platform said on Monday.
“We're experiencing a huge influx of users, and with that, a predictable uptick in harmful content posted to the network,” BlueSky’s Safety account said. “As a result, for some very high-severity policy areas like child safety, we recently made some short-term moderation choices to prioritize recall over precision.”
After President-elect Donald Trump’s victory earlier this month, millions of users abandoned X, the platform formerly known as Twitter, in search of alternatives.
Many migrated to alternative social media platforms, with 35 million joining Meta’s Threads and 20 million flocking to BlueSky—the decentralized social media platform launched by former Twitter CEO Jack Dorsey—during the past three weeks alone.
The flood of new users added to the more than one million Brazilians who flocked to BlueSky after a judge in the South American nation banned X in September.
BlueSky saw another surge in October after X owner Elon Musk said tweets could be used to train the Grok AI.
Along with its new users, however, BlueSky earlier this month reported a surge in spam, scams, and “trolling activity,” as well as a troubling rise in child sexual abuse material.
According to a report by tech website Platformer, BlueSky had two confirmed cases of child-oriented sexual content posted to the network in all of 2023. On Monday alone, there were eight confirmed cases.
“In the past 24 hours, we have received more than 42,000 reports (an all-time high for one day). We’re receiving about 3,000 reports/hour. To put that into context, in all of 2023, we received 360k reports,” BlueSky said.
BlueSky said that its mass moderation might have resulted in “over-enforcement” and account suspensions. Some of the wrongly suspended accounts were reinstated, while others could still file appeals.
“We’re expanding our moderation team as we grow to improve both the timeliness and accuracy of our moderation actions,” the company said.
To curb AI-generated deepfakes on its platform, BlueSky partnered with Los Angeles-headquartered nonprofit Thorn in January.
BlueSky deployed Thorn’s AI-powered Safer moderation technology, which detects child-oriented sexual content, including text-based material that "may indicate instances of threats that could lead to sexual harm against children."
X, which does allow adult content, said in May that it had also implemented Thorn’s Safer technology to combat child sexual abuse material on its site.
“We’ve learned a lot from our beta testing,” Thorn's VP of data science Rebecca Portnoff told Decrypt at the time.
“While we knew going in that child sexual abuse manifests in all types of content, including text, we saw concretely in this beta testing how machine learning/AI for text can have real-life impact at scale,” she said.
Edited by Sebastian Sinclair and Josh Quittner
Editor's note: Adds clarity over Thorn's status as a nonprofit.