In its latest bid to curb unauthorized AI-generated deepfakes, Google is taking new steps to remove and demote websites in its search results that have been reported to contain illicit images, the technology and search giant said Wednesday.
An AI deepfake is media created using generative AI to produce videos, pictures, or audio clips that appear real. Many of these fake images depict celebrities like actress Scarlett Johansson, politicians like U.S. President Joe Biden, and, more insidiously, children.
“For many years, people have been able to request the removal of non-consensual fake explicit imagery from Search under our policies,” Google said in a blog post. “We’ve now developed systems to make the process easier, helping people address this issue at scale.”
Such reports, a Google spokesperson further explained to Decrypt, will affect the visibility of a site in its search results.
“If we receive a high volume of removal requests from a site, under this policy, that's going to be used as a signal to our ranking systems that that site is not a high-quality site—we'll incorporate that in our ranking system to demote the site,” the spokesperson said. “Broadly speaking, that's not the only way that we can go about limiting the visibility of that content in search.”
New Senate Bill Targets AI Deepfakes, Calls for Content Watermarks
In the latest bid to curb AI-generated deepfakes, a bipartisan group of U.S. senators led by Washington Senator Maria Cantwell announced on Thursday the introduction of the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act. The COPIED Act calls for a standardized method to watermark AI-generated content so that it can be easily detected. It also requires AI tool providers to allow creators to attach information detailing the origin or “provenance” of their conte...
With Google’s new update, when it receives a request to remove a non-consensual deepfake from its search results, Google will also work to filter similar explicit results on searches that include the name of the person being impersonated.
“What that means is that when you remove a result from search under our policies, in addition, what we'll do is on any query that includes your name—or would be likely to surface that page from search—all explicit results will be filtered,” the spokesperson said. “So not all explicit results will be removed, but all explicit results will be filtered on those searches, which prevents them from appearing on searches where it would be likely to show up.”
In addition to filtering its search results, Google said it will demote sites that have received a “high volume of removals for fake explicit imagery.”
“These protections have already proven to be successful in addressing other types of non-consensual imagery, and we've now built the same capabilities for fake explicit images as well,” Google said. “These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.”
AI Deepfakes Are a Threat to Businesses Too—Here's Why
As tech giants compete to bring artificial intelligence to the masses and own the burgeoning market, the AI arms race is fueling an increase in “deepfake” videos and audio—content that often looks or sounds convincingly legitimate, but is actually a fraudulent misrepresentation. And they’re impacting businesses too, according to a new report. Deepfakes are AI-generated creations like images, videos, and audio manipulated to deceive people. Scammers use deepfakes for fraud, extortion, or to damag...
A challenge of the new policy, Google acknowledged, is ensuring that consensual or “real” content, like nude scenes in a film, is not taken down along with illegal AI deepfakes.
“While differentiating between this content is a technical challenge for search engines, we're making ongoing improvements to better surface legitimate content and downrank explicit fake content,” Google said. Regarding child sexual abuse material (CSAM), the Google spokesperson said the company takes the subject very seriously and has a dedicated team to combat this illegal content.
"We have hashing technologies, where we have the ability technologically to detect CSAM proactively," the spokesperson said. "That's something that's sort of an industry-wide standard, and we're able to block it from appearing in search."
In April, Google joined Meta, OpenAI, and other generative AI developers in pledging to enforce guardrails that would keep their respective AI models from generating child sexual abuse material (CSAM).
Even More Celebrities Battle Deepfakes of Themselves
Cybercriminals have stepped up using AI tools to create deepfakes of celebrities, commandeering their likenesses to dupe their fans out of their money and cryptocurrency—with one report claiming such content grew by 87 percent in the last year. On Monday, YouTube giant Mr. Beast notified his over 24 million Twitter followers that he had been the victim of one such scheme—and questioned whether tech companies were capable of stopping them. “Lots of people are getting this deepfake scam ad of me,"...
As Google works to remove and make deepfake websites harder to find, deepfake experts like Ben Clayton, CEO of audio forensics firm Media Medic, say the threat will remain as technology evolves.
“Combating deepfakes is a moving target,” Clayton told Decrypt. “While Google’s update is positive, it requires ongoing vigilance and improvements to its algorithms to prevent the spread of harmful content. Balancing this with the need for free expression is tricky, but it’s essential to protect vulnerable groups.”
Clayton said that while deepfakes impact privacy and security, the technology can also have implications in legal cases.
“Deepfakes could be used to fabricate evidence or mislead investigations, which is a serious concern for our legal clients,” he said. “The potential for deepfakes to interfere with justice is a critical issue, highlighting the importance of advanced detection technologies and ethical standards in media.”
Policymakers have also taken steps to combat deepfakes. In July, Sen. Maria Cantwell, D-Wash., introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, which called for a standardized method of watermarking AI-generated content.
“Everyone deserves the right to own and protect their voice and likeness, no matter if you’re Taylor Swift or anyone else,” Sen. Chris Coons, D-Del., said in a statement. “Generative AI can be used as a tool to foster creativity, but that can’t come at the expense of the unauthorized exploitation of anyone’s voice or likeness.”
Entertainment industry leaders and technology companies have welcomed such protections, including the proposed NO FAKES Act.
SAG-AFTRA Applauds the Introduction of the NO FAKES Act
Read More: https://t.co/VA6nwXMqM1
— SAG-AFTRA NEWS (@sagaftranews) July 31, 2024
“The No Fakes Act is supported by the entire entertainment industry landscape, from studios and major record labels to unions and artist advocacy groups,” SAG-AFTRA said in a statement applauding the measure. “It is a milestone achievement to bring all these groups together for the same urgent goal.”
“Game over, A.I. fraudsters,” SAG-AFTRA President Fran Drescher added. “Enshrining protections against unauthorized digital replicas as a federal intellectual property right will keep us all protected in this brave new world.”
Edited by Ryan Ozawa.