If you believe what you see on the internet, you might think that President Biden has called for a national draft, and that both men and women will be selected to fight the war in Ukraine. In reality, it’s a lie perpetrated by a malicious deepfake video creator.
Ill-intentioned actors use artificial intelligence to create videos that impersonate politicians, celebrities, and other notable figures. At its most harmless, the technology yields goofy, obviously faked footage of what sounds like U.S. presidents playing Minecraft together; at its worst, it poses a genuine threat to national security.
AI imagines what would happen if Biden declares and activates the Selective Service Act and begins drafting 20 years old to war pic.twitter.com/896Htrtteu
— The Post Millennial (@TPostMillennial) February 27, 2023
In 2018, comedian turned Oscar-winning director Jordan Peele issued a warning to the public in the form of one of the first deepfakes to go viral. The video showcased Barack Obama uttering phrases that lay far beyond his typical discourse, including the unexpected, "Stay woke, bitches!"
Yale University graduate Rijul Gupta came across the video and was instantly struck by the power of AI. Infinite possibilities started racing through his head, both positive and negative, he recalled in a recent interview with Decrypt—and he realized that the world needed a company with an "ethical backbone" that was paying attention to the space.
As a result, he and fellow Yale grad and linguistics expert Emma Brown co-founded DeepMedia—a company that’s dedicated to unmasking deepfake technology.
DeepMedia has two products: DubSync, an AI translation and dubbing service, and DeepIdentify.AI, a deepfake detection service. The latter is its primary offering, which led to a recent $25 million three-year contract with the U.S. Department of Defense (DoD) as well as other undisclosed contracts with allied forces.
"We knew from day one that we wanted to work with governments, both domestic and foreign, [as well as] large, trusted institutions like the United Nations, of which we're now a partner,” Gupta told Decrypt. “[We wanted] to help make sure that—from the top down and the bottom up—ethics was built into the backbone of AI.”
Initially, DeepMedia was brought in by the DoD to educate the department on how the technology could be used to commit fraud and create political misinformation, said Gupta. Since then, DeepMedia has signed more contracts—such as with the United States Air Force—to incorporate its technology to help "keep American and international citizens safe."
How does it work?
DeepIdentify.AI looks at a photo or video, scanning for inconsistencies or inaccuracies that indicate the image or footage is a deepfake. Common telltale signs include an unnaturally quivering mouth, or shadows that would be physically impossible.
"We're able to track what a face should actually look and move like, and then we map it,” DeepMedia COO Emma Brown told Decrypt. “We're able to identify what's a normal movement for a face.”
A photo can be processed in five seconds, while a two-minute video would be "fully processed end-to-end with detailed analysis on faces, voices, [and] accents" in under five minutes, she explained. The end user will then be told how much manipulation was detected and where it is in the video or image.
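DeepMedia's actual API isn't documented in this article, so the client below is purely illustrative: the endpoint, request fields, and response shape are invented placeholders meant only to show what submitting a clip and reading back a manipulation report might look like.

```python
# Hypothetical client for a deepfake-detection service.
# Endpoint, parameters, and response fields are illustrative
# assumptions -- not DeepMedia's actual interface.
import requests

API_URL = "https://api.example.com/v1/detect"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def detect_deepfake(video_path: str) -> None:
    """Upload a video and print per-segment manipulation scores."""
    with open(video_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=300,  # a two-minute clip may take a few minutes
        )
    resp.raise_for_status()
    report = resp.json()

    # Assumed response shape: an overall score plus flagged segments.
    print(f"Manipulation probability: {report['manipulation_score']:.2%}")
    for seg in report.get("flagged_segments", []):
        print(f"  {seg['start']}s-{seg['end']}s: "
              f"{seg['artifact']} ({seg['confidence']:.0%})")

if __name__ == "__main__":
    detect_deepfake("clip.mp4")
```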
A clip showcasing how DeepMedia's DeepID deepfake detection tool works. Courtesy: DeepMedia
"Do you remember the fake Pope one that went viral? We were able to check what parts of it were real Pope and what parts were fake," Brown said. "There were certain things that could have never happened in real life. And those were what really flagged our detectors, in terms of shadows and angles."
The model detects deepfakes with "over 95%" accuracy, she claimed, thanks to its large training dataset. Of course, DeepMedia examines the AI deepfakes found online, but where its dataset really excels is in its synergy with DubSync—the translation and dubbing service.
"Our DubSync platform is essentially a deepfake generator. We have to build a generator in order to know what a good deepfake is," Brown explained. "And that's what feeds our deepfake detection."
Brown claims that DubSync's deepfake generation stays "about six to 12 months ahead of anyone else," ensuring the firm has the most cutting-edge data to train from. The aim is to prevent bad actors from creating deepfakes more advanced than its AI can detect. But it’s a constant battle to keep that lead.
"It's a cat-and-mouse game, for sure. Anybody that pretends it's not is not in the space, in a true sense," Brown said. "We're confident that we're going to be able to continue to detect, especially with our continued research and generation side of things."
DeepIdentify is available via DeepMedia’s API and other packages, but when governments and financial institutions sign with the company, they require more advanced functionality.
"We have existing defect detection tools, but the United States government, the DoD—these are the best people in the world at what they do. They demand not only the best in the world, but beyond the best in the world," Gupta explained. "[We are] actually conducting new research and figuring out new tools and new solutions."
Why should we care?
In its current state, deepfake technology is best known for silly parody videos, unrealistic presidential impersonations, and harmful faux pornography. But the technology could also be used to stoke real tension between states, potentially resulting in casualties, terror attacks, and other harmful or disruptive reactions.
"Deepfakes could be used as part of information operations to impact the course of wars, such as the March 2022 incident of a released deepfake where Ukraine's President Zelensky appeared to direct his troops to give up," U.S. national security lawyer and geopolitical analyst Irina Tsukerman told Decrypt. "He denied the authenticity of that video and provided evidence, and that video was not particularly well done—but future ones could be."
With the United States’ 2024 elections on the horizon, deepfake technology is the latest concern with the potential to sway voters, much as the Cambridge Analytica scandal did in 2018.
"State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately," Tsukerman explained. "Such incidents corrode public trust, negatively impact public discourse, create divisions, and could be used to sway elections or other critical decision-making processes."
Interference might not always be so direct, either. Deepfakes could also be used to deny reality, pushing back against factual information and events.
"There is also an issue of possible 'liar's dividend'—a form of disinformation by which deepfakes can be used to deny authenticity of original content, creating further obfuscation to the detriment of security concerns and discrediting friendly U.S. sources and assets, analysts, ally governments, or even officials," Tsukerman said.
This threat to national security is why the company says it is working with the United States DoD ecosystem, allied and partnered nations in Europe and Asia, as well as an unnamed "very large social media company" to help detect circulating deepfakes.
There’s been a rapid evolution of generative artificial intelligence technology and tools in the last year alone. As deepfake tech evolves, it’s sure to become even more difficult to identify fraudulent clips with the naked eye—which is precisely where detection firms like DeepMedia aim to help keep people safe and informed.
Edited by Ryan Ozawa and Andrew Hayward.