As AI image generators become more advanced, spotting deepfakes is becoming more challenging than ever. Law enforcement and global leaders continue to sound the alarm about the dangers of AI-generated deepfakes on social media and in conflict zones.

"We're getting into an era where we can no longer believe what we see," Marko Jak, co-founder, and CEO of Secta Labs, told Decrypt in an interview. "Right now, it's easier because the deep fakes are not that good yet, and sometimes you can see it's obvious."

According to Jak, we may be less than a year from the point at which a faked image can no longer be discerned at first glance. And he should know: Jak is the CEO of an AI image-generation company.


Jak co-founded Secta Labs in 2022; the Austin-based generative AI startup focuses on creating high-quality AI-generated images. Users can upload pictures of themselves and turn them into AI-generated headshots and avatars.

As Jak explains, Secta Labs views users as the owners of the AI models generated from their data, with the company acting merely as a custodian that helps create images from those models.

The potential misuse of more advanced AI models has led world leaders to call for immediate action on AI regulation and prompted companies to hold their most advanced tools back from the public.

Last week, after announcing its new Voicebox AI-generated voice platform, Meta said it would not release the tool to the public.


"While we believe it is important to be open with the AI community and to share our research to advance the state of the art in AI,” the Meta spokesperson told Decrypt in an email. “It’s also necessary to strike the right balance between openness with responsibility."

Earlier this month, the U.S. Federal Bureau of Investigation warned of AI deepfake extortion scams in which criminals use photos and videos taken from social media to create fake content.

The answer to fighting deepfakes, Jak said, may lie not in spotting them but in exposing them.

"AI is the first way you could spot [a deepfake]," Jak said. "There are people building artificial intelligence that can you can put an image into like a video and the AI can tell you if it was generated by AI."

Generative AI and the potential use of AI-generated images in film and television are heated topics in the entertainment industry. SAG-AFTRA members voted to authorize a strike before entering contract negotiations, with artificial intelligence a significant concern.

Jak added that the challenge is the AI arms race now unfolding: as detection technology improves, bad actors create ever more convincing deepfakes to counter the tools designed to catch them.

Acknowledging that blockchain has been overused—some might say overhyped—as a solution for real-world problems, Jak said the technology and cryptography might solve the deepfake problem.

But while technology can solve many issues with deepfakes, Jak said a lower-tech solution, the wisdom of the crowd, might be the key.


"One of the things I saw that Twitter did, which I think was a good idea is the community notes, which is where people can add some notes to give context to someone's tweet," Jak said. "A tweet can be misinformation just like a deepfake can be," he said. Jak added that it would benefit social media corporations to think of ways to leverage their communities to validate whether the circulated content is authentic.

"Blockchain can address specific issues, but cryptography could help authenticate an image's origin," he said. "This could be a practical solution, as it deals with the source verification rather than image content, regardless of how sophisticated the deepfake."
