Researchers using artificial intelligence have cracked one of the most widely used CAPTCHA security systems, which are designed to keep bots off websites by determining whether a user is human.
Using advanced machine learning methods, researchers from the Swiss university ETH Zurich solved 100% of the CAPTCHAs generated by Google’s popular reCAPTCHA v2 product while using a similar number of attempts as human users.
The results, published on Sept. 13, indicate that “current AI technologies can exploit image-based captchas,” the authors wrote.
“This has been coming for a while,” said Matthew Green, an associate professor of computer science at the Johns Hopkins Information Security Institute. “The entire idea of captchas was that humans are better at solving these puzzles than computers. We’re learning that’s not true.”
CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” The system used in the new study, Google’s reCAPTCHA v2, tests users by asking them to select images containing objects like traffic lights and crosswalks.
While the process the Swiss researchers used to defeat reCAPTCHA v2 was not fully automated and required some human intervention, a fully automated method of bypassing CAPTCHA systems could be right around the corner.
“I would not be surprised if that comes up in the near term,” Phillip Mak, a security operations center lead for a large government organization and an adjunct professor at New York University, told Decrypt.
In response to bots’ improved ability to solve CAPTCHAs, companies like Google, which released a third-generation reCAPTCHA product in 2018, are continually increasing the sophistication of their defenses.
“The bots are continually getting smarter,” said Forrester Principal Analyst Sandy Carielli. “What worked a few weeks ago might not work today.”
“The best players are continually evolving because they have to,” she said. “The evolution is in the detection models and putting forth the right responses in order to not just block bots but also make it so expensive for bots that they go elsewhere.”
Yet introducing challenges that are trickier for bots to solve risks adding complexity that makes the puzzles more inconvenient for humans.
Average users may “need to spend more and more time solving captchas and eventually might just give up,” Mak said.
While the future of CAPTCHA as a security technology remains uncertain, others, including Gene Tsudik, a professor of computer science at the University of California, Irvine, are more pessimistic.
“reCAPTCHA and its descendants should just go away,” Tsudik said. “There are some other techniques that are still okay, or at least better, but not significantly. So it’s still going to be an arms race.”
If CAPTCHA does fade, there could be serious consequences for a broad range of internet stakeholders unless cybersecurity firms are able to come up with novel solutions, Green said.
“It’s a huge problem for advertisers and the people operating services if they don’t know whether 50% of their users are real,” Green said. “Fraud was a big problem when you had to hire people to do it, and it’s a worse problem now that you can get AI to do the fraud for you.”
Edited by Sebastian Sinclair