Advancements in artificial intelligence are a double-edged sword for cybersecurity companies that work in decentralized finance. 

Forta Network monitors more than $40B in on-chain assets for clients such as Polygon, Compound, Lido, ZenGo, and crypto lending platform Euler Finance—which lost $200M last month in a cyber attack that Forta saw coming.

“Many of our machine learning models in the Euler attack detected [it] even before the funds were stolen, giving the Euler team essentially a few minutes heads up that, ‘Hey, your protocol is about to be attacked, you should take some action,’” Christian Seifert, Forta Network researcher, told Decrypt.

“Blockchain lends itself very well to these machine learning approaches because the data is public,” Seifert explained. “We're able to see every single transaction, every single account, we're able to see how much money that is actually lost—and that is a great precursor to train some of these models.”


Although the Forta system recognized the malicious activity on Euler’s blockchain protocol and sent alerts, Euler was not able to act quickly enough to shut its network down before funds were stolen. After negotiations with the hacker, however, customers were made whole.

“All of the recoverable funds taken from the Euler protocol on March 13 have now been successfully returned by the exploiter,” reads the post shared by Euler’s official Twitter account.

“Before exploitation, three critical Forta alerts were raised,” Forta said in a blog post. “Sadly in this case, the [Euler] attack still happened too fast for the standard manual response of a multisig pausing the contract.”

Seifert joined Forta in April 2022 following 15 years at Microsoft, where he was a Principal Group Manager overseeing the tech giant’s cybersecurity and threat detection team. Forta launched in 2021 with $23 million raised from Andreessen Horowitz, Coinbase Ventures, Blockchain Capital, and others.


While Forta can leverage its own machine learning to identify malicious activity on blockchain, Seifert sees the downside of AI in potential manipulation of ChatGPT—the chatbot developed by OpenAI that’s received $10B in investment from his former employer. 

“There [are] two sides of the coin,” Seifert says. “I think a lot of AI technology can be used to create more customized and compelling social engineering attacks.

“I can probably feed your LinkedIn profile to ChatGPT and ask it to author an email that entices you to click on that link, and it's going to be highly customized,” he explained. “So I think the click-through rates will increase with the malicious usage of some of these models.”

“On the good side, machine learning is an integral part to threat detection,” Seifert noted.

A report earlier this month from Immunefi found that hacks in the crypto industry increased 192% year-over-year, from 25 to 73 this past quarter. In another significant exploit, $10 million in Ethereum has been stolen since December.

Scott Gralnick is the director of channel partnerships at Halborn, a blockchain cybersecurity firm that’s raised $90M in funding and whose clients include Solana and Dapper Labs.

“New technology will always create a double-edged sword,” Gralnick said. “So as people will be trying to harness AI to try new attack vectors, so will our white-hat hackers ethically trying to protect the ecosystems at large by utilizing this technology to strengthen our armory of tools to protect these companies and ecosystems.”

Microsoft recently launched Security Copilot, a chat service that lets cybersecurity personnel ask questions about security incidents and receive AI-generated, step-by-step instructions for mitigating risks. Seifert expects cybersecurity employees to use AI language models to their advantage by, in essence, translating complex protocols into plain language.


“What is new now is these large language models that are able to understand context quite well, they're able to understand code quite well,” Seifert says. “I think that will open the door primarily for incident responders.

“If you think about an incident responder that is faced with an alert and transaction in the web3 space, they might not know what to look at. And so can a large language model be used to transform this very technical data into natural language, such that it is more accessible to a broader audience?” he asked. “Can that person then ask natural language questions to guide the investigation?”

A recent Pew Research study of 11,004 U.S. adults found that 32% of Americans believe artificial intelligence will have a mostly negative impact on workers over the next 20 years, while just 13% said AI will help the workforce more than harm it.

Count Seifert in the minority.

“One thing that folks always talk about is, 'Oh, is AI going to replace humans?' I don't think that is the case,” he says. “I think AI is a tool that can augment and support humans, but you always will need a human in the loop for some of these decisions being made."
