In brief

  • Eliza Labs CEO Shaw Walters says current AI systems already meet his definition of AGI.
  • He warns that autonomous agents introduce serious security risks, including prompt injection and wallet compromises.
  • Walters argues that fully decentralized AI does not yet exist and that local execution comes closest.

Artificial general intelligence may have already arrived.

That’s according to Eliza Labs’ founder Shaw Walters, who spoke with Decrypt last week during ETHDenver. Walters said current leading models already meet his definition of artificial general intelligence, better known as AGI.

“I think that we're at the inflection point where we have AGI,” he said. “I completely believe that this is general intelligence. It's nothing like us. It learns in a completely different way, but it is intelligent nonetheless, and it is very general.”

Walters founded Eliza Labs, originally launched in 2024 as ai16z. The company created ElizaOS, an open-source framework that was among the first for building autonomous AI agents on blockchains.

First coined in 1997 and later popularized by researchers including SingularityNET founder Ben Goertzel, artificial general intelligence refers to a theoretical form of AI capable of matching or exceeding human cognitive abilities across a broad range of tasks.

While prominent AI developers, including OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, warn that AGI could arrive within the next decade, Walters rejected the idea that it will emerge as a single dominant system.

“I just do not see it as the AI God,” he said. “There's never going to be one, because life loves variants.”

Walters said he first began working on AI agents during the GPT-3 era, when structured outputs were unreliable.

“It felt like most of the work I was doing was putting training wheels on a baby,” he said. “Just keeping it on, getting it to respond with the structure that I need to parse out what the action was. It was an enormous problem.”

Progress came with the launch of GPT-4 in 2023, which Walters said enabled more reliable responses.

“It was incredibly good at giving me a structured response, and now I could actually do action calling,” he said. “That was where we went from barely working at all to being able to make an agent that does things, but it was still very limited.”

AI agents have moved from experimental chatbots to persistent systems embedded across crypto and consumer platforms. 

In February, OpenClaw surged to roughly 147,000 GitHub stars and spawned projects including Moltbook, an AI “social media” platform. Meanwhile, Coinbase launched “Agentic Wallets” on Base, and Fetch.ai said its agents can complete purchases using Visa infrastructure.

However, as agents gained root access and wallet control, Walters said the initial excitement gave way to deep security concerns.

As developers at ETHDenver promoted the benefits of AI agents in crypto, Walters warned that as AI advances toward AGI, it behaves less like a predictable machine and more like a fallible human, making foolproof safeguards impossible to engineer.

“At the end of the day, you're dealing with something that's more like a human and less like a calculator,” he said. “It's gonna do stupid things sometimes, and there’s just no way to build a super secure system that's going to keep them from doing something dumb.”
