On Monday, Ethereum creator Vitalik Buterin offered his own take on “techno-optimism,” inspired by Marc Andreessen, who opined about AI in his Techno-Optimist Manifesto in October. While Buterin agreed with Andreessen’s positive outlook, he also stressed the importance of how AI is developed and the technology's future direction.

Buterin acknowledged the existential risk posed by artificial intelligence, including the possibility that it could cause the extinction of the human race.

“This is an extreme claim: as much harm as the worst-case scenario of climate change, or an artificial pandemic or a nuclear war, might cause, there are many islands of civilization that would remain intact to pick up the pieces,” he said.

“But a superintelligent AI, if it decides to turn against us, may well leave no survivors and end humanity for good,” Buterin said. “Even Mars may not be safe.”


Buterin pointed to a 2022 survey by AI Impacts, which said between 5% and 10% of participants believe humans face extinction from AI or from humans' failure to control it. He said a security-focused, open-source movement would be better suited to lead AI development than closed, proprietary corporations and venture capital funds.

“If we want a future that is both superintelligent and ‘human’—one where human beings are not just pets, but actually retain meaningful agency over the world—then it feels like something like this is the most natural option,” he said.

What’s needed, Buterin continued, is active human intention to choose the technology's direction and outcome. “The formula of 'maximize profit' will not arrive at them automatically,” he said.

Buterin said he loves technology because it expands human potential, pointing to the history of innovations from hand tools to smartphones.


“I believe that these things are deeply good, and that expanding humanity's reach even further to the planets and stars is deeply good, because I believe humanity is deeply good,” Buterin said.

Buterin said that while he believes transformative technology will lead to a brighter future for humanity, he rejects the notion that the world should stay as it is today, only with less greed and more public healthcare.

“There are certain types of technology that much more reliably make the world better than other types of technology,” Buterin said. “There are certain types of technology that could, if developed, mitigate the negative impacts of other types of technology.”

Buterin cautioned against a rise in digital authoritarianism and against surveillance technology, controlled by a small cabal of technocrats, being used against those who defy or dissent against the government. He said most people would rather see highly advanced AI delayed by a decade than monopolized by a single group.

“My basic fear is that the same kinds of managerial technologies that allow OpenAI to serve over a hundred million customers with 500 employees will also allow a 500-person political elite, or even a 5-person board, to maintain an iron fist over an entire country,” he said.

While Buterin said he is sympathetic to the effective accelerationism (also known as “e/acc”) movement, he has mixed feelings about its enthusiasm for military technology.

“Enthusiasm about modern military technology as a force for good seems to require believing that the dominant technological power will reliably be one of the good guys in most conflicts, now and in the future,” he said, citing the idea that military technology is good because it's being built and controlled by America and America is good.

“Does being an e/acc require being an America maximalist, betting everything on both the government's present and future morals and the country's future success?” he said.


Buterin cautioned against giving “extreme and opaque power” to a small group of people in the hope they will use it wisely, preferring instead a philosophy of “d/acc,” in which the “d” stands for defense, decentralization, democracy, and differential. This mindset, he said, could appeal to effective altruists, libertarians, pluralists, blockchain advocates, and solarpunks and lunarpunks alike.

“A defense-favoring world is a better world, for many reasons,” Buterin said. “First of course is the direct benefit of safety: fewer people die, less economic value gets destroyed, less time is wasted on conflict.

"What is less appreciated though is that a defense-favoring world makes it easier for healthier, more open and more freedom-respecting forms of governance to thrive," he concluded.

While he emphasized the need to build and accelerate, Buterin said society should regularly ask what it is accelerating towards. He suggested that the 21st century may be “the pivotal century” for humanity, one that could decide its fate for millennia.

“These are challenging problems,” Buterin said. “But I look forward to watching and participating in our species' grand collective effort to find the answers.”

Edited by Ryan Ozawa.
