This week, two of tech's most influential voices offered contrasting visions of artificial intelligence development, highlighting the growing tension between innovation and safety.
OpenAI CEO Sam Altman revealed in a blog post Sunday evening about his company's trajectory that OpenAI has tripled its user base to over 300 million weekly active users as it races toward artificial general intelligence (AGI).
"We are now confident we know how to build AGI as we have traditionally understood it," Altman said, claiming that in 2025, AI agents could "join the workforce" and "materially change the output of companies."
Altman says OpenAI is headed toward more than just AI agents and AGI, saying that the company is beginning to work on "superintelligence in the true sense of the word."
A timeframe for the delivery of AGI or superintelligence is unclear. OpenAI did not immediately respond to a request for comment.
But hours earlier on Sunday, Ethereum co-creator Vitalik Buterin proposed using blockchain technology to create global failsafe mechanisms for advanced AI systems, including a "soft pause" capability that could temporarily restrict industrial-scale AI operations if warning signs emerge.
Crypto-based security for AI safety
Buterin's post centers on "d/acc," or decentralized/defensive acceleration. In the simplest sense, d/acc is a variation on e/acc, or effective accelerationism, a philosophical movement espoused by high-profile Silicon Valley figures such as a16z's Marc Andreessen.
Buterin's d/acc also supports technological progress, but it prioritizes developments that enhance safety and human agency. Unlike e/acc, which takes a "growth at any cost" approach, d/acc focuses on building defensive capabilities first.
"D/acc is an extension of the underlying values of crypto (decentralization, censorship resistance, open global economy and society) to other areas of technology," Buterin wrote.
Looking back at how d/acc has progressed over the past year, Buterin wrote that a more cautious approach toward AGI and superintelligent systems could be implemented using existing crypto mechanisms such as zero-knowledge proofs.
Under Buterin's proposal, major AI computers would need weekly approval from three international groups to keep running.
"The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices," Buterin explained.

The system would work like a master switch: either all approved computers run, or none do, preventing anyone from enforcing the pause selectively.
"Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers," Buterin noted, describing the system as a form of insurance against catastrophic scenarios.
In any case, OpenAI's explosive growth since 2023, from 100 million to 300 million weekly users in just two years, shows how rapidly AI adoption is progressing.
Reflecting on OpenAI's evolution from an independent research lab into a major tech company, Altman acknowledged the challenges of building "an entire company, almost from scratch, around this new technology."
The proposals reflect broader industry debates around managing AI development. Proponents have previously argued that implementing any global control system would require unprecedented cooperation among major AI developers, governments, and the crypto sector.
"A year of 'wartime mode' can easily be worth a hundred years of work under conditions of complacency," Buterin wrote. “If we have to limit people, it seems better to limit everyone on an equal footing and do the hard work of actually trying to cooperate to organize that instead of one party seeking to dominate everyone else.”
Edited by Sebastian Sinclair