Brace yourselves: the advent of a superintelligent AI is nigh.

A blog post coauthored by OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever warns that the development of artificial intelligence needs heavy regulation to prevent potentially catastrophic scenarios.

"Now is a good time to start thinking about the governance of superintelligence," the authors wrote, acknowledging that future AI systems could significantly surpass even AGI in capability. "Given the picture as we see it now, it's conceivable that within the next ten years, AI systems will exceed expert skill levels in most domains, and carry out as much productive activity as one of today's largest corporations."

Echoing concerns Altman raised in his recent testimony before Congress, the trio outlined three pillars they deemed crucial for strategic future planning.


The “starting point”

First, OpenAI argues that there must be a balance between control and innovation, pushing for a social agreement "that allows us to both maintain safety and help smooth integration of these systems with society."

Next, they championed the idea of an "international authority" tasked with system inspections, audit enforcement, safety standard compliance testing, and deployment and security restrictions. Drawing parallels to the International Atomic Energy Agency, they suggested what a worldwide AI regulatory body might look like.

Lastly, they emphasized the need for the "technical capability" to maintain control over a superintelligence and keep it "safe." What this entails remains nebulous, even to OpenAI, but the post warned against onerous regulatory measures like licenses and audits for technology that falls below the bar for superintelligence.

In essence, the idea is to keep the superintelligence aligned to its trainers’ intentions, preventing a “foom scenario”—a rapid, uncontrollable explosion in AI capabilities that outpaces human control.


OpenAI also warns of the potentially catastrophic impact that uncontrolled development of AI models could have on future societies. Other experts in the field have already raised similar concerns, from the so-called godfather of AI to the founders of AI companies like Stability AI, and even former OpenAI employees who worked on training the GPT LLMs. This urgent call for a proactive approach toward AI governance and regulation has caught the attention of regulators around the world.

The Challenge of a “Safe” Superintelligence

OpenAI believes that once these points are addressed, the potential of AI can be more freely exploited for good: “This technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us,” they said.

The authors also noted that the field is growing at an accelerating pace, and that this is unlikely to change. "Stopping it would require something like a global surveillance regime, and even that isn't guaranteed to work," the blog reads.

Despite these challenges, OpenAI's leadership remains committed to exploring the question, "How can we ensure that the technical capability to keep a superintelligence safe is achieved?" The world doesn’t have an answer right now, but it definitely needs one—one that ChatGPT can’t provide.
