Sam Altman started 2025 with a bold declaration: OpenAI has figured out how to create artificial general intelligence (AGI), a term commonly understood as the point where AI systems can comprehend, learn, and perform any intellectual task that a human can.

In a reflective blog post published over the weekend, he also said the first wave of AI agents could join the workforce this year, marking what he describes as a pivotal moment in technological history.

Altman painted a picture of OpenAI's journey from a quiet research lab to a company that claims to be on the brink of creating AGI.

The timeline seems ambitious, perhaps too ambitious: ChatGPT celebrated its second birthday just over a month ago, yet Altman suggests the next paradigm of AI models capable of complex reasoning is already here.

From there, it’s all about integrating near-human AI into society until AI beats us at everything.

Wen AGI, Wen ASI?

Altman’s elaboration on what AGI implies remained vague, and his timeline predictions raised eyebrows among AI researchers and industry veterans.

“We are now confident we know how to build AGI as we have traditionally understood it," Altman wrote. "We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

Altman’s explanation is vague because there is no standardized definition of AGI. The bar keeps getting raised as AI models grow more powerful, but not necessarily more broadly capable.

“When considering what Altman said about AGI-level AI agents, it's important to focus on how the definition of AGI has been evolving," Humayun Sheikh, CEO of Fetch.ai and Chairman of the ASI Alliance, told Decrypt.

"While these systems can already pass many of the traditional benchmarks associated with AGI, such as the Turing test, this doesn’t imply that they are sentient," Sheikh said. "AGI has not yet reached a level of true sentience, and I don’t believe it will for quite some time.”

The disconnect between Altman's optimism and expert consensus raises questions about what he means by "AGI." His elaboration on AI agents "joining the workforce" in 2025 sounds more like advanced automation than true artificial general intelligence.

“Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” he wrote.

But is Altman correct when he says AGI or agent integration will be a thing in 2025? Not everyone is so sure.

“There are simply too many bugs and inconsistencies with existing AI models that must be ironed out first,” Charles Wayn, co-founder of decentralized super app Galxe, told Decrypt. “That said, it’s likely a matter of years rather than decades before we see AGI-level AI agents.”

Some experts suspect Altman’s bold predictions might serve another purpose.

After all, OpenAI has been burning through cash at an astronomical rate, requiring massive investments to keep its AI development on track.

Promising imminent breakthroughs could help maintain investor interest despite the company's substantial operating costs, according to some.

That's quite a conflict of interest for someone claiming to be on the verge of one of humanity's most significant technological breakthroughs.

Still, others are backing Altman's claims.

“If Sam Altman is saying that AGI is coming soon, then he probably has some data or business acumen to back up this claim,” Harrison Seletsky, director of business development at digital identity platform SPACE ID, told Decrypt.

Seletsky said “broadly intelligent AI agents” may be a year or two away if Altman’s statements are true and the technology keeps evolving at its current pace.

The CEO of OpenAI hinted that AGI is not enough for him; his company is aiming at ASI, a superior state of AI development in which models exceed human capabilities at all tasks.

“We are beginning to turn our aim beyond that to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else,” Altman wrote in the blog.

While Altman didn’t elaborate on a timeframe for ASI, some expect that robots could substitute for all human labor by 2116.

Altman previously said ASI is only a matter of “a few thousand days,” yet experts from the Forecasting Institute give a 50% probability that ASI won’t be achieved until at least 2060.

Knowing how to reach AGI is not the same as being able to reach it.

Yann LeCun, Meta’s chief AI scientist, said humanity is still far from reaching such a milestone, citing limitations in current training techniques and in the hardware required to process such vast amounts of information.

Eliezer Yudkowsky, an influential AI researcher and philosopher, has also argued that the announcement may be a hype play that benefits OpenAI in the short term.

Human Workers vs AI Agents

Unlike AGI or ASI, agentic behavior is already here, and the quality and versatility of AI agents are increasing faster than many expect.

Frameworks like CrewAI, AutoGen, and LangChain have made it possible to build systems of AI agents with different capabilities, including the ability to work hand in hand with users, as the sketch below illustrates.
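To make that concrete, here is a minimal sketch of a two-agent pipeline built on CrewAI's Agent/Task/Crew primitives. The roles and task descriptions are hypothetical, and the exact API surface varies by framework and version, so read it as an illustration of the pattern rather than a production recipe.

```python
# Minimal two-agent workflow sketch using CrewAI-style primitives.
# Roles and tasks are hypothetical; assumes `pip install crewai` and
# an OPENAI_API_KEY in the environment for the default underlying model.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research analyst",
    goal="Collect recent public claims about AGI timelines",
    backstory="You scan announcements and extract the key claims.",
)

writer = Agent(
    role="Tech writer",
    goal="Turn research notes into a short, readable brief",
    backstory="You write plainly for a general audience.",
)

research_task = Task(
    description="List the main AGI timeline claims made in early 2025.",
    expected_output="A bullet list of claims, each attributed to its source.",
    agent=researcher,
)

writing_task = Task(
    description="Write a 150-word brief based on the research notes.",
    expected_output="A plain-English brief of roughly 150 words.",
    agent=writer,
)

# Tasks run in sequence, with each agent's output feeding the next;
# a human still reviews the final result before it ships.
crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
print(crew.kickoff())
```

The point of the pattern is the division of labor: each agent owns a narrow role, and the framework handles passing results between them while a person stays in the loop.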

What does it mean for the average Joe, and will this be a danger or a blessing for everyday workers?

Experts aren't too concerned.

“I don't believe we’ll see dramatic organizational changes overnight,” Fetch.ai's Sheikh said. “While there may be some reduction in human capital, particularly for repetitive tasks, these advancements might also address more sophisticated repetitive tasks that current robotic process automation (RPA) systems cannot handle.”

Seletsky also thinks AI agents will most likely handle repetitive tasks rather than those requiring some level of decision-making.

In other words, humans are safe if they can use their creativity and expertise to their advantage—and assume the consequences of their actions.

“I don’t think decision-making will necessarily be led by AI agents in the near future, because they can reason and analyze, but they don't have that human ingenuity yet,” he told Decrypt.

And there seems to be some degree of consensus, at least in the short term.

“The key distinction lies in the lack of ‘humanity’ in AGI’s approach. It’s an objective, data-driven approach to financial research and investing. This can help rather than hinder financial decisions because it removes some emotional biases that often lead to rash decisions,” Galxe's Wayn said.

Experts are already aware of the possible social implications of adopting AI agents.

Research from the City University of Hong Kong argues that generative AI and agents in general must collaborate with humans rather than replace them so that society can achieve healthy and continuous growth.

“AI has created both challenges and opportunities in various fields, including technology, business, education, healthcare, as well as arts and humanities,” the research paper reads. “AI-human collaboration is the key to addressing challenges and seizing opportunities created by generative AI.”

Despite this push for human-AI collaboration, companies have already started replacing human workers with AI agents, with mixed results.

Generally speaking, a human is still needed to handle the tasks agents cannot do because of hallucinations, training limitations, or a simple lack of contextual understanding.

As of 2024, nearly 25% of CEOs said they were excited by the idea of running a farm of digitally enslaved agents that do the same work humans do, without the labor costs.

Still, other experts counter that an AI agent could do almost 80% of what a CEO does, arguably better, so nobody is really safe.

Edited by Sebastian Sinclair
