The same week OpenAI commanded global attention for releasing its latest AI model, prominent executives Ilya Sutskever and Jan Leike announced that they were leaving the company. Coupled with the February departure of Andrej Karpathy, it appears the people most committed to safe, human-centered AI development have left the company.

Has the top name in AI lost its brake pedal in the intense race to dominate the massively disruptive industry?

The failed coup

It was late 2023 when Sutskever, co-founder and former chief scientist, was credited with masterminding a controversial move to oust CEO Sam Altman over alleged concerns that Altman was cavalier about AI safety protocols. Altman's dramatic removal led to breathless headlines and widespread rumors, but he was restored to his post a week later.


Sutskever soon issued an apology and resigned from OpenAI's board, and he made no public statements or appearances afterward.

Indeed, when OpenAI presented its much-hyped product update on Monday, Sutskever was notably absent.

Sutskever announced his official departure just two days later. His resignation statement was cordial, as was Altman’s public acknowledgement.

“After almost a decade, I have made the decision to leave OpenAI,” Sutskever wrote. “The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of (Sam Altman, Greg Brockman and Mira Murati).

“It was an honor and a privilege to have worked together, and I will miss everyone dearly,” he continued, adding that he was moving on to focus on a personally meaningful project.


“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Sam Altman said. “OpenAI would not be what it is without him.” Altman also replied to Sutskever’s announcement with a brief, warm message.

Shortly after, OpenAI announced that Jakub Pachocki would be filling Sutskever’s position. Pachocki previously served as Director of Research, a more technical role focused mostly on scaling AI.

But there may be something more to the changing of the guard. When Sutskever announced his resignation, he followed only a few Twitter accounts, including OpenAI and a dachshund meme account. Someone named Adam Sulik, a self-described “AI centrist,” replied to the announcement, suggesting that Sutskever ignore any non-disclosure agreements he might still be subject to and share insider information in the name of humanity.

Sutskever followed Sulik after his comment... then unfollowed him a few hours later.

As it turned out, Sutskever wasn't alone in heading for the exit. Just hours later, Jan Leike—who co-founded OpenAI's "superalignment" team with Sutskever to ensure ethical, long-term AI development—resigned with a frosty tweet simply stating, "I resigned."

No pleasantries, no praise for OpenAI, and no reactions from OpenAI execs.


Leike had worked at Google DeepMind before joining OpenAI. At the latter firm, the superalignment team focused on ensuring that cutting-edge AI systems expected to meet or exceed human intelligence remain aligned with human values and intentions.

Not much is known about this group of AI safety sentries, however. Beyond naming its two leaders, OpenAI has disclosed little about the unit other than that it comprises “researchers and engineers from [its] previous alignment team, as well as researchers from other teams across the company.” The research team included Yuri Burda, Adrien Ecoffet, Nat McAleese, Collin Burns, Bowen Baker, Pavel Izmailov (resigned), and Leopold Aschenbrenner (also resigned).

Sulik also commented on Leike’s resignation, sharing his concern about the timing of the events.

“Seeing Jan leave right after Ilya doesn’t bode well for humanity’s safe path forward,” he tweeted.

OpenAI didn’t immediately respond to a request for comment from Decrypt.

These departures come months after OpenAI co-founder Andrej Karpathy left the company, saying in February that he wanted to pursue personal projects. Karpathy worked as a research scientist at OpenAI, contributing to projects ranging from computer vision applications to AI assistants, along with key developments in the training of ChatGPT. He took a five-year break from the company between 2017 and 2022 to lead AI development at Tesla.


With these three departures, OpenAI is left without some of the most important minds pursuing an ethical approach toward AGI.

Decels out, new deals in

There are signs that the failed ouster of Altman removed resistance to more lucrative but ethically cloudy opportunities.

Shortly after reinstating Altman, for example, OpenAI loosened restrictions on using its tech for potentially harmful applications like weapons development—guidelines that previously banned such activities outright.

In addition to striking a deal with the Pentagon, OpenAI also opened a plugin store that lets anyone design and share personalized AI assistants, effectively diluting direct oversight. Most recently, the company began “exploring” the creation of adult content.

The diminishing strength of ethics advocates and guidelines extends beyond OpenAI. Microsoft axed its entire ethics and society team in January, and Google sidelined its Responsible AI taskforce that same month. Meta, meanwhile, disbanded its ethical AI crew. All three tech giants are now in a mad dash to dominate the AI market.

There’s clear reason for concern. AI products are already massively mainstream, a social phenomenon with billions of interactions per day. There appears to be growing potential for unaligned AI to adversely impact the development of future generations in business, political, community, and even family affairs.

The rise of philosophical movements like "effective accelerationism"—which values rapid development over ethical worries—exacerbates these concerns.

For now, the main remaining bastions of caution appear to be a mix of open source development, voluntary participation in AI safety coalitions such as the Frontier Model Forum and MLCommons, and outright government regulation, from the EU’s AI Act to the G7’s code of AI conduct.

