On Friday, the board of artificial intelligence behemoth OpenAI abruptly fired Sam Altman, the company’s prominent CEO, stating only that Altman “was not consistently candid in his communications” and that it “no longer has confidence in his ability to continue leading OpenAI.”

While the board also stated that a search is already underway to find Altman’s permanent successor, it appointed Mira Murati, OpenAI’s chief technical officer, as interim CEO. 

While Altman has built a prominent public profile through the myriad events and government hearings he’s attended since ChatGPT’s debut rocked the world last November, Murati is a much less well-known figure. Who is she, and how might she steer the world’s most prominent AI company at a critical juncture for the emergent technology? 

Murati, an Albanian-born engineer who moved to Canada as a teenager, was until today primarily responsible for hands-on oversight of the development of numerous OpenAI products, including ChatGPT, image generator Dall-E, and code generator Codex. Prior to joining OpenAI in 2018, she worked at several other tech companies, including Elon Musk’s Tesla, where she helped develop the Model X crossover SUV.

Murati appears to share her predecessor Altman’s outlook on the importance of meaningful AI regulation, and on the potential for AI advancements to cause apocalyptic damage to humanity if not adequately contained.

“There is a possibility for really horrible things, even catastrophic events,” Murati told Fortune in an interview last month. “There is also the existential threat that, you know, it’s basically the end of civilization.”

That said, Murati has also expressed a resistance to slowing OpenAI’s steady clip of development and product launches in the name of preventing such dire outcomes.

Earlier this year, when a group of over 1,100 prominent technologists and public figures including Musk, Steve Wozniak, and Andrew Yang published an open letter beseeching AI companies to agree to a six-month pause on advanced development in the name of public safety, Murati pushed back, dismissing the request as naive. 

“The idea of a pause assumes that we deployed these models without a lot of care and sort of irresponsibly, but that’s really just not the case,” Murati said regarding the letter, per a report in Fast Company. 

Ultimately, while Murati has appeared to be in lockstep with Altman on the importance of working with governments and regulators to craft AI rules, she has also emphasized her belief that the world-changing technologies she’s helped design can only reach their full potential if allowed to grow and interact with the public. 

“It is tough to build these technologies in a vacuum, without contact with the real world,” Murati said in April. Feedback, she said, is essential to “make the model more robust and safer.”

Edited by Andrew Hayward
