OpenAI has unveiled a new "Safety and Security Committee" that it says will ensure the company's cutting-edge artificial intelligence systems are designed and built in a safe and secure way. The company's CEO, Sam Altman, is among the leaders of the committee, even as OpenAI's overall direction comes increasingly under fire.

Announced less than a week after OpenAI made headlines for crossing Hollywood actress Scarlett Johansson, the committee effectively recreates the role of OpenAI’s “superalignment” team, which was intended to ensure the company remained true to its founding humanitarian vision. That team was disbanded amid the departures of some of the field’s most prominent specialists, including Jan Leike, who left the company alongside OpenAI cofounder Ilya Sutskever after saying the group “had been sailing against the wind.”

Yesterday, Leike announced that he was joining OpenAI competitor Anthropic, which was founded by Daniela and Dario Amodei along with other former OpenAI researchers, many of whom also left over disagreements with Altman's approach to AI safety.

The committee has been tasked with evaluating and enhancing OpenAI's safety and security processes. It will present its recommendations to the full board within 90 days, and OpenAI says it will provide a public update on the recommendations it adopts.

Can the new committee—ostensibly created to build guardrails around OpenAI’s aggressive AI development work—effectively mitigate risks with the company’s CEO among its leaders? Who are its members, and do they have experience in AI safety and security?

Out of alignment

Last year, Altman was briefly ousted as CEO by the company's board, which alleged a lack of transparency and a loss of confidence in his leadership. However, following an apparent employee revolt and pressure from investors, Altman was swiftly reinstated. That led to the departure of the three board members—including Sutskever—who initially voted for his removal.

With a reconstituted board firmly in Altman's corner, the company has clearly shifted to a more corporate, profit-driven approach, drifting from OpenAI's original mission to ensure AI “benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits.” Controversial moves of the past year include striking deals with the Pentagon for potential military applications, launching a marketplace for third-party chatbots, and even exploring the generation of adult content.

The dissolution of the company's "Superalignment" team, dedicated to mitigating long-term risks associated with AGI, sparked especially strong concern. That team's co-founders, Sutskever and Leike, both resigned after the launch of GPT-4o earlier this month, with Leike accusing OpenAI of abandoning its commitment to AI safety in favor of "shiny products."

Enter the new Safety and Security Committee, which is designed to restore confidence in OpenAI's processes and safeguards.

Meet the new guard

In addition to Altman, the new committee is led by individuals with clear loyalties to him as CEO: Bret Taylor, who was appointed to the OpenAI board when Altman returned, and Adam D'Angelo, who remained on the board after Altman's return. Rounding out the upper echelon is Nicole Seligman, a corporate lawyer best known for representing Lt. Colonel Oliver North during the Iran-Contra hearings and former US President Bill Clinton during his impeachment trial.

The rest of the committee is made up of researchers tasked with making sure OpenAI products remain mission-aligned. Here are some of its members, most of whom publicly backed Altman during his brief ouster:

Aleksander Madry, Head of Preparedness at OpenAI and an MIT professor, focuses his research on making AI more reliable and safe. When Altman tweeted about his departure, Madry replied with a heart emoji in support, then tweeted another heart celebrating his return alongside the phrase "So back."

Lilian Weng, Head of Safety Systems at OpenAI, has a background in machine learning and deep learning. She joined the company as a research scientist in February 2018, was promoted to Head of Applied AI Research in January 2021, and was appointed Head of Safety Systems in June 2023.

She also tweeted a heart in support of Altman's return and shared the rallying cry "OpenAI is nothing without its people," which became a popular slogan among Altman's supporters. She described his return as a rebirth for the company.

John Schulman, an OpenAI co-founder and the lead architect behind ChatGPT, focused on "post-training" the company's language models, fine-tuning their behavior for specific tasks. Combined with reinforcement learning, this approach enabled ChatGPT to generate more human-like and contextually appropriate responses. Schulman also publicly supported Altman during his brief departure.

Matt Knight, Head of Security at OpenAI, has a background in security engineering and co-founded Agitator, a finalist in DARPA's Spectrum Collaboration Challenge. Knight also tweeted in support of Altman, sharing both the heart emoji and the slogan.

Jakub Pachocki, Chief Scientist at OpenAI, was appointed to replace Sutskever and has been with the company since 2017. Pachocki focuses on large-scale reinforcement learning and deep learning optimization. He resigned in solidarity with Altman during the ouster and returned to the company when Altman was reinstated.

Watching the watchers

Amid its recent missteps, the announcement of the new safety committee is a way for OpenAI to assuage concerns over how aggressively it is pursuing AGI development. But whatever the team's expertise, its composition opens the door to a conflict of interest. After all, how can a committee led by the very individual it is meant to scrutinize effectively safeguard the development of potentially world-altering technology?

The irony was not lost on the AI and tech community on Twitter.

“Sam Altman (OpenAI board member) appointing himself to be a member of the Safety and Security Committee to mark his own homework as Sam Altman (CEO),” noted AI policy expert Tolga Bilge.

“Mr. Fox, could I trouble you to watch this henhouse for me please?” quipped Gartner analyst and venture capitalist Michael Gartenberg.

“OpenAI just created an oversight board that’s filled with its own executives and Altman himself,“ concluded tech journalist and author Parmy Olson. “This is a tried and tested approach to self-regulation in tech that does virtually nothing in the way of actual oversight.”

The ultimate effectiveness of the new committee, and whether it helps OpenAI regain the public’s confidence in its care and caution, will only become clear after the group presents its report to the main board of directors—including Altman—in late August.

Edited by Ryan Ozawa.
