Months after resigning as chief scientist at AI developer OpenAI, Ilya Sutskever has raised $1 billion in funding for his new venture, Safe Superintelligence (SSI), the company announced on Wednesday.
According to SSI, the funding round included investments from NFDG, a16z, Sequoia, DST Global, and SV Angel. Reuters, citing sources “close to the matter,” reported that SSI is already valued at $5 billion.
SSI is building a straight shot to safe superintelligence.
We’ve raised $1B from NFDG, a16z, Sequoia, DST Global, and SV Angel.
We’re hiring: https://t.co/DmFWnrc1Kr
— SSI Inc. (@ssi) September 4, 2024
“Mountain: identified,” Sutskever tweeted on Wednesday. “Time to climb.”
Safe Superintelligence did not immediately respond to a request for comment from Decrypt.
In May, Sutskever and Jan Leike resigned from OpenAI, following the departure of Andrej Karpathy in February. In a post on Twitter, Leike cited a lack of resources and safety focus as reasons for his decision to leave the ChatGPT developer.
“Stepping away from this job has been one of the hardest things I have ever done,” Leike wrote. “Because we urgently need to figure out how to steer and control AI systems much smarter than us.”
Sutskever’s departure came, according to a report by The New York Times, after he led the OpenAI board and a handful of executives in ousting co-founder and CEO Sam Altman in November 2023. Altman was reinstated a week later.
In June, Sutskever announced the launch of his new AI development company, Safe Superintelligence Inc., which was co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who also previously worked at OpenAI.
According to Reuters, Sutskever serves as SSI’s chief scientist, with Levy as principal scientist, and Gross handling computing power and fundraising.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” Safe Superintelligence wrote on Twitter in June. “Our team, investors, and business model are all aligned to achieve SSI.”
With generative AI becoming increasingly widespread, developers have looked for ways to assure consumers and regulators that their products are safe.
In August, OpenAI and Claude AI developer Anthropic announced agreements with the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) to establish formal collaborations with the U.S. AI Safety Institute (AISI) that would give the agency access to major new AI models from both companies.
“We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models,” OpenAI co-founder and CEO Sam Altman wrote on Twitter. “For many reasons, we think it's important that this happens at the national level. [The] U.S. needs to continue to lead.”