Leading AI companies, including OpenAI, Google, and Microsoft, have committed to developing safe, secure, and transparent AI technology, the White House said on Friday.
"Companies that are developing these emerging technologies have a responsibility to ensure their products are safe," the Biden Administration said, adding that the aim is to make the most of AI's potential and encourage the highest standards.
Additional companies committing to AI safety, the White House said, include Amazon, Anthropic, Meta, and Inflection.
“None of us can get AI right on our own,” Kent Walker, Google's President of Global Affairs, said. “We’re pleased to be joining other leading AI companies in endorsing these commitments, and we pledge to continue working together by sharing information and best practices.”
Some of the commitments include pre-release security testing of AI systems, sharing best practices for AI safety, investing in cybersecurity and insider threat safeguards, and facilitating third-party reporting of AI system vulnerabilities.
“Policymakers around the world are considering new laws for highly capable AI systems," Anna Makanju, OpenAI VP of Global Affairs, said in a statement. "Today’s commitments contribute specific and concrete practices to that ongoing discussion."
United States lawmakers introduced a bipartisan bill in June that seeks to establish a commission on AI and address questions raised by the rapidly growing industry.
The Biden Administration says it is also working with international partners, including Australia, Canada, France, Germany, India, Israel, Italy, Japan, Nigeria, the Philippines, and the UK, to establish a global framework.
"By endorsing all of the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals, Microsoft is expanding its safe and responsible AI practices, working alongside other industry leaders," Microsoft President Brad Smith said.
Global leaders, including the United Nations Secretary-General, have already sounded the alarm on generative AI and deepfake technology and their potential misuse in conflict zones.
In May, the Biden administration met with artificial intelligence leaders to lay the groundwork for ethical AI development. The administration also announced a $140 million investment by the National Science Foundation towards AI research and development.
"These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI—safety, security, and trust—and mark a critical step toward developing responsible AI," the administration said.