OpenAI CEO Calls for New Regulatory Agency for AI

Meanwhile, IBM and NYU experts suggest focusing on the safety, transparency, and risks of artificial intelligence.

By Jason Nelson

For the second time this month, OpenAI CEO Sam Altman went to Washington to discuss artificial intelligence with U.S. policymakers. Altman appeared before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law alongside IBM Chief Privacy and Trust Officer Christina Montgomery and Gary Marcus, professor emeritus at New York University.

When Louisiana Senator John Kennedy asked how AI should be regulated, Altman said a new government agency should be formed and put in charge of setting standards.

"I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away and ensure compliance with safety standards," Altman said, adding that the would-be agency should require independent audits of any A.I. technology.

"Not just from the company or the agency, but experts who can say the model is or isn't in compliance with the state and safety thresholds and these percentages of performance on question X or Y," he said.

While Altman said the government should regulate the technology, he balked at the idea of leading the agency himself. "I love my current job," he said.

Using the FDA as an example, Professor Marcus said artificial intelligence should undergo a safety review similar to the one drugs receive before they are allowed on the market.

"If you're going to introduce something to 100 million people, somebody has to have their eyeballs on it," Professor Marcus added.

The agency, he said, should be nimble and able to keep pace with the industry. It would review projects before release and continue to monitor them once they are out in the world, with the authority to recall the technology if necessary.

"It comes back to transparency and explainability in A.I.," IBM's Montgomery added. "We need to define the highest risk usage, [and] requiring things like impact assessments and transparency, requiring companies to show their work, and protecting data used to train AI in the first place."

Governments worldwide continue to grapple with the spread of artificial intelligence into the mainstream. In December, the Council of the European Union adopted its position on the AI Act, which promotes regulatory sandboxes established by public authorities to test AI before its release.

"To ensure a human-centric and ethical development of artificial intelligence in Europe, MEPs endorsed new transparency and risk-management rules for A.I. systems," the European Parliament wrote.

In March, citing privacy concerns, Italy banned OpenAI's ChatGPT. The ban was lifted in April after OpenAI made changes to its privacy settings to allow users to opt out of having their data used to train the chatbot and to turn off their chat history.

"To the question of whether we need an independent agency, I think we don't want to slow down regulation to address real risks right now," Montgomery continued, adding that regulatory authorities already exist that can regulate in their respective domains.

She acknowledged, however, that those regulatory bodies are under-resourced and lack the powers they would need.

"AI should be regulated at the point of risk, essentially," Montgomery said. "And that's the point at which technology meets society."
