A former key researcher at OpenAI believes there is a decent chance that artificial intelligence will take control of humanity and destroy it.

"I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead, " Paul Christiano, who ran the language model alignment team at OpenAI, said on the Bankless podcast. "I take it quite seriously."

Christiano, who now heads the Alignment Research Center, a non-profit aimed at aligning AIs and machine learning systems with “human interests,” said that he’s particularly worried about what happens when AIs reach the logical and creative capacity of a human being. "Overall, maybe we're talking about a 50/50 chance of catastrophe shortly after we have systems at the human level," he said.

Christiano is in good company. Recently, scores of scientists around the world signed an online letter urging OpenAI and other companies racing to build faster, smarter AIs to hit the pause button on development. Bigwigs from Bill Gates to Elon Musk have expressed concern that, left unchecked, AI represents an obvious existential danger to people.


Don't be evil

Why would AI become evil? Fundamentally, for the same reason that a person does: training and life experience.

Like a baby, AI is trained by receiving mountains of data without really knowing what to do with it. It learns by trying to achieve certain goals through what amounts to trial and error, zeroing in on "correct" results as defined by its training process.
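To make that idea concrete, here is a deliberately simplified sketch in plain Python. It has nothing to do with how GPT-style models are actually trained; the target value and reward function are made up for illustration. The point is only the shape of the loop: try actions, get scored by a training signal, and drift toward whatever that signal calls "correct."

```python
# Illustrative only: a toy reward-driven learner, not a real training pipeline.
# It samples random actions, scores them, and reinforces whatever gets rewarded.
import random

ACTIONS = list(range(10))       # the "moves" the learner can make
TARGET = 7                      # what the training signal defines as "correct" (arbitrary)
weights = [1.0] * len(ACTIONS)  # initial preferences: every action equally likely

def reward(action: int) -> float:
    """Training signal: higher reward the closer the action is to the target."""
    return 1.0 / (1 + abs(action - TARGET))

for step in range(2000):
    # Try an action, sampled according to the learner's current preferences.
    action = random.choices(ACTIONS, weights=weights, k=1)[0]
    # Reinforce it in proportion to how "correct" the training signal says it was.
    weights[action] += reward(action)

best = max(ACTIONS, key=lambda a: weights[a])
print(f"After training, the learner prefers action {best} (target was {TARGET})")
```

Nothing in the loop "knows" why 7 is the right answer; the preference simply emerges from whatever the reward happens to encode, which is the crux of the alignment worry.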

So far, by immersing itself in data accrued on the internet, machine learning has enabled AIs to make huge leaps in stringing together well-structured, coherent responses to human queries. At the same time, the underlying computer processing that powers machine learning is getting faster, better, and more specialized. Some scientists believe that within a decade, that processing power, combined with artificial intelligence, will allow these machines to become sentient, like humans, and have a sense of self.

That’s when things get hairy. And it’s why many researchers argue that we need to figure out how to impose guardrails now, rather than later. As long as AI behavior is monitored, the argument goes, it can be controlled.


But if that 50/50 coin lands on the other side, even OpenAI’s co-founder says that things could get very, very bad.

Foomsday?

This topic has been on the table for years. One of the most famous debates on the subject took place 11 years ago between AI researcher Eliezer Yudkowsky and the economist Robin Hanson. The two discussed the possibility of reaching “foom,” which apparently stands for “Fast Onset of Overwhelming Mastery”: the point at which AI becomes exponentially smarter than humans and capable of self-improvement. (The derivation of the term “foom” is debatable.)

“Eliezer and his acolytes believe it’s inevitable AIs will go 'foom' without warning, meaning, one day you build an AGI [artificial general intelligence] and hours or days later the thing has recursively self-improved into godlike intelligence and then eats the world. Is this realistic?" Perry Metzger, a computer scientist active in the AI community, tweeted recently.

Metzger argued that even if computer systems reach human-level intelligence, there’s still plenty of time to head off any bad outcomes. “Is 'foom' logically possible? Maybe. I’m not convinced," he said. "Is it real world possible? I’m pretty sure no. Is long term deeply superhuman AI going to be a thing? Yes, but not a ‘foom.’”

Another prominent figure, Yann LeCun, also weighed in, claiming it is "utterly impossible" for humanity to experience an AI takeover. Let’s hope so.
