Geoffrey Hinton, an artificial intelligence pioneer known as one of the "godfathers of A.I.," resigned from his position at Google so that he could speak openly about his concern that A.I. could cause significant harm to the world.
Hinton admitted in a New York Times interview that he now partly regrets his life's work. Despite the beneficial uses of A.I., Hinton fears that the technology could be used irresponsibly, unleashing unintended consequences.
Hinton is worried that competition between tech giants like Google and Microsoft to create the most advanced A.I. will escalate into a global race that will not stop without some form of worldwide regulation. He was emphatic, however, that he believes Google has acted responsibly in its research.
Hinton is known for popularizing backpropagation, the algorithm used to train neural networks, in 1986, and for co-creating a neural network capable of recognizing images in 2012. His work was crucial to the development of current generative image models like Stable Diffusion and Midjourney, and laid the groundwork for OpenAI's upcoming effort to make GPT-4 capable of interacting with images.
His arguably belated move has led many to compare him to J. Robert Oppenheimer, the physicist who led the Manhattan Project and is credited as the father of the atomic bomb.
The Risks of A.I.
One of the immediate problems Hinton highlights is the proliferation of fake images, videos, and text online, which could make it increasingly difficult for the average person to discern the truth. As generative A.I. continues to improve, creators of fake and manipulative content could use these tools to deceive and confuse people.
Hinton is also concerned about how A.I. could affect jobs in the future. While chatbots like ChatGPT currently complement human workers, they could ultimately replace those who handle routine tasks, such as personal assistants, accountants, and translators. Although A.I. may alleviate some monotonous work, it could also eliminate more jobs than anticipated, disrupting the social balance.
In the long term, Hinton fears that future versions of the technology pose a threat to humanity due to the unexpected behavior they may learn from the large volumes of data they analyze. This becomes a problem when A.I. systems are allowed to generate and execute their own code.
This long-term view also gained particular relevance when other key figures in the A.I. field began to warn about the possibility of a "foom" scenario—in which A.I. far outpaces human intelligence—and the impact it could have on societal development.
Hinton is just one of thousands of tech leaders and researchers alarmed by the exponential pace of A.I. development across fields ranging from erotic chat to medical diagnostics. Last month, an open letter circulated widely in which industry leaders called for a pause in A.I. development until adequate controls could be established. Hinton did not sign it.
The evolution of Hinton's position on A.I. reflects a growing awareness of the risks and challenges that come with a rapidly advancing technology. For Hinton, walking away from the company where he spent a decade of his career was a necessary step toward preventing a scenario that, he says, seems to draw closer every day.
"Look at how it was five years ago and how it is now," he told The New York Times. "Take the difference and propagate it forwards. That's scary."