Yann LeCun, the chief AI scientist at tech giant Meta, recently offered a tempered view on the future of artificial intelligence and quantum computing—a striking departure from the hyper-optimistic (and hyper-pessimistic) forecasts often proclaimed across the tech world.
During Meta's Fundamental AI Research team's 10-year anniversary gathering, LeCun addressed the current state and future of AI, challenging the conventional wisdom.
“Train a system on the equivalent of 20,000 years of reading material, and they still don’t understand that if A is the same as B, then B is the same as A,” he said.
LeCun emphasized the significant gap between today's AI capabilities and the prospect of achieving human-level intelligence. Some may think AI will save or doom the world, but for LeCun, we’re more likely to merely have “cat-level” or “dog-level” AIs in the upcoming years.
True intelligence requires far more data than the text and audiovisual inputs available today can provide, he said.
LeCun has always been the type of researcher who prefers to keep expectations as low as possible—without losing sight of the big picture. Speaking at the World Science Festival two weeks ago, he said that the amount of power required to achieve human levels of intelligence “cannot [be] reproduced today with the kind of computers we have.”
Even so, he conceded that AGI could be achievable in the future—just not as soon as many think.
“There is absolutely no question that at some point in the future, perhaps decades from now, we'll have AI systems that are as smart as humans in all the domains where humans are smart,” he assured.
Quantum shmantum
LeCun also expressed doubts about the immediate utility of quantum computing, a field drawing significant investment from tech giants including Nvidia, Google, and IBM. He argued that most problems believed to require quantum computing could be more efficiently solved using classical computers—a view shared by Meta's former tech chief Mike Schroepfer.
“Quantum computing is a fascinating scientific topic,” LeCun said, but added that the “practical relevance and the possibility of actually fabricating quantum computers that are actually useful” are still questionable.
Quantum computing is a type of computing that uses quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. This approach is fundamentally different from classical computing, which relies on bits that are in states of 0 or 1.
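The distinction can be illustrated with a toy simulation—a hedged sketch that uses NumPy to represent a single qubit's state vector on a classical machine (no quantum hardware involved, and the gate and state names here are standard textbook conventions, not tied to any particular quantum SDK):

```python
import numpy as np

# A classical bit is always in exactly one of two states: 0 or 1.
classical_bit = 0

# A qubit's state is a 2-dimensional complex vector holding the
# amplitudes for the basis states |0> and |1>.
ket_zero = np.array([1.0, 0.0], dtype=complex)  # the |0> state

# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = hadamard @ ket_zero  # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes:
# unlike the classical bit, the qubit yields 0 or 1 each with 50% chance.
probs = np.abs(superposed) ** 2
print(probs)  # [0.5 0.5]
```

Simulating even a few dozen qubits this way is feasible, but the state vector doubles in size with every added qubit—which is exactly why useful quantum computation is expected to require dedicated hardware rather than classical emulation.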
If the technology is properly developed, quantum computers would be able to solve problems in seconds that would take thousands of years with the most powerful supercomputers currently available. That would mean instantly cracking cryptographic codes, high-fidelity real-time simulations, and even ultrafast AI training.
LeCun's cautious stance signals a more balanced approach to AI and quantum computing in a field often dominated by revolutionary narratives. While progress is being made, he cautions that the path to mature AI is longer and more complex than many assume.
Edited by Ryan Ozawa.