In brief
- Douglas Rushkoff argues AI utopianism masks labor exploitation and environmental costs.
- Economists say AI boosts productivity but concentrates displacement, especially at the entry level.
- Experts push back on claims of deliberate deception, warning against oversimplified narratives.
For media theorist Douglas Rushkoff, the glossy promises of a silicon-powered utopia are little more than a smokescreen for an elitist exit strategy.
Rushkoff, a professor of media theory and digital economics at Queens College/CUNY, and the author of Survival of the Richest and Team Human, made the remarks during a recent interview on the Repatterning Podcast with host Arden Leigh. In the interview, he offered a scathing critique of the tech billionaire class, arguing that those evangelizing artificial intelligence are less interested in “saving the world” than in surviving its potential collapse brought on by the technology they unleashed.
“The billionaires are afraid of being hoisted on their own petard,” Rushkoff said. “They are afraid of having to deal with the repercussions of their actions.”
He pointed to tech titans, including Mark Zuckerberg and Sam Altman, who have reportedly invested in bunker construction, and to SpaceX CEO Elon Musk, who preaches space colonization. Those moves, Rushkoff argued, betray their public optimism and show that they privately expect social and environmental collapse rather than a technological golden age.
“What they’ve done by building their bunkers and revealing their various space plans is they’ve exposed the fact that they do not believe that the things they are making are going to save the world,” Rushkoff said. “They believe that the things they’re making could save them and that the rest of us are going down.”
Rushkoff also challenged the notion that AI is reducing human labor. Instead, he said, the technology shifts work into less visible and more exploitative forms rather than eliminating it.
“We’re not actually seeing a reduction in labor because of AI,” Rushkoff said. “What we’re seeing is a downskilling of labor.”
While technologists, including Robinhood CEO Vladimir Tenev, argue that AI will fuel a surge of new jobs and industries, Rushkoff said the global infrastructure required to sustain AI systems, from mining to data preparation, exposes a core contradiction in claims about the benefits automation will bring.
“You need lots of slaves to get rare earth metals, and you need lots of people in China and Pakistan to tag all this data,” Rushkoff said. “There are thousands and thousands of people behind AI. We’re going to have to have people building power plants and figuring out new energy sources and digging up more coal and getting more oil. So far, there are lots and lots of jobs—just not jobs that we want to have.”
Rushkoff argued that this hidden labor undercuts promises of a post-work future, even as creative and professional workers face displacement. The result, he said, is not liberation but a redistribution of harm.
He also criticized the ideology driving elite AI narratives, describing it as a form of transhumanism that treats most people as disposable.
“They have a kind of religion,” Rushkoff said, “where they look at you and me as being in the larval stage of humanity.”
In that worldview, he said, wealthy technologists imagine themselves escaping biological limits through machines while the rest of humanity becomes expendable.
“They’re the ones that are sprouting wings and getting off the planet or uploading to the cloud,” Rushkoff said, while “the rest of us are only matter, fuel for their escape.”
Others in the computer science and technology field rejected the idea that Silicon Valley leaders are knowingly concealing a collapse.
“I would avoid extremes, because probably the truth is in the middle,” David Bray told Decrypt.
Bray, chair of the Accelerator and a distinguished fellow at the Stimson Center, a nonpartisan think tank focused on security, governance, and emerging tech, pushed back on the idea that tech leaders are knowingly using utopian AI narratives to hide an impending collapse, warning that such interpretations risk “discarding an overly hopeful message for an overly dire message.”
Bray did, however, acknowledge that many optimistic claims about AI oversimplify what is required to manage large-scale technological change.
“When I hear people give a utopian vision, on the one hand, I celebrate that it’s not fear mongering,” he said. “But I do worry that it is missing the fact that there are things that need to go in place beyond just the tech itself.”
Bray echoed Rushkoff’s warning that the costs of AI are often obscured, pointing to the environmental damage and human exploitation embedded in the supply chains that make advanced technologies possible.
“We are increasingly in an interconnected world, and we need to be aware of what I would call a farm-to-table view,” he said.
Bray framed the AI transition as disruptive but familiar, tracing a line back to the railroads, telegraph machines, and industrial upheaval of the 1890s. “We’ve been here before,” he said. “We will get through this, but there will be a period of upheaval.”
According to Lisa Simon, chief economist at workforce intelligence company Revelio Labs, labor market data already reflects parts of that upheaval.
“The most highly exposed occupations have seen the biggest fall in demand, especially in entry-level roles,” Simon told Decrypt, noting that the effect is concentrated where workers have the least leverage.
At the lower end of the wage spectrum, Simon said the dynamics look closer to direct displacement, and as workers use AI tools to increase output, employers may simply need fewer people.
“We’re seeing this mostly in low wage work, where the complexity of tasks is a little lower and the ability to replace entire chunks of an occupation through automation is a given,” she said, adding that those roles are also seeing some of the weakest wage growth.
Simon also said many of the costs tied to AI infrastructure remain poorly accounted for. “I don’t think the environmental cost to these massive data centers is fully appreciated,” she said.
While Simon said she remains broadly optimistic about AI’s long-term potential, she framed the current moment as one that demands policy intervention. To preserve social cohesion amid displacement and uneven gains, she said, governments may need to consider “more redistributionary policies like universal basic income.”
“I don’t think it’s one way or the other that things will be utopian or dystopian,” NYU professor Vasant Dhar told Decrypt.
Dhar, who teaches at the Stern School of Business and the Center for Data Science, said AI is likely to produce uneven outcomes rather than a clean post-work future. He warned of what he called a “bifurcation of humanity,” where the technology “amplifies some people” and “turbocharges productivity,” while others become disempowered, using AI “as a crutch as opposed to an amplifier.”
He said those gains also carry displacement risks. “I think we’ll see a lot of job destruction,” Dhar said, adding that it remains unclear what kinds of new jobs will emerge to replace those losses.
Ultimately, Dhar said outcomes will depend on governance rather than technology alone. “The outcomes will depend on the choices we make,” he said, asking, “Will we govern AI, or will they govern us?”