In brief
- Transhumanism was labeled a “death cult” by critics who argued it misunderstood what it means to be human.
- Advocate Zoltan Istvan defended the movement as a humanitarian effort to end suffering, aging, and death through technology.
- Philosophers and AI researchers warned that promises of digital immortality were flawed and raised unresolved ethical risks.
Transhumanism, a movement that seeks to defeat aging and death through technology, was branded a “death cult” during a recent debate between philosophers, scientists, and transhumanist advocates; the movement’s defenders rejected the accusation as misguided and reactionary.
The exchange took place Dec. 4 at the UK-based Institute of Art and Ideas’ “World’s Most Dangerous Idea” event, where neuroscientist and philosopher Àlex Gómez-Marín argued that the movement functions as a pseudo-religion—one that aims to eliminate the human condition rather than preserve it.
“I think transhumanism is a death cult,” Gómez-Marín said. “I think transhumanism is a pseudo-religion dressed in techno-scientific language whose goal is to extinct the human condition and tell everyone that we should cheer and clap as this happens.”
The debate has circulated among technologists, philosophers, and ethicists for decades, but has taken on renewed urgency as artificial intelligence, biotechnology, and longevity research advance. While advocates argue technology can save humanity from death, critics warn the movement is based on fantasies of immortality.
More recently, a report by the Galileo Commission warned that transhumanist efforts to merge humans and machines could reduce human life to a technical system and sideline questions of meaning, identity, and agency.
The term “transhumanism” was popularized in the mid-20th century by Julian Huxley and later developed by thinkers including Max More, Natasha Vita-More, Ben Goertzel, Nick Bostrom, and Ray Kurzweil. Supporters such as biohacker Bryan Johnson and tech billionaire Peter Thiel have argued that technology could be used to transcend biological limits such as aging and disease. Critics have countered that the movement’s aims would only benefit the ultra-wealthy and blur the line between science and religion.
“Dear humanity, I am building a religion. Wait a second, I know what you’re going to say. Hold that knee-jerk reaction and let me explain. First, here’s what’s going to happen:
+ Don’t Die becomes history’s fastest-growing ideology.
+ It saves the human race.
+ And ushers in…”
— Bryan Johnson (@bryan_johnson), March 7, 2025
Joining Gómez-Marín in the discussion were philosopher Susan Schneider, AI researcher Adam Goldstein, and Zoltan Istvan, a transhumanist author and political candidate currently running for governor of California. Istvan rejected Gómez-Marín’s characterization and described transhumanism as an effort to reduce suffering rooted in biology.
The participants offered competing visions of whether transhumanist ideas represented humanitarian progress, philosophical confusion, or an ethical misstep.
“Most transhumanists such as myself believe that aging is a disease, and we would like to overcome that disease so that you don’t have to die, and that the loved ones you have don’t have to die,” Istvan said, tying the view to personal loss.
“I lost my father about seven years ago,” he said. “Death we have all accepted as a natural way of life, but transhumanists don’t accept that.”
Gómez-Marín said the greater risk lay not in specific technologies but in the worldview guiding their development, particularly among technology leaders who, he argued, understand technology but not humanity.
“They know a lot about technology, but they know very little about anthropology,” he said.
For her part, Schneider told the audience that she once identified as a transhumanist, drawing a distinction between using technology to improve health and endorsing more radical claims such as uploading consciousness to the cloud.
“There’s this claim that we will upload the brain,” Schneider said. “I don’t think you or I will be able to achieve digital immortality, even if the technology is there—because you would be killing yourself, and another digital copy of you would be created.”
Schneider also warned that transhumanist language was increasingly used to deflect attention from immediate policy questions, including data privacy, regulation, and access to emerging technologies.
Goldstein told the audience that the debate should focus less on predictions of salvation or catastrophe and more on choices already being made about how technology is designed and governed.
“I think if we want to be constructive, we need to think about which of these futures we actually want to build,” he said. “Instead of taking it as a given that the future is going to be like this or like that, we can ask what would be a good future.”
The central issue, Goldstein said, was whether humans choose to design a cooperative future with artificial intelligence or approach it from a posture of fear and control, a choice that could shape humanity’s fate once AI systems surpass human intelligence.
“I think we have good evidence for what a good future is from the ways we’ve navigated differences with other human beings,” he said. “We’ve figured out political systems, at least some of the time, that work to help us bridge differences and achieve a peaceful settlement of our needs. And there’s no reason I can see why the future can’t be like that with AI also.”

