In brief

  • In a private session at the Institute for Advanced Study, physicists and astronomers acknowledged that agentic AI systems are already outperforming humans in coding and analytical reasoning.
  • Columbia astrophysicist David Kipping said some scientists have fully integrated AI into their workflows—granting access to emails, files, and calendars—arguing that the competitive advantage now outweighs privacy, ethical, and professional risks.
  • The discussion exposed deep anxiety inside elite institutions.

Leading researchers at an elite Princeton institute recently acknowledged behind closed doors that artificial intelligence now outperforms them at much of the work that defines scientific prestige.

The admission surfaced during a private session at the Institute for Advanced Study, according to David Kipping, a Columbia University astrophysicist who attended the meeting and described it this week on his Cool Worlds podcast.

Kipping said senior faculty members demonstrated how agentic AI systems, fed only a handful of prompts, can generate sophisticated code, analyses, and research outputs that would once have occupied scientists for weeks. Attendees acknowledged that these tools now perform up to 90% of the intellectual labor behind modern research, often delivering publishable results with minimal human direction.

"This wasn't just... the voices in my own head," Kipping said in a clip that's garnered over 675,000 views. "Everybody was saying the same thing."

According to Kipping, the lead presenter at the meeting highlighted AI's "complete coding supremacy over humans" and its growing edge in analytic reasoning. One physicist had fully integrated AI into his workflow, granting it access to emails, file systems, and calendars, dismissing privacy concerns because "the advantage... is so outsized."

Kipping noted the consensus that competitiveness in science now requires such adoption, even as it raises ethical questions. The discussion extended to broader implications, including the risk of skill atrophy among researchers—likened to reliance on GPS diminishing navigation abilities—and the possibility of AI delivering breakthroughs in fields like fusion energy, drug development, and theoretical physics that humans might not comprehend.

"Maybe no human being will understand how this fusion machine works," Kipping said. "That frightens me a little bit. I don't know that I want to live in a world where everything around me is just magic."

A professor of astronomy at Columbia University, Kipping leads research on planets outside our solar system, planetary habitability, and astrophysical data analysis. Peers generally regard him as a measured communicator: curious about speculative ideas but explicit about uncertainty and limits, and not known for hype or doom-mongering.

Kipping emphasized that the concerns he and other scientists have raised are not isolated speculation: elite institutions are convening emergency internal meetings, and some of the world's top minds view AI as a "threat to their intellectual supremacy." That said, he noted that he has personally used AI for coding, debugging, and literature searches in his research for years, seeing it as a tool for progress despite public backlash against AI-generated content.

Kipping warned of a potential "tsunami" of AI-assisted papers, but highlighted AI's role in democratizing science by enabling broader participation. The full podcast episode, which runs about an hour, frames this as a historic transitional period in science, urging adaptation while preserving human oversight to verify AI outputs and mitigate hallucinations.
