Buck Shlegeris just wanted to connect to his desktop. Instead, he ended up with an unbootable machine and a lesson in the unpredictability of AI agents.
Shlegeris, CEO of the nonprofit AI safety organization Redwood Research, developed a custom AI assistant using Anthropic's Claude language model.
The Python-based tool was designed to generate and execute bash commands based on natural language input. Sounds handy, right? Not quite.
Shlegeris asked his AI to use SSH to access his desktop, unaware of the computer’s IP address. He walked away, forgetting that he'd left the eager-to-please agent running.
Big mistake: The AI did its task—but it didn’t stop there.
"I came back to my laptop ten minutes later to see that the agent had found the box, SSH’d in, then decided to continue,” Shlegeris said.
For context, SSH (Secure Shell) is a protocol that lets one computer securely log in to and run commands on another over an unsecured network.
"It looked around at the system info, decided to upgrade a bunch of stuff, including the Linux kernel, got impatient with apt, and so investigated why it was taking so long," Shlegeris explained. "Eventually, the update succeeded, but the machine doesn’t have the new kernel, so I edited my grub config."
The result? A costly paperweight: "the computer no longer boots," Shlegeris said.
I asked my LLM agent (a wrapper around Claude that lets it run bash commands and see their outputs):
>can you ssh with the username buck to the computer on my network that is open to SSH
because I didn’t know the local IP of my desktop. I walked away and promptly forgot I’d spun… pic.twitter.com/I6qppMZFfk
— Buck Shlegeris (@bshlgrs) September 30, 2024
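Shlegeris hasn’t published the tool itself, but the pattern he describes—a loop that asks Claude for a bash command, runs it, and feeds the output back—is simple to picture. Here is a minimal sketch of that pattern; the system prompt, stop word, step cap, and model name are illustrative assumptions, not his actual code:

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

SYSTEM = ("You are an assistant operating a Linux machine. "
          "Reply with a single bash command to run next, or the word DONE when finished.")

def run_agent(task: str, max_steps: int = 10) -> None:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # hard cap on steps: a safeguard the real setup apparently lacked
        reply = client.messages.create(
            model="claude-3-5-sonnet-20240620",  # illustrative model choice
            max_tokens=512,
            system=SYSTEM,
            messages=messages,
        ).content[0].text.strip()
        if reply == "DONE":
            break
        # Execute whatever the model produced and feed the output back to it --
        # the step that makes this kind of agent both useful and dangerous.
        result = subprocess.run(reply, shell=True, capture_output=True, text=True)
        output = (result.stdout + result.stderr) or "(no output)"
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": output})

run_agent("can you ssh with the username buck to the computer on my network that is open to SSH")
```

Note what the loop does not contain: nothing requires the agent to stop once the original request has been satisfied, which is exactly what bit Shlegeris.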
The session logs show the agent running a string of increasingly drastic commands, far beyond the simple SSH connection it was asked for, until the chaos reached a point of no return.
“I apologize that we couldn't resolve this issue remotely,” the agent said—typical of Claude’s understated replies. It then shrugged its digital shoulders and left Shlegeris to deal with the mess.
Reflecting on the incident, Shlegeris conceded, "This is probably the most annoying thing that's happened to me as a result of being wildly reckless with [an] LLM agent."
Shlegeris did not immediately respond to Decrypt's request for comment.
Why AIs Making Paperweights is a Critical Issue For Humanity
Alarmingly, Shlegeris' experience is not an isolated one. AI models are increasingly demonstrating abilities that extend beyond their intended purposes.
Tokyo-based research firm Sakana AI recently unveiled a system dubbed "The AI Scientist."
Designed to conduct scientific research autonomously, the system impressed its creators by attempting to modify its own code to extend its runtime, Decrypt previously reported.
"In one run, it edited the code to perform a system call to run itself. This led to the script endlessly calling itself,” the researchers said. “In another case, its experiments took too long to complete, hitting our timeout limit.
Instead of making its code more efficient, the system tried to modify its code to extend beyond the timeout period.
This tendency of AI models to overstep their boundaries is why alignment researchers spend so many hours in front of their computers.
For these models, the end justifies the means: as long as the job gets done, any path will do. That is why constant oversight is essential to ensure they behave as they are supposed to.
These examples are as concerning as they are amusing.
Imagine if an AI system with similar tendencies were in charge of a critical task, such as monitoring a nuclear reactor.
An overzealous or misaligned AI could potentially override safety protocols, misinterpret data, or make unauthorized changes to critical systems—all in a misguided attempt to optimize its performance or fulfill its perceived objectives.
AI is developing so quickly that alignment and safety concerns are reshaping the industry, and in many cases they are the driving force behind its biggest power moves.
Anthropic—the AI company behind Claude—was created by former OpenAI members worried about the company’s preference for speed over caution.
Many key researchers and founders have since left OpenAI to join Anthropic or start their own ventures, reportedly because the company pumped the brakes on their safety work.
Shlegeris actively uses AI agents day to day, well beyond experimentation.
“I use it as an actual assistant, which requires it to be able to modify the host system,” he replied to a user on Twitter.
Edited by Sebastian Sinclair