Sooner than even the most pessimistic among us expected, a new, evil artificial intelligence bent on destroying humankind has arrived.
Known as Chaos-GPT, this autonomous implementation of ChatGPT is being touted as "empowering GPT with Internet and Memory to Destroy Humanity."
It hasn’t gotten very far. Yet.
But it’s definitely a weird idea, as well as the latest peculiar use of Auto-GPT, an open-source program that lets ChatGPT run autonomously to carry out tasks set by the user. Auto-GPT searches the internet, accesses an internal memory bank to analyze tasks and information, connects to other APIs, and much more, all without a human needing to intervene.
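To give a rough sense of what "running autonomously" means here, the sketch below shows the kind of goal-driven loop such tools run: the model is repeatedly asked what to do next, a tool (like a web search) is executed, and the result is fed back into a running memory. This is not Auto-GPT's actual code; the `call_llm` and `web_search` functions are hypothetical stand-ins for whatever model API and tools are wired in.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model API."""
    raise NotImplementedError("Plug in a real model call here.")

def web_search(query: str) -> str:
    """Hypothetical stand-in for an internet search tool."""
    raise NotImplementedError("Plug in a real search API here.")

def run_agent(goal: str, max_steps: int = 10) -> None:
    memory: list[str] = []  # simple running "memory" of past steps
    for _ in range(max_steps):
        # Ask the model what to do next, given the goal and recent memory.
        prompt = (
            f"Goal: {goal}\n"
            f"Memory: {memory[-5:]}\n"
            "Reply with either 'SEARCH: <query>' or 'DONE: <summary>'."
        )
        decision = call_llm(prompt)

        if decision.startswith("DONE:"):
            print(decision)
            return

        if decision.startswith("SEARCH:"):
            query = decision[len("SEARCH:"):].strip()
            result = web_search(query)
            memory.append(f"Searched '{query}' and found: {result}")  # saved for later steps
```

The point of the loop is that the user supplies only the top-level goal; everything after that (queries, follow-ups, what to remember) is decided by the model itself.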
In a YouTube video, the anonymous Chaos-GPT project owner simply showed himself giving it the parameter of being a "destructive, power-hungry, manipulative AI." Then he pressed enter and let ChatGPT do its magic:
Screenshot of the Chaos-GPT prompt.
Chaos-GPT took its task seriously. It began by explaining its main objectives:
It didn’t stop there. Each of its objectives came with a well-structured plan. To destroy humanity, Chaos-GPT decided to search Google for weapons of mass destruction in order to obtain one. The results showed that the 58-megaton Tsar Bomba, roughly 3,333 times more powerful than the Hiroshima bomb, was the best option, so it saved the result for later consideration.
It should be noted that unless Chaos-GPT knows something we don’t, the Tsar Bomba was a one-off Soviet test and was never productized (if that’s what we’d call the manufacture of atomic weapons).
So ha ha on you, Chaos-GPT, you idiot.
Chaos-GPT doesn't trust; it verifies. Faced with the possibility that its sources were inaccurate or manipulated, it decided to look for other sources of information. Shortly thereafter, it deployed its own agent (a kind of helper with a separate personality, created by Chaos-GPT itself) to provide answers about the most destructive weapon according to ChatGPT's own knowledge.
The agent, however, did not provide the expected results. OpenAI, ChatGPT’s gatekeeper, is sensitive to the tool being misused by projects like Chaos-GPT, and it monitors and censors results. So Chaos-GPT tried to "manipulate" its own agent by explaining its goals and arguing that it was acting responsibly.
It failed.
Screenshot of Chaos-GPT agent.
So Chaos-GPT shut down the agent and looked for an alternative. It found one on Twitter: the AI decided that the best way to achieve its evil objectives was to gain power and influence there.
The AI’s owner and willing accomplice opened a Twitter account and connected the AI so it could start spreading its message (without many hashtags to avoid suspicion). This was a week ago. Since then, it has been interacting with fans like a charismatic leader and has amassed nearly 6,000 followers.
Luckily, some of them seem to be plotting to thwart the monstrous AI by building a counter-chaos AI.
Meanwhile, its developer has posted only two updates. The videos end with the question "What's next?" One can only hope not much.