Because generative AI can hallucinate, conjuring inaccurate or false information, users are encouraged to check its references. As it turns out, that could also lead to trouble: ChatGPT may be providing users with links to websites hosting malware, according to a Monday report by Futurism.

The discovery came during a test of ChatGPT's knowledge of current events. When Futurism asked about William Goines, a Bronze Star recipient and the first Black member of the Navy SEALs, who recently died, ChatGPT's response included a link to a “scammy website,” the outlet reported.

Specifically, ChatGPT-4o suggested visiting a site named “County Local News” for more information on Goines. The site, however, immediately generated fake pop-up alerts that, when clicked, would infect the user's computer with malware. Futurism noted that ChatGPT suggested the same site for other topics as well.

When Decrypt attempted the Goines test, using the prompt provided by Futurism, the response from ChatGPT did not include a link to a website.


AI developers have invested heavily in combating hallucinations and malicious use of their chatbots, but providing links to outside websites introduces additional risk. A linked site may have been legitimate and safe when an AI company crawled it, only to be infected or taken over by scammers later.

According to Jacob Kalvo, co-founder and CEO of internet data and privacy provider Live Proxies, outgoing links should be constantly checked.

“Developers could ensure that appropriate filtering mechanisms are in place to prevent chatbots from giving out links to malicious websites,” Kalvo told Decrypt. “This can be supplemented by advanced natural language processing (NLP) algorithms that can train a chatbot to identify a URL against known patterns of malignant URLs.

“Moreover, keeping a blacklist of the sites, continuously updated, and being on the watch for new threats cannot be forgotten,” Kalvo added.
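As an illustration of the kind of filtering Kalvo describes, here is a minimal sketch in Python. The blocklist entries and pattern heuristics are hypothetical, standing in for the curated threat intelligence a real system would use; this is not any vendor's actual implementation.

```python
import re
from urllib.parse import urlparse

# Hypothetical, continuously updated blocklist of known-bad domains.
BLOCKLIST = {"county-local-news.example"}

# Illustrative heuristics for suspicious hostnames; a production system
# would rely on curated threat feeds rather than rules this simple.
SUSPICIOUS_PATTERNS = [
    re.compile(r"^\d{1,3}(\.\d{1,3}){3}$"),  # raw IP address instead of a domain
    re.compile(r"[a-z0-9-]{30,}"),           # very long, random-looking labels
]

def is_link_allowed(url: str) -> bool:
    """Return False if the URL's host is blocklisted or looks suspicious."""
    host = (urlparse(url).hostname or "").lower()
    if host in BLOCKLIST:
        return False
    return not any(pattern.search(host) for pattern in SUSPICIOUS_PATTERNS)

# A chatbot pipeline could drop or replace any link that fails the check:
print(is_link_allowed("https://county-local-news.example/story"))  # False
print(is_link_allowed("https://www.navy.mil/"))                    # True
```

A filter like this would run on every outgoing link at response time, rather than only when a site is first crawled.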


Kalvo also recommended verifying website links and domain reputation, along with real-time monitoring to identify and address suspicious activity quickly.
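Because a site can turn malicious after it was first checked, one common approach is to cache reputation verdicts with an expiry so links are re-verified over time. The sketch below assumes a hypothetical query_reputation lookup standing in for a real reputation service such as a Safe Browsing-style API or an internal scanner.

```python
import time

# Hypothetical reputation lookup; a real system would call an external
# reputation service or an internal scanner here.
def query_reputation(domain: str) -> bool:
    return domain not in {"county-local-news.example"}  # stub verdict

CACHE_TTL_SECONDS = 3600  # re-check hourly: a safe site can later be compromised
_verdicts: dict[str, tuple[bool, float]] = {}

def domain_is_safe(domain: str) -> bool:
    """Cached reputation check whose verdicts expire, forcing re-verification."""
    cached = _verdicts.get(domain)
    if cached and time.time() - cached[1] < CACHE_TTL_SECONDS:
        return cached[0]
    verdict = query_reputation(domain)
    _verdicts[domain] = (verdict, time.time())
    return verdict

print(domain_is_safe("www.navy.mil"))               # True, then cached for an hour
print(domain_is_safe("county-local-news.example"))  # False
```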

“This provides for continued collaboration with experts in cybersecurity to outsmart new threats as they emerge,” Kalvo said. “Only through AI and human capabilities can developers create a much safer environment for users.”

Kalvo also stressed the need for careful curation of an AI model's training data to avoid ingesting and surfacing harmful content, and for regular checks and updates to maintain data integrity.
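A minimal sketch of that kind of curation pass, again using a hypothetical blocklist, might simply drop training documents that link to known-bad domains before the data is ingested.

```python
import re

URL_RE = re.compile(r"https?://([^/\s]+)")  # captures the hostname
BLOCKLIST = {"county-local-news.example"}   # hypothetical, continuously updated

def curate(documents: list[str]) -> list[str]:
    """Drop training documents that link to any blocklisted domain."""
    return [
        doc for doc in documents
        if not any(host.lower() in BLOCKLIST for host in URL_RE.findall(doc))
    ]

docs = [
    "See https://www.navy.mil/ for details.",
    "Breaking: https://county-local-news.example/story",
]
print(curate(docs))  # only the first document survives
```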

When contacted about the report, OpenAI provided the same response it gave to Futurism, telling Decrypt that it was working with news publishing partners to combine “conversational capabilities with their latest news content, ensuring proper attribution”—but that the feature is not yet available.

Edited by Ryan Ozawa.
