Several world governments are growing increasingly jittery about the Pandora’s box of advanced artificial intelligence that was cracked wide open with the public release of ChatGPT by OpenAI. Even as they mull possible regulations, it’s unclear whether the genie can even be forced back into the bottle.

On Tuesday, Canada’s privacy commissioner said he was investigating ChatGPT, joining counterparts in a growing list of countries—including Germany, France, and Sweden—that have expressed concerns about the popular chatbot since Italy banned it entirely on Sunday.

“A.I. technology and its effects on privacy is a priority for my office,” Philippe Dufresne, the Privacy Commissioner of Canada, said in a statement. “We need to keep up with—and stay ahead of—fast-moving technological advances, and that is one of my key focus areas as Commissioner.”

Italy’s ban stemmed from a March 20 incident in which OpenAI acknowledged a bug in the system that exposed users’ payment information and chat history. OpenAI briefly took ChatGPT offline to fix the bug.

“We do not need a ban on AI applications, but rather ways to ensure values such as democracy and transparency,” a spokesperson for the German Ministry of the Interior told German news outlet Handelsblatt on Monday.

But is banning software and artificial intelligence even possible in a world where virtual private networks (VPNs) exist?

A VPN is a service that allows users to securely and privately access the internet by creating an encrypted connection between their device and a remote server. This connection masks the user’s real IP address, making it appear as though they are accessing the internet from the remote server’s location rather than their actual location.
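The practical consequence is that IP-based country blocks only ever see the connecting address, not the user’s true location. The toy sketch below illustrates this; the IP prefixes, lookup table, and function names are invented for illustration (real services use commercial GeoIP databases, not hardcoded prefixes).

```python
# Toy simulation of why IP-based geoblocking is easy to sidestep with a VPN.
# All addresses and the prefix-to-country table below are made up for illustration.

GEOIP = {
    "151.44.": "IT",   # pretend Italian ISP range
    "185.93.": "DE",   # pretend German VPN exit-server range
}

BLOCKED_COUNTRIES = {"IT"}

def country_of(ip: str) -> str:
    """Crude prefix lookup standing in for a real GeoIP database."""
    for prefix, country in GEOIP.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def is_allowed(client_ip: str) -> bool:
    """The service only sees the connecting IP, never the user's real location."""
    return country_of(client_ip) not in BLOCKED_COUNTRIES

# Direct connection from an Italian address: blocked.
print(is_allowed("151.44.10.20"))   # False

# The same user tunneling through a German VPN exit: the service sees only
# the exit server's IP, so the request passes the geoblock.
print(is_allowed("185.93.5.7"))     # True
```

In other words, the block is enforced on an attribute (the source IP) that the VPN is specifically designed to replace, which is why national bans on web services tend to be porous in practice.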

Furthermore, “an A.I. ban may not be realistic because there are already many A.I. models in use and more are being developed,” Jake Maymar, vice president of Innovations at A.I. consulting firm The Glimpse Group, told Decrypt. “The only way to enforce an A.I. ban would be to prohibit access to computers and cloud technology, which is not a practical solution.”

Italy’s attempt at banning ChatGPT comes amid growing apprehension about the impact artificial intelligence will have on privacy and data security, and its potential misuse.

An A.I. think tank, the Center for A.I. and Digital Policy, filed a formal complaint with the U.S. Federal Trade Commission last month, accusing OpenAI of deceptive and unfair practices. The complaint followed an open letter, signed by several high-profile members of the tech community, that called for slowing the development of artificial intelligence.

OpenAI attempted to address these concerns in an April 5 blog post on AI safety that outlined the firm’s commitment to long-term safety research and cooperation with the A.I. community.

OpenAI said it aims to improve factual accuracy and reduce the likelihood of “hallucinations,” while protecting user privacy and children, in part by looking into age verification options. “We also recognize that, like any technology, these tools come with real risks—so we work to ensure safety is built into our system at all levels,” the company wrote.

OpenAI’s message did not sit well with some critics, who called it PR window dressing that did not address the existential risk posed by AI.

While some sound the alarm on ChatGPT, others say the problem isn’t the chatbot itself, but rather how society intends to use it.

"What this moment does provide is a chance to consider what sort of society we want to be—what rules we want to apply to everyone equally, AI powered or not—and what kind of economic rules serve society best," Barath Raghavan, Associate Professor of Computer Science at USC Viterbi, told Decrypt. "The best policy responses will not be ones that target specific technological mechanisms of today's A.I. (which will quickly be out of date) but behaviors and rules we'd like to see apply universally."
