The rapid proliferation of artificial intelligence platforms since the launch of ChatGPT in November 2022 has prompted policymakers worldwide to scramble to catch up and establish regulations and guardrails for the emerging technology. During a forum discussion at Google Cloud Next, business leaders discussed potential rules and how they may affect companies that have already leapt into AI.

Global bodies, including the United Nations, have sounded the alarm on generative AI and are trying to corral the increasingly mainstream tech. The tension between adequately regulating AI and not stifling innovation remains unresolved. As seen in the debate around cryptocurrency, the elephant in the room is the hesitance to impose regulations that could kill the AI business boom.

“I strongly believe that this hype and discussion around AI is somewhat different from what we have seen, for example, in the crypto space," Deutsche Bank Chief Strategy Officer of Technology and Data Christoph Rabenseifner said. "Because artificial intelligence and all the generative AI tools we are talking about today will influence more or less every aspect of business.”

Rabenseifner said he couldn't think of a job sector where generative AI would not help and make something easier, faster, or better.

"Because of that, I think regulators globally, and there are very different approaches around the globe, will not cut it off to an entirety,” he said.

Some business leaders, including Rabenseifner, see the AI arms race itself as a safeguard against harmful policies.

Companies will have to explain to the public and policymakers how AI models work and do their best to make the public comfortable with the technology, he said, observing, “We are all on the journey together."

“We'll do what it takes in order to make everyone comfortable so that we can use the technology going forward,” he added.

Echoing Rabenseifner’s call for transparency, SAP CTO Juergen Mueller pointed to platforms like Google’s Vertex AI and SynthID, which can indicate when generative AI is behind an image posted online. Google announced the launch of the new watermark technology at Google Cloud Next in San Francisco on Tuesday.

Despite this transparency, Mueller said companies should also have a backup plan in case of service failure or if they have to disable their AI tools for legal or regulatory reasons, pointing to the companies building driverless, AI-controlled vehicles.

“We need to build reliable systems,” Mueller said. “So if that service will be down for whatever reason or will not be allowed anymore, we don't want these trucks just to stand there and not be able to get [their work] done.”

"Plans are simple. If the government tells us to stop, we’ll stop,” Estée Lauder Companies Executive Vice President Gibu Thomas added. “We've been around for 75 years and want to be around for the next 75. And that will only happen if we don't compromise consumer trust and follow the applicable laws and regulations.”

Thomas highlighted the importance of being proactive in identifying potential misuse of AI technology. This proactive approach extends to Estée Lauder's principles, particularly concerning representations of beauty standards.

These principles, Thomas continued, are not just guidelines but are actively used in how the company applies technology.

“We don't want to have digital misrepresentations that are aesthetic models that give people the wrong idea about what authentic beauty means,” Thomas said. “So there are principles that we use to make sure that as we apply this technology in the ways in which we engage with our consumers and use it in our business, and we're doing it in a very thoughtful and responsible way.”

Earlier this month, a report by the Center for Countering Digital Hate accused generative AI tools of creating "harmful content," including text and images related to eating disorders.

Chatbot developers OpenAI, Google, and Stability AI defended their technology after Decrypt reported on the CCDH's findings.

Agreeing with his fellow panelists, Cohere CTO Saurabh Baji said the company’s ultimate goal is to serve its customers, adding that a hard stop on AI development for any company would matter to Cohere.

“Knowing where to draw boundaries, how to put in safeguards, and how to really have the most options for configurability—for control on the customer side—is critical,” Baji said.

Baji emphasized the importance of a partnership between the tech industry and regulators to ensure that the most up-to-date information on AI technology is available for regulatory scrutiny, noting that regulation usually lags a few steps behind technological advancements.

“We've seen this with AI in the past as well, [and] with any new technology,” Baji said. “So the risks are certainly not completely new."

Baji said that policymakers putting the brakes on AI development would be a mistake, and one he does not see happening.

"Ensuring that we are actually able to open up and show the latest and greatest or what's coming to regulators also matters,” Baji said.

For AI21 Labs CEO Ori Goshen, there is no going back when it comes to AI’s mainstream adoption.

“It's the reality,” Goshen said. “We'll see a lot of research around in the next few years."

Goshen linked interpretability to the idea of transparency, suggesting that providing more information on how these models make decisions will help gain public trust instead of treating these technologies as "black boxes," or systems that provide output without any easily understandable explanation of how they arrived at any particular outcome.

“Something which is more interpretable is probably critical for the trust we have with these models,” he said.

Wrapping up the discussion on AI policy, Google Cloud Director Aashima Gupta emphasized the need for bold but responsible development of AI technologies, while acknowledging that AI is too profound and important to go without regulatory oversight, citing its use in healthcare as an example.
