Looking to bolster the security of its popular AI chatbot, OpenAI is turning to outside cybersecurity and penetration experts, also known as "red teams," to find holes in the AI platform.

The company says it is looking for experts across various fields, including cognitive and computer science, economics, healthcare, and cybersecurity. The aim, OpenAI says, is to improve the safety and ethics of AI models.

The open invitation comes as the US Federal Trade Commission launches an investigation into OpenAI’s data collection and security practices, and policymakers and corporations are questioning how safe using ChatGPT is.

"[It's] crowdsourcing volunteers to jump in and do fun security stuff," Halborn Co-founder & CISO Steven Walbroehl told Decrypt. "It's a networking opportunity, and a chance to be [on] the frontline of tech."

AD

"Hackers—the best ones—like to hack the newest emerging tech," Walbroehl added.

To sweeten the deal, OpenAI says red team members will be compensated, and no prior experience with AI is necessary—only a willingness to contribute diverse perspectives.

“We’re announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI’s models to join our efforts,” OpenAI wrote. “We are looking for experts from various fields to collaborate with us in rigorously evaluating and red-teaming our AI models.”

Red teams refer to cybersecurity professionals who are experts at attacking—also known as penetration testing or pen-testing—systems and exposing vulnerabilities. In contrast, blue teams describe cybersecurity professionals who defend systems against attacks.

AD

“Beyond joining the network, there are other collaborative opportunities to contribute to AI safety,” OpenAI continued. “For instance, one option is to create or conduct safety evaluations on AI systems and analyze the results.”

Launched in 2015, OpenAI entered the public eye late last year with the public launch of ChatGPT and the more advanced GPT-4 in March, taking the tech world by storm and ushering generative AI into the mainstream.

In July, OpenAI joined Google, Microsoft, and others in pledging to commit to developing safe and secure AI tools.

While generative AI tools like ChatGPT have revolutionized how people create content and consume information, AI chatbots have not been without controversy, drawing claims of bias, racism, lying (hallucinating), and lacking transparency regarding how and where user data is stored.

Concerns over user privacy led several countries, including Italy, Russia, China, North Korea, Cuba, Iran, and Syria, to implement bans on using ChatGPT within their borders. In response, OpenAI updated ChatGPT to include a delete chat history function to boost user privacy.

The Red Team program is the latest play by OpenAI to attract top security professionals to help evaluate its technology. In June, OpenAI pledged $1 million towards cybersecurity measures and initiatives that use artificial intelligence.

While the company said researchers are not restricted from publishing their findings or pursuing other opportunities, OpenAI noted that members of the program should be aware that involvement in red teaming and other projects are often subject to Non-Disclosure Agreements (NDAs) or "must remain confidential for an indefinite period.”

“We encourage creativity and experimentation in evaluating AI systems,” OpenAI concluded. “Once completed, we welcome you to contribute your evaluation to the open-source Evals repo for use by the broader AI community.”

AD

OpenAI did not immediately return Decrypt’s request for comment.

Stay on top of crypto news, get daily updates in your inbox.