By Jason Nelson
ChatGPT developer OpenAI said it teamed up with top investor Microsoft to disrupt five “state-affiliated” groups using its AI tools for malicious cyber activity.
The groups, OpenAI said on Wednesday, included two China-affiliated actors, Charcoal Typhoon and Salmon Typhoon, along with Iran-affiliated Crimson Sandstorm, North Korea-affiliated Emerald Sleet, and Russia-affiliated Forest Blizzard.
The groups attempted to use GPT-4 for research on companies and cybersecurity tools, code debugging, script generation, phishing campaigns, translating technical papers, evading malware detection, and researching satellite communication and radar technology, OpenAI said. The accounts were terminated after they were identified.
“We have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities,” OpenAI said in a blog post, which also shared the firm's “approach to detect and disrupt such actors in order to promote information sharing and transparency regarding their activities.”
OpenAI and Microsoft did not immediately respond to Decrypt’s request for comment.
“The vast majority of people use our systems to help improve their daily lives, from virtual tutors for students to apps that can transcribe the world for people who are visually impaired,” OpenAI said. “As is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits.”
Microsoft Threat Intelligence, a Microsoft spokesperson told Decrypt, uses a security graph to identify, cluster, and track threats, enabling swift detection of emerging dangers.
“Across our global visibility, Microsoft leverages trillions of signals to empower our threat intelligence efforts,” the spokesperson said. “These signals contain relevant security information, which is collected, processed, and enriched to form an intelligent security graph to enable our defenders to protect customers.”
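The spokesperson's description stays high level, but the underlying idea, correlating enriched signals into a graph and clustering them so defenders can track each suspected actor as a single object, is simple to sketch. The toy Python below is purely illustrative and is not Microsoft's actual system; the `networkx` library choice, the signal fields, and the cluster-by-shared-indicator rule are all assumptions made for this example.

```python
# Purely illustrative sketch of a "security graph" workflow.
# Raw signals (accounts, IPs, file hashes) become nodes, shared
# indicators become edges, and connected components approximate
# threat clusters that defenders can track as single objects.
import networkx as nx

# Hypothetical enriched signals; field names are invented for illustration.
signals = [
    {"id": "sig-1", "indicator": "203.0.113.7",  "source": "api-abuse"},
    {"id": "sig-2", "indicator": "203.0.113.7",  "source": "phishing"},
    {"id": "sig-3", "indicator": "badfile-hash", "source": "endpoint"},
    {"id": "sig-4", "indicator": "badfile-hash", "source": "email"},
]

graph = nx.Graph()
for s in signals:
    graph.add_node(s["id"], **s)
    graph.add_node(s["indicator"], kind="indicator")
    graph.add_edge(s["id"], s["indicator"])  # link signal to its shared indicator

# Each connected component groups signals that share infrastructure,
# so two signals touching the same IP land in the same cluster.
for cluster in nx.connected_components(graph):
    print(sorted(cluster))
```

In a real pipeline the edges would come from far richer enrichment, such as infrastructure overlap or behavioral fingerprints, but even this toy shows why a graph structure makes “identify, cluster, and track” a natural workflow.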
While OpenAI disrupted these operations, the company acknowledged that it cannot stop every misuse of its tools.
Following a surge of AI-generated deepfakes and scams after the launch of ChatGPT, policymakers stepped up scrutiny of generative AI developers. In September, OpenAI announced an initiative to beef up the cybersecurity surrounding its AI models, including enlisting third-party “red teams” to probe for holes in its security.
Despite OpenAI's investment in cybersecurity and its safeguards designed to stop ChatGPT from generating malicious, racist, or otherwise hazardous responses, hackers have figured out ways to jailbreak the program and make the chatbot do just that. In October, researchers at Brown University discovered that using less common languages like Zulu and Gaelic could bypass ChatGPT's restrictions.
OpenAI emphasized the need to stay ahead of evolving threats and highlighted its approach to securing its AI models, including transparency, collaboration with other AI developers, and learning from real-world cyber attacks.
Last week, over 200 organizations, including OpenAI, Microsoft, Anthropic, and Google, joined the Biden Administration's U.S. AI Safety Institute Consortium (AISIC), organized under the AI Safety Institute and aimed at developing artificial intelligence safely, fighting AI-generated deepfakes, and addressing cybersecurity concerns.
“By continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else,” OpenAI said.
Edited by Ryan Ozawa. Updated to add a comment from Microsoft.