OpenAI says it's investigating after a hacker claimed to have swiped login credentials for 20 million of the AI firm's user accounts—and put them up for sale on a dark web forum.
The pseudonymous breacher posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being offered for sale “for just a few dollars.”
Image: Gbhackers
“I have over 20 million access codes for OpenAI accounts,” the hacker, posting under the alias emirking, wrote Thursday, according to a translated screenshot. “If you're interested, reach out—this is a goldmine, and Jesus agrees.”
If legitimate, this would be the third major security incident for the AI company since the release of ChatGPT to the public. Last year, a hacker got access to the company’s internal Slack messaging system. According to The New York Times, the hacker “stole details about the design of the company’s A.I. technologies.”
In another earlier incident, OpenAI plugged a hole that let researchers prompt ChatGPT to reveal internal data by asking it to repeat a word over and over, indefinitely; the company classified the technique as spamming the service and a violation of its terms of service.
This time, however, security researchers aren’t even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he contacted every email address in the purported sample and found at least two to be invalid: "No evidence this alleged OpenAI breach is legitimate. Contacted every email address from the purported sample of login credentials. At least 2 addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach sparked concerns due to OpenAI's massive user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, commercial projects, and other sensitive data.
Until the investigation produces a final report, some preventive measures are advisable:
Go to the “Settings” tab, log out of all connected devices, and enable two-factor authentication (2FA). This makes it far harder for an attacker to access the account, even if the login and password have been compromised.
If your bank supports it, create a virtual card number for managing OpenAI subscriptions, making fraudulent charges easier to spot and stop.
Keep an eye on the conversations stored in the chatbot’s memory, and stay alert for phishing attempts. OpenAI does not ask users for personal information out of the blue, and any payment update is always handled through the official OpenAI.com site.
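Checking whether a password has already surfaced in known breach corpora can be automated without ever sending the password itself. As one illustrative sketch (not an OpenAI tool), the Have I Been Pwned "Pwned Passwords" range API works on a k-anonymity model: only the first five hex characters of the password's SHA-1 hash are sent to the service, and the suffix comparison happens locally. The helper names below are our own; the canned response stands in for a live HTTP call:

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the range API and the 35-char suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_seen_count(suffix: str, range_response: str) -> int:
    """Given the plain-text body returned by
    https://api.pwnedpasswords.com/range/<prefix>
    (lines of 'SUFFIX:COUNT'), return how many times this suffix
    appears in breach data, or 0 if it is absent."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count.strip())
    return 0

# Demo with a canned response instead of a live network request:
prefix, suffix = sha1_prefix_suffix("password")
canned = f"{suffix}:1000\nABCDEF0123456789ABCDEF0123456789ABC:12"
print(prefix, suffix_seen_count(suffix, canned))
```

A nonzero count means the password is circulating in breach dumps and should be retired, regardless of whether this particular OpenAI claim turns out to be genuine.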