OpenAI’s ChatGPT has quickly become a friend to many coders, but for cybersecurity researchers, it is apparently not reliable enough to catch the most dangerous bugs out there.
In a recent report, web security firm Immunefi found that many security researchers have made ChatGPT part of their everyday workflow. According to its survey, about 76% of white hat researchers—those who probe systems and code for weaknesses to fix—regularly use ChatGPT, compared to just over 23% who do not.
However, the report says many researchers find ChatGPT wanting where it counts most. Chief among the concerns, Immunefi found that about 64% of respondents said ChatGPT provided “limited accuracy” in identifying security vulnerabilities, and roughly 61% said it lacked the specialized knowledge needed to identify exploits that hackers can abuse.
Jonah Michaels, communications lead at Immunefi, told Decrypt that the report shows white hats remain “surprisingly bullish” about ChatGPT’s potential, especially for educational purposes, but said his company does not share that sentiment when it comes to its own work.
“The white hats see a broader use for it,” said Michaels. “We see a more limited use of it, because we see it being used to submit essentially garbage bug reports.”
Immunefi, which specializes in bug bounty programs in the Web3 space, has banned users from submitting bug reports written with ChatGPT since the chatbot first became publicly available. One tweet the company posted included a screenshot of a prompt asking ChatGPT itself why it shouldn’t be used for bug reporting, to which the chatbot responded that its outputs “may not be accurate or relevant.”
Michaels said Immunefi immediately bans users who submit ChatGPT-based bug reports because, while such reports often look well written enough to be convincing from a “3,000-foot view,” they are typically riddled with flaws, referencing functions that simply don’t exist.
Since its release last November, ChatGPT has been dogged by the inconsistent accuracy of some of the content it produces, from false sexual assault allegations to court filings citing legal precedents that do not exist.
OpenAI warns users against blindly trusting GPT because of its propensity to provide misleading or completely inaccurate information, typically called “hallucinations.” A spokesperson for OpenAI did not respond to Decrypt’s request for comment for this story.
In the Immunefi report, the white hat community expressed the view that ChatGPT models will require more training to diagnose cyber threats or conduct audits, because they currently lack that specialized knowledge.
Michaels said the chatbot suffers from not having the right datasets today, and that developers should for now rely on manually crafted code to be on the safe side. However, he added that a day could come when ChatGPT or other generative AI tools like it can perform these tasks more reliably.
“Is it possible for ChatGPT to improve and to be specifically trained on project repositories and much more in the blockchain world? I think so,” Michaels told Decrypt. “But I don’t think I can recommend that now with how high the stakes are, and how new the field is.”