As cyberattacks become more sophisticated and complex, tech companies are turning to artificial intelligence to help detect and prevent attacks in real time. But some cybersecurity experts are skeptical about its capabilities.
On Tuesday, global software giant Microsoft announced the launch of Security Copilot, a new tool that uses generative AI. Generative AI is a type of artificial intelligence that uses large language models trained on vast datasets to generate content such as text, images, and video. ChatGPT is the best-known example.
Microsoft 365 Copilot, an AI engine built to power a suite of Office apps, was launched earlier this month. Security Copilot, the first specialized Copilot tool, will allow IT and security administrators to rapidly analyze vast amounts of data and spot signs of a cyber threat.
WOW!!! What an exciting announcement 😲 Microsoft Security Copilot aka. GAME CHANGER! https://t.co/HzLTww5ynJ #securitycopilot #microsoftsecure #cybersecurity #GPT4
— Heike Ritter (@HeikeRitter) March 28, 2023
“In a world where there are 1,287 password attacks per second, fragmented tools and infrastructure have not been enough to stop attackers,” Microsoft said in a press release. “And although attacks have increased 67% over the past five years, the security industry has not been able to hire enough cyberrisk professionals to keep pace.”
Like other generative AI implementations, Security Copilot is triggered by a query or prompt from a user and responds using the "latest large language model capabilities." Microsoft says its tool is "unique to a security use-case."
"Our cyber-trained model adds a learning system to create and tune new skills [to] help catch what other approaches might miss and augment an analyst’s work," Microsoft explained. "In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture."
But Microsoft itself, as well as outside computer security experts, cautioned that the tool will take time to get up to speed.
“AI is not yet advanced enough to detect flaws in business logic or smart contracts. This is because AI is based on training data, which it uses to learn and adapt,” Steve Walbroehl, co-founder and CTO at blockchain security firm Halborn, told Decrypt in an interview. “Obtaining sufficient training data can be difficult, and AI may not be able to fully replace the human mind in identifying security vulnerabilities.”
Microsoft is asking for patience: as Copilot learns from user interactions, the company will adjust its responses to make them more coherent, relevant, and useful.
“Security Copilot doesn’t always get everything right. AI-generated content can contain mistakes,” the company said. “But Security Copilot is a closed-loop learning system, which means it’s continually learning from users and allowing them to give explicit feedback with the feedback feature built directly into the tool.”