When Microsoft announced its new Recall feature during its annual developer conference on Monday, the news made waves beyond the AI industry. Cybersecurity and privacy experts also took notice—and they were concerned.

During the presentation, Microsoft said new Copilot-enabled computers will remember and later be able to find anything displayed on screen—including emails, websites, and applications—through AI-indexed snapshots stored on-device. Reaction to the announcement was mixed, with some security experts calling the feature spyware and a natural target for cybercriminals.
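In rough terms, the pipeline Microsoft described—capture the screen, extract the text, index it for search—can be sketched in a few lines. The snippet below is a hypothetical illustration built from off-the-shelf parts (Pillow, pytesseract, and SQLite's FTS5 full-text index), not Microsoft's actual implementation; the file and function names are invented for this example.

```python
# Hypothetical sketch of an on-device "recall" pipeline -- NOT Microsoft's
# implementation. Requires Pillow, pytesseract, and a local Tesseract
# install; all names (snapshots.db, take_snapshot, recall) are invented.
import sqlite3
import time

import pytesseract          # OCR wrapper around the Tesseract engine
from PIL import ImageGrab   # screen capture (Windows/macOS)

db = sqlite3.connect("snapshots.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(ts, text)")

def take_snapshot() -> None:
    """Capture the screen, OCR it, and add the text to the local index."""
    image = ImageGrab.grab()
    text = pytesseract.image_to_string(image)
    db.execute("INSERT INTO snaps VALUES (?, ?)", (str(time.time()), text))
    db.commit()

def recall(query: str) -> list:
    """Full-text search across everything ever captured on screen."""
    return db.execute(
        "SELECT ts, snippet(snaps, 1, '[', ']', '…', 8) "
        "FROM snaps WHERE snaps MATCH ?",
        (query,),
    ).fetchall()
```

Even this toy version makes the stakes concrete: the index is a plaintext transcript of everything shown on screen, which is precisely the attack surface critics are describing.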

“This is the company that wants to record literally everything you ever do on your computer,” Geometric Intelligence founder and CEO Gary Marcus wrote on Twitter. “If you don’t think Microsoft Recall, local or no, will be one of the biggest cyber targets in history, you aren’t paying attention.”

“I’m so glad Microsoft is out here helping me recall why I don’t use Windows if I can help it, and when I do, I disable every ‘smart’ feature they add,” Emily Young, a writer at Linus Tech Tips, wrote.

“Back in my day, we called this spyware,” wrote software engineer and crypto critic Molly White.

With Recall, everything that appears on screen is captured—including passwords, if the app or form doesn't automatically obscure them.

Although the premise is simple, implementing Recall requires a lot of care, cybersecurity expert Katelyn Bowden told Decrypt.

“Recall appears to function like existing search engines, with an expanded scope of what it records,” Bowden said, noting that Recall only functions on PCs with specific hardware configurations. “You already have a browser history and file contents index on your PC, and this is functionally no different. And unlike many of those files, Microsoft says Recall data is encrypted.

“If Microsoft were to start off-loading processing of the training set, or collecting data for use in recommendation engines or off-machine models, user privacy could be compromised,” she noted.
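Microsoft's encryption claim can be made concrete. The sketch below uses the Python cryptography library's Fernet scheme purely as a stand-in—Recall's actual encryption design has not been published—to show what encrypting snapshot text at rest might look like:

```python
# Hypothetical illustration of encryption at rest -- Fernet is a stand-in;
# Recall's actual scheme is not public. In practice the key would be
# sealed by hardware (e.g., a TPM), not generated inline like this.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # assume: derived from device hardware
box = Fernet(key)

token = box.encrypt(b"everything shown on screen, OCR'd and indexed")
assert box.decrypt(token) == b"everything shown on screen, OCR'd and indexed"
```

Encryption at rest only helps, though, if the key stays on the device and decryption is limited to the logged-in user; the scenario Bowden warns about, where processing or data moves off-machine, would undercut it.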

Bowden is a member of the hacker collective known as the Cult of the Dead Cow and also serves as the chief marketing officer at the open-source, privacy-focused Veilid Project. She said more transparency around how AI tools are used is essential.

“I always feel more comfortable when companies who develop AI products are transparent about what datasets were used to train the software,” Bowden said. “Microsoft’s lack of transparency surrounding that concerns me. If people don’t know what data was used to train the model, they shouldn’t submit to it.”

With OpenAI, Google, and Microsoft pushing hard to bring generative AI products to market, several projects and groups are also offering decentralized and open-source alternatives, including Venice AI, FLock, PolkaBot AI, and the Superintelligence Alliance.

Ethereum co-founder Vitalik Buterin wrote earlier today that open-source AI is the best way to avoid a future where “most human thought becomes read and mediated by a few central servers controlled by a few people.”

“People should assume that everything they write to OpenAI is going to them and that they have it forever,” Venice AI founder and CEO Erik Voorhees previously told Decrypt. “The only way to resolve that is by using a service where the information does not go to a central repository at all in the first place.”

Edited by Ryan Ozawa.
