Say goodbye to online privacy. Clearview AI, a New York-based artificial intelligence company, has built a database of over three billion images scraped from social media to power its facial recognition software, raising concerns that AI is too advanced to be contained by regulation.

Six hundred law enforcement agencies, including the FBI and the Department of Homeland Security, use Clearview’s facial recognition software to match pictures of suspects with their profiles on social networks such as Facebook, Twitter, and Instagram, according to a New York Times exposé published on Saturday.

The Clearview scandal is part of a wider problem

Leaders in the blockchain industry, a technology which champions privacy, have spoken out against the revelations.

“We are facing a privacy crisis,” Tor Bair, head of growth at Enigma, told Decrypt, adding, “With new technologies creating unprecedented threats, we need new technological solutions to help protect individuals, their data, and their identities.

“Regulation alone won’t be enough—and as Clearview shows, the agencies we would trust to protect us may already be breaking that trust,” he said.

Clearview isn’t the only app raising questions about government surveillance. After reports that Apple dropped plans to encrypt its users’ iCloud backups under pressure from the FBI, Telegram CEO Pavel Durov said, on Telegram, “iCloud is now officially a surveillance tool. Apps that are relying on it to store your private messages (such as WhatsApp) are part of the problem.” Telegram is a messaging app popular in the crypto community for its privacy features.

But Clearview, with a surveillance reach broad enough to include photos of anyone you know, and all the dystopian implications that brings, has touched a nerve. Jo O’Reilly, deputy editor of ProPrivacy.com, told Decrypt the technology heralds the “death of privacy” and is complicit in creating an “authoritarian surveillance state.”

“While we are often given the impression that the West is more protective of privacy values, the reality is that Western governments are quickly emulating those totalitarian practices as a means of exerting content control over the populace,” she added.


The app does have some benefits: it has been used to solve murder and child sexual exploitation cases, according to the report. But it also hands sweeping surveillance power to the government, sacrificing privacy in the process.

What can we learn from Clearview AI?

While the app has shocked those who thought their old Facebook photos were gone forever, it is not just the everyday person who needs to think twice about the implications. Taylor Monahan, CEO of MyCrypto, said that app developers need to take responsibility for the tools they build.

“Beyond the huge incentives to harm other humans for your personal and/or financial benefit, there is this pervasive notion in tech that the building of a thing is separate from the outcome of that thing,” she told Decrypt.

Monahan suggested that app founders should instill the correct values in the engineers who actually build the tool. “If they can foresee and understand and feel accountable for the outcome of the things they build, we may have a shot at a brighter future,” she said.

Living in a dystopian future

Since the Cambridge Analytica scandal, users have come to realize that data is a commodity and that they essentially pay with their own data to use sites like Facebook. But with Clearview AI taking this data and using it for mass surveillance, things are even worse than we realized.

“The dystopian future many fear is already here. Clearview AI is only one company that is doing this publicly, but it is unlikely they are the only ones doing it. The worst part is, there is no real protection,” Fernando Gutierrez, CMO of Dash Core Group, said.

In fact, this technological determinism might be inevitable. Borys Pikalov, head of analytics at blockchain company Stobox, told Decrypt, “In a few years we will be living in a world where everything is connected to the Internet and sensors are everywhere. It is unreasonable to expect that the collection of data could be avoided.”

Some companies, like Twitter, are considering using decentralized technology, such as blockchain, to help fight the erosion of online privacy. Last month, Twitter CEO Jack Dorsey announced plans to create an open standard for decentralized social media, the idea being that users would be able to control their own data and choose who they share it with.


Like any new technology, however, blockchain has its own risks. Blockchain analytics companies monitor activity on public blockchains and use that data to trace individuals’ financial dealings.

For LocalMonero co-founder Alex, who keeps his last name private, such tools “pose an existential threat to the supposed anonymity of transparent blockchains.”

“Perhaps it’s only a matter of time before there’ll be a public website where you can paste a Bitcoin address and get all the real life identities linked to it through chain analysis and machine learning AI,” he said.
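The kind of linkage Alex describes does not require exotic technology. Below is a minimal, purely illustrative Python sketch of one well-documented chain-analysis technique, the common-input-ownership heuristic, which assumes that addresses spent together in the same transaction belong to the same entity. The transactions, addresses, and tagged identity are all hypothetical.

```python
from collections import defaultdict

# Hypothetical, simplified data: each transaction lists the input addresses
# that were spent together in a single on-chain transaction.
transactions = [
    {"inputs": ["1AliceAddr01", "1AliceAddr02"]},
    {"inputs": ["1AliceAddr02", "1AliceAddr03"]},
    {"inputs": ["1BobAddr0001"]},
]

# Hypothetical off-chain tag, e.g. an address someone posted on a public forum.
known_identities = {"1AliceAddr01": "alice@example.com"}

# Union-find structure for clustering addresses.
parent = {}

def find(addr):
    """Return the cluster root for an address, registering it if new."""
    parent.setdefault(addr, addr)
    while parent[addr] != addr:
        parent[addr] = parent[parent[addr]]  # path compression
        addr = parent[addr]
    return addr

def union(a, b):
    """Merge the clusters containing addresses a and b."""
    parent[find(a)] = find(b)

# Common-input-ownership heuristic: addresses spent in the same transaction
# are assumed to share an owner, so merge them into one cluster.
for tx in transactions:
    inputs = tx["inputs"]
    for addr in inputs:
        find(addr)  # make sure every address is registered
    for addr in inputs[1:]:
        union(inputs[0], addr)

# Group addresses by cluster and attach any known real-world identity.
clusters = defaultdict(set)
for addr in list(parent):
    clusters[find(addr)].add(addr)

for addrs in clusters.values():
    tags = {known_identities[a] for a in addrs if a in known_identities}
    print(sorted(addrs), "->", tags or "unknown")
```

Analytics firms combine heuristics like this with far larger datasets and off-chain information such as exchange records and scraped forum posts, which is precisely the capability Alex is warning about.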

On the other hand, blockchain companies like the Electric Coin Company are building privacy protections directly into blockchains. But it’s a difficult task, and it might already be too late.
