In early September 2023, U.S. Securities and Exchange Commission Chair Gary Gensler said that deepfakes pose a “real risk” to markets. Deepfakes are fake videos or images generated by artificial intelligence (AI) that appear at first glance to be authentic. They can be made to represent high-profile investors, and even regulators like Gensler, seeming to show these influential figures saying things likely to sway financial markets. The creators of such deepfakes stand to benefit when they successfully move the market with this deception.

While the potential for market turmoil is significant, the threat of deepfakes extends well beyond that. Deepfakers have created falsified videos of celebrities, politicians, and many others, sometimes for fun but frequently to spread misinformation and worse. Global accounting firm KPMG has pointed to a sharp increase in scams targeting businesses of all kinds with deepfake materials. These and other risks have sent cybersecurity researchers on a frantic search for ways to stop, or at least slow down, malicious actors armed with these powerful tools.

Perhaps the greatest negative impact of deepfakes so far, however, has been on the individuals targeted by the technology. Extortion scams are proliferating across a host of areas and with various strategies. A significant proportion of these scams involve using deepfake technology to create sexually explicit images or video of unwilling targets. Scammers then demand payment from the real-life target, threatening to disseminate the fake content if that person does not comply. But the threats associated with deepfakes and explicit content extend much further.


For many working in cybersecurity, social justice, privacy law, and other fields, deepfake pornography is one of the greatest threats to emerge from the AI era. By 2019, 96% of all deepfakes online were pornographic. Below, we take a closer look.

A History of Image Manipulation

Deepfakes are not the first technology to make it possible to manipulate images of others without their consent. Photoshop has long been ubiquitous, and the practice of falsifying images dates back decades before that software was invented. Deepfake technology itself extends back more than 25 years, although only in the last several years has rapidly developing AI dramatically reduced the time it takes to create a deepfake while producing results that are increasingly difficult for the average observer to detect.

Did you know?

As of February 2023, only three U.S. states had laws specifically addressing deepfake pornographic content.

The ease of misusing deepfake technology to create pornographic content has dramatically exacerbated the problem; a growing number of the tools used to create deepfakes are freely available online. A search online reveals plentiful stories about individuals who have been targeted in this way. Many of the people targeted by deepfake pornographers are female streaming personalities who do not create or share explicit content.

Earlier this year, prominent streamer QTCinderella discovered that her likeness had been used in AI-generated explicit content without her knowledge or consent. Another well-known streamer, Atrioc, admitted to having viewed the content and shared information about the website where it was posted. In the time since, QTCinderella has worked with a prominent esports lawyer to have the website taken down, and Atrioc has issued multiple statements indicating his intention to work toward removing this type of content more broadly.


Issues of Consent

Many have argued that deepfake pornography is the latest iteration of non-consensual sexualization: part of a long trend, but better positioned for widespread dissemination owing to both the power of deepfake technology and its ease of use. By this reasoning, someone who creates deepfake explicit images of another person without that person’s consent is committing an act of sexual violence against them.

Stories from survivors of these attacks, who are almost entirely women, support this classification. It is already well documented that victims of deepfake porn regularly experience humiliation, dehumanization, fear, anxiety, and more. The ramifications can be physical as well, with many accounts of hospital visits, trauma responses, and even suicidal ideation spurred by deepfakes. Victims have lost jobs, livelihoods, friends, families, and more, all because a deepfake that seemed real was shared.

For many, the problems of deepfake porn represent perhaps the worst manifestation of a much larger problem with AI in general: because generative AI is trained on data that contains a host of biases, prejudices, and generalizations, the content these AI systems produce shares those negative traits. It has long been recognized, for example, that AI tools are often predisposed to creating racist content. Similarly, generative AI is susceptible to producing highly sexualized content even on its own. When that tendency is combined with malicious actors seeking to harm others, or simply putting their own gratification over the privacy and well-being of others, the situation becomes quite dangerous.

With some deepfake content, there is a double violation of consent. One way of creating deepfake explicit content is to take pre-existing pornographic material and superimpose the face, or other elements of the likeness, of an unwitting victim onto it. Besides harming that victim, the deepfake also violates the privacy of the original adult performer, whose consent was never sought either. The performer’s work is duplicated and distributed without compensation, recognition, or attribution. It has often been argued that adult performers in these contexts are exploited, effectively digitally decapitated, and further objectified in an industry in which such practices are already rampant.

Some, however, argue that consent is irrelevant when it comes to deepfakes of all kinds, including pornographic content. Those making this argument frequently claim that individuals do not, in fact, own their own likenesses. “I can take a photograph of you and do anything I want with it, so why can’t I use this new technology to effectively do the same thing?” is a common refrain.

Laws and Regulations

As with much of the AI space, deepfake technology is developing much more quickly than the laws that govern it. As of February 2023, only three U.S. states had laws specifically addressing deepfake pornographic content. Most companies developing these tools have done little to limit their use for generating explicit content, although there are exceptions. DALL-E, OpenAI’s popular image-generating AI system, includes a number of protections: OpenAI limited nude images in the tool’s training data, users are prohibited from entering certain requests, and outputs are scanned before being shown to the user. But opponents of deepfake porn say these protections are insufficient and that determined bad actors can easily find workarounds.
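To make that layered approach concrete, here is a minimal sketch of how a prompt filter and an output scan might fit together. It is purely illustrative and built on assumptions: the blocked-term list, the threshold, and the scoring function are placeholders for this example, not OpenAI’s actual implementation.

```python
from dataclasses import dataclass

# Assumed example terms only; a real policy list would be far more extensive.
BLOCKED_TERMS = {"nude", "explicit"}


@dataclass
class ModerationResult:
    allowed: bool
    reason: str


def check_prompt(prompt: str) -> ModerationResult:
    """Layer 1: refuse prompts that request prohibited content."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return ModerationResult(False, f"prompt contains blocked term: {term}")
    return ModerationResult(True, "prompt passed")


def check_output(nsfw_score: float, threshold: float = 0.5) -> ModerationResult:
    """Layer 2: scan the generated image before showing it to the user.

    In practice nsfw_score would come from an image-safety classifier; it is
    passed in directly here to keep the sketch self-contained.
    """
    if nsfw_score >= threshold:
        return ModerationResult(False, f"output flagged (score={nsfw_score:.2f})")
    return ModerationResult(True, "output passed")


if __name__ == "__main__":
    print(check_prompt("a watercolor painting of a lighthouse"))
    print(check_prompt("an explicit image of a celebrity"))
    print(check_output(0.92))
```

Even a simple two-layer design like this illustrates the critics’ point: each layer can be bypassed individually, which is why determined bad actors look for workarounds.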

The U.K. is an example of a country that has worked quickly to criminalize aspects of the burgeoning deepfake porn industry; in recent months it has taken steps to make sharing deepfake intimate images illegal. The U.S. federal government has passed no such legislation, which means that, for now, most victims of deepfake porn have no clear recourse to have the content removed or to recover damages.

Besides the obvious issues of consent and sexual violence, the harm done to an adult performer whose work is used in the creation of deepfake explicit content could provide another avenue for addressing the problem from a legal standpoint. After all, if a deepfake creator is using an adult performer’s footage without consent, attribution, or compensation, it could be argued that the creator is stealing the performer’s work and exploiting that person’s labor.


Deepfake pornography bears a resemblance to another recent phenomenon involving non-consensual explicit content: revenge pornography. The ways that legislators and companies have worked to combat that phenomenon could point to a way forward in the battle against deepfake porn as well. As of 2020, 48 states and Washington, D.C. had criminalized revenge pornography. Major tech companies, including Meta Platforms and Google, have enacted policies to clamp down on those distributing or hosting revenge porn. To be sure, revenge porn remains a significant problem in the U.S. and abroad, but the widespread effort to curb its spread could indicate that similar efforts will be made against deepfakes as well.

One promising tool in the battle against AI-generated porn is AI itself. Technology already exists to detect digitally manipulated images with 96% accuracy. If, the thinking goes, this technology could be put to work scanning, identifying, and ultimately helping to remove AI-generated explicit content, it could dramatically reduce the distribution of this material.
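As a rough illustration of that scan-and-flag idea, the sketch below runs a batch of uploaded images through a detector and collects those that cross a review threshold. The detect_manipulation_score function is a hypothetical stand-in for whichever detection model a platform adopts, and the file names and threshold are assumptions made for the example.

```python
from pathlib import Path
from typing import Iterable, List


def detect_manipulation_score(image_path: Path) -> float:
    """Stand-in for a trained manipulation detector; returns a score in [0, 1].

    A real system would load a detection model and run inference on the image
    here; the stub value keeps the sketch self-contained and runnable.
    """
    return 0.0


def flag_suspect_images(paths: Iterable[Path], threshold: float = 0.8) -> List[Path]:
    """Return the images whose manipulation score crosses the review threshold."""
    return [p for p in paths if detect_manipulation_score(p) >= threshold]


if __name__ == "__main__":
    uploads = [Path("example_upload_1.jpg"), Path("example_upload_2.jpg")]
    print(flag_suspect_images(uploads))
```

In a real deployment, flagged images would go to human review or an automated takedown queue rather than being deleted outright, since false positives carry their own costs.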
