Explainer: Combatting deepfake porn with the SHIELD Act

What do Taylor Swift, Gal Gadot, and Kristen Bell have in common? Aside from their celebrity, they have all been victims of deepfake pornography. Deepfake technology manipulates media so that the original face in a video is replaced with a face of the user’s choice. While that can make for a funny e-card or Snapchat video, it has also proven severely damaging to individuals whose images were used without their consent in deepfake porn. The bipartisan SHIELD Act, first introduced in the US House of Representatives in 2019, is now being reintroduced as an amendment to the Violence Against Women Reauthorization Act of 2021 to protect victims of nonconsensual pornography, including deepfake porn.

What are deepfakes?

Named after the Reddit user who popularized them, deepfakes are forms of media that use machine learning to seamlessly superimpose one face onto another or alter audio, typically in videos, GIFs, and other non-static media. A user can create a deepfake video by feeding an artificial intelligence system images and videos of the face they would like to use, along with the original video they would like to alter. The more images fed to the model, the more accurate the superimposition.
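Most face-swap deepfake tools follow the same basic architecture: a single shared encoder learns to compress any face into a small latent code, and a separate decoder is trained per identity. The swap happens by encoding a frame of person A and decoding it with person B's decoder. A minimal, untrained numpy sketch of that data flow (all weights random and purely illustrative; no real learning happens here):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    """Random affine map standing in for a trained network layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.01

# One shared encoder compresses any 64x64 face into a 128-dim latent code.
encoder = layer(64 * 64, 128)
# One decoder per identity reconstructs a face from that shared code.
decoder_a = layer(128, 64 * 64)
decoder_b = layer(128, 64 * 64)

def swap(face_pixels, target_decoder):
    """Encode a face, then decode it as the *other* identity."""
    latent = face_pixels @ encoder        # 4096 -> 128
    return latent @ target_decoder        # 128 -> 4096

frame_of_a = rng.standard_normal(64 * 64)   # a flattened 64x64 video frame
fake = swap(frame_of_a, decoder_b)          # A's expression, B's face
print(fake.shape)  # (4096,)
```

Because the encoder is shared, it captures expression and pose; the per-identity decoders capture appearance. This is also why "the more images used to feed the machine, the more accurate the superimposition": each target decoder needs many example faces to reconstruct that identity convincingly.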

How are deepfakes harmful?

This neural network-based media manipulation started out with meme-worthy superimpositions of actor Nicolas Cage’s face on other actors’ bodies. However, the technology’s primary use soon took a dark turn with the rise of deepfake pornography. To an untrained eye, a distracted viewer, or a user who wishes to suspend reality for a few minutes, the actresses’ faces used in adult content wouldn’t seem manipulated; for all intents and purposes, the footage reads as real. It is this belief, even if only momentary, that makes deepfake porn so damaging: viewers believe they are watching real footage of the depicted woman, footage that was never consensually made or released.

These manipulated videos initially involved the faces of actresses and other celebrities, but soon became yet another platform for exerting control over women from all walks of life. At present, some 96% of deepfakes on the Internet depict nonconsensual pornography, most of it using women’s faces. The technology became so accessible that anyone with a video clip and a handful of their target’s images could create a deepfake without any technical knowledge. Given how easy deepfake porn is to create, the near-total absence of consequences for its creators, and the long-lasting personal and professional effects felt by its victims, the manipulated media has been likened to digital rape by legal academics like Boston University law professor Danielle Citron.

And the damage isn’t limited to those whose faces are superimposed. Less widely discussed but still problematic, the adult actresses who appear in the original clips typically lose credit for their work and can no longer monetize what is essentially their work product. Ultimately, they become just another interchangeable body for the user to control.

What can be done?

There are a few ways victims of deepfake porn can seek justice. Recently, technologists around the world have introduced defensive technical solutions in the form of deepfake-detection tools. University students in Nagpur, India have engineered an artificial intelligence solution that identifies manipulated videos, images, and audio at a 96% success rate. Researchers at the University at Buffalo have created software that detects manipulated media at a 94% success rate by looking for a reflection, or lack thereof, in the depicted person’s eyes. Microsoft, a larger player in the field, created a tool that examines the pixels at the edges of superimposed faces to determine whether a frame has been manipulated, and tested its efficacy against a public database of around 1,000 deepfake videos and a second, larger database provided by Facebook. In addition to these defensive solutions, individuals can consider offensive technical solutions like unlisting their social media profiles from the Internet or making their profiles private. These steps take just over a minute to implement and ensure that the individual’s images can’t be fed to an artificial intelligence system by people who don’t already have access to their photos.
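The eye-reflection heuristic described above rests on a simple physical fact: in a genuine photo, both corneas reflect the same light sources, so their highlight patterns should match, while synthesized faces often break that symmetry. A toy sketch of the idea (the threshold and the tiny synthetic "eye patches" are hypothetical stand-ins for real image crops, not the researchers' actual method):

```python
import numpy as np

def reflections_consistent(left_eye, right_eye, threshold=0.8):
    """Toy eye-highlight check: flatten the two cornea patches and
    compare their brightness patterns by Pearson correlation."""
    l = left_eye.ravel().astype(float)
    r = right_eye.ravel().astype(float)
    corr = np.corrcoef(l, r)[0, 1]
    return corr >= threshold

# A real photo: both eyes reflect the same light source in the same spot.
highlight = np.zeros((8, 8))
highlight[2:4, 2:4] = 255.0
print(reflections_consistent(highlight, highlight + 1.0))

# A manipulated face: the synthesized eye has a mismatched highlight.
other = np.zeros((8, 8))
other[5:7, 5:7] = 255.0
print(reflections_consistent(highlight, other))
```

Real detectors work on localized crops of actual photographs and learned features rather than raw correlation, but the underlying cue, internal physical consistency that generators fail to reproduce, is the same.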

There are more litigious routes individuals can take as well. In the United States, some states, including Illinois, Texas, Washington, New York, and Arkansas, have biometric privacy laws that allow residents to take private action against the nonconsensual use of their faceprints, facial mapping, or identifiable images in general. Residents of European Union member countries and of California are similarly protected from user-generated data that includes their likeness through the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), respectively.

There is also existing legislation in the 2020 National Defense Authorization Act (NDAA) that forbids the malicious use of deepfake technology. However, this $738 billion defense policy bill only addresses deepfakes related to foreign influence and weaponization. While the political aspect of deepfakes is important to our democracy, it is not what led to the explosive growth of deepfakes, nor what comprises 96% of deepfakes online.

The SHIELD Act, which proposed protections for victims of nonconsensual and revenge porn, was introduced in the House of Representatives in 2019. Though introduced as a bipartisan bill, the SHIELD Act never became law. If passed, it would have established “privacy protections to prohibit the widespread distribution of nonconsensual pornography,” given the Department of Justice a framework to enforce those protections, and, most importantly, established federal criminal liability for individuals who share nude or explicit content without consent. To make their case, a victim of nonconsensual porn would have had to prove that a reasonable person would not have considered the media a matter of public concern, and that the uploader or distributor knew the person in the image had a reasonable expectation of privacy in selectively sharing it with intended recipients. For a deepfake, compliance would mean obtaining consent from the person whose image is used, which would remove the very thing that makes the content problematic.

How effective are the available solutions?

The defensive technical solutions are not yet available to the mass market, require further fine-tuning, and, especially for the students and researchers, need a stable source of funding to complete. Once usable, it would be up to video-, GIF-, and image-hosting platforms to purchase and implement these tools. Even if the solutions were available today, platforms would have little incentive to use them. It is true that social media platforms have already published policies against certain types of deepfakes. However, because of the liability shield provided by Section 230 of the Communications Decency Act, social media and other Internet platforms are not held liable for content “published” by their users, which disincentivizes implementing detection software platform-wide. After all, the software is not free, and the content that would be flagged for removal is exactly the kind of viral media that directly contributes to these platforms’ revenue.

Unlike the defensive solutions, the offensive technical solutions are quick and easy to implement. However, they eerily mirror real-world precautions like holding keys in a “Wolverine” fashion when walking at night; they put the onus on the potential victim to always be prepared for an attack, armed with a tool that will not really help them defend themselves. Keeping in mind that over 80% of rapes are committed by an attacker known to the victim, we must also consider that deepfake porn can be created by someone who knows the victim and already has access to their photos.

The state biometric laws provide some remedies through private action or complaints to the respective state Attorney General, but those routes are tedious and can become costly. Private action under the GDPR and CCPA can be similarly limiting, as both laws protect only specific residents and apply only to businesses with specific characteristics. Virginia just enacted the Virginia Consumer Data Protection Act, but a close read eliminates it as a potential venue for relief for deepfake porn victims who reside in the state: its definition of biometric data excludes “a physical or digital photograph, [and] a video or audio recording or data generated therefrom,” language that places deepfakes squarely outside the law’s reach.

As for state and federal laws that deal with nonconsensual porn, the NDAA only regulates political deepfakes, the SHIELD Act was never enacted, and the 46 states that have enacted relevant statutes tend to use language that is inconsistent, does not fully protect individuals, or does not impose liability on the producer or distributor of the content outside of specific circumstances. In some cases, this lets deepfake and other nonconsensual porn creators continue to harm their victims without facing any consequences. If a creator makes deepfake porn for reasons other than revenge, such as money, attention, or clout, it is highly likely that their state’s nonconsensual porn statute does not apply to them. The accountability gap widens further given how few resources states have to trace anonymous usernames back to real individuals.

Where do we go from here?

It is not always necessary to consider both sides of an argument, especially when the subject is a technology that is almost entirely used to harm, embarrass, and control other people. Deepfakes do have creative, nonharmful uses, like producing believable movie scenes on a smaller portion of a production’s budget. However, we must reiterate that 96% of deepfakes online are nonconsensual pornography. That means other topics of concern, like misinformation and foreign interference, and nonharmful media, like Nicolas Cage memes and fan-submitted Star Wars scene corrections, comprise only 4% of deepfakes. Nor should potential regulation be seen as abridging the First Amendment right to free speech: precedent in defamation law and federal cyberharassment statutes supports the argument that criminalizing nonconsensual porn, including deepfake porn, would not violate the First Amendment when that kind of “free speech” has damaging, provable outcomes.

The technology itself is not to blame. People will find a way to use technology for their own purposes, whether for the public good or otherwise. The issue is how to regulate its use when deepfake technology is so new, harmful deepfake content is so widespread, and the creators of that content are so difficult to hold accountable. 

The Violence Against Women Act (VAWA) is a federal law that protects and provides remedies for women who face violence from domestic abuse, sexual assault, and stalking. Its 2021 reauthorization passed in the House, and is awaiting review from the Senate. In March 2021, seven Representatives from both sides of the aisle resubmitted the SHIELD Act as an amendment to VAWA that would criminalize nonconsensual pornography and impose penalties on those who share such content. Considering the bipartisan nature of the initial Act and current amendment, the current political makeup of Congress, and the post-MeToo landscape, there is hope yet for victims of deepfake pornography.

JURIST contributor Anokhy Desai is a law student at the University of Pittsburgh and an Information Security Policy master’s student at Carnegie Mellon University.