
Is Seeing Still Believing? Deepfakes and Their Future in the Law

Video manipulation has been around since before Forrest Gump showed Tom Hanks meeting President John F. Kennedy 31 years after his assassination, but a more sinister use of editing technology has recently proliferated. The term “deepfake” refers to a video or audio clip doctored using deep-learning artificial intelligence to depict an event that never actually happened. President Trump retweeted a video of House Speaker Nancy Pelosi appearing to stutter incoherently during a public address, and a doctored video of Parkland shooting survivor and advocate Emma Gonzalez “ripping up the Constitution” recently went viral. These videos are becoming increasingly realistic and alarming.


While Hollywood has used image- and video-altering technologies for many years, the ability to create such videos has been democratized at an alarming rate: the technology is now within reach of anyone with access to a computer. The dangerous possibilities became apparent when an anonymous Reddit user began posting realistic-looking doctored videos of celebrities engaged in various sexual acts. The technology is available in a free cell phone app called FakeApp, which works by feeding photos or videos of a “target” into the app and then using “deep learning” artificial intelligence to combine the face of the target with the chosen video. FakeApp has since been followed by Zao, a free face-swapping deepfake app that went viral in China.
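To make the mechanics concrete, the sketch below illustrates the shared-encoder, two-decoder autoencoder design commonly associated with FakeApp-style face swapping. It is a minimal illustration, not FakeApp's actual code: the layer sizes, image resolution, and training loop are assumptions chosen for brevity.

```python
# A minimal sketch (PyTorch) of the shared-encoder / per-identity-decoder
# autoencoder design widely attributed to early face-swap tools like FakeApp.
# All layer sizes and the 64x64 resolution are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent pose/expression code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Renders a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns features common to both faces; each decoder
# learns to render those features as one specific person.
encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A (the "target")
decoder_b = Decoder()  # trained only on faces of person B (the source video)

loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

# One training step, with random tensors standing in for real face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
        + loss_fn(decoder_b(encoder(faces_b)), faces_b))
opt.zero_grad()
loss.backward()
opt.step()

# The "swap": encode a frame of person B, then decode with person A's
# decoder, yielding person A's face with person B's pose and expression.
swapped = decoder_a(encoder(faces_b))
```

Because the encoder is trained on both identities, it learns a face representation that transfers between them; swapping decoders at inference time is what produces the deepfake. Real tools add face alignment, masking, and blending steps omitted here.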

This technology has the potential to be extremely harmful for several reasons. First, it has been used to perpetrate revenge porn, and deepfake forums are full of requests for help producing such videos. Second, there are serious foreign policy implications: videos can be fabricated in which soldiers commit atrocities or world leaders engage in acts that could trigger real-world military responses. Third, there is massive potential for election insecurity. In 2016, Russian state-sponsored disinformation groups succeeded in deepening social divisions in the United States, and it is easy to imagine the danger if a damaging deepfake were released just hours or days before an important election, leaving insufficient time to confirm or debunk its authenticity and adequately disseminate that information. Finally, widespread distrust of videos, images, and audio clips could itself be incredibly damaging. In the current social and political environment, video's role as a “silent witness” is one of the few remaining ways partisans and other differently-situated individuals can agree on a set of facts, in legal contexts and otherwise. Video evidence alone may become insufficient, and a public highly distrustful of videos might have perceived the Donald Trump “Access Hollywood” tape entirely differently.

Those considering the legal implications of deepfakes offer three main responses. Some argue that we do not need additional laws to address their consequences, others call for greater responsibility by the social media platforms involved in their spread, and still others look to state laws targeting specific uses of deepfakes as the solution.

First, some scholars say that existing laws are sufficient to mitigate the consequences of deepfakes. They argue that criminal laws such as those barring extortion and harassment can handle the most egregious cases, and that “false light” invasion-of-privacy torts fill the remaining gaps. Additional existing legal options include an intentional infliction of emotional distress (“IIED”) tort claim, the right of publicity, and copyright infringement. Unfortunately, IIED claims are rarely successful, and copyright infringement claims apply only when either the image of the target or the source material onto which it is superimposed is protected by copyright. It has also been suggested that section 43(a) of the Lanham Act can protect against deepfakes, as they often constitute a “false or misleading representation of fact.”

Section 230 of the Communications Decency Act (CDA) provides social media companies and other interactive platforms, along with their users, immunity from liability for information created or provided by others. A second response to deepfakes calls for greater responsibility by these platforms, in direct tension with Section 230. The section could be modified to induce platforms to do more, for example by making companies liable for harmful information distributed through their platforms unless they make “reasonable efforts” to detect and remove it. Similar laws exist in Europe, such as the 2018 German law imposing stiff fines on social media companies that fail to remove content determined to be racist or threatening within 24 hours of its being reported. This option raises censorship concerns and forces us to ask whether we trust private companies to determine what content should be removed: these companies might draft policies too narrow in scope to adequately protect victims, or so broad that private companies end up infringing the right to free speech. Given the track record of companies like Facebook, Mark Zuckerberg may be right that we should leave these determinations up to the government.

Finally, many states are passing laws that increase criminal penalties for specific uses of deepfakes. Virginia updated its revenge pornography law to include “a falsely created videographic or still image,” and Texas passed a deepfake law specifically to protect elections. There is debate about the effectiveness of these laws, given their limited scope and the immediacy of the harms that accompany such videos. While increased criminal penalties may usefully deter American hackers, they will likely not stave off nefarious foreign, non-state actors looking to destabilize our political and public arena.

Those who believe our existing laws are sufficient underestimate the importance of treating deceptive videos as the international weapons they are. As with many complications arising from technology, perception, and globalization, the burden will likely be borne by many: proactive moderation by social media platforms, regulation and criminal penalties for offenders, and increased media literacy among all consumers will each be necessary to mitigate the harm.

*Brenna Gibbs is an Associate Editor on the Michigan Technology Law Review.
