
Understanding Deepfakes: Technology, Threats, and Legal Reform

Deepfakes use artificial intelligence to generate or alter an image or video, typically of a person, and most often a well-known public figure. A deepfake typically performs a “face swap,” mapping the subject’s face, expressions, body movements, and voice onto another person in a source video. Traditionally, creating a deepfake was time- and resource-intensive, requiring the gradual training of a neural network on hours of real video footage of the subject.

However, advances in artificial intelligence have made deepfakes more realistic and easier to create. Today, deepfakes are commonly created using a type of neural network called a generative adversarial network (GAN). A GAN consists of two competing algorithms: a generator and a discriminator. The generator learns from media samples of the subject and produces a synthetic image or video, while the discriminator compares that output against real samples and flags inconsistencies. Feedback from the discriminator is then used to refine the generator’s output, gradually eliminating the discrepancies it detects.

This process repeats until the discriminator can no longer find discrepancies between a real image of the subject and the machine-generated version. The resulting deepfake is highly realistic and hard to distinguish from a genuine image or video. For example, in a recent study of two thousand residents of the United States and the United Kingdom, the vast majority of participants could not distinguish between real and deepfake content. According to one survey, 70% of respondents said they were not confident in their ability to tell a real voice from a deepfaked one.
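To make the adversarial process concrete, the sketch below shows a deliberately simplified generator/discriminator training loop written in PyTorch. The network sizes, learning rates, and the training_step helper are illustrative assumptions for toy 64×64 images, not the architecture of any real deepfake tool.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100      # size of the random noise vector fed to the generator
IMG_PIXELS = 64 * 64  # toy 64x64 grayscale images, flattened to one vector

# Generator: turns random noise into a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores an image as real (close to 1) or generated (close to 0).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One round of the adversarial game on a batch of flattened real images."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator: its loss is lowest
    #    when the discriminator labels its output as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Stand-in for a batch of 32 "real" face crops, scaled to the generator's
# output range of [-1, 1]; repeating this step over many real batches is
# the iterative refinement described above.
training_step(torch.rand(32, IMG_PIXELS) * 2 - 1)
```

Real face-swapping systems use much larger convolutional networks and substantial footage of the subject, but the core dynamic is the same: training continues until the discriminator can no longer reliably separate real samples from generated ones.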

Unfortunately, deepfakes are not merely experimental or made for humorous, harmless purposes. They have been used for a range of troubling ends, including election interference and the spread of misinformation, scams and false advertising, and the creation of AI-generated sexual content.

For example, in 2023, a deepfaked video of Tom Hanks appeared to show him promoting a dental plan, prompting Hanks to clarify that he had nothing to do with it. Steve Harvey—host of the popular “Family Feud” television show—was deepfaked in scam videos claiming that he had received $6,400 in government funds. And in January 2024, deepfaked robocalls mimicking the voice of then-President Biden told New Hampshire residents not to vote in the state primary.

Current remedies for victims of deepfakes include defamation and right of publicity lawsuits, but both have significant limitations. Defamation suits brought by public figures require the plaintiff to show that the defendant acted with “actual malice,” a high bar to meet. Right of publicity claims require the plaintiff to show that the defendant used their identity for a commercial purpose, so deepfakes made for non-commercial purposes—such as election fraud—can escape liability altogether.

The need for additional legal tools against deepfakes has prompted legislators to propose laws that specifically address their creation and use. The No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act (No AI FRAUD Act)—introduced in January 2024—would create a federal intellectual property right in individual characteristics such as voice and likeness.

Under the No AI FRAUD Act, creating or distributing a deepfaked audiovisual representation of someone’s likeness or voice without their consent would violate the statute. Moreover, making technological tools or devices whose primary purpose is to create deepfakes would also constitute a violation. If statutory harm occurs, victims would be entitled to remedies ranging from $5,000 to $50,000 per violation, plus punitive damages and clawback of any profits derived from the deepfake. Statutory harms include financial injury, severe emotional distress, and deception of the public or a court of law. Notably, any deepfake that depicts an individual in a sexually explicit manner inflicts per se statutory harm.

Legislation targeting deepfakes may be difficult to enforce in practice. Tracking down the identities of deepfake creators or distributors on the Internet could be prohibitively time-consuming or costly, especially given the sheer number of AI-generated images and videos in circulation. In the meantime, deepfake detection technologies under development may offer a promising complement to legal remedies.

While the effectiveness of legislation combating the abuse of deepfakes remains to be seen, the No AI FRAUD Act is a step in the right direction and acknowledges the profound risks posed by deepfakes. As deepfake and artificial intelligence technology continues to evolve, so too should the laws that govern their use. Without such safeguards, the line between truth and misinformation may become dangerously blurred.

Matthew Chang is an Associate Editor on the Michigan Technology Law Review.
