The right of publicity protects individuals from commercial misappropriation of their name, likeness, and other aspects of identity. Roughly 35 U.S. states recognize this right through statute or case law, and claims for violation of the right of publicity are most commonly brought in New York or California.
This right has traditionally been invoked in cases involving the unauthorized use of a celebrity’s image in advertisements or merchandise. In a notable case, Midler v. Ford Motor Co., Ford hired a sound-alike singer to imitate Bette Midler’s voice in a commercial after she declined to participate. The Ninth Circuit ruled in Midler’s favor, emphasizing that a performer’s distinctive voice is part of her identity and cannot be commercially exploited without consent. As technology advances, however, new challenges are emerging in protecting individuals’ identities.
While current laws provide some protection, they were largely designed for traditional media and are ill-equipped to handle AI-generated content. More comprehensive laws are needed to keep pace with evolving technology; specifically, a federal right of publicity should be established to resolve inconsistencies among state laws. Deepfakes pose a significant threat to the current system and to public figures’ ability to control the commercial use of their likenesses.
In general, to establish a cause of action for violation of the right of publicity, an individual must show: (1) the validity of the individual’s right of publicity, and (2) that the defendant has infringed that right. The first element requires showing that the aspect of identity at issue, such as a name, image, or persona, is distinctive or widely recognizable. To prove the second element, a plaintiff usually offers evidence that the defendant used that identity in advertising, marketing, or the sale of goods or services. Some states extend the right beyond a person’s death, and others fold it into broader privacy rights.
As artificial intelligence has grown in sophistication, “deepfakes,” a pressing issue well before the release of ChatGPT, have proliferated. These images and videos are created by a form of artificial intelligence called “deep learning,” which enables hyper-realistic replication of a person’s appearance and voice, producing media that is often indistinguishable from reality.
After a steady drip of deepfakes depicting politicians and actors over the past half-decade, the technology now raises significant legal and ethical concerns about the publicity rights of those in the public eye. Deepfakes enable the replication of an artist’s voice or face without consent, inviting misleading content, fraudulent endorsements, and reputational damage. For example, in 2021 multiple deepfakes of Tom Cruise went viral on TikTok. While these videos were generally lighthearted (the fake Cruise performed coin tricks and sang songs), their believability highlights the risk of more malicious deepfake tactics.
Some have already raised these concerns before Congress. In July 2024, Senators Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis introduced the “Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act.” The bill aims to protect the “voice and visual likeness of all individuals from unauthorized computer-generated recreations from generative artificial intelligence (AI) and other technologies.” It was reintroduced in April 2025, earning the support of Google, which owns YouTube, and the Recording Industry Association of America. The NO FAKES Act would establish a national digital replication right; violations would include the public display, distribution, transmission, or communication of a person’s digitally simulated identity.
Actors and groups representing their interests have also taken steps to mitigate the risks posed by deepfake technology. SAG-AFTRA, the labor union representing actors and recording artists, has advocated for stronger contractual protections to prevent AI-generated likenesses from being exploited. Two new California laws, AB 2602 and AB 1836, codified SAG-AFTRA’s demands: AB 2602 requires a performer’s explicit, informed consent before a digital replica of the performer can be used, and AB 1836 extends similar protections to deceased performers by requiring the consent of their estates.
Courts will also need to contend with the First Amendment implications of restricting deepfakes. While the Supreme Court in Zacchini v. Scripps-Howard Broadcasting Co. held that the First Amendment does not provide blanket immunity when a performer’s likeness is appropriated commercially, many courts have read that holding narrowly. As a result, the constitutional boundary between the right of publicity and the First Amendment remains unsettled, a problem that will only grow more complicated as courts confront AI-generated content. Unauthorized deepfakes used for commercial gain may well violate the right of publicity, given precedent recognizing violations through virtual creations and drawings. Deepfakes used in satire, parody, or political commentary present a harder question: courts have historically protected these forms of expression under the First Amendment, so whether a deepfake falls into one of those protected categories will often determine whether a right of publicity claim can succeed.
Moving forward, the right of publicity needs to evolve to address the unique challenges posed by deepfake technology. State-level protections offer some support, but establishing a federal right of publicity, as the NO FAKES Act proposes, would provide uniform standards and stronger protections for artists and public figures against unauthorized AI-generated likenesses. Such a law could explicitly prohibit the commercial use of a person’s image, voice, or likeness without consent, whether or not artificial intelligence was used to create it.
Additionally, for-profit companies could implement a licensing framework modeled on the performing rights organization system used in music. Celebrities and public figures could license their name, image, and likeness (NIL) rights for a lump sum or through a royalty-based arrangement, with license terms spelling out permissible and prohibited uses. Authorized users would then obtain access to the NIL by paying licensing fees.
As artificial intelligence rapidly improves, our legal system must adapt to protect public figures from deepfakes, which range from those that harm reputations to those that pose significant safety risks. Existing laws, designed for traditional media, must be retooled for the digital age, if not replaced outright by a federal right of publicity. Without change, public figures may find themselves powerless to defend their likenesses against a novel form of exploitation.
Hailey Kozuchowski is an Associate Editor for the Michigan Technology Law Review.