Expanding the Use of Artificial Intelligence in Photographic Eyewitness Lineups | MTLR

Expanding the Use of Artificial Intelligence in Photographic Eyewitness Lineups

Eyewitness identification plays a critical role in the United States justice system in all stages of the criminal investigation process, from charging decisions to in-court identifications to convictions. Yet reliance on eyewitness identification is risky. Identification of the wrong defendant can lead to inefficiency at best and a wrongful conviction at worst. According to the Innocence Project, “eyewitness misidentification contributes to an overwhelming majority of wrongful convictions that have been overturned by post-conviction DNA testing.”

There are two main types of eyewitness identification procedures: corporeal lineups and photographic lineups. In a corporeal lineup, the witness picks the perpetrator out of an in-person group of people, while in a photographic lineup the witness views a set of photographs, usually of six people, to identify the suspect. In both procedures, the police present the witness with the suspected perpetrator alongside a number of “fillers,” or randomly selected people with characteristics similar to the suspect’s, to ensure accurate witness identification.

While both corporeal and photographic lineups pose the potential for incorrect identification, corporeal lineups afford defendants more legal protection. In United States v. Wade and Gilbert v. California, the Supreme Court created the Wade-Gilbert rule, holding that post-indictment corporeal lineups are a critical stage in a defendant’s criminal prosecution. As a result, the defendant is entitled to counsel under the Sixth Amendment, and denial of this right renders the resulting identification inadmissible at trial. In contrast, in United States v. Ash, the Court held that a photographic lineup is not a critical stage in a criminal prosecution, and denial of counsel in a photographic lineup is therefore permitted under the Sixth Amendment. As a result, potential misidentifications are especially risky for defendants facing photographic eyewitness lineups.

Nonetheless, criminal defendants maintain due process rights with respect to both types of identification. In a series of landmark cases, the Court constructed a three-step test to determine whether a photographic lineup is so “impermissibly suggestive” as to violate a defendant’s due process rights. First, under Perry v. New Hampshire, the Court asks whether the eyewitness identification procedure was arranged by the police or another state actor. If so, the Court next asks, under Stovall v. Denno, whether the procedure was impermissibly suggestive. Finally, under Manson v. Brathwaite, the Court asks whether the procedure was so impermissibly suggestive as to create a substantial chance of misidentification. If all three factors are met, the identification is inadmissible. If, however, the procedure was suggestive but not impermissibly so, or was impermissibly suggestive but created only a potential, rather than substantial, chance of misidentification, the factors are not met and the identification is admissible.

While these cases purport to safeguard defendants’ due process rights, they leave space in which an impermissibly suggestive identification procedure is admissible at trial, and the defendant is unable to challenge a conviction through the Due Process Clause. Accordingly, it is imperative that police departments utilize fair and unbiased procedures in conducting photographic lineups, lest the lineup produce a wrongful conviction. Still, studies have shown that mistaken eyewitness identifications repeatedly lead to wrongful convictions, often due to suggestive police practices.

This is not to claim that police are always intentionally suggestive when conducting eyewitness identification lineups. Creating an airtight photo array can be a difficult and tedious task. According to the Handbook of Eyewitness Psychology, neither the suspect nor any of the filler members of a lineup should stand out, and each filler should be an equally good alternative to the suspect. As the Handbook explains, “When the suspect stands out in the lineup relative to the other lineup members, uncertain eyewitnesses may be cued to identify the suspect based simply on his distinctiveness rather than a true match between their memory of the culprit.”

While some traits, such as the suspect’s weight and race, are straightforward to match, others can be more difficult to replicate. If, for example, a suspect has a particular scar, tattoo, birthmark, or other distinctive physical characteristic, it can be difficult to find fillers who share it. Additionally, all six photographs in the lineup should have similar backgrounds, lighting, clothing, facial hair, complexions, and hairstyles. The “relative judgment” process, in which a witness compares the faces and picks the photograph closest to their memory of the perpetrator, makes these factors especially important. While constructing an accurate and nonsuggestive photographic lineup is a difficult task, failure comes at a high price: not only does it risk a wrongful conviction, it also means the actual perpetrator walks free.

Given the importance and difficulty of constructing accurate photographic arrays, utilizing images generated by Artificial Intelligence (AI) has the potential to cure many of the current defects resulting in eyewitness bias and suggestion. However, studies on the usage of AI-generated images in photographic lineups are limited given how new the technology is, and the Supreme Court has yet to address the issue. Additionally, there are potential downsides to using AI-generated photographs in lineups, including an “uncanny valley” effect and potential racial biases. While AI-generated photographic lineups are a promising development for eyewitness identification and police departments should begin exploring their usage, further research should be conducted before implementing AI lineups on a large scale.

Early AI-generated human images were glossy, overedited model-like figures, often with a few extra fingers, but AI image generation has rapidly improved. By embracing human imperfections, AI programs like Google’s Nano Banana Pro and Adobe Firefly have quickly learned to generate more realistic images. For example, Emily Pellegrini, the completely AI-generated “influencer,” recently went viral for tricking many followers, including professional athletes, entrepreneurs, and public figures, into believing she was a real human. Additionally, images and videos ranging from boxer Jake Paul applying makeup to the Pope wearing Balenciaga have likewise recently fooled the internet due to their hyper-realistic nature.

While there is ample criticism directed towards public use of these AI-generated images, for police departments constructing photo arrays, they present a promising opportunity. As a 2024 study in the Scientific Reports journal explains, using AI images as “filler” in photo arrays allows officers to create images directly from the description of a culprit using text-to-image generation. These AI-generated arrays could save police departments valuable time in creating photo lineups; however, more importantly, they could also increase fairness and reduce bias for suspected defendants. Going forward, if a police department is unable to aggregate a sufficient number of photos closely resembling a defendant, the department could generate a realistic AI filler image instead of settling for less realistic real photographs and potentially biasing eyewitnesses.

At the same time, AI-generated images protect the identities of the real people currently used as “fillers” in photographic lineups. By drawing on millions of facial features and inputs, AI image generation creates “new” faces, reducing the risk that real people’s faces appear during eyewitness identifications. While research on the topic remains limited, two of the key current studies on AI photo lineups, one by Greenspan & Bergold and one by Bell, Menne, Mayer & Buchner, suggest positive results, opening the door for further near-term exploration of this novel AI usage.

Nonetheless, the usage of AI-generated images in photographic lineups is not without risk. If the AI technology underperforms in a specific photographic array, the result could be five obvious filler images, leaving the identity of the suspect immediately clear to any observer. This risk is heightened by the “uncanny valley” phenomenon, in which almost-human images create a unique sense of revulsion and discomfort in observers, quickly alerting them to the images’ falsity. If this occurs in a real eyewitness identification procedure, the attempted use of AI images to decrease bias could quickly backfire, leading to a more suggestive and unjust identification procedure.

Additionally, due to the limited initial research on the topic, a full-scale rollout of an AI-focused procedure would be unwise at this time. For example, in the Greenspan & Bergold study, the researchers found that participants largely failed to detect differences between real and AI-generated photographs in sample lineups. However, this study focused solely on white males, leaving open the need to conduct similar studies on members of other racial groups. Prior studies of Facial Recognition Technology (FRT) have shown certain facial recognition tools to be less accurate for demographic groups other than white males, exacerbating this concern. Given the severe consequence of incorrect convictions resulting from poorly generated AI images, additional research should be conducted before a mass integration of AI technology into police procedures occurs.

The use of AI-generated images in photographic eyewitness identifications is a promising new technique with the potential for increased police efficiency and a reduction in wrongful convictions. Especially as the accuracy of AI-generated images increases, police departments should begin to consider future use of AI fillers in their photographic lineups. However, at the same time, given the potential downsides of poor-quality AI images and the lack of current research on the topic, further studies should be conducted before implementing AI identifications on a large scale across the United States.

Elaine Mione is an Associate Editor for the Michigan Technology Law Review
