Facial recognition technology continues to face an onslaught of complications and backlash.
In February, Clearview AI, Inc., a facial recognition software company, experienced a data breach. The stolen data included its entire customer list, the number of searches each customer had made, and the number of accounts each customer had set up. Clearview’s customers consist mostly of law enforcement agencies. Critics, such as U.S. Senators Ed Markey and Ron Wyden, see the breach as yet another indication that the drawbacks of Clearview’s facial recognition technology greatly outweigh any potential benefits, and as further reason to impose legislation regulating the use of such technology. Numerous critics have accused the company of violating the consumer privacy policies of the websites it scrapes. Clearview maintains a database of over three billion photographs drawn from various social media platforms, and although the breach did not reach this database, major players in the tech industry have previously voiced concerns about Clearview’s disregard for consumer privacy, issuing cease-and-desist letters demanding that Clearview remove their images.
In addition to the Commercial Facial Recognition Privacy Act of 2019, U.S. Senators Jeff Merkley and Cory Booker have introduced a Senate bill, the Ethical Use of Facial Recognition Act, to create a moratorium on the government’s use of facial recognition technology until a Congressional commission passes further regulations. Merkley worries this technology will create a dystopian “police state that tracks us everywhere we go.” Booker also challenged the technology’s tendency to misidentify and disproportionately target women and minorities. The bill still allows law enforcement to use facial recognition technology, but it places the ultimate authority over whether it may do so with the judiciary by requiring law enforcement to seek a warrant before using the technology. This approach seems to reconcile, at least in part, beliefs about the legitimate uses of facial recognition in law enforcement with the need to prevent rampant, unregulated, and unrestricted exploitation of the technology.
UCLA garnered intense backlash after announcing that it would consider deploying facial recognition technology on its campus. UCLA planned to use the technology to screen for individuals who had been barred from accessing campus grounds. However, it ultimately halted its plans to move forward with the program. The university cited the same concern voiced by critics of Clearview’s software: that any potential benefits are outweighed by the student body’s concerns that the technology is socially undesirable or infringes on constitutional rights.
Civil liberties groups—including the ACLU, the Electronic Frontier Foundation, and the Liberty Coalition—have signed an open letter supporting demands by student groups at institutions nationwide that their schools not use facial recognition technology. Student groups argue that the use of facial recognition technology on campus raises concerns about privacy violations, racial profiling, and racial bias. The civil liberties groups fear that “[e]xposing students and educators to facial recognition profoundly limits their ability to study, research, and express freely without fear of official retaliation,” demonstrating yet another problem that could ensue from the unregulated use of facial recognition technology.
There has also been no shortage of legal challenges to the use of facial recognition technology. A class action suit filed in February claims that Clearview violated Illinois biometrics laws by collecting, storing, and using biometric information without written consent. This suit follows an earlier suit also filed in Illinois, which claimed Clearview’s app was a greed-based “‘insidious encroachment’ on civil liberties” because it collected data without consumer consent or cause to believe any person in the database had committed any wrongdoing.
What does this all mean? Facial recognition technology undoubtedly has substantial benefits. We use it every day to unlock our iPhones. Retail stores like Saks and Macy’s use it to understand our spending habits and customer service experiences. Taylor Swift even used it to identify stalkers at her concerts. However, such boundless capabilities could pave the way to detrimental infringements of privacy, especially given Clearview’s reluctance to disclose to whom it has sold its technology, and they underscore a real need to impose meaningful regulation on this and other advanced A.I. technologies. Prominent tech CEOs such as Elon Musk of Tesla and Sundar Pichai of Alphabet and Google have backed the idea of increased regulation. Though the exact scope of such legislation is currently unclear, some form of legislation appears inevitable given public criticism of the technology, multiple hearings in the House of Representatives, and staunch legal opposition.
* Julia Deng is the Symposium Editor on the Michigan Technology Law Review.