In this article, Type Media Center fellow Eyal Press discusses the controversial use of A.I. facial recognition technology in law enforcement.
- Artificial Racial Profiling: In this article, Eyal Press emphasizes that when artificial intelligence is used for facial recognition, a racial bias arises. He references a study by the National Institute of Standards and Technology, which found that “many facial-recognition systems falsely identified Black and Asian faces between ten and a hundred times more frequently than Caucasian ones.”
- Automation Bias: Press discusses “automation bias,” a term coined by researchers to describe people’s reluctance to question technology they do not understand — a tendency that has significantly benefited A.I.
- Pulling from a Stacked Deck: The article references a critical flaw in law enforcement’s use of A.I. facial recognition, raised by Clare Garvie, an attorney with the National Association of Criminal Defense Lawyers: people with a pre-existing criminal record are more likely to be identified by A.I. As Garvie explains, “the more times you engage with the police, the more times you’re picked up, the more tickets you’ve bought to the misidentification lottery.”
- Blinded by Facial Recognition: A central theme Press returns to throughout the article is that, by relying so heavily on A.I. in their investigations, law enforcement officers ignore evidence that may clearly contradict the technology’s findings.
Type Media Center’s Note
This article by our fellow Eyal Press reflects Type Media Center’s dedication to nurturing independent journalism that not only informs but strives for societal change.