Guest Commentary: How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now

When used by police and immigration enforcement, biometric surveillance technology can perpetuate an already dangerous racist system.

By Ashley Del Villar and Myaisha Hayes

Face recognition technology may sound futuristic, or perhaps too abstract to seem harmful. But we are already living in a reality in which face recognition and other forms of biometric surveillance pervade our daily lives. These technologies threaten our privacy and free speech rights and, when used by police and immigration enforcement, serve as yet another dangerous system to abuse Black and Brown people on a massive scale. Big Tech companies are profiting off these abuses because they are the ones developing and selling face recognition to government agencies. And it’s our communities — particularly communities of color — that face the harmful consequences.

The good news is that there is a national movement against face recognition that is gaining momentum every day. Recently, a coalition of grassroots organizations from across the country called on Congress to take immediate action to stop government use of dangerous face recognition. Here in Washington state, a place where companies like Amazon, Microsoft, and Palantir loom large, we know firsthand how tech companies collaborate with immigration and law enforcement agencies to build large-scale surveillance tools that facilitate and fuel racist systems that harm both immigrants and U.S. citizens.

Face recognition technology is racist, from how it was built to how it is used. It’s been used by police departments to wrongfully arrest Black men, by ICE and CBP to target and track immigrant families, and by the FBI to surveil Black Lives Matter demonstrators exercising their First Amendment rights. Face recognition massively expands the government’s power to track our movements and target people based on their race, religion, political affiliation, or speech — and while everyone’s rights are at stake, Black and Brown people are harmed the most when this racist technology collides with our racist systems.

Our law and immigration enforcement systems are rooted in this country’s racist history, including slavery, and were created to uphold white supremacy. This is why it’s often those who sit at the margins — folks of color, immigrants, the poor, disabled, women, and trans or gender nonconforming people — who face systemic violence and brutality. Face recognition technology, which was created by those with the most power in society, will only exacerbate this legacy and pattern of state-sanctioned violence against our communities. We’re already seeing this dynamic at work.

In Detroit, police use of face recognition led to the wrongful arrest of Robert Williams, a Black man who was arrested at his home in front of his family. Face recognition’s proven track record of inaccuracy when used against people of color makes us even more likely to be targeted, arrested, or detained. But even if this technology were perfectly accurate, it would still harm communities of color by facilitating systems that are already racist.

The Department of Homeland Security and its sub-agencies ICE and CBP have already committed horrific abuses. With face recognition, they could potentially pinpoint the location of immigrants across the country, marking them for detention and deportation on an unprecedented scale. In 2017, for example, DHS, ICE, and the Department of Health and Human Services used technology supplied by Palantir to tag, track, locate, and arrest 400 people in an operation that targeted the family members and caregivers of unaccompanied migrant children. Face recognition would only expand the power of agencies like ICE to target and tear apart communities of color throughout the country.

Congress is starting to respond. Last week, Sens. Edward Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley reintroduced the Facial Recognition and Biometric Technology Moratorium Act, an important bill that responds to the imminent threat of this dangerous surveillance technology. This bill comes as grassroots-powered coalitions continue to pass bans on face recognition across the country. King County, Wash. became the latest jurisdiction to ban face recognition after a unanimous vote by its county council. Big Tech companies — most recently Amazon — have also been forced to make commitments to stop selling face recognition to law enforcement. These wins are not an accident; they are the result of years of local organizing and activism from the communities most impacted.

There’s no doubt these victories are important, but any moratorium is still a temporary solution. Our communities have been clear: We want new systems to keep us safe — systems not rooted in slavery and racism. We need Congress to not only stop face recognition technology, but permanently divest from our racist punishment systems and reinvest in our communities. Until the federal government takes action, our communities will remain in danger.

Big Tech companies like Microsoft are already lobbying for weak regulations that protect their corporate interests and effectively greenlight these dangerous systems. In addition to stopping government acquisition, use, and funding of face recognition technology for state and local face surveillance, the federal government must support local grassroots-powered progress by rejecting Big Tech efforts to preempt state and local bans and moratoria. We can’t let Big Tech stamp out our hard-won advancements.

We are at a critical moment. The fight against face recognition comes alongside a nationwide reckoning with racism and policing led by the Black Lives Matter movement. We must take this opportunity to recognize the role of surveillance in exacerbating the inherent racism of our law and immigration enforcement systems. We must stop face and other biometric surveillance and confront these systemic harms. Only then will we be on the path to equity and justice.

Ashley Del Villar, Digital Privacy Campaign Coordinator, La Resistencia

Myaisha Hayes, Campaign Strategies Director, MediaJustice

Disclaimer: the views expressed by guest writers are strictly those of the author and may not reflect the views of the Vanguard, its editor, or its editorial board.

8 thoughts on “Guest Commentary: How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now”

  1. Chris Griffith

    I can just see it now.

    Cop #1: “Hmm, that guy who owns Joe’s Used Tire Emporium has called in tips on six wanted felons over the past four months. Thanks to him we’ve caught two armed robbers, a child abuser, a rapist, and two burglars.”

    Cop #2: “You know … he may be running facial recognition software on his camera system. So what do you think … should we talk to the DA and get a search warrant for his business? A dangerous man like him who uses illegal software to help us catch the bad guys belongs in jail.”

    Cop #1 looks at Cop #2, then both of them burst out laughing and go back to their business.

    1. Tia Will

      Thanks for the smile, and consider:

      Cop #3 walks in: “Maybe we should also look at the number of POC arrested and held without bail on the basis of faulty facial recognition who were subsequently found to be factually innocent.” All stop laughing.

      Is facial recognition the next “infallible” lineup identification? Is it the next “infallible” fingerprint ID? Is it the next “He must be guilty or he wouldn’t have confessed”? I don’t know the answers. But I do know caution is warranted.

      1. David Greenwald

        It does seem that we at least figured out the pitfalls of this much quicker than we did with lineup identification. Although, reading some books from the ’30s and ’50s on wrongful convictions, I am amazed that we actually knew it was a problem much sooner; most just didn’t pay attention.

  2. Bill Marshall

    But I do know caution is warranted.

    True story… rest of the two posts are “talk-talk”, ‘what if’ns’.

    Every tool has its place… but ‘ya gotta know when it is inappropriate to use a hammer to drive a screw, or use a screwdriver to drive a nail… but there are times when you need both a hammer and a screwdriver to get the job done.  Often other tools as well…

    If the “tools” for identifying a perp, and convicting them, include facial recognition, ‘line-ups’, confessions, DNA, motive, opportunity (including lack of alibi), means, etc., I believe that if two, three, or more of those lead to the same conclusion, there is no problem with an arrest and maybe a conviction… each has its own error rate, but in conjunction they could well be the basis for no sane “reasonable doubt” at trial… two should be sufficient for ‘probable cause’ for arrest… one should be sufficient for detaining and questioning… the requirements for arrest, and for holding over for trial, are far less than for conviction…

  3. Alan Miller

    I ask you this: if facial recognition didn’t have disparate impacts, would it still be a bad idea? If yes, can’t we all fight the bad idea without having to add:

    Our law and immigration enforcement systems are rooted in this country’s racist history, including slavery, and were created to uphold white supremacy. This is why it’s often those who sit at the margins — folks of color, immigrants, the poor, disabled, women, and trans or gender nonconforming people — who face systemic violence and brutality.

    Is the above a template placed in every progressive article on a subject that has a disparate impact? I’ve read the same sentences with the same dog-whistle terms with the same wording on Covid-19, police stops, housing, etc., etc., etc.

    As well, I don’t think there will ever be an end to the use of a technology that is found on my Apple laptop, and everyone else’s. It will just stop those who follow the law. Like global warming, it’s too late – the strategy has to be how to adapt. Like when you commit a crime or protest, how to hide your face from cameras. Like how to spray paint your license plate with magic spray to fool the license plate readers 😐

    We must stop face and other biometric surveillance and confront these systemic harms.

    Why are people complaining about government surveillance when you do it to yourselves? You fools all put your pictures and those of your children on Facebook, software which tags people by facial recognition and automatically figures out who affiliates with whom. When the government comes after aboriginal Jewish trans bicycle riders who do origami, we’ve already told them how to find us and all those we associate with. Stop using Facebook!

    Maybe binge watch “Person of Interest” for a dystopian fantasy of when this all goes too far.  Oh yeah, the lead characters were white.  Nevermind.

    1. David Greenwald

      “If facial recognition didn’t have disparate impacts, would it still be a bad idea?”

      Maybe but one of the clear problems is with identifying – wrongly identifying people of color. So it seems like segmenting off that issue is not a good idea on this issue.

      1. Bill Marshall

        Maybe but one of the clear problems is with identifying – wrongly identifying people of color. So it seems like segmenting off that issue is not a good idea on this issue.

        So, perfectly OK to use on whites, but not “people of color”…

        Second sentence is very strange… given the context of the article… whatever…

        1. David Greenwald

          The point isn’t whether it’s okay or not to use it on white people; the point is that there is a specific problem with using it on people of color.
