This Glitchy, Error-Prone Tool Could Get You Deported—Even If You're a US Citizen

Juan Carlos Lopez-Gomez, despite his US citizenship and Social Security card, was arrested on April 16 on the unfounded suspicion that he was an “unauthorized alien.” Immigration and Customs Enforcement kept him in county jail for 30 hours “based on biometric confirmation of his identity”—an obvious mistake by facial recognition technology. Another US citizen, Jensy Machado, was held at gunpoint and handcuffed by ICE agents. He was another victim of mistaken identity, after someone else gave his home address on a deportation order. This is the reality of immigration policing in 2025: Arrest first, verify later.
That risk only grows as ICE shreds due process safeguards, as citizens and noncitizens alike face mounting threats of mistaken identity, and as immigration policing agencies increasingly embrace error-prone technology, especially facial recognition. Last month, it was revealed that Customs and Border Protection requested pitches from tech firms to expand its use of an especially error-prone facial recognition technology—the same kind of technology wrongly used to arrest and jail Lopez-Gomez. ICE already has nearly $9 million in contracts with Clearview AI, a facial recognition company with white nationalist ties that was at one point the private facial recognition system most used by federal agencies. When reckless policing is combined with powerful but inaccurate dragnet tools, the result will inevitably be more stories like Lopez-Gomez's and Machado's.
Studies have shown that facial recognition technology is disproportionately likely to misidentify people of color, especially Black women. And with the recent rapid increase in ICE activity, facial recognition puts more and more people at risk of being arbitrarily caught in ICE's dragnet without the due process rights needed to prove their legal standing. Even for American citizens who have “nothing to hide,” simply looking like the wrong person can get you jailed or even deported.
While facial recognition's mistakes are dangerous, its potential for abuse when it works as intended is even scarier. For example, facial recognition makes ICE a more powerful weapon for Donald Trump's retribution. The president himself admits that he is using immigration enforcement to target people for their political opinions and that he seeks to deport people regardless of citizenship. Under an administration uncommonly willing to ignore legal procedures and judicial orders, a perfectly accurate facial recognition system could be the most dangerous possibility of all: Federal agents could run facial recognition on photos and footage of protests to identify each of the president's perceived enemies, who could then be arrested and even deported without due process.
And the more facial recognition technology expands across our daily lives, the more dangerous it becomes. By working with local law enforcement and private companies, including by sharing facial recognition technology, ICE is growing its ability to round people up beyond what it already can do. This deputization of surveillance infrastructure comes in many forms: Local police departments integrate facial recognition into their body cameras, landlords use facial recognition instead of a key to admit or deny tenants, and stadiums use facial recognition for security. Even New York public schools used facial recognition on their security camera footage until a recent moratorium. Across the country, other states and municipalities, including Boston, San Francisco, Portland, and Vermont, have imposed their own regulations on facial recognition. Bans on the technology in schools specifically have been passed in Florida and await the governor's signature in Colorado. Any facial recognition system, no matter its intended use, is at inherent risk of being handed over to ICE for indiscriminate or politically retaliatory deportations.
At state and local levels, however, there is a way to fight back. In New York, for example, a package of “Ban the Scan” bills in the State Legislature and New York City Council would curb the use of facial recognition: by police (S5609/A1045), by landlords (A6363 and Intro. 425), by public accommodations (A6211 and Intro. 217), and by making permanent the ban on its use in schools (S3827/A6720). And while New York, the most monitored city in the country, has the opportunity to lead the way, legislative protections against the risks of facial recognition must proliferate nationwide to fully protect our communities. The only way to completely prevent Trump—or any future authoritarian—from using this tech to do his and ICE's bidding is to prevent the use of facial recognition in the first place.
Because facial recognition has immense power to target political enemies and indiscriminately deport any of us, and because the technology's racially biased inaccuracy is already causing wrongful arrests, banning it is an essential step toward preventing authoritarian overreach. The New York State Legislature can and should pass the Ban the Scan package of facial recognition bills, and the rest of the country should follow suit in protecting communities from this force multiplier for tyranny.
