DETROIT (LifeSiteNews) — The head of the Detroit Police Department is working on changing the policies surrounding the use of facial recognition software after the alleged wrongful arrest and detainment of a pregnant Detroit woman earlier this year. The case highlights one of the myriad dangers of an increased reliance on artificial intelligence.
Detroit Police Chief James White announced the plan for the changes after 32-year-old Porcha Woodruff was arrested at 7:50 a.m. February 16 on charges of robbery and carjacking, the Associated Press reported. Woodruff was eight months pregnant with her third child at the time.
“My two children had to witness their mother being arrested,” she said.
Woodruff said her children “stood there crying as I was brought away,” and that she worried the stress of the arrest and her subsequent hours in jail could have been harmful or even fatal for her unborn baby.
Police had identified Woodruff as a suspect after using facial recognition technology to track down possible suspects in a January 29 carjacking case.
The officer assigned to the case, LaShauntia Oliver with the Detroit Police Department’s (DPD) Commercial Auto Theft Unit, obtained video surveillance footage from a BP gas station after discovering that a cell phone stolen in the carjacking had been returned there, Government Technology reported.
A photo of the woman who returned the phone was then submitted to the DPD’s facial recognition software, in accordance with department policy.
IT trade publication Tech Republic describes the controversial software as “a biometric tool that analyzes a person’s facial features to match or confirm their identity” by using “artificial intelligence and machine learning to scan each face and compare its unique identifiers against a database of images.”
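The matching step described above can be sketched in simplified form. The code below is a minimal illustration, not the DPD’s actual software: it assumes faces have already been reduced to numeric feature vectors (“embeddings”) and ranks database entries by cosine similarity, returning a list of candidate leads above a threshold rather than a single definitive identification — which is why such a system can surface dozens of “possibilities” that still require human verification. All names and values are hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Measures how closely two face feature vectors align (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, database, threshold=0.8):
    # Compare the probe image's embedding against every enrolled embedding
    # and return all entries scoring above the threshold, best match first.
    # These are investigative leads, not confirmed identifications.
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in database.items()]
    leads = [(name, s) for name, s in scores if s >= threshold]
    return sorted(leads, key=lambda t: t[1], reverse=True)

# Toy database of pre-computed embeddings (hypothetical).
database = {
    "person_a": [0.90, 0.10, 0.30],
    "person_b": [0.20, 0.80, 0.50],
    "person_c": [0.88, 0.15, 0.28],
}
probe = [0.91, 0.12, 0.29]  # embedding extracted from surveillance footage

for name, score in rank_candidates(probe, database):
    print(f"{name}: {score:.3f}")
```

Note that the sketch deliberately returns multiple candidates: a high similarity score between two different people (a false positive) is exactly the failure mode the article describes, which is why corroborating evidence is supposed to follow any lead.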
Chief White said in a statement that the Detroit Police Department’s “technology yielded an investigative lead, which is exactly what it was supposed to do.”
However, he said the subsequent human detective work was shoddy since the software “produced 73 possibilities of who the suspect was” and the investigating officer did not properly follow through on checking out the various leads.
According to White, officers are supposed to “do a number of things to ensure that the investigative lead is possibly your suspect.” In this case, he said, “It does not appear that that occurred.”
The Michigan chapter of the left-wing American Civil Liberties Union (ACLU) filed a lawsuit in the U.S. District Court for the Eastern District of Michigan on Woodruff’s behalf.
In March, according to the lawsuit, the Wayne County Prosecutor’s Office dismissed the case because the victim did not appear in court.
The prosecutor’s office said Woodruff’s arrest was “appropriate based upon the facts,” the AP reported — but Chief White said officers will no longer be allowed to use images from facial recognition software in photo lineups.
According to White, Woodruff’s arrest “emanated from, candidly and unfortunately, a poor investigation.” He said he is making changes to “ensure that something like this will never happen again.”
The changes, which will be reviewed by the Detroit Police Board of Commissioners, will include requiring two captains to assess arrest warrants created after use of facial recognition software, and mandating the acquisition of additional evidence beyond the investigative leads derived from the technology.
“It’s particularly difficult when you’re talking about someone who was eight months’ pregnant, so we empathize with that,” White said. “We recognize we have to do better and there will be accountability on this mistake.”
Meanwhile, this isn’t the first time a reportedly wrongful arrest has triggered scrutiny into the use of facial recognition software in criminal investigations. There have been two other such cases in Detroit alone.
A 2022 study warning about the potential for facial recognition software to result in wrongful arrests noted that “face recognition has been used as probable cause to make arrests” without the use of corroborating evidence.
“In a number of cases across multiple jurisdictions, people have found themselves jailed and facing criminal charges based on a face recognition search alone; no other evidence was sought to confirm the suspect’s identity,” the report stated.
An analysis of the study published by Georgetown Law highlighted the assessment that, as “currently used in criminal investigations, face recognition is likely an unreliable source of identity evidence.”
In addition to serious worries about false positives in criminal investigations, use of the technology has generated controversy due to concerns about personal privacy and the manifold harms potentially occasioned by data breaches, Tech Republic pointed out. The technology has also been widely criticized on the left for yielding an outsized number of false positives for blacks and other people with darker skin.
Despite those concerns, use of biometric technology is increasing worldwide as artificial intelligence (AI) systems are predicted to become more and more commonplace.
Last year, Spanish police announced they were implementing an automatic facial recognition tool, and Canadian Prime Minister Justin Trudeau said Canada is eyeing a mandate that would require air travelers to provide “digital travel documents,” including facial recognition biometric data, before boarding flights.
In the U.S., Miami International Airport (MIA) also announced last year that it was equipping its boarding gates with biometric facial recognition technology in “the largest implementation of biometrics in any U.S. airport.”
Steven Mosher, president of the Population Research Institute, told LifeSiteNews that the intrusive biometric screening tech is reminiscent of similar tools widely in use by China’s Communist regime.
“I, for one, do not want to live in a hi[gh]-tech digital dictatorship of the kind that we see in China, where everyone is tracked throughout the day on surveillance cameras, not to mention on their own phones,” Mosher said.
“We are perilously close to that already,” he added, arguing that the U.S. needs to implement “robust privacy laws” in order “to protect us from this kind of intrusive government surveillance.”