Posted on 10/31/2023 12:11:33 PM PDT by nickcarraway
A Cambridge scholar argues the case, but with new conflicts come new questions
With major global military conflicts forcing their way into the story of the twenty-first century, a parallel debate is being waged in academic, ethical and political circles over the appropriate uses of facial recognition technology (FRT) and other biometric systems in warfare.
In August – eighteen months after Russia began its invasion of Ukraine – Cambridge University Press published an article by Juan Espindola, a researcher at the National Autonomous University of Mexico, entitled “Facial Recognition in War Contexts: Mass Surveillance and Mass Atrocity”, which focuses on various uses of facial recognition technology by Ukrainian soldiers. Since then, the conflict in the Middle East has added new urgency to questions that the paper raises about what it calls “some of the most serious concerns with FRT in the context of war, including the infringement of informational privacy; the indiscriminate and disproportionate harms it may inflict, particularly when the technology is coupled with social media intelligence; and the potential abuse of the technology once the fog of war dissipates.”
Thus far, reports of facial recognition in the Israeli-Palestinian conflict have focused on uses related to identifying victims of the Hamas attack of October 7, which left at least 1,300 dead, injured thousands more, and saw Hamas take Israeli hostages across the border into Gaza. Israel has used both Amazon's Rekognition system and Corsight AI's face biometrics to locate the missing and the dead.
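For a sense of what that kind of search looks like at the API level, the sketch below uses Amazon Rekognition's documented search_faces_by_image call to look up one photo against a pre-indexed face collection. It is a minimal illustration under stated assumptions, not a description of any actual deployment: the collection name, file path, and threshold are all hypothetical.

```python
# A minimal, hypothetical sketch of a face lookup with Amazon Rekognition
# (boto3). Assumes faces were already enrolled in a collection via
# index_faces with an ExternalImageId; all names and paths are illustrative.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def search_face(photo_path: str, collection_id: str = "missing-persons"):
    """Search a pre-indexed face collection for matches to one photo."""
    with open(photo_path, "rb") as f:
        image_bytes = f.read()
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=95,  # return only high-similarity candidates
        MaxFaces=5,
    )
    # Each match carries a similarity score plus the ID set at enrollment.
    return [
        (m["Face"].get("ExternalImageId", m["Face"]["FaceId"]), m["Similarity"])
        for m in response["FaceMatches"]
    ]
```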
Yet even before the current war, human rights groups had raised alarms about how Israel was using facial recognition for mass surveillance and control of the Palestinian territories, and the Israeli government is likely to use every tool available to it in its current response to Hamas' brutality.
Espindola points out that facial recognition still disproportionately hurts minority groups, writing that “the deployment of FRT in authoritarian and liberal democratic regimes alike to persecute ethnic groups, repress political dissidents, or conduct widespread unjustified surveillance – particularly when the technology is integrated into closed-circuit television, or CCTV, systems – has been aptly described as a political and social menace.”
However, his paper examines whether FRT deployment can be justified as a tool for espionage and counterintelligence, and leans heavily on the work of the French philosopher Cécile Fabre and her 2022 book Spying Through a Glass Darkly: The Ethics of Espionage and Counter-Intelligence – which Espindola calls "the most systematic and rigorous defense of espionage and counterintelligence as a permissible, even mandatory, form of self-defense in the face of threats to fundamental rights."
According to Fabre's ethical framework, Espindola writes, Ukraine's uses of FRT are justifiable on threat-prevention grounds – particularly deploying facial recognition to reveal Russian infiltrators amid Ukraine's displaced citizenry and to identify Russian soldiers who commit war crimes. A third use – identifying the dead and posting their images on social media – is ethically murkier, but may be justifiable on humanitarian grounds.
Espindola does devote significant space to the objections to facial recognition as a justifiable form of counterintelligence. Yet his conviction rarely wavers throughout the paper; a particularly remarkable passage on objections grounded in privacy states outright that “Ukraine’s technological feat with FRT has been accomplished precisely because the services of companies like PimEyes, FindClone, or, most controversial of all, Clearview AI violate informational privacy.”
In his conclusion, Espindola says of FRT in war, “there is a plausible case to make about the permissibility of its deployment both to acquire information to prevent harm and to fulfill humanitarian obligations in certain contexts.” He hedges somewhat in his final analysis, saying that “whether the wartime benefits of FRT outweigh its postbellum risks is a matter to be decided contextually.”
Regrettably, there will be no lack of fresh context in which to continue evaluating whether facial recognition and other biometrics belong on the battlefield.
It is already being used in war, and its use will only increase.
Currently our govt is using Facebook and ID.me, as well as 23andMe, to build a huge database of biometric data.
I am positive they will only use it to make our lives better and happier.
“...facial recognition still disproportionately hurts minority groups...”
DUNE! The original Dune had the perfect assassination tool. Guessing the trouble the developers are having is how to make it fly slow on its own. But when they figure out how to do it... game on!
Facial recognition can isolate a bad actor in a group and reduce peripheral casualties. You can bet it is being used, and effectively, by the Israelis.
Child groomers and EV humpers are not respected outside the walls of the mental institutions you call colleges.
When people want to do something badly enough, especially when they put themselves at a disadvantage by not doing it, they will find a way to justify it.
And solving it will be explicitly racist: different methods must be used depending on skin color.
For that matter, computerized facial recognition works rather poorly on white people. The 90%+ reliability just is not there. If you do get a prospective match, you had better have other methods available to verify it. DNA samples would be the absolute gold standard for that.
It does make a good technological drama toy to impress suspects. It is moderately effective when searching a large database of driver's license photos and comparing them with a recent mug shot of a suspect.
Think of a Google search that might return 1,000 plausible matches. Then turn over the results to a couple of interns in your office for cutting down the list. You might eventually get your man.
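To make that winnowing step concrete, here is a toy sketch: score a gallery of face embeddings against one probe image and keep only the top candidates for human review. The embedding model itself is an assumption (any network that maps a face to a fixed-length vector would do), and every name here is illustrative.

```python
# A toy sketch of the winnowing step described above: rank a gallery of
# face embeddings against one probe embedding and keep only the top-k
# candidates for human reviewers. Producing the embeddings is assumed to
# happen elsewhere, via some face-embedding model.
import numpy as np

def top_candidates(probe: np.ndarray, gallery: np.ndarray,
                   names: list[str], k: int = 20) -> list[tuple[str, float]]:
    """Return the k gallery identities most similar to the probe."""
    # Cosine similarity reduces to a dot product once vectors are unit-length.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe
    ranked = np.argsort(scores)[::-1][:k]  # highest similarity first
    return [(names[i], float(scores[i])) for i in ranked]

# Example: 1,000 plausible matches become a 20-name shortlist for the interns.
# rng = np.random.default_rng(0)
# gallery = rng.normal(size=(1000, 512))
# shortlist = top_candidates(gallery[42], gallery,
#                            [f"id_{i}" for i in range(1000)])
```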
There is much work needed to improve the algorithms. Send me some grant money and I will get right on it. Too bad there are so many people much smarter than I ahead of me in that line.
Somebody will eventually figure out how to do it better.
I guarantee you that DARPA has a facial recognition system for kamikaze drones or an autonomous aerial weapons system, if not already in service then certainly under development.