Posted on 08/20/2019 11:46:38 AM PDT by Kaslin
Jeff Bezos must really be getting tired of these headlines coming up all the time. It seems that Amazon's facial recognition software (known as Rekognition) has been subjected to yet another test and come up a little short. Or a lot short, particularly if you happen to be one of the more than two dozen state lawmakers who showed up as hits when matched against a database of known criminals. But hey… when you’re making omelets you’ve got to crack a few eggs, right? (CBS San Francisco)
A recent test of Amazon's facial recognition technology reportedly ended with a major fail, as some state lawmakers turned up as suspected criminals.
The test, performed by the American Civil Liberties Union, screened 120 lawmakers' images against a database of 25,000 mugshots. Twenty-six of the lawmakers were wrongly identified as suspects.
The ACLU said the findings show the need to block law enforcement from using this technology in officers' body cameras. Meanwhile, supporters of facial recognition say police could use the technology to help alert officers to criminals, especially at large events.
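For the arithmetic-minded, 26 hits out of 120 images works out to roughly a 22 percent false-positive rate. The ACLU hasn't published its test code, but press accounts of its earlier Rekognition experiments say it ran the service at the default 80 percent similarity threshold, while Amazon recommends 99 percent for law enforcement use. Purely as an illustration of where that knob sits, a one-to-many search against Rekognition looks something like the sketch below; the collection name and file path are invented here, though the API calls are real.

```python
# Hypothetical sketch of the kind of one-to-many search the ACLU test
# describes, using the real AWS Rekognition API via boto3. The
# collection name and file paths are made up; the ACLU has not
# published its actual test code.
import boto3

rekognition = boto3.client("rekognition")

# Assumed: a collection pre-loaded with mugshots via index_faces(),
# each tagged with an ExternalImageId at indexing time.
COLLECTION_ID = "mugshot-db"

def screen_lawmaker(image_path, threshold=80.0):
    """Search the mugshot collection for faces similar to the probe image.

    threshold: Rekognition's default similarity cutoff is 80 percent;
    Amazon recommends 99 percent for law-enforcement use. Lowering it
    inflates the hit count -- and the false positives along with it.
    """
    with open(image_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            FaceMatchThreshold=threshold,
            MaxFaces=5,
        )
    return [
        (m["Face"].get("ExternalImageId", m["Face"]["FaceId"]), m["Similarity"])
        for m in response["FaceMatches"]
    ]

# At an 80 percent threshold a single probe can return several borderline
# "matches"; re-run at threshold=99.0 and most of those hits disappear.
```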
As usual, let’s get the obvious joke out of the way first. If the software is identifying California legislators as criminals, honestly… how broken is it really? (Insert rimshot gif here.)
Getting back to the actual story, the first thing to note is that the “test” in question was performed by the American Civil Liberties Union (ACLU). At least that’s how it’s phrased in the CBS report. Last I checked, they weren’t a software development firm, so did they really make up and perform the test themselves or shop the job out to a firm with more direct experience? I’d like to see the details.
Of course, the results aren’t that suspect. Of all the facial recognition software out there that we’ve looked at, Amazon’s seems to be the one that winds up producing the most spectacular (and frequently hilarious) epic fails when put to independent testing. In that light, perhaps the ACLU wasn’t off the mark.
Of course, the ACLU isn’t looking to improve the technology. This test was run so they can continue their campaign to prevent law enforcement from using the software. Democratic Assemblymember Phil Ting of San Francisco (who was tagged as a felon) is quoted as saying, “While we can laugh about it as legislators, it’s no laughing matter if you are an individual who is trying to get a job, for an individual trying to get a home. If you get falsely accused of an arrest, what happens? It could impact your ability to get employment.”
These types of scare tactics are all too common and should be derided. I’ve asked multiple times now and am still waiting for an answer to one simple question: does anyone have evidence of even a single instance where someone was misidentified by facial recognition and went on to be prosecuted (or persecuted, as Ting suggests) because the mistake wasn’t discovered? I’ve yet to hear of a case. Did the police show up and arrest Ting after he was misidentified? I somehow doubt it.
Look, the technology is still in its infancy and it’s got a few bugs in it. They’re working them out as they go. Eventually, they’ll get it up to speed and the error rates should drop down to acceptable levels. And if this software can help catch a suspect in a violent crime in a matter of minutes or hours rather than days or weeks after they were spotted by a security camera, that’s a tool that the police need to have.
...well... It’s not wrong
“Twenty-six of the lawmakers were wrongly identified as suspects.”
Need to know which 26, so we can assess the validity of that statement.
Also, which lawmakers were correctly identified as suspects.
So..?
What’s the problem? It works too well??
A lot of them are criminals.
How does the author conclude that the results were wrong?
After all, the testees were legislators!
... ended with a major fail, as some state lawmakers turned up as suspected criminals.
this confuses me. That sounds like a success.
It only tagged “some” as criminals. Clearly the system needs more work, as I don’t believe that the sample of Kalifornica legislature members could be less than 100% criminal.
“...well... It’s not wrong.”
He might feel the number should run way higher than 26 if the software is working properly.
Is China’s facial recognition of criminals any better, or does the organ removal and recycling machine not care?
It’s smarter than about half the voters.
Sounds like it’s working correctly to me.
Swatted because you have facial features similar to a wanted cop killer’s?
What could possibly go wrong? Oh and you have nothing to fear from red flag laws either.
Lawmakers identified as criminals?
This is not news.
And?
Sounds like it’s working just fine.
Not sure how the facial recognition works. Is it MATCHING facial features one to another, as in matching the features of person x to the features of a known criminal y? If that is the case, then the software may not be working very well. The legislators need to provide proof that they are actually NOT the known criminal and should stay in police custody until they do so. That will keep them out of the public sector for a while and limit the damage they are currently doing to the city/county/state/country.
If, on the other hand, the software is matching the facial features of the legislators to features that would indicate that they are criminals or have criminal tendencies, we have a different story. If this is the case, then the software seems to be working perfectly.
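For what it’s worth, commercial systems work the first way: each face is boiled down to a numeric feature vector (an “embedding”), and a “match” just means two vectors land unusually close together. Nothing in the pipeline claims to detect criminal tendencies. A toy sketch of the idea follows; the embeddings and suspect names below are made-up numbers, not real model output.

```python
# Rough sketch of what "matching features one to another" means in
# practice: faces are reduced to numeric embedding vectors, and two
# faces "match" when their vectors are closer than some threshold.
# The vectors and IDs here are toy values, not real model output.
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two embeddings, 1.0 meaning identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in "mugshot database" of precomputed embeddings.
mugshot_db = {
    "suspect_001": np.array([0.12, 0.80, 0.33, 0.45]),
    "suspect_002": np.array([0.90, 0.10, 0.22, 0.05]),
}

def match(probe, threshold=0.90):
    """Return every database entry whose embedding is close to the probe.

    A lawmaker is "identified as a suspect" whenever some mugshot clears
    the threshold -- a person-x-versus-person-y comparison, not a
    detector of "criminal-looking" features.
    """
    return [
        (name, sim)
        for name, vec in mugshot_db.items()
        if (sim := cosine_similarity(probe, vec)) >= threshold
    ]

probe_face = np.array([0.14, 0.78, 0.30, 0.47])  # toy probe embedding
print(match(probe_face))  # suspect_001 clears the bar -- a false
                          # positive if the probe is the wrong person
```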
Tweak it, run it through again. Let’s see if we can’t up those numbers.
Did the screening results indicate a bias in identifying D over R as criminals?
As others above me have said: maybe it’s working just right. The devil is in the details.