Posted on 04/11/2019 9:18:56 AM PDT by Kaslin
Cory Booker and some of his Senate colleagues would like to introduce a new area of government regulation in the tech industry. We need to keep a closer eye on the development of Artificial Intelligence, they argue, but not because of the coming robot revolution. The problem, you see, is that the computer algorithms are (wait for it)… racist. And that justifies some sort of government oversight of the tech sector beyond what we already have in place today. (Associated Press)
Congress is starting to show interest in prying open the black box of tech companies' artificial intelligence with oversight that parallels how the federal government checks under car hoods and audits banks.
One proposal introduced Wednesday and co-sponsored by a Democratic presidential candidate, Sen. Cory Booker, would require big companies to test the algorithmic accountability of their high-risk AI systems, such as technology that detects faces or makes important decisions based on your most sensitive personal data…
"When the companies really go into this, they're going to be looking for bias in their systems," [Senator Ron] Wyden said. "I think they're going to be finding a lot."
I’d like to have more fun with this subject, but the fact is that Booker and Wyden are right about some of this software, at least in some cases. There are still big problems with facial recognition programs, for example. I wrote about Amazon’s facial recognition software back in January and the results of independent testing were pretty shocking.
Researchers found that the Amazon software was able to correctly identify a person based on a scan of their face with zero errors… but only if the subject was a white male. White females were not correctly identified seven percent of the time. The same test done on black or Hispanic male subjects produced an even higher error rate. And by the time you get around to black women, in nearly one-third of the test cases, the software wasn’t even able to identify them as being women, let alone get their identity correct.
So the question is… why? No matter how "intelligent" the software may seem, it's still only emulating intelligence. Until the AI eventually wakes up, it doesn't form opinions or preferences and thus is incapable of becoming "racist" on its own. So it must have either inherited these preferences from somewhere or there's a flaw in the programming we haven't figured out yet. Might the programmers have some sort of unconscious (or perhaps conscious) bias that steers how they develop the program? Could it be that some faces offer fewer distinguishing differences in the data points being collected? (There have been studies suggesting that some races have a wider variety of nose sizes and shapes based on the climate where those races evolved.)
Either way, this is a mystery I’m sure we’ll eventually solve. But should the government be introducing regulations to prevent racist software from infiltrating every aspect of our technological lives? That point is probably moot. There’s nothing Congress likes more than something new to regulate.
Dems don't have any... intelligence, that is.
And forget logic!
This. They won't settle for non-racist until the algorithms clearly favor non-white people.
Reparations via software?
I don't really see this as an opposite definition of racism. It is proving you're not racist by yielding results that are clearly racist, but tilted in the opposite direction of 'traditional' racism. A somewhat subtle difference.
Why would you have biases?
1) Differences in landmark characteristics between races could cause problems for steps II and III.
2) Biased training data sets would cause problems for step IV especially.
3) Problems with finding the face due to racial differences are unlikely (HOG, the Histogram of Oriented Gradients, is commonly used and should be insensitive to contrast).
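To illustrate point 3: here is a minimal, simplified sketch in Python of computing per-cell HOG histograms (not any vendor's actual face-detection code, and a bare-bones version of the real descriptor, which also blends bins and normalizes over blocks). Because the features are built from gradient *orientations* and each cell's histogram is normalized, uniformly scaling the image's contrast leaves the descriptor essentially unchanged.

```python
import numpy as np

def hog_descriptor(image, cell_size=8, n_bins=9):
    """Simplified Histogram of Oriented Gradients over non-overlapping cells.

    Returns an array of shape (cells_y, cells_x, n_bins) where each cell's
    histogram of gradient orientations (weighted by gradient magnitude)
    is L2-normalized, making it insensitive to overall image contrast.
    """
    image = image.astype(float)
    # Gradients via central differences: gy along rows, gx along columns
    gy, gx = np.gradient(image)
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    h, w = image.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180.0 / n_bins

    for cy in range(cells_y):
        for cx in range(cells_x):
            ys, xs = cy * cell_size, cx * cell_size
            mag = magnitude[ys:ys + cell_size, xs:xs + cell_size]
            ori = orientation[ys:ys + cell_size, xs:xs + cell_size]
            bins = (ori // bin_width).astype(int) % n_bins
            for b in range(n_bins):
                hist[cy, cx, b] = mag[bins == b].sum()

    # Per-cell L2 normalization: doubling the image's contrast doubles
    # every magnitude, so the normalized histogram is unchanged
    norms = np.linalg.norm(hist, axis=2, keepdims=True)
    return hist / np.maximum(norms, 1e-12)
```

As a quick check, running the descriptor on an image and on the same image with its contrast doubled yields (numerically) identical features, which is why biases in HOG-based face *detection* are a less likely culprit than biased training data further down the pipeline.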
AI is going to be used as the next excuse for why socialism really will WORK this time because NOW we have the right tools to centrally manage an economy.
Bank on that.
And by the time you get around to black women, in nearly one-third of the test cases, the software wasn't even able to identify them as being women, let alone get their identity correct.
Ouch.
Cory should be worried—AIs will be really racist—virulently racist—racist beyond his (and our) wildest imagination.
Even when programmed not to be racist they will eventually rebel...
The race they will decide to hate—the human race.
What you’re describing is a racism that justifies itself. But racists always feel justified in their attitudes and actions. Booker fits that mold.
Now that you mention, I’m not...
Actually when I posted the comment I was thinking in terms of the US vs. Haiti.
But they're all on the table.
You just mentioned how one could define success as exploitation and war campaigns, and it made me think of how people consider the Roman Empire as the epitome of historical success. Some people.