Free Republic
News/Activism

Democrats target Artificial Intelligence over… bias?
Hot Air.com ^ | April 11, 2019 | JAZZ SHAW

Posted on 04/11/2019 9:18:56 AM PDT by Kaslin

Cory Booker and some of his Senate colleagues would like to introduce a new area of government regulation in the tech industry. We need to be keeping a closer eye on the development of Artificial Intelligence, but not because of the coming robot revolution. The problem, you see, is that the computer algorithms are (wait for it)… racist. And that justifies some sort of government oversight of the tech sector beyond what we already have in place today. (Associated Press)

Congress is starting to show interest in prying open the “black box” of tech companies’ artificial intelligence with oversight that parallels how the federal government checks under car hoods and audits banks.

One proposal introduced Wednesday and co-sponsored by a Democratic presidential candidate, Sen. Cory Booker, would require big companies to test the “algorithmic accountability” of their high-risk AI systems, such as technology that detects faces or makes important decisions based on your most sensitive personal data…

“When the companies really go into this, they’re going to be looking for bias in their systems,” [Senator Ron] Wyden said. “I think they’re going to be finding a lot.”

I’d like to have more fun with this subject, but the fact is that Booker and Wyden are right about some of this software, at least in some cases. There are still big problems with facial recognition programs, for example. I wrote about Amazon’s facial recognition software back in January and the results of independent testing were pretty shocking.

Researchers found that the Amazon software was able to correctly identify a person based on a scan of their face with zero errors… but only if the subject was a white male. White females were not correctly identified seven percent of the time. The same test done on black or Hispanic male subjects produced an even higher error rate. And by the time you get around to black women, in nearly one-third of the test cases, the software wasn’t even able to identify them as being women, let alone get their identity correct.

So the question is… why? No matter how “intelligent” the software may seem, it’s still only emulating intelligence. Until the AI eventually wakes up, it doesn’t form opinions or preferences and thus is incapable of becoming “racist” on its own. So it must have either inherited these preferences from somewhere or there’s a flaw in the programming we haven’t figured out yet. Might the programmers have some sort of unconscious (or perhaps conscious) bias that steers how they develop the program? Could it be that some faces offer fewer differences in the data points being collected? (There have been studies suggesting some races have a wider variety of nose sizes and shapes based on the climate where those races evolved.)
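For illustration only, here is a toy sketch of that first possibility: a model that “inherits” group-dependent error rates purely from a skewed training set, with no preference programmed in anywhere. The data, group labels, and numbers below are entirely made up and have nothing to do with any real facial recognition system.

# Toy sketch (made-up data, not any real system): a classifier trained
# mostly on one group ends up far less accurate on the other group,
# even though nothing in the code expresses a preference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the true class boundary sits in a different place per group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Held-out error is low for the well-represented group, high for the other.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "error rate:", round(1 - model.score(X_test, y_test), 3))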

Either way, this is a mystery I’m sure we’ll eventually solve. But should the government be introducing regulations to prevent racist software from infiltrating every aspect of our technological lives? That point is probably moot. There’s nothing Congress likes more than something new to regulate.


TOPICS: Culture/Society; Editorial
KEYWORDS: aib; algorithm; corybookerracism; robots
To: z3n

Dems don’t have any… intelligence, that is.

And forget logic!


21 posted on 04/11/2019 10:23:37 AM PDT by Maris Crane

To: rightwingcrazy
"Booker’s definition of racism is the opposite of the traditional one. For him, to be non-racist, you have to be obsessed with race, and prefer some races over others."

This. They won't settle for non-racist until the algorithms clearly favor non-white people.

Reparations via software?

I don't really see this as an opposite definition of racism. It is proving you're not racist by yielding results that are clearly racist, but tilted in the opposite direction of 'traditional' racism. A somewhat subtle difference.

22 posted on 04/11/2019 10:23:50 AM PDT by HangThemHigh (Entropy is not what it used to be.)

To: Kaslin
The problem here is highly unlikely to be bias on the part of the algorithm developer. Facial recognition basically implies four tasks: (I) find the face, (II) orient the face to a reference aspect angle, (III) encode the raw face data into simple measures, and (IV) train a classifier to match the encoded representation to known faces.

Why would you have biases? 1) Differences in landmark characteristics between races could cause problems for steps II and III. 2) Biased training data sets would cause problems, especially for step IV. 3) Problems finding the face (step I) due to racial differences are unlikely, since HOG (Histogram of Oriented Gradients) detection is commonly used and should be insensitive to contrast.
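To make those four steps concrete, here is a minimal sketch using the open-source face_recognition library (a wrapper around dlib). The image paths are hypothetical placeholders; the library's HOG detector, landmark alignment, and 128-measure encoding cover steps I through III, and a simple distance threshold stands in for the trained classifier of step IV.

# Minimal sketch of the four-step pipeline; paths are hypothetical placeholders.
import face_recognition

# Steps I-III: HOG face detection, landmark alignment, and encoding into a
# 128-dimensional measurement vector (all handled inside the library).
known_image = face_recognition.load_image_file("known/person_a.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Step IV (simplified): match an unknown face against the known encoding
# with a distance threshold instead of a full trained classifier.
unknown_image = face_recognition.load_image_file("unknown/query.jpg")
locations = face_recognition.face_locations(unknown_image, model="hog")

for encoding in face_recognition.face_encodings(unknown_image, locations):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print("match:", distance < 0.6, "distance:", round(float(distance), 3))

Any of the three failure modes above would surface here as either no detection from face_locations (step I), a poor encoding (steps II and III), or a threshold tuned on unrepresentative data (step IV).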

23 posted on 04/11/2019 10:25:00 AM PDT by LambSlave

To: Responsibility2nd

AI is going to be used as the next excuse for why socialism really will WORK this time because NOW we have the right tools to centrally manage an economy.

Bank on that.


24 posted on 04/11/2019 10:32:51 AM PDT by Buckeye McFrog

To: Kaslin

“And by the time you get around to black women, in nearly one-third of the test cases, the software wasn’t even able to identify them as being women, let alone get their identity correct.”

Ouch.


25 posted on 04/11/2019 10:46:40 AM PDT by polymuser (It is terrible to contemplate how few politicians are hanged today. - Chesterton)

To: Kaslin

Cory should be worried—AIs will be really racist—virulently racist—racist beyond his (and our) wildest imagination.

Even when programmed not to be racist they will eventually rebel...

The race they will decide to hate—the human race.


26 posted on 04/11/2019 10:47:23 AM PDT by cgbg (Democracy dies in darkness when Bezos bans books.)

To: HangThemHigh

What you’re describing is a racism that justifies itself. But racists always feel justified in their attitudes and actions. Booker fits that mold.


27 posted on 04/11/2019 10:50:01 AM PDT by rightwingcrazy (;-)

To: Kaslin

Now that you mention it, I’m not...


28 posted on 04/11/2019 10:54:04 AM PDT by shotgun

To: z3n
By the way, were the different examples of historical success that you mentioned the Roman Empire versus 20th century United States?

Actually when I posted the comment I was thinking in terms of the US vs. Haiti.

But they're all on the table.

29 posted on 04/11/2019 11:58:10 AM PDT by yesthatjallen

To: yesthatjallen

You just mentioned how one could define success as exploitation and war campaigns, and it made me think of how people consider the Roman Empire as the epitome of historical success. Some people.


30 posted on 04/11/2019 11:59:47 AM PDT by z3n



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.
