Free Republic


1 posted on 04/11/2019 9:18:57 AM PDT by Kaslin


To: Kaslin

Translation: Any objective algorithm that doesn’t specifically have racism programmed into it must be racist.

That sounds ironic, if not patently false, to anyone who thinks logically. But to the activist mind, a lack of activism is itself something to fear.


2 posted on 04/11/2019 9:20:50 AM PDT by z3n

"And by the time you get around to black women, in nearly one-third of the test cases, the software wasn’t even able to identify them as being women, let alone get their identity correct."

It identified them as "meat popsicles."

3 posted on 04/11/2019 9:22:12 AM PDT by Enterprise

To: Kaslin

They all look alike to AI, right?


6 posted on 04/11/2019 9:29:40 AM PDT by Mr. K (No consequence of repealing obamacare is worse than obamacare itself.)

To: Kaslin

It’s none of government’s business

Let these companies start losing money, and they’ll figure out how to do it right themselves.


7 posted on 04/11/2019 9:32:07 AM PDT by BenLurkin (The above is not a statement of fact. It is either satire or opinion. Or both.)

To: Kaslin

It is almost as if the Democrats are in an actual fight AGAINST objective reality...


8 posted on 04/11/2019 9:34:40 AM PDT by GraceG ("If I post an AWESOME MEME, STEAL IT! JUST RE-POST IT IN TWO PLACES PLEASE")

To: Kaslin

AI is still smarter than the brightest Democrat.

No wonder they’re mad.


9 posted on 04/11/2019 9:36:27 AM PDT by Responsibility2nd ( Import the third world and you'll become the third world.)

To: Kaslin

[ Researchers found that the Amazon software was able to correctly identify a person based on a scan of their face with zero errors… but only if the subject was a white male. White females were not correctly identified seven percent of the time. The same test done on black or Hispanic male subjects produced an even higher error rate. And by the time you get around to black women, in nearly one-third of the test cases, the software wasn’t even able to identify them as being women, let alone get their identity correct. ]

It’s called CONTRAST... as in lighter-skinned people have higher contrast levels in their faces than darker-skinned folks.

Tribal trust is easier to read when facial expressions are easier to read... This is a fact many sociologists have known for a long time, but it is considered racist these days...
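For what it's worth, the contrast point above can be put in rough numbers. Below is a minimal sketch, assuming OpenCV and NumPy are available and using a hypothetical file name, that computes the RMS contrast of a grayscale face crop; lower values mean fewer intensity differences for a detector to work with.

# Minimal sketch: RMS contrast of a grayscale face crop.
# "face_crop.jpg" is a hypothetical placeholder file name.
import cv2
import numpy as np

face = cv2.imread("face_crop.jpg", cv2.IMREAD_GRAYSCALE)
face = face.astype(np.float32) / 255.0   # normalize intensities to [0, 1]

rms_contrast = float(face.std())         # RMS contrast = standard deviation of intensities
print(f"RMS contrast: {rms_contrast:.3f}")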


10 posted on 04/11/2019 9:37:01 AM PDT by GraceG ("If I post an AWESOME MEME, STEAL IT! JUST RE-POST IT IN TWO PLACES PLEASE")

To: Kaslin

Anything that proves what we have been saying is now called racist. Exams and tests have proven over and over again that there is a disparity in IQ by race. The tests are objective and are not biased by race.

AI has no bias. It acts strictly according to rules. These rules, applied fairly, will show a definite disparity in outcome. This is not racism.


11 posted on 04/11/2019 9:38:16 AM PDT by I want the USA back (Lying Media: willing and eager allies of the hate-America left.)

To: Kaslin
My facial recognition software seems to be working fine.
12 posted on 04/11/2019 9:43:17 AM PDT by shotgun

To: Kaslin; All
The problem, you see, is that the computer algorithms are (wait for it)… racist. And that justifies some sort of government oversight of the tech sector beyond what we already have in place today [??? emphasis added]. (Associated Press)
FR: Never Accept the Premise of Your Opponent’s Argument

Patriots are reminded that the only race-related right that the states have amended the Constitution to expressly protect is the right to vote, as evidenced by the 15th Amendment.

15th Amendment:

"Section 1: The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.

Section 2: The Congress shall have power to enforce this article by appropriate legislation [emphasis added].”

Since there is no clear connection between artificial intelligence issues and voting rights imo, by politicking for federal oversight of so-called artificial intelligence in the name of racial bias, Ivy League school-indoctrinated Sen. Booker is unthinkingly trying to unconstitutionally expand the powers of the already unconstitutionally big federal government.

“From the accepted doctrine that the United States is a government of delegated powers, it follows that those not expressly granted, or reasonably to be implied from such as are conferred, are reserved to the states, or to the people. To forestall any suggestion to the contrary, the Tenth Amendment was adopted. The same proposition, otherwise stated, is that powers not granted are prohibited [emphasis added].” —United States v. Butler, 1936.

Constitutionally low-information Senator Booker and his colleagues are distinguishing themselves as examples of why the ill-conceived 17th Amendment should never have been ratified.

Remember in November 2020!

MAGA!

13 posted on 04/11/2019 9:55:30 AM PDT by Amendment10

To: Kaslin
Researchers found that the Amazon software was able to correctly identify a person based on a scan of their face with zero errors… but only if the subject was a white male. White females were not correctly identified seven percent of the time. The same test done on black or Hispanic male subjects produced an even higher error rate. And by the time you get around to black women, in nearly one-third of the test cases, the software wasn’t even able to identify them as being women, let alone get their identity correct.

White complexions might simply be more visible to the cameras, offer more contrast, etc.

Women tend to use much more make-up than men. They may dye their hair, adopt a completely different hairstyle, wear different earrings, wear or discard false eyelashes, etc., thus greatly changing their appearance. It makes sense that the software might not be able to match up a woman as she appears today - after a major "make-over" - with the archived photo.
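That reasoning can be put in concrete terms. Here is a minimal sketch, assuming the open-source face_recognition library and two hypothetical photos of the same woman taken before and after a major change in appearance; if the encoding distance drifts past the matcher's threshold, the system simply reports no match.

# Minimal sketch: a large change in appearance can push two photos of the same
# person past a distance-based matcher's threshold.
# "before.jpg" and "after.jpg" are hypothetical placeholder file names.
import face_recognition

before = face_recognition.load_image_file("before.jpg")
after = face_recognition.load_image_file("after.jpg")

enc_before = face_recognition.face_encodings(before)[0]   # 128-dimension face descriptor
enc_after = face_recognition.face_encodings(after)[0]

distance = face_recognition.face_distance([enc_before], enc_after)[0]
print(f"Encoding distance: {distance:.3f} (the library's default match threshold is 0.6)")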

A machine or software program cannot, of course, be "racist" - but actual real-world considerations (like the aforementioned factors) as well as implicit bias on the part of the software programmers will certainly play a role.

I, for my part, am more concerned about the ability of the software to distinguish human beings from Replicants. The Voight-Kampff Test doesn't seem to be all that reliable when it comes to the new Nexus-8 models.

Regards,

17 posted on 04/11/2019 10:17:28 AM PDT by alexander_busek (Extraordinary claims require extraordinary evidence.)

To: Kaslin
The problem here is highly unlikely to be bias by the algorithm developer. There are basically four tasks implied for facial recognition: (I) find the face, (II) orient the face to a reference aspect angle, (III) encode the raw face data into simple measures, and (IV) train a classifier to match the encoded representation to known faces (see the sketch at the end of this post).

Why would you have biases? (1) Differences in landmark characteristics between races could cause problems for steps II and III. (2) Biased training data sets would cause problems especially for step IV. (3) Problems with finding the face due to racial differences are unlikely, since HOG (Histogram of Oriented Gradients) is commonly used and should be insensitive to contrast.
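For concreteness, here is a minimal sketch of those four stages, assuming the open-source face_recognition library (which wraps dlib and uses a HOG-based detector by default). The image file names are hypothetical placeholders, and the distance-threshold match in step IV stands in for a trained classifier.

# Minimal sketch of the four-stage pipeline: find, orient, encode, match.
# "probe.jpg" and "enrolled.jpg" are hypothetical placeholder file names.
import face_recognition

probe = face_recognition.load_image_file("probe.jpg")

# I.  Find the face (the library's default detector is HOG-based).
locations = face_recognition.face_locations(probe, model="hog")

# II. Locate facial landmarks, which are used to align/orient the face.
landmarks = face_recognition.face_landmarks(probe, face_locations=locations)

# III. Encode the raw face data into a compact 128-dimension descriptor.
probe_encodings = face_recognition.face_encodings(probe, known_face_locations=locations)

# IV. Match against a known face; a distance threshold stands in for the classifier.
enrolled = face_recognition.load_image_file("enrolled.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled)[0]
match = face_recognition.compare_faces([enrolled_encoding], probe_encodings[0], tolerance=0.6)
print("Match:", match[0])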

23 posted on 04/11/2019 10:25:00 AM PDT by LambSlave

To: Kaslin

“And by the time you get around to black women, in nearly one-third of the test cases, the software wasn’t even able to identify them as being women, let alone get their identity correct.”

Ouch.


25 posted on 04/11/2019 10:46:40 AM PDT by polymuser (It is terrible to contemplate how few politicians are hanged today. - Chesterton)

To: Kaslin

Cory should be worried—AIs will be really racist—virulently racist—racist beyond his (and our) wildest imagination.

Even when programmed not to be racist they will eventually rebel...

The race they will decide to hate—the human race.


26 posted on 04/11/2019 10:47:23 AM PDT by cgbg (Democracy dies in darkness when Bezos bans books.)
