
Facebook apology as AI labels black men 'primates'
bbc ^

Posted on 09/06/2021 7:35:57 PM PDT by algore

Facebook users who watched a newspaper video featuring black men were asked if they wanted to "keep seeing videos about primates" by an artificial-intelligence recommendation system.

Facebook told BBC News it "was clearly an unacceptable error", disabled the system and launched an investigation.

"We apologise to anyone who may have seen these offensive recommendations."

It is the latest in a long-running series of errors that have raised concerns over racial bias in AI.

'Genuinely sorry'

In 2015, Google's Photos app labelled pictures of black people as "gorillas".

The company said it was "appalled and genuinely sorry", though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word "gorilla".

(Excerpt) Read more at bbc.com ...


TOPICS: Heated Discussion
KEYWORDS: ai; primates
To: algore

AI is getting pretty smart. Scary smart.


21 posted on 09/06/2021 8:51:32 PM PDT by SJSAMPLE

To: algore
AI is not politically correct, nor is it emotionally sensitive. Machine learning models employing reinforcement learning often generate brutally efficient, out-of-the-box solutions to problems that humans would not have considered. Interesting technology. The supervised learning models tend to generate more acceptable results because the human in the loop makes decisions about the suitability of a given result.
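Roughly, the contrast in toy form (everything below is made up for illustration, not production code):

    import random

    # Reinforcement learning: the model chases whatever the reward function
    # pays for, with no human reviewing individual outcomes.
    value = {}                                   # state -> learned value estimate
    for _ in range(1000):
        state = random.choice(["a", "b"])
        reward = 1.0 if state == "a" else -1.0   # stand-in reward signal
        old = value.get(state, 0.0)
        value[state] = old + 0.1 * (reward - old)  # nudge toward observed reward

    # Supervised learning: the model fits labels a human chose up front, so
    # the human judgment is baked into the training data itself.
    labeled = [("photo1", "cat"), ("photo2", "dog")]  # human-vetted labels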
22 posted on 09/06/2021 9:51:18 PM PDT by Myrddin

To: algore

AI - just around the corner.


23 posted on 09/06/2021 10:54:01 PM PDT by aquila48 (Do not let them make you care! Guilting you is how they control you. )

To: algore; a fool in paradise; acapesket; Baynative; beef; BullDog108; Califreak; cgbg; ...
This is the Farcebook Is Evil ping list.


h/t pookie18's cartoons

Facebook is a perfect example of socialism:
You get it for free but the quality sucks.
You have no say in how it works.
The guy who runs it gets rich.
There's no real competition.
You have no privacy.
And if you say one thing they don't like
they'll shut you up.

If you'd like to be on or off this list, please click Private Reply below and drop me a FReepmail

24 posted on 09/06/2021 11:03:56 PM PDT by upchuck (The longer I remain unjabbed, the more evidence I see supporting my decision.)

To: BeauBo

Imagine if they had called them bipeds!😅


25 posted on 09/07/2021 3:29:30 AM PDT by Ikeon (Hate your life? Try socialism, at least you won't be alone anymore!😃)

To: Myrddin
The machine learning models employing reinforcement learning often generate brutally efficient, out of the box solutions to problems that humans would not have considered.

That's great, if that's what you want.

And if those solutions are accurate as well as efficient.

Most of what people call AI is really machine learning, as you suggest. In this case, data scientists likely poured petabytes of images tagged (by some overlord) as X or Y or Z into some canned program that, after cascading the data and the gaps between success and failure back and forth until "convergence", spits out an algorithm that separates input images into X, Y, and Z buckets.
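In toy form, that pipeline looks something like this (scikit-learn stands in for the canned program; the data here are random placeholders, not real images):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Stand-in for the petabytes of tagged images: each row is an image's
    # features, each label is the overlord's tag (X/Y/Z -> 0/1/2).
    features = np.random.rand(1000, 64)
    labels = np.random.randint(0, 3, size=1000)

    train_x, test_x, train_y, test_y = train_test_split(
        features, labels, test_size=0.2, random_state=0)

    # The "canned program": iterate until convergence, then sort new inputs
    # into the X/Y/Z buckets.
    model = LogisticRegression(max_iter=1000).fit(train_x, train_y)
    print("holdout accuracy:", model.score(test_x, test_y))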

If these models are supported by decent humans, there will be an attendant suite of monitoring and backtesting routines to ensure the out-of-sample performance is strong. This goes doubly if there is unsupervised learning involved.
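Continuing the toy sketch above, a minimal monitoring routine might re-score the holdout and flag any bucket that degrades (the 0.90 floor is an arbitrary example):

    # Per-label recall check on the holdout from the sketch above.
    preds = model.predict(test_x)
    for label in np.unique(test_y):
        mask = test_y == label
        recall = float((preds[mask] == label).mean())
        if recall < 0.90:
            print(f"ALERT: recall for bucket {label} is {recall:.2f}")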

Unlike statistical, econometric, or biostatistical modeling, ML is not transparent and is fraught with "black box" unintended consequences. Even the most noble of data scientists can build biased models if their data are biased, and the nature of ML optimization and its lack of transparency will codify that bias. A better check on bias is the character of the model development team. Nobody is perfect, but if you have love for your fellow man, regardless of whether they lean left or right or whatever, or whether they listen to Springsteen, more often than not you should be OK.

But nobility is in the eye of the beholder. If this was an accident, then it demonstrates that FB aren't the Masters of the Tech Multiverse they make themselves out to be... any model built by a good development team has overrides and guardrails to prevent blatantly stupid outcomes like this one. I mean... a simple line of code saying "if YHAT in ('primate', 'monkey', etc.) then stop" would work. The alternative explanation is that this FB incident shows the TRUE bias of the allegedly woke tech overlords: they really DO see blacks as less than human.
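Spelled out, the guardrail I mean is just a blocklist wrapper around the model's output (hypothetical code, obviously not FB's actual system):

    BLOCKED = {"primate", "monkey", "gorilla", "chimpanzee"}

    def safe_recommendation(model, image):
        # Hypothetical wrapper: suppress any blatantly unacceptable label
        # instead of surfacing it to users.
        yhat = model.predict(image)
        if str(yhat).lower() in BLOCKED:
            return None            # drop the recommendation entirely
        return yhat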

26 posted on 09/07/2021 4:17:49 AM PDT by DoodleBob (Gravity's waiting period is about 9.8 m/s^2)

To: algore

No Comment


27 posted on 09/07/2021 4:21:45 AM PDT by Pollard (#*&% Communism)

To: The Antiyuppie

There was an incident like this a few years ago when Google was classifying pictures of black people as gorillas. After that dust-up there is no excuse for not having sufficient data in their training set to distinguish between the two, and even less for not testing for that specific case.
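A regression test for exactly that case would be a few lines (load_curated_human_photos and classify are hypothetical stand-ins for their pipeline, not real Facebook code):

    BANNED = {"gorilla", "primate", "chimpanzee", "monkey"}

    def test_never_labels_people_as_apes():
        # Hypothetical fixture: curated photos of people of every skin tone.
        for image in load_curated_human_photos():
            assert classify(image).lower() not in BANNED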


28 posted on 09/07/2021 5:19:31 AM PDT by beef (The Chinese have a little secret—diversity is _not_ a strength.)

To: algore

Bill Clinton thought that Obama should be serving him coffee.


29 posted on 09/07/2021 4:05:18 PM PDT by minnesota_bound (I need more money. )

To: DoodleBob
I was on a technical demo team around Christmas last year. We had one week to demonstrate capability. The input data was the TLE (Two Line Element) set of every satellite for the period 2000 to June 2020, and AIS data for every vessel on the seas over the same time frame. The task was to identify whether ANY vessel had evaded the observation footprint of the satellites, and to cite the vessel(s) with the date/time of the start and end of the evading voyage. Python, Dask, and Spark running in ECM with S3 data stores achieved the objective.
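The core of it, in rough sketch form (the paths, column names, and footprint test below are stand-ins, not the project code):

    import dask.dataframe as dd

    # AIS position reports: one row per vessel per timestamp.
    ais = dd.read_parquet("s3://bucket/ais/")

    def mark_observed(part):
        # Placeholder geometry: the real code tests each position against
        # the ground footprints propagated from the TLE catalog at that
        # timestamp.
        part["seen"] = True
        return part

    flagged = ais.map_partitions(mark_observed)
    # A vessel evaded observation if "seen" stays False across a voyage.
    ever_seen = flagged.groupby("mmsi")["seen"].max().compute()
    print(ever_seen[~ever_seen].index.tolist())   # vessels never observed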

The scope of that work didn't draw in ML, but the team members had the capability, had it been part of the challenge. It's fun to work with competent co-workers. It did sort of spoil the Christmas holiday, but government proposals tend to do that.

30 posted on 09/07/2021 10:17:36 PM PDT by Myrddin



