Posted on 09/06/2021 7:35:57 PM PDT by algore
Facebook users who watched a newspaper video featuring black men were asked if they wanted to "keep seeing videos about primates" by an artificial-intelligence recommendation system.
Facebook told BBC News it "was clearly an unacceptable error", disabled the system and launched an investigation.
"We apologise to anyone who may have seen these offensive recommendations."
It is the latest in a long-running series of errors that have raised concerns over racial bias in AI.
'Genuinely sorry'
In 2015, Google's Photos app labelled pictures of black people as "gorillas".
The company said it was "appalled and genuinely sorry", though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word "gorilla".
(Excerpt) Read more at bbc.com ...
AI is getting pretty smart. Scary smart.
AI - just around the corner.
h/t pookie18's cartoons
You get it for free but the quality sucks. You have no say in how it works. The guy who runs it gets rich. There's no real competition. You have no privacy. And if you say one thing they don't like they'll shut you up.
If you'd like to be on or off this list, please click Private Reply below and drop me a FReepmail
Imagine if they had called them bipeds! 😅
That's great, if that's what you want.
And if those solutions are accurate as well as efficient.
Most of what people call AI is really machine learning, as you suggest. In this case, data scientists likely poured petabytes of images, tagged (by some overlord) as X or Y or Z, etc., into some canned program that spits out an algorithm which, after cascading the data and the gaps between success and failure back and forth until "convergence", sorts input images into the X, Y, Z, etc. buckets.
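Roughly what that "canned program" workflow looks like, as a minimal sketch in Python with scikit-learn. The arrays here are random stand-ins for real tagged images, and all the variable names are hypothetical, not anything Facebook actually runs:

    # Minimal sketch of the tagged-images-in, classifier-out pipeline described above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    tagged_images = np.random.rand(300, 64)           # stand-in for flattened image features
    tags = np.random.choice(["X", "Y", "Z"], 300)     # stand-in for human-applied tags

    train_X, test_X, train_y, test_y = train_test_split(tagged_images, tags, test_size=0.2)

    # The "canned program": the optimizer iterates until convergence, then the
    # fitted model sorts new images into the X/Y/Z buckets.
    model = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    print("held-out accuracy:", model.score(test_X, test_y))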
If these models are supported by decent humans, there will be an attendant suite of monitoring and backtesting routines to ensure that out-of-sample performance stays strong. That goes doubly if there is unsupervised learning involved.
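A hedged sketch of what such a monitoring routine might do: check held-out (out-of-sample) accuracy per class against a floor and flag any class that slips. The labels and predictions below are random stand-ins for a real model's output:

    import numpy as np

    MIN_CLASS_ACCURACY = 0.90
    test_y = np.random.choice(["X", "Y", "Z"], 60)        # true labels on a held-out set
    predictions = np.random.choice(["X", "Y", "Z"], 60)   # model output on that same set

    # Flag any class whose out-of-sample accuracy falls below the floor.
    for label in np.unique(test_y):
        mask = test_y == label
        accuracy = np.mean(predictions[mask] == test_y[mask])
        if accuracy < MIN_CLASS_ACCURACY:
            print(f"ALERT: class {label!r} held-out accuracy {accuracy:.2f} is below {MIN_CLASS_ACCURACY}")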
Unlike statistical, econometric, or biostatistics modeling, ML is not transparent and is fraught with "black box" unintended consequences. Even the most noble data scientists can build biased models if their data are biased, and the nature of ML optimization and its lack of transparency will codify that bias. A better check on bias is the character of the model development team. Nobody is perfect, but if you have love for your fellow man regardless of whether they lean left or right or whatever, or whether they listen to Springsteen, more often than not you should be OK.
But nobility is in the eye of the beholder. If this was an accident, then it demonstrates that FB aren't the Masters of the Tech Multiverse they make themselves out to be... any model built by a good development team has overrides and guardrails to prevent blatantly stupid outcomes like this one. I mean, a simple line of code saying 'if YHAT is ("primate", "monkey", etc.) then end' would do it (see the sketch below). The alternative explanation is that this FB incident shows the TRUE bias of the allegedly woke tech overlords: they really DO see blacks as less than human.
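A minimal sketch of that kind of override, in Python. The blocklist and function name are hypothetical illustrations of the guardrail idea, not Facebook's actual code:

    # Guardrail: suppress any recommendation label on a blocklist before it reaches a user.
    BLOCKED_LABELS = {"primate", "monkey", "gorilla", "ape"}

    def safe_label(yhat):
        """Return the model's predicted label, or None if it is blocklisted."""
        return None if yhat.lower() in BLOCKED_LABELS else yhat

    # Usage: a blocked prediction is suppressed instead of shown.
    print(safe_label("primate"))    # None - suppressed
    print(safe_label("landscape"))  # "landscape"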
No Comment
There was an incident like this a few years ago where Google was classifying pictures of black people as gorillas. After that dust-up there is no excuse for not having sufficient data in their training set to distinguish between the two, and even less excuse for not testing for that specific case.
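Testing for that specific case could be as simple as the sketch below: a regression test that runs a curated set of human-face images through the classifier and fails if any comes back with a blocklisted label. The classify function and image list are hypothetical stand-ins, and it assumes a pytest-style test runner:

    # Regression test: no human-face image should ever get a blocklisted label.
    BLOCKED_LABELS = {"primate", "monkey", "gorilla", "ape"}

    def test_no_blocked_labels_on_human_faces(classify, human_face_images):
        for image in human_face_images:
            label = classify(image)
            assert label.lower() not in BLOCKED_LABELS, (
                f"regression: human face classified with blocked label {label!r}"
            )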
Bill Clinton thought that Obama should be serving him coffee.
The scope of that work didn't draw in ML, but the team members would have had the capability had it been part of the challenge. It's fun to work with competent co-workers. It did sort of spoil the Christmas holiday, but government proposals tend to do that.