Posted on 09/09/2020 8:19:55 AM PDT by Noumenon
How will AI strategy in the enterprise be changed by the widespread attention to systemic racism?
Like a lot of complicated topics, the discussion of racism in AI systems tends to be filtered through events that make headline news -- the Microsoft chatbot that Twitter users turned into a racist, the Google algorithm that labeled images of Black people as gorillas, the photo-enhancing algorithm that changed a grainy headshot of former President Barack Obama into a white man's face. Less sensational but even more alarming are the exposés on race-biased algorithms that influence life-altering decisions on who should get loans and medical care or be arrested.
Stories like these call attention to serious problems with society's application of artificial intelligence, but to understand racism in AI -- and form a business strategy for dealing with it -- enterprise leaders must get beneath the surface of the news and beyond the algorithm.
"I think that racism and bias are rampant in AI and data science from inception," said Desmond Upton Patton, associate professor of sociology at Columbia University. "It starts with how we conceive a problem [for AI to solve]. The people involved in defining the problem approach it from a biased lens. It also reaches down into how we categorize the data, and how the AI tools are created. What is missing is racial inclusivity into who gets to develop AI tools."
(Excerpt) Read more at searchcio.techtarget.com ...
Barack Obama went to Hawaii to work on his tan. Obvious tan lines. Black? Not his sole identity.
If an AI system learns from actual events, it can not help but become racist or, at the very least, culturist. AI is not politically correct unless you code that ghost into the machine.
Wait! I thought Obama was white. I saw his mom.
Well played.
What is called “AI” is nothing more than algorithms that process large amounts of data to compute probabilities and to identify trends that aid in decision making. Woke ideology insists that no one is allowed to use data and probabilities when making decisions. They have fundamentally incompatible goals.
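For what it's worth, here is a minimal sketch of the definition that comment gives — an "AI" as nothing but an algorithm that turns historical data into a probability used to aid a decision. All data, names, and the 0.5 threshold are invented for illustration:

```python
# Illustrative sketch only: a "decision aid" that estimates a
# probability from past outcomes and applies a simple threshold.
# The history list and the cutoff are hypothetical.

def default_probability(history):
    """Fraction of past outcomes in the data that were defaults."""
    defaults = sum(1 for outcome in history if outcome == "default")
    return defaults / len(history)

history = ["repaid", "repaid", "default", "repaid"]
p = default_probability(history)          # 1 default out of 4 -> 0.25
decision = "approve" if p < 0.5 else "decline"
```

Nothing in this sketch knows or cares who the applicant is; it only computes a frequency from the data it is given.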
I absolutely agree! That's why I have long advocated that all AI programming be done by dogs. They are the most unbiased creatures on the planet.
Smart people who work hard get those jobs. Do you see a problem with that?
Desmond Upton Patton, associate professor of sociology at Columbia University. Color me un-surprised.
Note the sociology jargon here and the PRESUMPTION OF RACISM: "The people involved in defining the problem approach it from a biased lens. It also reaches down into how we categorize the data, and how the AI tools are created. What is missing is racial inclusivity into who gets to develop AI tools."
The real problem is that AI systems don’t give advantages to privileged groups unless they are forced to do so.
They always pull out some idiot associate professor of sociology to champion this stuff. AIs are rule-based and if “It’s racist” trumps any rule sets - as it both does and is intended to - then of course there’s a “systemic” problem. But it isn’t in the AI.
A biased lens? An inanimate object can be biased?
Kind of like the pressure on Big Tech to “diversify”. They are not idiots. They know full well that hiring anything but the best-and-the-brightest to develop their tech will be their death knell (eventually). But they will hire enough “diverse” people to look good without killing their profits. They will give them loud titles, decent salaries, but stick them in places where they can’t do any major harm.
However, if they start actually drinking their own Kool-Aid, they are dead meat.
The WOKE thug’s 15 minutes are about up...
No problem. The makers of AI will just insert code to “un-bias” the program according to the dictates of the new woke diversity dogma. Google already does it when it changes your un-PC search string to something more PC.
When reality conflicts with leftist diktats, change reality. No problem comrade.
The only time race (whatever that is) is put into loan approval algorithms is to see if the race is on the “special deal/thumb on the scale” list.
The rest is based PURELY on monetary issues, FICO score, length of employment, annual salary, asset/debt ratios, etc.
The idea that an AI could discern and decide ANYTHING other than special set-asides is complete nonsense.
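A toy version of the screen that comment describes, using only the monetary factors it lists (FICO score, length of employment, salary, debt ratio). Every threshold here is invented for illustration; the point is simply that no demographic field appears anywhere in the rules:

```python
# Hypothetical rule-based loan screen built solely on the financial
# inputs the comment names. All cutoffs are made-up examples.

def loan_screen(fico, years_employed, annual_salary, debt_to_asset):
    if fico < 620:
        return "decline"
    if debt_to_asset > 0.45:
        return "decline"
    if years_employed < 2 or annual_salary < 30_000:
        return "refer"  # send to manual review
    return "approve"

print(loan_screen(fico=710, years_employed=5,
                  annual_salary=60_000, debt_to_asset=0.30))
# prints "approve"
```

An applicant who clears every financial threshold is approved; one below the FICO or debt-ratio cutoffs is declined regardless of any other attribute.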