The Diversity, Equity, and Inclusion crowd just gained absolute power over hiring for such jobs and now those jobs are being taken over by "robots."
Oh, the bitter irony.
And no matter how much you scream "racist" at robots, they just don't give a damn.
The funny thing though is that if math and science are racist and robots are the products of math and science (by way of logic and programming), it stands to reason that robots are indeed systemically racist.
Nope. Actually, it's easier to program for DEI targets than to wriggle your way there with DEI staff.
Am I the only one who thinks this AI crap is a bunch of BS? It can only regurgitate what it is programmed to regurgitate, folks. There is no “intelligence” from these robots. Of course the robots will be programmed to not allow white males to be hired. Then the left will hold this out as “proof” that whites shouldn’t be hired - because a robot is super intelligent and decided that whites aren’t worth hiring, so it just shows how racist society was before the robots took over. This is the insanity we are dealing with. And it’s not just insanity. It is evil.
Except the AI stories I have read all say it's lefty-biased.
So white folk might have issues anyway.
That inventory model will likely do a nice job. If Walmart's customer base shifts to, say, young vegetarian men from Micronesia who eat fish, then it'll miss the mark due to bias. But statisticians monitor models to avoid that miss.
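The monitoring described above can be sketched as a simple drift check: compare predicted sales to actuals each week and raise a flag when the error climbs. All numbers and the 15% threshold here are hypothetical, just to illustrate the idea.

```python
# Minimal sketch of model monitoring: flag periods where prediction
# error drifts past a threshold (threshold and data are hypothetical).

def mape(actual, predicted):
    """Mean absolute percentage error across items."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def drift_alert(actual, predicted, threshold=0.15):
    """True when error exceeds the threshold, signaling the model may
    no longer match the (shifted) customer base."""
    return mape(actual, predicted) > threshold

# Stable week: forecasts track actuals closely -> no alert.
print(drift_alert([100, 200, 50], [98, 205, 52]))    # False
# Shifted demand (one item suddenly sells 3x): model misses -> alert.
print(drift_alert([100, 200, 150], [98, 205, 50]))   # True
```

In practice the statisticians would slice this check by store, region, and product category, since an aggregate number can hide exactly the kind of localized shift described above.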
Let's say Walmart fires its modeling staff and outsources inventory modeling to a boutique stats firm. Well, who's monitoring THEIR personnel's bias (maybe they're bug-eaters), their training data (from Whole Foods), or what's in their model (they can't code... they outsource that process)? NOW you're introducing all sorts of bias.
A lot of AI is really a zillion optimization functions with pooled output, focused on shrinking the gap between actual and predicted values. The "black box" risk is real... even developers aren't always certain how the model works. That risk is compounded with "unsupervised learning," where the model re-estimates the zillion functions with fresh data every day or week.
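A toy illustration of "a zillion optimization functions with pooled output": many tiny predictors, each fit to minimize its own squared error on a random slice of the data, with their predictions averaged. Everything here is an assumption for illustration, not any vendor's actual architecture.

```python
import random

# Toy "ensemble": 100 tiny least-squares fits (y ≈ w*x), each trained
# on a random subset, with their outputs pooled by averaging.
random.seed(0)
xs = list(range(1, 21))
ys = [2.0 * x + random.gauss(0, 1) for x in xs]  # true slope is 2, plus noise

def fit_slope(pairs):
    """Closed-form least squares through the origin: w = Σxy / Σx²."""
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# Each "function" minimizes squared error on its own random sample.
models = [fit_slope(random.sample(list(zip(xs, ys)), 10)) for _ in range(100)]

def pooled_predict(x):
    """Pooled output: average the hundred individual predictions."""
    return sum(w * x for w in models) / len(models)

print(round(pooled_predict(10), 1))  # ≈ 20: the pool recovers the trend
```

The "black box" point follows directly: even in this tiny sketch, no single one of the 100 fitted slopes explains the pooled answer, and if the models are re-fit on fresh data every week, the answer quietly changes.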
People like IBM's chief may have a view from the 30,000-foot level. Grinding out whether each of those zillion functions is statistically significant, or whether the variables make sense or are intuitive, is unglamorous work that is necessary to avoid bias.
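That unglamorous grinding might look something like this: a t-statistic on a fitted slope, flagging variables whose effect can't be distinguished from noise. The data and the |t| > 2 rule of thumb are hypothetical stand-ins for the real checks.

```python
import math

def slope_t_stat(xs, ys):
    """t statistic for the slope of a simple linear regression:
    t = b / SE(b). Large |t| suggests the variable really matters."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return b / se

xs = [1, 2, 3, 4, 5, 6, 7, 8]
strong = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 14.1, 15.9]  # clear trend
noise  = [5.0, 4.8, 5.3, 4.9, 5.1, 4.7, 5.2, 5.0]     # flat, no real trend
print(abs(slope_t_stat(xs, strong)) > 2)  # True: keep this variable
print(abs(slope_t_stat(xs, noise)) > 2)   # False: candidate to drop
```

Multiply this check by a zillion functions and variables, and you see why the 30,000-foot view misses where the bias actually hides.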