Posted on 05/19/2025 9:12:29 AM PDT by Red Badger
The first AI attempts a decade ago amounted to wholly empirical mining of available data
The folks in charge were very troubled by the results
AI is only as good as who feeds it
Like a pit bull sorta
Will review this, but it smacks of being a polemic.
AI is not remotely what it is being sold as... it has its use cases and can add efficiencies... but this idea that AI is a panacea for all things... No, it’s not.
If you bother to look into the actual results it spits out, even the biggest, best-trained AIs create mirages and false output.
AI is not “Intelligence” at all; at their core, these systems are probability engines.
AI resume screening tools showed strong racial and gender bias...
Well, if you have one of those unique names, AI probably has problems recognizing you as a person. Tushiaquandra, UARCO, and LaDesmendia are probably not in the AI database as actual names.
How do we change that?
Don’t name your kid L’Marlius for a start.
With the population being 13% black, wouldn’t that be expected as a result?
Methinks this is the AI hypemeisters’ off-ramp; they will use this as a scapegoat rather than admitting they knew they were lying about the current state of AI.
AI resume screening tools showed strong racial and gender bias, with White-associated names preferred in 85.1% of tests
The names suck.
What about Watermelondrea?
The researchers validated three hypotheses about intersectionality and found that shorter resumes and varying name frequencies significantly impacted bias measurements.
Now let’s correlate other data points. There are many other data points to look at in this, and even then you don’t get an answer, only another question.
Actually the first AI attempts go back decades.
The algorithms being used today were developed a long, long time ago in tech terms. I was first exposed to them in the 80s... and they weren’t new then.
What has changed is the computational power, and the amount of data available to train them on.
Today we can build out gigantic “neural networks” with hundreds of thousands, if not millions, of devices/nodes if we want; that was impossible back then. And we didn’t have remotely the amount of data available. We now have 2-3 decades’ worth of nearly every single action people engage in throughout their lives, with context and other information around it.
AI is always vulnerable to its training data, but also to what it decides is “IMPORTANT” in that training data. If it’s manually trained, humans feed it lots of data telling it what’s important... and then it generates probabilities from that training data and analyzes new data based on the criteria it was trained on.
The second type of training is literally just letting the algorithms themselves try to decide what is relevant and important when fed lots and lots of data, finding patterns and things on their own. In that type of training, you really don’t know fully what the “AI” is going to decide is relevant.
Neither model is perfect. Even when you have humans doing the training, you still aren’t sure what the AI is ultimately going to decide is relevant. A rough sketch of both styles is below.
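Not from the article, just a toy sketch of the two training styles described above, using scikit-learn on made-up 2-D data; the dataset and model choices are my own assumptions:

```python
# A minimal sketch of the two training styles: supervised (humans
# supply the labels) vs. unsupervised (the algorithm finds its own
# structure). Toy data only; nothing here comes from the article.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# 200 made-up 2-D points in two blobs, with known labels.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: humans tell it what is "important" via the labels, and
# it learns probabilities conditioned on that training data.
clf = LogisticRegression().fit(X, y)
print("supervised class probabilities:", clf.predict_proba(X[:1]))

# Unsupervised: no labels at all; the algorithm decides on its own
# what structure is "relevant" (here, two clusters).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignments:", km.labels_[:10])
```

Even in this tiny example the point holds: the supervised model can only weight what the labels point it at, and the clustering model invents its own notion of what matters.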
For example, way back in the 80s the military tried to train “AI” to determine whether an aerial photo had a tank in it or not. They had two sets of data: one was pictures with tanks in them, and one was pictures with no tanks in them. They “trained” the system by telling it these are the pictures with tanks, and these are the pictures without, and it got to decide what about a picture meant there was a tank in it.
They got the system to near-perfectly detect a tank every time with their training data sets. They thought they had achieved a great milestone...
Then they brought in new pictures, and the system failed miserably. It turned out the “AI” had not decided that a visible tank was what differentiated the picture sets, but that the brightness of the pictures did: apparently all the training pictures with tanks were taken on a sunny day, and all the training pictures without tanks had been taken on a cloudy day. When the new set of pictures was thrown at it, it utterly failed at detecting tanks, but very accurately differentiated between sunny-day pictures and overcast-day pictures.
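For what it’s worth, here is a toy reconstruction of that failure mode; the fake “photos”, the numbers, and the classifier are all invented for illustration and have nothing to do with the actual military system:

```python
# Toy reconstruction of the tank story: in the training set the label
# is perfectly correlated with brightness, so a classifier can "succeed"
# without ever learning what a tank looks like. Everything is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_photos(n, tank, sunny):
    # 8x8 grayscale "photos" flattened to 64 pixels: base brightness
    # depends only on the weather. The "tank" adds no pixel signal at
    # all, which is the whole point of the demonstration.
    base = 0.8 if sunny else 0.2
    return base + 0.05 * rng.standard_normal((n, 64))

# Training set: every tank photo is sunny, every no-tank photo is cloudy.
X_train = np.vstack([fake_photos(100, tank=True,  sunny=True),
                     fake_photos(100, tank=False, sunny=False)])
y_train = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("training accuracy:", clf.score(X_train, y_train))  # ~1.0

# New photos: cloudy-day tanks and sunny-day empty fields.
X_new = np.vstack([fake_photos(100, tank=True,  sunny=False),
                   fake_photos(100, tank=False, sunny=True)])
y_new = np.array([1] * 100 + [0] * 100)
print("new-photo accuracy:", clf.score(X_new, y_new))     # ~0.0
```

Training accuracy comes out near 100% and new-photo accuracy near 0%, because the only signal the model ever saw was the weather.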
AI absolutely has its place, has some great use cases, and brings efficiency, especially to a lot of boilerplate things... but those who blindly take what AI tells them as accurate will get burned... make no mistake about it.
It seems AI is smarter than I thought, and some people are dumb for giving their kids Negro names.
The YouTube video, “Top 60 Ghetto Black Names”, from 14 years ago, is hilarious. I can find the video, but can’t find the link to post.
Exactly my thinking. 13% of the population is black, so that’s about right.
How did Michael Jordan do?
This is ridiculous. The AI chose white names 85% of the time, and blacks make up roughly 13% of the population, so you would expect about 87% either way. A two percent disparity one way or the other does not imply racism.
Besides, many blacks are given traditional Christian names. Only in the latter 1960s ‘Black Power’ era did parents start giving their kids labels they thought Africanized them, even if it was gobbledygook to actual Africans (55 countries with almost as many languages and dialects).
Personally I think it cruel to name a child so badly that no one knows how to spell or pronounce it. It aggravates those trying to do so while keeping the poorly named in a state of perpetual consternation.