The first AI attempts a decade ago were wholly empirical mining of available data
The folks in charge were very troubled by the results
AI is only as good as who feeds it
Like a pit bull sorta
Actually, the first AI attempts go back decades.
The algorithms being used today were developed a long, long time ago in tech terms. I was first exposed to them in the ’80s, and they weren’t new then.
What has changed is the computational power, and the amount of data available to train them on.
Today we can build out gigantic “neural networks” with hundreds of thousands, if not millions, of devices/nodes if we want; that was impossible back then, and we didn’t have remotely the amount of data available. We now have 2-3 decades’ worth of nearly every single action people engage in throughout their lives, with context and other information around it.
AI is always vulnerable to its training data, but also to what it decides is “important” in that data. If it’s manually trained, humans feed it lots of data telling it what’s important; it then generates probabilities from that training data and analyzes new data against the criteria it was trained on.
The second type of training is literally just letting the algorithms themselves decide what is relevant and important when fed lots and lots of data, finding patterns on their own. With that kind of training, you really don’t know what the “AI” is going to decide is relevant.
Neither model is perfect. Even when humans do the training, you still aren’t sure what the AI will ultimately decide is relevant.
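Here’s roughly what those two styles look like in code, as a toy scikit-learn sketch. All the features, labels, and numbers are invented for illustration; this isn’t anyone’s real system:

```python
# Minimal sketch of the two training styles, using invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))      # 200 samples, 2 features
y = (X[:, 0] > 0).astype(int)      # human-supplied labels

# Supervised: humans tell the model what "important" means via labels.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: the algorithm decides on its own what structure matters.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])              # clusters may or may not line up with y
```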
For example, way back in the ’80s the military tried to train an “AI” to determine whether an aerial photo had a tank in it or not. They had two sets of data: one was pictures with tanks in them, and one was pictures without. They “trained” the system by telling it which pictures had tanks and which did not, and it got to decide what it was about a picture that made it a tank picture.
They got the system to near-perfectly detect a tank every time with their training data sets. They thought they had achieved a great milestone...
Then they brought in new pictures, and the system failed miserably. It turns out the “AI” had not decided that a visible tank was what differentiated the two picture sets, but that the brightness of the pictures did. Apparently all the training pictures with tanks had been taken on a sunny day, and all the training pictures without tanks had been taken on a cloudy day. When the new set of pictures was thrown at it, it utterly failed at detecting tanks, but very accurately differentiated sunny-day pictures from overcast-day ones.
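You can reproduce that failure mode in a few lines. Here’s a toy sketch (scikit-learn again, every number invented) where “brightness” is perfectly correlated with the label during training, so the model learns the weather instead of the tank:

```python
# Toy reconstruction of the tank story: the confounded feature
# (brightness) dominates training, then stops working on new data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Training set: every "tank" photo is sunny (bright), every non-tank
# photo is overcast (dark). Feature 0 = brightness, feature 1 = a weak
# genuine cue for the tank itself.
y_train = rng.integers(0, 2, n)
brightness = y_train + rng.normal(0, 0.1, n)        # confounded with label
tank_cue = 0.3 * y_train + rng.normal(0, 1.0, n)    # weak true signal
X_train = np.column_stack([brightness, tank_cue])

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # near perfect

# New photos: mixed weather, so brightness no longer tracks the label.
# Only the weak real cue remains, and accuracy collapses toward chance.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    rng.normal(0.5, 0.5, n),                        # weather decoupled
    0.3 * y_test + rng.normal(0, 1.0, n),
])
print("test accuracy:", clf.score(X_test, y_test))
```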
AI absolutely has its place and some great use cases, and these models bring efficiency, especially to a lot of boilerplate work... but those who blindly take what AI tells them as accurate will get burned, make no mistake about it.
This is ridiculous. The AI chose 85% white over black; blacks make up roughly 13% of the population. A two-percent disparity one way or the other does not imply racism.
Besides, many blacks are given traditional Christian names. Only in the “Black Power” era of the late 1960s did parents start giving their kids labels they thought Africanized them, even if those were gobbledygook to actual Africans (55 countries with almost as many languages and dialects).
Personally I think it cruel to name a child so badly that no one knows how to spell or pronounce it. It aggravates those trying to do so while keeping the poorly named in a state of perpetual consternation.
Garbage in, garbage out. So many think AI is smart.
If you tell it 2+2=5, so will it. Define an elephant with a picture of a giraffe and that’s what you’ll get.
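You can see that “2+2=5” effect in a tiny sketch (scikit-learn, made-up data) where every training example insists the sum is one more than it really is:

```python
# Garbage in, garbage out: fit a model on deliberately wrong labels
# and it faithfully reproduces the error. Purely illustrative.
from sklearn.linear_model import LinearRegression

# Every training example says a + b = (a + b) + 1, i.e. "2 + 2 = 5".
X = [[a, b] for a in range(10) for b in range(10)]
y = [a + b + 1 for a, b in X]

model = LinearRegression().fit(X, y)
print(model.predict([[2, 2]]))  # ~5.0, exactly as it was taught
```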
Feed it all the NYT, WAPO, AP, etc. articles and its answers will be biased.
Simple.