Posted on 07/20/2022 3:25:25 PM PDT by devane617
For workers who use machine-learning models to help them make decisions, knowing when to trust a model's predictions is not always an easy task, especially since these models are often so complex that their inner workings remain a mystery.
Users sometimes employ a technique, known as selective regression, in which the model estimates its confidence level for each prediction and will reject predictions when its confidence is too low. Then a human can examine those cases, gather additional information, and make a decision about each one manually.
But while selective regression has been shown to improve the overall performance of a model, researchers at MIT and the MIT-IBM Watson AI Lab have discovered that the technique can have the opposite effect for underrepresented groups of people in a dataset. As the model's confidence increases with selective regression, its chance of making the right prediction also increases, but this does not always happen for all subgroups.
For instance, a model suggesting loan approvals might make fewer errors on average, but it may actually make more wrong predictions for Black or female applicants. One reason this can occur is due to the fact that the model's confidence measure is trained using overrepresented groups and may not be accurate for these underrepresented groups.
Once they had identified this problem, the MIT researchers developed two algorithms that can remedy the issue. Using real-world datasets, they show that the algorithms reduce performance disparities that had affected marginalized subgroups.
(Excerpt) Read more at techxplore.com ...
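The mechanism the excerpt describes can be sketched in a few lines. This is illustrative only, not the MIT researchers' algorithm: the data, the threshold, and the helper names (`selective_predict`, `subgroup_error`) are made up for the example. The point is that overall error can look fine while the kept predictions for one subgroup are still mostly wrong, because the confidence scores are only well calibrated for the overrepresented group.

```python
def selective_predict(predictions, confidences, threshold):
    """Keep predictions whose confidence clears the threshold;
    defer the rest to a human reviewer (selective regression)."""
    kept, deferred = [], []
    for i, (pred, conf) in enumerate(zip(predictions, confidences)):
        if conf >= threshold:
            kept.append((i, pred))
        else:
            deferred.append(i)
    return kept, deferred

def subgroup_error(kept, labels, groups, group):
    """Error rate of the kept predictions, restricted to one subgroup
    (hypothetical helper; assumes group labels are available)."""
    wrong = total = 0
    for i, pred in kept:
        if groups[i] == group:
            total += 1
            wrong += (pred != labels[i])
    return wrong / total if total else None

# Toy data: confidence is well calibrated for group "A" but inflated
# for group "B", so the threshold filters A's mistakes but not B's.
preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 1, 0, 1, 1]
confs  = [0.9, 0.8, 0.95, 0.9, 0.85, 0.4]
groups = ["A", "A", "A", "B", "B", "B"]

kept, deferred = selective_predict(preds, confs, threshold=0.7)
# Group A's kept error is 0.0; group B's kept error is 1.0, even though
# the model's confidence on B's kept predictions was high.
```

In this toy run the model defers only one case, yet every prediction it confidently kept for group "B" is wrong — the disparity the article says selective regression can worsen.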
The algorithms have NO bias. They are reflecting true facts of the world that today’s wokeness prevents us from admitting.
We demand equal outcomes
More poison...! You cannot gaslight an algorithm to make someone college material or credit worthy, but it looks like that is being tried.
Selective regression, as described, is likely to increase the average user's confidence and presumption of accuracy.
In other words, Selective Regression would require more checking, clarifying and tinkering than the average user would want to bother doing.
In America, we are used to a ‘set it & forget it’ type of dynamic. We expect extremely accurate results and we tend to get lazy on checking those results.
I saw it first some 45 years ago when calculators were introduced into classrooms. How many of us actually checked the sums provided by our little ‘miracle machines’?
I sure didn’t, unless ordered to by a teacher.
“For instance, a model suggesting loan approvals might make fewer errors on average, but it may actually make more wrong predictions for Black or female applicants. One reason this can occur is due to the fact that the model’s confidence measure is trained using overrepresented groups and may not be accurate for these underrepresented groups.”
Ha ha ha. The model should be color-blind and just look at facts.
Confidence level? The confidence level that matters involves whether the loan applicant is likely to pay back the loan.
Just wait until our almost fully socialized medical system uses AI to decide who gets rationed out and who gets rationed in for treatment.
Easy! Solve it by introducing the desired bias into the algorithm.
Then claim it isn’t there!
🎯
How long before they talk about the Earth’s new black box, run by AI and powered by solar, making AI more or less immortal?
I had a number crunching job in the 1990s and developed Excel spreadsheets to do the work.
My old timer boss checked every single calculation for a year before he trusted the spreadsheets.
At that point he threw in the towel and started using them.
;-)
paint everything black...
That is what these algorithms aim to correct. Much like a woke employee making decisions based on race and sex rather than merit, they want the AI doing the same. I once listened to Jordan Peterson worry about AI because the world’s most evil people are generally in charge of programming it.
I was under the distinct impression that solar power is not available at night. Or on a cloudy day.
And batteries eventually wear out.
I am not too worried about an "immortal" AI.
That’s a good boss. He walked the talk.
I have not looked that deeply into it yet. Technology, AI and batteries are all advancing rapidly. It is meant to survive the end of Earth and beyond. https://www.earthsblackbox.com/
LOL. In other words, when a model built on logic does not take political pressure and liberal, woke concerns into account, it awards loans to the people most likely to repay them and doesn’t falsely claim that certain minorities are just as good of a credit risk as whites.
Some of the "corner cases" for the H135 and H145 helicopters were very challenging. The delivered library had over 5500 unit tests with every normal and corner case tested.
fairness...
when it comes right down to it, it’s ALL 1’s and 0’s
what do they want, an equal number of 0’s and 1’s???