Posted on 03/20/2019 7:08:19 AM PDT by Reeses
The UK government is launching an inquiry into whether algorithms used in areas such as criminal justice, recruitment, and finance are biased against people based on gender or race.
The Department for Culture, Media, and Sport (DCMS) announced the new inquiry on Wednesday. It will be led by the Centre for Data Ethics and Innovation, an independent watchdog set up by the department in 2018 to come up with best practice policy on how to police artificial intelligence.
The Cabinet Office's Race Disparity Unit, which highlights discrimination in public services, will also work on the new review.
The inquiry comes amid the rapid development of artificial intelligence. The technologies surrounding AI have provided big advances, but also thrown up challenges around the fairness and transparency of decision-making. Tech giants such as Amazon (AMZN) have already run into issues of algorithmic bias in recruitment, and Google (GOOGL) established a team last year to review the ethics of its AI.
“Technology is a force for good and continues to improve people's lives, but we must make sure it is developed in a safe and secure way,” digital secretary Jeremy Wright said in a statement.
Highlighting examples of potential pitfalls, DCMS said algorithms used to scan CVs and shortlist candidates could discriminate against certain groups of people due to the unconscious, or conscious, bias of programmers who wrote the code.
It also warned that the growing use of algorithms in finance could lead to a lack of transparency when applied to loans.
(Excerpt) Read more at finance.yahoo.com ...
AlGore has rhythm?....................
Facts can’t be documented if the left doesn’t like the implications.
“The Department for Culture, Media, and Sport (DCMS)”
Well, that’s as Orwellian as it gets. England is so dead. I am sick to death of this race crap.
We know what the results must be. After all, crime percentages are obviously equal for all cultures, so if some culture (we know which one) demonstrates a much much much higher crime rate, the algorithm must be flawed.
Well, if one uses historical data and bases actions on that historical data, then one is more likely to get the same results as before.
The trick is to find the “real” variables to act upon. Not skin color or race, but rather whether the person is a scumbag or not.
You might look at a person's record to determine this, but you may also find that the correlation between whether someone is a scumbag and their ethnicity will get you in trouble anyway, even though those were not inputs into the algorithm.
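For what it's worth, that proxy-variable effect is easy to demonstrate. Here's a minimal, entirely synthetic sketch in Python: the group labels, the "postcode_score" feature, and every number and threshold are made-up assumptions for illustration only. The toy "model" never sees the group label, only a feature that happens to correlate with it, yet its decisions still split along group lines.

```python
import random

random.seed(42)

# Entirely synthetic population. "group" is the protected attribute;
# the model never sees it. "postcode_score" is a proxy feature that,
# by construction in this toy example, correlates with group membership.
def make_person():
    group = random.choice(["A", "B"])
    # Proxy correlates with group: group B skews toward lower scores.
    base = 0.7 if group == "A" else 0.4
    postcode_score = min(1.0, max(0.0, random.gauss(base, 0.15)))
    return {"group": group, "postcode_score": postcode_score}

people = [make_person() for _ in range(10_000)]

# A decision rule "learned" from historical data: approve anyone whose
# proxy feature clears a threshold. Group is not an input.
def model(person):
    return person["postcode_score"] >= 0.55

for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    rate = sum(model(p) for p in members) / len(members)
    print(f"group {g}: approval rate = {rate:.1%}")
```

Run it and group A clears the threshold far more often than group B, even though the model never looked at the group label. That's why audits of this kind examine outcomes, not just which inputs the algorithm was given.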
“an independent watchdog set up by the department in 2018 to come up with best practice policy on how to police artificial intelligence”
Great, now we’re going to have to worry about PC robots. On the upside, if they turn the robots into pajama boys, we might be able to actually defeat our new robot overlords.
When self-driving cars are in crash mode and have to choose between running over a white man or a minority, they will be required by law to run over the white man, except if the man is wearing a dress.
So they will modify their algorithm to randomly pick white males as suspects to balance things out.
Can’t wait till they start combining that with the ancestry DNA search.
We need to kick in your door and shoot your dog at 3am in the name of fairness.
Men are more likely to commit crimes, especially violent crimes, than women.
Blacks, whether you blame culture, biology (lower average IQ), or current politics, are more likely to commit crimes than whites and East Asians.
Muslims, due to culture and religion, are more likely to commit crimes like terrorism and rape of non-Muslims.
If the algorithm says these groups are more likely to commit crimes, it is based on DATA and intelligence.
Any algorithm that says an old white woman is as likely to rape as a Muslim man is flawed beyond reason.
The problem with using algorithms to correct for so-called implicit bias is that implicit bias itself isn’t a sound concept. Where it may exist, there is NO proof it affects real-world behavior.
Psychology’s Favorite Tool for Measuring Racism Isn’t Up to the Job
Almost two decades after its introduction, the implicit association test has failed to deliver on its lofty promises.
http://nymag.com/scienceofus/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html
It is a new form of oppression by algorithm. You’re white and male? Your resume gets downgraded in the short list because of your demographics, regardless of your qualifications.
As the old saying goes, “Even observing an experiment introduces bias.”................
Life by Algorithm .. a Progre$$ive Solution for what ails ye ?? The data is still out, however.
“Centre for Data Ethics and Innovation, an independent watchdog set up by the department in 2018”
—
Centre for Data Ethics and Innovation Consultation