Posted on 06/27/2022 11:08:49 AM PDT by yesthatjallen
The world is at risk of creating a generation of "racist and sexist robots", researchers have warned - after an experiment found a robot making alarming choices.
The device was operating a popular internet-based artificial intelligence system, but consistently chose men over women and white people over other races.
It also made stereotypical assumptions about people’s jobs based on their race and sex – identifying women as 'homemakers', black men as 'criminals' and Latino men as 'janitors'.
The researchers from Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington presented their work at the 2022 Conference on Fairness, Accountability, and Transparency in Seoul, South Korea.
Lead author Andrew Hundt, a postdoctoral fellow at Georgia Tech, said: "The robot has learned toxic stereotypes through these flawed neural network models.
"We're at risk of creating a generation of racist and sexist robots, but people and organisations have decided it's OK to create these products without addressing the issues."
People building artificial intelligence models to recognise humans and objects often turn to vast datasets available for free on the internet.
But the internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues.
SNIP
(Excerpt) Read more at sg.news.yahoo.com ...
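As a rough illustration of the mechanism the excerpt describes, here is a minimal sketch, assuming a toy dataset: a trivial majority-label "classifier" trained on skewed web-style captions simply replays the skew in the data. The captions, labels, and predict function below are invented for illustration and are not the researchers' actual setup.

```python
# Hypothetical sketch: a model fit to skewed scraped data reproduces the skew.
# All data and names here are invented for illustration.
from collections import Counter, defaultdict

# Imagine captions scraped from the internet, pairing a description of a
# person with the occupation the caption writer assumed. Skew in the source
# text becomes skew in the training labels.
scraped_captions = [
    ("woman", "homemaker"), ("woman", "homemaker"), ("woman", "doctor"),
    ("man", "doctor"), ("man", "doctor"), ("man", "criminal"),
]

# A trivial "model": for each attribute, predict the label most frequently
# seen in training. Real neural networks are far more complex, but they also
# minimise error on the same skewed data, so the failure mode is similar.
counts = defaultdict(Counter)
for attribute, label in scraped_captions:
    counts[attribute][label] += 1

def predict(attribute: str) -> str:
    return counts[attribute].most_common(1)[0][0]

print(predict("woman"))  # -> "homemaker": the dataset's skew, replayed
print(predict("man"))    # -> "doctor"
```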
These aren’t flawed. They are correct, based on logic.
Socialist dreams never match reality.
This is going to be some thread.
Since right now they're made in Japan.
“Programming computers with biological facts and crime statistics makes computers sexist and racist. We’re going to have to create new facts and statistics.”
Sarcastic truth about crime statistics.
But statistics alone will not identify the individual(s) who stand apart from what some statistic says their demographic represents.
That is how decisions aimed at a preferred statistical result become racist, non-merit decisions in things like college admissions.
So easy to figure out, even a robot can do it.
“These aren’t flawed. They are correct, based on logic.”
They are flawed in the sense of using statistics alone to make certain decisions, and many decisions at that.
Statistics will exclude the individuals who stand out from whatever condition the statistics say fits their demographic.
Schools use demographic statistics to rule out persons of merit in favor of persons who will help them obtain a preferred statistical result.
There are many flaws in using statistics to make many different decisions. Statistics ignore the individual who does not fit the statistical average for their demographic.
If Ben Carson’s schools had used statistics to weigh his prospects of becoming a brain surgeon, he never would have made it.
Individual merit (good or bad) and character are values to be placed above statistics.
It doesn’t matter.
If everyone knows it is right for the majority and wrong for a minority, you have to assign a value that describes how often it fails, some "X%" of the time.
None of this is 100% right, and it cannot be. It only helps.
“It doesn’t matter.”
Yes, it matters when and how statistics are used: when they should be a determinant and when other factors should have equal or greater weight. That does not mean a statistic is flawed, but its use may be.
All the developers have to do is build in bias against normal, non-POC, White Christian males, as already exists in academia, major corporations, the law, and the media, and the robots will be deemed just wonderful.
The individual(s) who coded the program that runs the robots have their own internal biases, and those biases were, knowingly or unknowingly, loaded into the robots’ memory banks.
It sounds like those programmer individuals have... artificial intelligence. Hmm, similar to leftist Democrats who think they are smart but have artificial thinking. Can our future robot overlords be worse?
“inaccurate and overtly biased content...”
Oh, well, in THAT case...
Wait till these robots acquire a taste for human flesh.
If, for example, the database in question is the FBI Crime Statistics database and the conclusion is that blacks commit more violent crimes than other races, however emotive the term "racist" is, that conclusion was based on data, not emotions. Call it whatever you like, it isn't bias. Neither is it sexist to conclude that men commit more violent crimes than women, or ageist to conclude that young people commit more violent crimes than octogenarians. Whether the conclusion is diplomatic or not is irrelevant.
What idiot decided that robots should learn from humans & mimic human behavior? If you don’t want robots to be like humans, you’d better train them differently.
Can you teach these robots that Lia Thomas is a girl?