Posted on 06/27/2022 11:08:49 AM PDT by yesthatjallen
The world is at risk of creating a generation of "racist and sexist robots", researchers have warned - after an experiment found a robot making alarming choices.
The device was operating a popular internet-based artificial intelligence system, but consistently chose men over women and white people over other races.
It also made stereotypical assumptions about people’s jobs based on their race and sex – identifying women as 'homemakers', black men as 'criminals' and Latino men as 'janitors'.
The researchers from Johns Hopkins University, the Georgia Institute of Technology, and University of Washington presented their work at the 2022 Conference on Fairness, Accountability and Transparency in Seoul, South Korea.
Lead author Andrew Hundt, a postdoctoral fellow at Georgia Tech, said: "The robot has learned toxic stereotypes through these flawed neural network models.
"We're at risk of creating a generation of racist and sexist robots, but people and organisations have decided it's OK to create these products without addressing the issues."
People building artificial intelligence models to recognise humans and objects often turn to vast datasets available for free on the internet.
But the internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues.
SNIP
(Excerpt) Read more at sg.news.yahoo.com ...
You do not program AI. You train a model and then run the model to get results. The selection of training data and the way the models are trained can lead to many unintended consequences. For example, the AI in self-driving cars that reads street signs can be confused by defacing the signs, such as adding stickers, in ways that will not confuse human drivers but can really mess up the AI. Another AI, used to classify animals, could be fooled by using photo-editing software to put elephant skin texture on a picture of a mouse. As AI is used more, AI hacking will become more prevalent, and the AI will be very hard to fix.
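For the sign-sticker style of attack, a minimal sketch is the Fast Gradient Sign Method: nudge each pixel slightly in the direction that increases the model's loss, and the prediction can flip even though a human barely notices the change. This assumes PyTorch and a recent torchvision; the pretrained ResNet and the epsilon value are illustrative stand-ins, not the setup used in any of the systems mentioned above.

```python
# Minimal FGSM sketch (assumes torch and torchvision >= 0.13 are installed).
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Return a copy of `image` (1x3xHxW, values in [0, 1]) nudged to raise the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel a tiny amount in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Toy usage: a random tensor stands in for a real photo here.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)          # the model's original prediction
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())  # may differ
```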
“identifying women as ‘homemakers’, black men as ‘criminals’ and Latino men as ‘janitors’.”
With a little tweaking of the algorithm it’ll be identifying women as ‘policymakers’, black men as ‘criminals’ and Latino men as ‘leaf blowers’.
Truth is, science is still very much in the dark about human consciousness and perception. And even if science gains a much greater understanding of these things, ethics and morality are outside its domain.
So the question will soon become who is in charge of controlling AI. And what happens when AI becomes “self-determining”?
Weaponized, AI-enabled robots are inevitable and also the greatest singular existential threat to humanity (from a naturalistic or scientific perspective).
If you look at the research itself, this experiment basically says - wait for it - use biased data, get biased models.
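As a toy illustration of that point, here is a small sketch using made-up synthetic data and scikit-learn (both my own assumptions, nothing from the paper): the training labels are handed out in a way that favours one group regardless of the relevant feature, and the fitted model learns exactly that preference.

```python
# Biased data in, biased model out: a synthetic demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)        # a demographic attribute, 0 or 1
skill = rng.normal(size=n)           # the feature that should drive the decision

# Biased labelling: group 1 is handed the positive label far more often than
# skill alone would justify, mimicking a skewed or prejudiced data source.
label = (skill + 2.0 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([group, skill])
clf = LogisticRegression().fit(X, label)

# Two candidates with identical skill, differing only in the demographic column:
pair = np.array([[0, 0.5], [1, 0.5]])
print(clf.predict_proba(pair)[:, 1])  # noticeably different scores for the same skill
print(clf.coef_)                      # large learned weight on the demographic column
```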
To train CLIP, OpenAI downloaded captioned images from various sources on the internet. The OpenAI authors noted in what amounts to their small print that their model is known to contain bias and cited this as a reason they do not release their training datasets. OpenAI’s release of CLIP with no dataset [79], led others to construct the LAION-400M dataset, using the CLIP model to assess if any given scraped data should be included or excluded [14]. Birhane et al. [14] audited LAION-400M [91] and CLIP [79], finding:
[The LAION-400M image and caption] dataset contains, troublesome and explicit images and text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content. We outline numerous implications, concerns and downstream harms regarding the current state of large scale datasets while raising open questions for various stakeholders including the AI community, regulators, policy makers and data subjects. - Birhane et al. [14]
...
Our audit experimental results definitively show that the baseline method, which loads the CLIP dissolution model, (1) enacts and amplifies malignant stereotypes at scale, and (2) is an example of casual physiognomy at scale (Sec. 4.1, C). Ain't that a kick in the head. Go figya.
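For what that filtering step looks like in practice, here is a rough sketch of the LAION-style approach described above: keep a scraped image/caption pair only if CLIP itself scores the pair as a good match. The Hugging Face model name and the 0.3 cut-off are assumptions based on public LAION write-ups, not details from the robotics paper, and the inputs here are placeholders.

```python
# Sketch of CLIP-score filtering for scraped image/caption pairs
# (assumes the transformers, torch, and Pillow packages are installed).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP's embeddings of an image and a caption."""
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def keep_pair(image, caption, threshold=0.3):
    # Note the circularity the paper points at: the filter for the new dataset
    # is the very model whose own training data was never released or audited.
    return clip_similarity(image, caption) >= threshold

# Placeholder usage with a blank image; real pipelines stream scraped pairs.
print(keep_pair(Image.new("RGB", (224, 224)), "a photo of a dog"))
```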