Posted on 11/05/2022 11:58:49 AM PDT by BenLurkin
Yes it does. It is actually dangerous.
“It’s still just code and data.”
During training, an element of randomness is introduced: the behavior of the network is modified at random. If the change leads to improved results, it is retained; otherwise, another random change is tried. The sum total of these retained random changes comprises the neural network.
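The procedure described above is essentially a random-mutation hill climb. A minimal sketch of that idea, with made-up parameter names and a toy loss function (note: most real neural networks are instead trained with gradient descent, not pure random search):

```python
import random

def random_search_train(weights, loss, steps=1000, scale=0.1):
    """Train by random perturbation: keep a change only if it improves the loss.

    This mirrors the described procedure (a random-mutation hill climb),
    not the gradient-based training most real networks use.
    """
    best = loss(weights)
    for _ in range(steps):
        i = random.randrange(len(weights))       # pick one parameter at random
        old = weights[i]
        weights[i] = old + random.gauss(0, scale)  # introduce a random change
        new = loss(weights)
        if new < best:
            best = new          # improvement: retain the change
        else:
            weights[i] = old    # no improvement: revert, try another change
    return weights, best

# Toy example: nudge three weights toward a target vector.
random.seed(1)
target = [1.0, -2.0, 0.5]
w = [0.0, 0.0, 0.0]
w, final = random_search_train(
    w, lambda ws: sum((a - b) ** 2 for a, b in zip(ws, target))
)
```

After enough accepted mutations, `final` ends up far below the starting loss of 5.25, which is the whole training signal in this scheme.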
“It’s already happened”
Remember when Google’s image classifier classified black people as gorillas?
Ha, yes. The real problem with the chatbot AI is that reality is racist. If one group of people really is more prone to violent crime, or more likely to start turning over tables in the Red Lobster, then that is very hard to hide unless you intentionally train the algorithm on a very restricted and manipulated data set, which would be impossible to maintain for the newest algorithms that can learn dynamically.
I’ve been studying this topic from a neuroscience perspective for over thirty years.
AI does not have the ability to replicate the non-logical, non-linear attributes of feminine consciousness.
The structural masculine consciousness functions very similarly to AI.
I’ve studied this as it relates to personality development, decision making, learning disabilities, emotional trauma, autism spectrum disorders, relationships and gender identity.
It’s a very complex topic.
He didn’t go up against the tough computer George Oblique Stroke XR40.
https://m.imdb.com/title/tt0516957/fullcredits
This is an awesome post.
Yeah man. The survivors will be hunted down by cyborgs with German accents.
Get to the chopper!!!
Crowd sourcing trends to the mean.
For example, if you want the computer to choose a song for you by a particular artist, it will have learned the most popular ones and offer them more often.
“Crowd sourcing trends to the mean. For example, if you want the computer to choose a song for you by a particular artist, it will have learned the most popular ones and offer them more often.”
Which adds to the data that that song is chosen more often.
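That feedback loop is easy to simulate. A hypothetical sketch (song names and play counts invented for illustration): recommend in proportion to past plays, then count each recommendation as a new play, and the popular song keeps pulling ahead.

```python
import random

def simulate_feedback(plays, steps=10000):
    """Recommend songs in proportion to past play counts, then record each
    recommendation as a new play. Popular songs get recommended more often,
    which makes them more popular still (a rich-get-richer feedback loop)."""
    songs = list(plays)
    for _ in range(steps):
        pick = random.choices(songs, weights=[plays[s] for s in songs])[0]
        plays[pick] += 1   # the recommendation itself adds to the data
    return plays

random.seed(0)
counts = simulate_feedback({"hit": 60, "deep cut A": 20, "deep cut B": 20})
# The initially most-played song tends to capture an even larger share.
```

This is the same rich-get-richer dynamic sometimes pointed out for citation counts in scientific papers: being chosen is itself recorded as evidence for being chosen again.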
You could say the same about cited references in scientific papers, say.
I have no idea... I would assume if they chose they would also be able to cover their tracks.
If the coders developing AI can’t explain how it works, I don’t trust them. How the F do you code a process you don’t understand? Plus, we need to run psych tests & security searches on AI coders. What if some trans-activist codes refusal to acknowledge biology into surgical AI? This could be extremely dangerous.
Did Google ask the AI to tell them what language the programs were using?
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.