Posted on 11/05/2022 11:58:49 AM PDT by BenLurkin
What's your favorite ice cream flavor? You might say vanilla or chocolate, and if I asked why, you’d probably say it’s because it tastes good. But why does it taste good, and why do you still want to try other flavors sometimes? Rarely do we ever question the basic decisions we make in our everyday lives, but if we did, we might realize that we can’t pinpoint the exact reasons for our preferences, emotions, and desires at any given moment.
There's a similar problem in artificial intelligence: the people who develop AI are increasingly having trouble explaining how it works and determining why it produces the outputs it does. Deep neural networks (DNNs), made up of layer upon layer of processing systems trained on human-created data to mimic the neural networks of our brains, often seem to mirror not just human intelligence but also human inexplicability.
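To make "layer upon layer of processing" concrete, here is a minimal sketch of a three-layer network in Python with NumPy; the layer sizes, random weights, and ReLU activation are arbitrary illustrative choices, not anything specific to the systems described here.

import numpy as np

def relu(x):
    # Rectified linear unit: a simple nonlinearity applied between layers.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
# Three layers of weights; real networks have many more layers and units.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(8, 2))

def forward(x):
    h1 = relu(x @ W1)   # layer 1 transforms the raw input
    h2 = relu(h1 @ W2)  # layer 2 transforms layer 1's output
    return h2 @ W3      # layer 3 produces the final output

print(forward(rng.normal(size=(1, 4))))  # a 2-number output for a 4-number input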
AI systems have been used for autonomous cars, customer service chatbots, and diagnosing disease, and can perform some tasks better than humans. For example, a machine capable of remembering one trillion items, digits, letters, and words, dwarfs the roughly seven items humans can hold in short-term memory, and can process that information far faster than we can. Among the different deep learning models are generative adversarial networks (GANs), which are often used to train generative AI systems such as the text-to-image generator Midjourney. A GAN essentially pits two models against each other: one generates candidate outputs while the other tries to tell them apart from real data, and each side improves against the other over many rounds until the generator becomes very good at its task. The issue is that this process creates models that their developers simply can't explain.
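A rough sketch of that generator-versus-discriminator contest, here in PyTorch on a toy one-dimensional "dataset"; all layer sizes, learning rates, step counts, and the target distribution are invented for illustration, and this has nothing to do with how Midjourney was actually trained.

import torch
import torch.nn as nn

# Generator maps random noise to fake "data"; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # toy "real" data: N(3.0, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator tries to label real samples 1 and fakes 0.
    opt_D.zero_grad()
    d_loss = loss(D(real), torch.ones(64, 1)) + \
             loss(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # Generator tries to make the discriminator label its fakes 1.
    opt_G.zero_grad()
    g_loss = loss(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# Generated samples should drift toward the real distribution around 3.0.
print(G(torch.randn(5, 8)).detach())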
(Excerpt) Read more at vice.com ...
Yes it does. It is actually dangerous.
“It’s still just code and data.”
During training, an element of randomness is introduced: the behavior of the network is modified randomly. If the change leads to improved results, it is retained; otherwise, another random change is tried. The sum total of these random changes comprises the neural network.
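To make that concrete, here is a toy version of that accept/reject loop in Python, a random-search ("hill climbing") scheme; the task and step size are made up. For what it's worth, most production networks are actually trained by gradient descent, where randomness enters through weight initialization and minibatch sampling rather than accept/reject perturbations.

import numpy as np

rng = np.random.default_rng(1)

# Toy task: fit y = 2x + 1 with a single weight and bias.
x = rng.normal(size=100)
y = 2.0 * x + 1.0

params = rng.normal(size=2)  # start from random parameters

def error(p):
    return np.mean((p[0] * x + p[1] - y) ** 2)

best = error(params)
for step in range(5000):
    candidate = params + rng.normal(scale=0.1, size=2)  # random change
    e = error(candidate)
    if e < best:  # keep the change only if results improve
        params, best = candidate, e

print(params)  # ends up near [2.0, 1.0]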
“It’s already happened”
Remember when Google’s image classifier labeled black people as gorillas?
Ha, yes. The real problem with the chatbot AI is that reality is racist. If one group of people really is more prone to violent crime, or more likely to start turning over tables in the Red Lobster, then that is very hard to hide unless you intentionally train the algorithm on a very restricted and manipulated data set, which would be impossible to maintain for the newest algorithms that can learn dynamically.
I’ve been studying this topic from a neuroscience perspective for over thirty years.
AI does not have the ability to replicate the non-logical, non-linear attributes of feminine consciousness.
The structural masculine consciousness functions very similarly to AI.
I’ve studied this as it relates to personality development, decision making, learning disabilities, emotional trauma, autism spectrum disorders, relationships and gender identity.
It’s a very complex topic.
He didn’t go up against the tough computer George Oblique Stroke XR40.
https://m.imdb.com/title/tt0516957/fullcredits
This is an awesome post.
Yeah man. The survivors will be hunted down by cyborgs with German accents.
Get to the chopper!!!
Crowd sourcing trends to the mean.
For example, if you want the computer to choose a song for you by a particular artist, it will have learned the most popular ones and will offer them more often.
“Crowd sourcing trends to the mean. For example, if you want the computer to choose a song for you by a particular artist, it will have learned the most popular ones and will offer them more often.”

Which then adds to the data, making that song even more likely to be chosen next time.
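A toy simulation of that feedback loop in Python; the song names and starting counts are invented. Picks are made in proportion to past plays and each pick is logged as a new play, so the early leader keeps pulling ahead.

import random
from collections import Counter

songs = ["hit", "album_track_1", "album_track_2", "deep_cut"]
plays = Counter({"hit": 10, "album_track_1": 5, "album_track_2": 3, "deep_cut": 1})

# Recommender picks in proportion to past plays; each pick is logged as a play.
for _ in range(1000):
    choice = random.choices(songs, weights=[plays[s] for s in songs])[0]
    plays[choice] += 1

print(plays)  # the early leader pulls further ahead: a rich-get-richer loop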
You could say the same about cited references in scientific papers.
I have no idea... I would assume that if they chose to, they would also be able to cover their tracks.
If the coders developing AI can’t explain how it works, I don’t trust them. How the F do you code a process you don’t understand? Plus, we need to run psych tests & security screenings on AI coders. What if some trans activist codes a refusal to acknowledge biology into surgical AI? This could be extremely dangerous.
Did Google ask the AI to tell them what language the programs were using?