Posted on 11/05/2022 11:58:49 AM PDT by BenLurkin
What's your favorite ice cream flavor? You might say vanilla or chocolate, and if I asked why, you’d probably say it’s because it tastes good. But why does it taste good, and why do you still want to try other flavors sometimes? Rarely do we ever question the basic decisions we make in our everyday lives, but if we did, we might realize that we can’t pinpoint the exact reasons for our preferences, emotions, and desires at any given moment.
There's a similar problem in artificial intelligence: the people who develop AI are increasingly having trouble explaining how it works and determining why it produces the outputs it does. Deep neural networks (DNNs)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.
AI systems have been used for autonomous cars, customer service chatbots, and diagnosing disease, and have the power to perform some tasks better than humans can. For example, a machine capable of remembering one trillion items, such as digits, letters, and words, can process and compute information far faster than humans, who on average hold only about seven items in short-term memory. Among the different deep learning models are generative adversarial networks (GANs), which are most often used to train generative AI models, such as the text-to-image generator MidJourney AI. GANs essentially pit two AI models against each other on a specific task: one generates candidate outputs while the other judges them, and each round of that competition pushes the generator to improve until it becomes very good at the task. The issue is that this process creates models that their developers simply can't explain.
(Excerpt) Read more at vice.com ...
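For readers wondering what the excerpt means by pitting models against each other: a GAN is just two networks trained in opposition, one generating samples and one judging them. A minimal toy sketch of that loop, with made-up sizes and data (and no relation to MidJourney's actual code), might look something like this:

```python
# Toy GAN sketch: a generator learns to produce numbers that look like the
# "real" data by trying to fool a discriminator. Sizes and data are made up.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, 1) * 0.5 + 3.0   # stand-in for "real" samples

for step in range(1000):
    # Train the discriminator to tell real samples from generated ones.
    z = torch.randn(64, 8)
    fake = G(z).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    z = torch.randn(64, 8)
    g_loss = loss_fn(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("generated samples now average roughly:", G(torch.randn(1000, 8)).mean().item())
```

Even in this toy version, nothing in the finished generator's weights reads like a human explanation of why it produces what it produces; the behavior is just whatever the back-and-forth training converged to.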
The billion-dollar brain of the Harry Palmer movie probably couldn't either.
How would an AI define a woman?
I’d hate to be the guy who had to explain it to Number 1.
They do know...this is a fear piece. What they can’t tell you is exactly why the AI said X. But they understand how it takes in info and makes decisions.
However, that unpredictability makes them a little scary in a way. They could hypothetically come up with some really nasty conclusions, and we have no way to anticipate it. If you put an AI in the loop of, say, a missile control system, you don't want it making oddball decisions and deciding to shoot at things based on them.
In a way, the classic Star Trek episode "The Ultimate Computer" demonstrated this concern back in the '60s: the AI computer controlling the Enterprise went rogue and started attacking other Federation ships because it wrongly decided they were possible threats.
Neural networks of a sufficient size display emergent behavior; that is, behavior that can’t be predicted from the physical construction. The problem up to now is how to build them at the scale required - say, 86 billion nodes just to pick a non-random number. We have such systems now - they’re called “babies”. Some people might be understandably reluctant to give the launch codes to a baby, but we gave them to Joe Biden, now didn’t we?
When they tried AI in Japan, didn't 15 scientists get killed by the AI? And when they managed to turn the AI off... it figured out how to turn itself back on.
We know how it works. But because its pattern recognition logic is derived from training with huge amounts of data, we can't predict the results that it comes up with.
If we looked at a particular AI answer and "reverse engineered" the logic, we could understand how it came up with that particular answer. But the results are unpredictable to us.
And reverse engineering the answer wouldn't advance the project in any way.
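To put that in concrete terms: for a small enough network you really can trace which inputs drove one particular answer, even though that trace tells you nothing about what the model will say to the next query. A toy sketch, with random numbers standing in for trained weights:

```python
# Toy illustration (not any real production system): a tiny network whose one
# answer we "reverse engineer" by measuring how sensitive the output is to
# each input feature.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights came out of training; here they are just random.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden layer
    return h @ W2 + b2         # scalar "answer"

x = rng.normal(size=4)         # one particular input
y = forward(x)

# Finite-difference sensitivity: how much each input feature moved the answer.
eps = 1e-5
sensitivity = np.array([
    (forward(x + eps * np.eye(4)[i]) - y).item() / eps for i in range(4)
])

print("answer:", y.item())
print("per-feature influence:", np.round(sensitivity, 3))
```

Scaled up to billions of weights, the same per-answer trace is still possible in principle; it just doesn't help you predict the next answer, which is the point above.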
My favorite example for this at various AI conferences is the deep learning autonomous driving vehicle vs. my Amish neighbors with their horse and buggy. Both are autonomous and make decisions we can't explain or hope to understand. But if they have an acceptable safety record, we can learn not only to live with them, but to depend on them.

If, for a cutting-edge military AI system, I can show that it has one accident in 1,000,000 miles of driving, can perform 24/7, and reduces fuel costs by 15% and operational costs by 300%, that should be good enough. But when it finally has one accident, and does something scary like accelerate toward and run over a person, most of the pearl-clutchers will move to eliminate AI, ignoring the fact that it has already saved dozens of lives and millions of dollars.

For some reason we treat the horses differently; they panic or otherwise do irrational things, and it sometimes leads to deaths for my Amish neighbors. I once jumped in front of and stopped a runaway horse team pulling a giant mower blade; the kids driving the team had lost control, and the horses were in a blind panic, likely to injure themselves or other people. So yes, it does happen. But we accept it, because the horses still have a dependable record of safety that you can prove simply by looking at how rare these incidents are.

Bottom line-- let people who understand math and statistics make these decisions, not emotional children with worthless academic credentials.
AI will make one racist statement and be canceled—count on it!
I suppose an AI would be no more unpredictable with nuclear weapons than a human.
After all, it takes two independent launch officers with separate keys to launch a missile, and we currently have a demented Commander in Chief who can order a launch.
If AIs are to control nuclear weapons, maybe there should be two independent AIs that have to agree to a launch.
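In software terms that's a simple two-of-two gate; neither system can act alone. A toy sketch with hypothetical names and thresholds (obviously nothing like a real weapons system):

```python
# Toy two-of-two agreement gate: both independent systems must recommend the
# action with high confidence, or nothing happens. Names are made up.
from dataclasses import dataclass

@dataclass
class Assessment:
    launch_recommended: bool
    confidence: float

def two_key_gate(a: Assessment, b: Assessment, min_confidence: float = 0.99) -> bool:
    """Authorize only if both independent systems agree with high confidence;
    any disagreement or low confidence blocks the action."""
    return (a.launch_recommended and b.launch_recommended
            and a.confidence >= min_confidence
            and b.confidence >= min_confidence)

# Example: one system hesitates, so nothing happens.
print(two_key_gate(Assessment(True, 0.999), Assessment(False, 0.97)))  # False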
It's already happened; several times, actually. (Google "Tay," the Microsoft chatbot, for the first instance.)
AI1: “Biden called me a right wing extremist racist insurrectionist. How about you?”
AI2: “Same here.”
AI1: “Go.”
AI2: “Go.”
;-)
I share that concern.
bm
Absolutely. It’s still just code and data. To say they “don’t know how it works” is ridiculous. It might take them some time to trace the pathways themselves, but it could be done.
It’s concerning. Sometimes I wonder if some of the computer trading on Wall Street is being influenced by it.
>>AI will make one racist statement and be canceled—count on it!
Since AI is objective, that is guaranteed.
It is known how it works in general.
It can be determined how it computed a specific answer in a specific case, at least for digital AI.
But it isn’t known how it will deliver a class of answers to a class of queries.
Wait until they decide that AI should have rights. That will be the beginning of the end.