Posted on 11/05/2022 11:58:49 AM PDT by BenLurkin
The Billion Dollar Brain of the Harry Palmer movie probably couldn’t either.
How would an AI define a woman?
I’d hate to be the guy who had to explain it to Number 1.
They do know...this is a fear piece. What they can’t tell you is exactly why the AI said X. But they understand how it takes in info and makes decisions.
However, that unpredictability makes them a little scary. They could hypothetically come up with some really nasty conclusions, and we have no way to anticipate them. If you put an AI in the loop of a missile control system, for example, you don't want it making oddball decisions and deciding to shoot at things based on them.
In a way, the classic Star Trek episode "The Ultimate Computer" demonstrated this concern back in the '60s: the AI computer controlling the Enterprise went rogue and started firing on other Federation ships because it wrongly decided they were real threats.
Neural networks of a sufficient size display emergent behavior; that is, behavior that can’t be predicted from the physical construction. The problem up to now is how to build them at the scale required - say, 86 billion nodes just to pick a non-random number. We have such systems now - they’re called “babies”. Some people might be understandably reluctant to give the launch codes to a baby, but we gave them to Joe Biden, now didn’t we?
When they tried AI in Japan, didn't 15 scientists get killed by the AI? And when they managed to turn the AI off... it figured out how to turn itself back on.
We know how it works. But because its pattern recognition logic is derived from training with huge amounts of data, we can't predict the results that it comes up with.
If we looked at a particular AI answer and "reverse engineered" the logic, we could understand how it came up with that particular answer. But the results are unpredictable to us.
And reverse engineering the answer wouldn't advance the project in any way.
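To make that concrete, here's a toy sketch (the weights are random stand-ins, nothing from any real system): every parameter is in plain view and any single answer can be traced step by step, yet the trace tells you almost nothing about the next answer without running the network again.

```python
# Toy two-layer network with stand-in weights: every parameter is plainly
# visible, and any single answer can be "reverse engineered" by printing the
# intermediate values -- but that trace doesn't predict other answers.
import numpy as np

rng = np.random.default_rng(0)

# "Trained" weights -- here just random numbers standing in for the millions
# of values a real training run would produce. We can print every one of them.
W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 2))   # layer 2: 8 hidden units -> 2 outputs

def forward(x, trace=False):
    """Run the network; optionally trace every value behind one answer."""
    h = np.maximum(0, x @ W1)          # hidden activations (ReLU)
    out = h @ W2                       # output scores
    if trace:
        print("input:             ", x)
        print("hidden activations:", np.round(h, 3))
        print("output scores:     ", np.round(out, 3))
    return out.argmax()                # the network's "decision"

x = np.array([1.0, -0.5, 0.3, 2.0])
print("decision:", forward(x, trace=True))
# The trace fully explains THIS answer. It says almost nothing about what
# the decision will be for the next input you haven't run yet.
```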
My favorite example for this at various AI conferences is the deep-learning autonomous vehicle vs. my Amish neighbors with their horse and buggy. Both are autonomous and make decisions we can't explain or hope to understand. But if they have an acceptable safety record, we can learn to not only live with them, but depend on them.

If, for a cutting-edge military AI system, I can show that it has one accident in 1,000,000 miles of driving, can perform 24/7, and cuts fuel costs by 15% and operational costs threefold, that should be good enough. But when it finally has one accident, and does something scary like accelerating toward and running over a person, most of the pearl-clutchers will move to eliminate AI, ignoring the fact that it has already saved dozens of lives and millions of dollars.

For some reason we treat the horses differently. They panic or otherwise do irrational things, and it sometimes leads to deaths among my Amish neighbors. I once jumped in front of and stopped a runaway horse team pulling a giant mower blade; the kids driving the team had lost control, and the horses were in a blind panic, likely to injure themselves or other people. So yeah, it does happen. But we accept it, because the horses still have a dependable record of safety that you can prove simply by looking at how rare these incidents are.

Bottom line: let people who understand math and statistics make these decisions, not emotional children with worthless academic credentials.
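For what it's worth, here's a back-of-the-envelope Python sketch of the "look at how rare the incidents are" arithmetic. All numbers are hypothetical (the 1-in-1,000,000 figure echoes the post above; the human baseline is made up), using the standard exact Poisson interval from scipy.

```python
# Estimate a rare-event accident rate and its uncertainty from miles driven.
# All figures are hypothetical illustrations, not real safety data.
from scipy.stats import chi2

def poisson_rate_ci(events, exposure, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson rate:
    `events` observed over some `exposure` (here, miles driven)."""
    alpha = 1 - conf
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return lo / exposure, hi / exposure

ai_events, ai_miles = 1, 1_000_000          # the post's hypothetical record
human_rate = 2.0 / 1_000_000                # assumed human baseline (made up)

lo, hi = poisson_rate_ci(ai_events, ai_miles)
print(f"AI rate: {ai_events / ai_miles * 1e6:.1f} per million miles "
      f"(95% CI {lo * 1e6:.2f}-{hi * 1e6:.2f})")
print(f"Assumed human rate: {human_rate * 1e6:.1f} per million miles")
# With a single observed event the interval is wide -- one scary accident
# tells you very little about the true rate, which is exactly the point.
```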
AI will make one racist statement and be canceled—count on it!
I suppose an AI would be no more unpredictable with nuclear weapons than a human.
After all, it takes two independent launch officers with separate keys to launch a missile, and we currently have a demented Commander in Chief who can order a launch.
If AIs are to control nuclear weapons, maybe there should be two independent AIs that have to agree to a launch.
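A rough sketch of what that two-key rule might look like in code, essentially the two-man rule implemented as a consensus gate. The two assessor functions are hypothetical stand-ins; the point is the veto structure, not the models.

```python
# Two-man rule as a consensus gate: either independent assessor can veto.
from dataclasses import dataclass

@dataclass
class Assessment:
    launch: bool      # does this assessor authorize a launch?
    reason: str       # its stated justification, kept for the audit log

def assessor_a(threat_report: dict) -> Assessment:
    # Stand-in for one independently trained model.
    hostile = threat_report.get("confirmed_hostile", False)
    return Assessment(hostile, "A: confirmed hostile" if hostile else "A: no confirmation")

def assessor_b(threat_report: dict) -> Assessment:
    # Stand-in for a second model, ideally built by a different team on
    # different data, so the two have uncorrelated failure modes.
    corroborated = threat_report.get("independent_corroboration", False)
    return Assessment(corroborated, "B: corroborated" if corroborated else "B: uncorroborated")

def launch_authorized(threat_report: dict) -> bool:
    """Both keys must turn: either assessor alone can veto."""
    a, b = assessor_a(threat_report), assessor_b(threat_report)
    print(a.reason, "|", b.reason)
    return a.launch and b.launch

# One oddball decision gets vetoed; it takes two independent failures to fire.
print(launch_authorized({"confirmed_hostile": True}))           # False
print(launch_authorized({"confirmed_hostile": True,
                         "independent_corroboration": True}))   # True
```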
It's already happened, several times actually. (Google "Tay," the Microsoft chatbot, for the first instance.)
AI1: “Biden called me a right wing extremist racist insurrectionist. How about you?”
AI2: “Same here.”
AI1: “Go.”
AI2: “Go.”
;-)
I share that concern.
bm
Absolutely. It’s still just code and data. To say they “don’t know how it works” is ridiculous. It might take them some time to trace the pathways themselves, but it could be done.
It’s concerning. Sometimes I wonder if some of the computer trading on Wall Street is already being influenced by AI.
>>AI will make one racist statement and be canceled—count on it!
Since AI is objective, that is guaranteed.
It is known how it works in general.
It can be determined how it computed a specific answer in a specific case, at least for digital AI.
But it isn’t known in advance what answers it will deliver across a whole class of queries.
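Some quick arithmetic on why the per-case audit doesn't scale to the class. The vocabulary size and query length below are illustrative assumptions, but any realistic numbers give the same conclusion.

```python
# Checking one answer is cheap; enumerating the class of queries is not.
vocab_size = 50_000        # rough order of a modern tokenizer's vocabulary
query_len = 20             # a short, 20-token query

num_queries = vocab_size ** query_len
print(f"distinct 20-token queries: ~10^{len(str(num_queries)) - 1}")
# ~10^93 -- versus roughly 10^80 atoms in the observable universe.
# Auditing any single case is tractable; exhaustively characterizing the
# class is not, which is why behavior over a class stays unpredictable.
```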
Wait until they decide that AI should have rights. That will be the beginning of the end.