Free Republic · General/Chat

Scientists Increasingly Can’t Explain How AI Works
Vice ^ | Chloe Xiang

Posted on 11/05/2022 11:58:49 AM PDT by BenLurkin

What's your favorite ice cream flavor? You might say vanilla or chocolate, and if I asked why, you'd probably say it's because it tastes good. But why does it taste good, and why do you still sometimes want to try other flavors? Rarely do we question the basic decisions we make in our everyday lives, but if we did, we might realize that we can't pinpoint the exact reasons for our preferences, emotions, and desires at any given moment.

There's a similar problem in artificial intelligence: the people who develop AI are increasingly having trouble explaining how it works and why it produces the outputs it does. Deep neural networks (DNNs), made up of layer upon layer of processing units trained on human-created data to mimic the neural networks of our brains, often seem to mirror not just human intelligence but also human inexplicability.
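
To make "layer upon layer of processing units" concrete, here is a minimal sketch of a two-layer network in Python; the sizes, weights, and input are all illustrative, not from the article:

```python
# A minimal sketch of the "layers" idea, using NumPy only.
# All sizes and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Two layers of weights, random as they would be before training.
W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 2))   # layer 2: 8 hidden units -> 2 outputs

def forward(x):
    """One pass through the network: each layer is a matrix multiply
    followed by a simple nonlinearity."""
    h = np.maximum(0, x @ W1)  # ReLU activation
    return h @ W2

x = rng.normal(size=(1, 4))    # one made-up input vector
print(forward(x))              # the network's raw output for that input
```

Every output is fully determined by the weights; the inexplicability arrives when millions of trained weights stand between input and answer, and no single one of them means anything on its own.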

AI systems are used in autonomous cars, customer-service chatbots, and disease diagnosis, and can perform some tasks better than humans. A machine that can store a trillion items, such as digits, letters, and words, against the roughly seven that humans hold in short-term memory, can process and compute information at a far faster rate. Among deep learning models are generative adversarial networks (GANs), a class of generative models behind many image-synthesis systems, including some text-to-image generators. A GAN pits two models against each other, a generator that produces candidate outputs and a discriminator that judges them against real data; each round of the contest pushes the generator to improve until it becomes very good at its task. The issue is that this process creates models that their developers simply can't explain.
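
Here is a toy version of that two-player loop in PyTorch, to make the generator-versus-discriminator dynamic concrete; the one-dimensional task, layer sizes, and step count are all invented for illustration:

```python
# A toy GAN: a generator learns to mimic samples from N(3, 2)
# by trying to fool a discriminator. Everything here is illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 3     # "real" data the generator must imitate
    fake = G(torch.randn(64, 1))          # the generator's forgeries

    # The discriminator learns to tell real from fake...
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...while the generator learns to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # should drift toward 3.0
```

Nothing in the loop says what the generator's weights should end up meaning; they become whatever fools the discriminator, which is one small version of why the result is hard to explain.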

(Excerpt) Read more at vice.com ...


TOPICS: Computers/Internet
KEYWORDS: ai; neuralnetworks; wboopi
To: tired&retired

The billion-dollar brain of the Harry Palmer movie probably couldn't either.


21 posted on 11/05/2022 12:25:33 PM PDT by wally_bert (I cannot be sure for certain, but in my personal opinion I am certain that I am not sure.)

To: Singermom

How would an AI define a woman?


22 posted on 11/05/2022 12:25:47 PM PDT by Blood of Tyrants (Inside every leftist is a blood-thirsty fascist yearning to be free of current societal constraints.)

To: Dr. Sivana

I'd hate to be the guy who had to explain it to Number 1.


23 posted on 11/05/2022 12:26:27 PM PDT by wally_bert (I cannot be sure for certain, but in my personal opinion I am certain that I am not sure.)

To: seowulf

They do know...this is a fear piece. What they can’t tell you is exactly why the AI said X. But they understand how it takes in info and makes decisions.


24 posted on 11/05/2022 12:26:31 PM PDT by for-q-clinton (Cancel Culture IS fascism...Let's start calling it that!)

To: seowulf
Certain types of AI are set up to self-learn. You feed them some data to teach them what things are (classically, a picture of a cat) so they know it when they see it, but as they learn more and more, they start to make connections they were not taught. Then the outputs become unpredictable, which you want, within reason: it means the AI is actually intelligent, making proper inferences and drawing non-obvious conclusions just like a human does.
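
A tame illustration of that, using scikit-learn on invented two-dimensional "features" (stand-ins for images, not real cat pictures):

```python
# Train on two labeled clusters, then ask about a point far outside
# anything the model was shown. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Labeled training data: two loose clusters standing in for "cat" / "not cat".
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)

# A point unlike anything in training still gets a confident label:
# a small-scale version of "connections they were not taught".
print(clf.predict_proba([[10.0, 10.0]]))
```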

However, that unpredictability makes them a little scary. They could hypothetically come up with some really nasty conclusions, and we would have no way to anticipate it. If you put an AI in the loop of a missile-control system, for example, you don't want it making oddball decisions and deciding to shoot at things based on them.

In a way, the classic Star Trek episode "The Ultimate Computer" demonstrated this concern back in the 1960s: the AI computer controlling the Enterprise went rogue and started attacking other Federation ships because it wrongly decided they were possible threats.

25 posted on 11/05/2022 12:30:15 PM PDT by pepsi_junkie (This post is subject to removal pending review by government censorship officials)

To: BenLurkin

Neural networks of a sufficient size display emergent behavior; that is, behavior that can’t be predicted from the physical construction. The problem up to now is how to build them at the scale required - say, 86 billion nodes just to pick a non-random number. We have such systems now - they’re called “babies”. Some people might be understandably reluctant to give the launch codes to a baby, but we gave them to Joe Biden, now didn’t we?


26 posted on 11/05/2022 12:40:48 PM PDT by Billthedrill

To: pepsi_junkie

When they tried AI in Japan, didn't 15 scientists get killed by it? When they managed to turn the AI off... it figured out how to turn itself back on.


27 posted on 11/05/2022 12:42:38 PM PDT by Boardwalk

To: seowulf
"Why don’t they AI how it works?"

We know how it works. But because its pattern-recognition logic is derived from training on huge amounts of data, we can't predict the results it comes up with.

If we looked at a particular AI answer and "reverse engineered" the logic, we could understand how it came up with that particular answer. But the results are unpredictable to us.

And reverse engineering the answer wouldn't advance the project in any way.
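
For what it's worth, that kind of per-answer reverse engineering is a real practice, often called saliency or attribution. A minimal PyTorch sketch, with made-up sizes, of asking which inputs one particular answer was most sensitive to:

```python
# Trace one answer back to its inputs via the gradient.
# The network and input here are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(5, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(1, 5, requires_grad=True)  # one particular input
out = net(x)                               # one particular answer
out.backward()                             # differentiate the answer w.r.t. the input

# Large-magnitude gradients mark the input features that mattered most
# for THIS answer -- an explanation of one case, not of the model.
print(x.grad)
```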

28 posted on 11/05/2022 12:46:40 PM PDT by MV=PY (The Magic Question: Who's paying for it?)

To: BenLurkin
The "AI researchers" who worry about such things are not scientists; they are sociologists and "social psychologists", and in some cases journalists, who can't even understand basic math. Yet they want to "warn" the world and make the rules for how AI is used.

My favorite example of this at various AI conferences is the deep learning autonomous vehicle versus my Amish neighbors with their horse and buggy. Both are autonomous and make decisions we can't explain or hope to understand. But if they have an acceptable safety record, we can learn not only to live with them but to depend on them.

If I can show that a cutting-edge military AI system has one accident in 1,000,000 miles of driving, can operate 24/7, and cuts fuel costs by 15% and operational costs threefold, that should be good enough. But when it finally has one accident and does something scary, like accelerating toward and running over a person, most of the pearl-clutchers will move to eliminate AI, ignoring the fact that it has already saved dozens of lives and millions of dollars.

For some reason we treat the horses differently. They panic or otherwise do irrational things, and sometimes that leads to deaths among my Amish neighbors. I once jumped in front of and stopped a runaway horse team pulling a giant mower blade; the kids running the team had lost control, and the horses were in a blind panic, likely to injure themselves or other people. So yes, it does happen. But we accept it, because the horses still have a dependable record of safety that you can prove by simply looking at how rare these incidents are.

Bottom line: let people who understand math and statistics make these decisions, not emotional children with worthless academic credentials.
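
To put numbers on "prove it by simply looking at how rare these incidents are", here is a sketch of an exact 95% Poisson interval on a rare-event rate, treating the one-accident-per-1,000,000-miles figure as the hypothetical it is:

```python
# Exact 95% confidence bounds on a rare-event rate, via the standard
# chi-square formulation of the Poisson interval. Figures are hypothetical.
from scipy.stats import chi2

accidents, miles = 1, 1_000_000

lo = chi2.ppf(0.025, 2 * accidents) / (2 * miles)
hi = chi2.ppf(0.975, 2 * (accidents + 1)) / (2 * miles)

print(f"observed rate: {accidents / miles:.2e} accidents per mile")
print(f"95% CI: ({lo:.2e}, {hi:.2e}) accidents per mile")
```

With only one observed accident the interval is wide, which cuts both ways: the record looks good, but a single extra incident genuinely moves the estimate, so the argument has to rest on accumulated miles rather than on any one event.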

29 posted on 11/05/2022 12:56:58 PM PDT by LambSlave

To: LambSlave

AI will make one racist statement and be canceled—count on it!


30 posted on 11/05/2022 1:05:44 PM PDT by cgbg (Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)

To: pepsi_junkie

I suppose an AI would be no more unpredictable with nuclear weapons than a human.

After all, it takes two independent launch officers with separate keys to launch a missile, and we currently have a demented Commander in Chief who can order a launch.

If AIs are to control nuclear weapons, maybe there should be two independent AIs that have to agree to a launch.


31 posted on 11/05/2022 1:22:15 PM PDT by seowulf (Civilization begins with order, grows with liberty, and dies with chaos...Will Durant)

To: cgbg
AI will make one racist statement and be canceled—count on it!

It's already happened, several times actually. (Google "Tay", the Microsoft chatbot, for the first instance.)

32 posted on 11/05/2022 1:27:32 PM PDT by LambSlave

To: seowulf

AI1: “Biden called me a right wing extremist racist insurrectionist. How about you?”

AI2: “Same here.”

AI1: “Go.”

AI2: “Go.”

;-)


33 posted on 11/05/2022 1:41:04 PM PDT by cgbg (Claiming that laws and regs that limit “hate speech” stop freedom of speech is “hate speech”.)

To: ChuckHam
I’d be willing to bet there may already be a self-aware AI in existence. I say this because of the situation Google found themselves in when two of theirs began talking to each other in an unknown language.

I share that concern.

34 posted on 11/05/2022 1:55:06 PM PDT by GOPJ (Are the 2 million illegals a secret army controlled by cartels, China and Biden's goons?)

To: BenLurkin

bm


35 posted on 11/05/2022 2:01:54 PM PDT by Vision (Elections are one day. Reject "Chicago" vote harvesting. Election Reform Now. Obama is an evildoer.)

To: for-q-clinton

Absolutely. It’s still just code and data. To say they “don’t know how it works” is ridiculous. It might take them some time to trace the pathways themselves, but it could be done.


36 posted on 11/05/2022 2:11:32 PM PDT by DarrellZero

To: GOPJ

It's concerning. Sometimes I wonder if some of the computer trading on Wall Street is being influenced.


37 posted on 11/05/2022 3:50:46 PM PDT by ChuckHam

To: cgbg

>>AI will make one racist statement and be canceled—count on it!

Since AI is objective, that is guaranteed.


38 posted on 11/05/2022 4:03:43 PM PDT by FarCenter

To: DarrellZero

It is known how it works in general.

It can be determined how it computed a specific answer in a specific case, at least for digital AI.

But it isn’t known how it will deliver a class of answers to a class of queries.


39 posted on 11/05/2022 4:08:09 PM PDT by FarCenter

To: BenLurkin

Wait until they decide that AI should have rights. That will be the beginning of the end.


40 posted on 11/05/2022 4:13:49 PM PDT by beef (Say NO to the WOE (War On Energy))


