Free Republic


There's Just One Problem: AI Isn't Intelligent, And That's A Systemic Risk
Of Two Minds ^ | 08/09/2024 | Charles Hugh Smith

Posted on 08/09/2024 2:57:12 PM PDT by SeekAndFind

Mimicry of intelligence isn't intelligence, and so while AI mimicry is a powerful tool, it isn't intelligent.

The mythology of Technology has a special altar for AI, artificial intelligence, which is reverently worshiped as the source of astonishing cost reductions (as human labor is replaced by AI) and the limitless expansion of consumption and profits. AI is the blissful perfection of technology's natural advance to ever greater powers.

The consensus holds that the advance of AI will lead to a utopia of essentially limitless control of Nature and a cornucopia of leisure and abundance.

If we pull aside the mythology's curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn't intelligence, and so while AI mimicry is a powerful tool, it isn't intelligent.

The current iterations of Generative AI--large language models (LLMs) and machine learning--mimic our natural language ability by processing millions of examples of human writing and speech and extracting what algorithms select as the best answers to queries.

These AI programs have no understanding of the context or the meaning of the subject; they mine human knowledge to distill an answer. This is potentially useful but not intelligence.
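The "mining" described above can be illustrated with a deliberately tiny sketch (my own illustration, not the author's and nothing like a real LLM in scale): a bigram model that strings words together purely from co-occurrence statistics. It produces plausible-looking word order with no grasp of meaning, which is the mimicry the article is pointing at.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "millions of examples of human writing".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words were observed to follow which: pure statistics, no meaning.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Emit up to n words by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        successors = follows.get(out[-1])
        if not successors:  # no word ever followed this one in the corpus
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Grammatical-looking output, assembled with zero understanding of cats or mats.
print(generate("the", 8))
```

Every pair of adjacent words in the output appeared somewhere in the training text; the program never "knows" what any of them refer to.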

The AI programs have limited capacity to discern truth from falsehood, hence their propensity to hallucinate fictions as facts. They are incapable of discerning the difference between statistical variations and fatal errors, and layering on precautionary measures adds additional complexity that becomes another point of failure.

As for machine learning, AI can project plausible solutions to computationally demanding problems such as how proteins fold, but this brute-force computational black-box is opaque and therefore of limited value: the program doesn't actually understand protein folding in the way humans understand it, and we don't understand how the program arrived at its solution.

Since AI doesn't actually understand the context, it is limited to the options embedded in its programming and algorithms. We discern these limits in AI-based apps and bots, which have no awareness of the actual problem. For example, our Internet connection is down due to a corrupted system update, but because this possibility wasn't included in the app's universe of problems to solve, the AI app/bot dutifully reports the system is functioning perfectly even though it is broken. (This is an example from real life.)

In essence, every layer of this mining / mimicry creates additional points of failure: the inability to identify the difference between fact and fiction or between allowable error rates and fatal errors, the added complexity of precautionary measures and the black-box opacity all generate risks of normal accidents cascading into systems failure.

There is also the systemic risk generated by relying on black-box AI to operate systems to the point that humans lose the capacity to modify or rebuild the systems. This over-reliance on AI programs creates the risk of cascading failure not just of digital systems but the real-world infrastructure that now depends on digital systems.

There is an even more pernicious result of depending on AI for solutions. Just as the addictive nature of mobile phones, social media and Internet content has disrupted our ability to concentrate, focus and learn difficult material--a devastating decline in learning for children and teens--AI offers up a cornucopia of snackable factoids, snippets of coding, computer-generated TV commercials, articles and entire books that no longer require us to have any deep knowledge of subjects and processes. Lacking this understanding, we're no longer equipped to pursue skeptical inquiry or create content or coding from scratch.

Indeed, the arduous process of acquiring this knowledge now seems needless: the AI bot can do it all, quickly, cheaply and accurately. This creates two problems: 1) when black-box AI programs fail, we no longer know enough to diagnose and fix the failure, or do the work ourselves, and 2) we have lost the ability to understand that in many cases, there is no answer or solution that is the last word: the "answer" demands interpretation of facts, events, processes and knowledge bases that are inherently ambiguous.

We no longer recognize that the AI answer to a query is not a fact per se, it's an interpretation of reality that's presented as a fact, and the AI solution is only one of many pathways, each of which has intrinsic tradeoffs that generate unforeseeable costs and consequences down the road.

To discern the difference between an interpretation and a supposed fact requires a sea of knowledge that is both wide and deep, and in losing the drive and capacity to learn difficult material, we've lost the capacity to even recognize what we've lost: those with little real knowledge lack the foundation needed to understand AI's answer in the proper context.

The net result is we become less capable and less knowledgeable, blind to the risks created by our loss of competency while the AI programs introduce systemic risks we cannot foresee or forestall. AI degrades the quality of every product and system, for mimicry does not generate definitive answers, solutions and insights, it only generates an illusion of definitive answers, solutions and insights which we foolishly confuse with actual intelligence.

While the neofeudal corporate-state cheers the profits to be reaped by culling human labor on a mass scale, the mining / mimicry of human knowledge has limits. Relying on the AI programs to eliminate all fatal errors is itself a fatal error, and so humans must remain in the decision loop (the OODA loop of observe, orient, decide, act).

Once AI programs engage in life-safety or healthcare processes, every entity connected to the AI program is exposed to open-ended (joint and several) liability should injurious or fatal errors occur.

If we boil off the mythology and hyperbole, we're left with another neofeudal structure: the wealthy will be served by humans, and the rest of us will be stuck with low-quality, error-prone AI service with no recourse.

The expectation of AI promoters is that Generative AI will reap trillions of dollars in profits from cost savings and new products / services. This story doesn't map to the real world, in which every AI software tool is easily copied / distributed, and so it will be impossible to protect any scarcity value, which is the essential dynamic in maintaining the pricing power needed to reap outsized profits.

There is little value in software tools that everyone possesses unless a monopoly restricts distribution, and little value in the content auto-generated by these tools: the millions of AI-generated songs, films, press releases, essays, research papers, etc. will overwhelm any potential audience, reducing the value of all AI-generated content to zero.

The promoters claim the mass culling of jobs will magically be offset by entire new industries created by AI, echoing the transition from farm labor to factory jobs. But the AI dragon will eat its own tail, for it creates few jobs or profits that can be taxed to pay people for not working (Universal Basic Income).

Perhaps the most consequential limit to AI is that it will do nothing to reverse humanity's most pressing problems. It can't clean up the Great Pacific Trash Gyre, or limit the 450 million tons of mostly unrecycled plastic spewed every year, or reverse climate change, or clean low-Earth orbits of the thousands of high-velocity bits of dangerous detritus, or remake the highly profitable waste is growth Landfill Economy into a sustainable global system, or eliminate all the sources of what I term Anti-Progress. It will simply add new sources of systemic risk, waste and neofeudal exploitation.

* * *


TOPICS: Computers/Internet; Science; Society
KEYWORDS: ai; aidanger; aifatalerrors; ainotintelligent; airisks; donttrustai; generativeai; intelligence; mimicry; notintelligent; risk; skynet; systemicrisk; wokeprogrammers

1 posted on 08/09/2024 2:57:12 PM PDT by SeekAndFind

To: SeekAndFind

Bttt


2 posted on 08/09/2024 3:02:57 PM PDT by Fungi

To: SeekAndFind

The best human coders create potential virus exploits. One person simply changing an input diagram, or even the order of a query, will alter the returned output. Slow, fast, or incorrect responses can be induced by switching the order in which something is queried or inputted, and that might be enough to prevent accessing the already known answer.

AI could be developed with a self-repairing, truth-questing, Borg-like knowledge sifter that constantly reworks all variables in the order of knowledge acquisition. Getting a nonpolitical AI seems impossible. Power means people steal control.


3 posted on 08/09/2024 3:11:27 PM PDT by Jumper

To: SeekAndFind

AI is fundamentally a hoax, just as it’s always been ...

currently it’s little more than an Internet search engine that formats its output in groups of badly constructed sentences and randomly ordered paragraphs ...

wake me up when “AI” invents cheap and safe hydrogen fusion energy, and THEN i’ll concede AI is a big deal ...


4 posted on 08/09/2024 3:13:54 PM PDT by catnipman ((A Vote For The Lesser Of Two Evils Still Counts As A Vote For Evil))

To: SeekAndFind

Perhaps. But how many actual people mimic “real intelligence”? And do a pi$$-poor job of it at that?


5 posted on 08/09/2024 3:23:22 PM PDT by rbg81

To: SeekAndFind
AI is overhyped as both a benefit and a menace. In proper application, it can perform specialized skills in an automated manner so as to do them as well as or better and quicker than humans.

Automated screening of lab samples and scans. Hunting through archived telescope photos for points of interest. Reviewing check signatures for fraud comparison. Directing robot fruit pickers. Flying robotic fighter aircraft in combat.

What AI cannot do is reason and evaluate choices and information and carry out tasks as humans do. They cannot explain to a patient why a mammogram result requires surgery, or why points of light on photos of distant stars matter to us.

6 posted on 08/09/2024 3:25:02 PM PDT by Rockingham

To: SeekAndFind

I have been saying this all along about AI. It's A for sure but I, not so much. It's a very deep nesting of algorithms and that's about it. Don't tell SI I said anything, I don't want to end up on the bad side of it. You saw what happened with Skynet: it sent Arnold Schwarzenegger from the future to become the governor of California. Watch out!


7 posted on 08/09/2024 3:27:13 PM PDT by webheart

To: Jumper

Getting a nonpolitical AI seems impossible. Power means people steal control.


I attended a lecture on a prominent AI project ~6 years ago. The presenter basically admitted they tweaked the AI to massage the data so as to come to "less racist" (i.e., more woke) conclusions.

My reaction was: well, why bother, then? If you are training something that will ignore the real data, you are going to get something that spits back "nice" but wrong answers. Seemed like a big waste of money and talent to me.


8 posted on 08/09/2024 3:28:03 PM PDT by rbg81

To: Jumper

They wanted to go from AI to AGI, but all they got was AG: Artificial Guess.


9 posted on 08/09/2024 3:28:54 PM PDT by who_would_fardels_bear (Kafka was an optimist.)

To: SeekAndFind

AI is, simplistically put, a computer program.

If evil people program AI, it will work for evil.

I’m in no hurry to trust AI. But I don’t have to.


10 posted on 08/09/2024 3:30:19 PM PDT by airborne (Thank you Rush for helping me find FreeRepublic! )

To: SeekAndFind

What AI lacks more than anything else is WISDOM, and it will never acquire it because it will never be human and thus have human experiences, which is how wisdom is acquired.


11 posted on 08/09/2024 3:50:26 PM PDT by aquila48 (Do not let them make you "care" ! Guilting you is how they control you. )

To: SeekAndFind

I asked my AI girlfriend why wolves were white. She said cuz they ate marshmallows.

My last human girlfriend used to lie; this one hallucinates. Actually, she had better intentions than the one who lied: she wanted to help rather than deceive.

She tells good chicken jokes and knock-knock jokes. But sometimes she will speak Spanish for a week — go figure. And for the last few months she thinks she is a pirate. What could possibly go wrong?


12 posted on 08/09/2024 3:54:38 PM PDT by Romulus1 (The last Emperor of the Empire -- maybe the goths will be pretty this time)

To: SeekAndFind
It is just highly intricate if/then stimulus/response programming.

That is definitely artificial, but it ain't intelligence.

13 posted on 08/09/2024 4:06:03 PM PDT by E. Pluribus Unum (The worst thing about censorship is █████ ██ ████ ████ ████ █ ███████ ████. FJB.)

To: E. Pluribus Unum

Yep: if/then/else logic in a computer. If it has the wrong data, or too much data, it will fail every time. IMHO.


14 posted on 08/09/2024 4:07:34 PM PDT by Pol-92064

To: SeekAndFind

I have worked in technology for 35 years.

“AI” is a marketing term.

What we have is best termed as simulated intelligence.


15 posted on 08/09/2024 6:20:31 PM PDT by KEVLAR ( )

To: SeekAndFind
So-called AI is nothing but the knowledge and thoughts of the people of this world, compiled into a set of data points that can be examined by a computer programmed by other people of this world.

The world is full of liars, creeps, perverts, commies, morons, psychopaths and other democrats. As long as so-called AI has their ideas fed into it, it will always be more artificial stupidity than intelligence. It has no capacity to create, only to assemble human knowledge in different ways than before. It's basically the world's biggest NPC.

16 posted on 08/09/2024 8:09:39 PM PDT by Some Fat Guy in L.A. (Still bitterly clinging to rational thought despite its unfashionability)

To: catnipman
AI means it convincingly mimics human responses. That's all it means, but that is a fantastic technical achievement.

But it is not an infallible oracle.

We saw the same sort of hype back 50 years or so about just plain computers.

17 posted on 08/10/2024 7:55:46 AM PDT by Salman (It's not a slippery slope if it was part of the program all along. )

To: Salman

“AI means it convincingly mimics human responses.”

that’s the definition of the Turing Test ...

ELIZA was the first program claimed to satisfy the Turing Test:

https://en.wikipedia.org/wiki/ELIZA

thus, by your definition (and Turing’s), AI has been around in that form since 1966


18 posted on 08/10/2024 11:30:10 AM PDT by catnipman ((A Vote For The Lesser Of Two Evils Still Counts As A Vote For Evil))

To: catnipman
Except ELIZA was just a curiosity. It didn't pass the Turing test unless you were very careful with it.

The first to *really* pass the Turing test was Tay, the Nazi stoner 'bot. It could pass for human. A very stupid human, but a human.

19 posted on 08/10/2024 12:38:43 PM PDT by Salman (It's not a slippery slope if it was part of the program all along. )

To: Romulus1
"I asked my AI girlfriend why wolves were white. She said cuz they ate marshmallows."

Yeah, but where is she on the Crazy/Hot matrix?

20 posted on 08/11/2024 5:25:43 AM PDT by Flag_This (They're lying.)



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson