Posted on 07/11/2023 4:20:59 AM PDT by MtnClimber
Technical topics, of any sort at all, are generally subject to serious distortion when they hit the level of public discussion. There are many reasons for this – ideology, click-lust, and the sheer inability of the average journo school grad to adequately wrap his head around whatever concept is under consideration.
There’s no end of examples: Just think of the garbage written about global warming or COVID.
The latest of these topics is Artificial Intelligence (AI). Commentary on AI has exploded across the media sphere since the release of ChatGPT, an AI app purportedly capable of learning how to produce prose in any style on request. The consensus, to quote a style not yet mastered by ChatGPT, is almost uniformly “a tale told by an idiot, full of sound and fury, signifying nothing.”
The media uproar has been characterized by two approaches -- the first (and most common) is a complete lack of understanding of the technology. The second is an impression of the topic derived from movies, largely HAL 9000 and Skynet (an older generation would add Colossus). These AI entities are uniformly insane, malevolent, or both (though not to the level of the one envisioned in Harlan Ellison’s “I Have No Mouth, and I Must Scream,” which is so overcome by existential loathing that it destroys all of humanity except for five individuals, whom it then sets out to torture for all eternity). For some reason, nobody ever suggests the AI Samantha in the superb film Her, who is cheerful, helpful, and even loving. That says more about human nature than it does about Artificial Intelligence.
Artificial Intelligence was introduced as a concept by Alan Turing in his 1950 paper “Computing Machinery and Intelligence.” Turing had first proposed the modern computer in the 1930s and then played a role in building the earliest working models.
(Excerpt) Read more at americanthinker.com ...
Problem is that AI has no “common sense” and there is a whole lot of garbage mixed in with any actual usable knowledge on the internet.
A human can sift the garbage out while he’s synthesizing, but most humans don’t even do that very well.
We’ll be eternally free and eternally young.
What a beautiful world it will be.
Yes, there will still need to be a human in the loop to do quality control, so to speak. But even that could probably be achieved algorithmically to a substantial degree, say, by looking for indications of consensus. For instance, if your bot scans through a thousand postings and articles in the area of basic math and finds that most of them contain the expression “2 + 2 = 4,” with just a few outlier variations, then maybe it would be reasonable for it to treat 2 + 2 = 4 as good information.
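A rough sketch of that idea in Python (the mini-corpus, the regex, and the 75% cutoff here are all invented for illustration, not anyone's actual system) might look like this:

from collections import Counter
import re

# Hypothetical mini-corpus standing in for "a thousand postings and articles".
postings = [
    "Obviously 2 + 2 = 4, as every schoolchild knows.",
    "In base 10, 2 + 2 = 4.",
    "My teacher insists 2 + 2 = 5, but she was joking.",
    "Quick check: 2 + 2 = 4.",
]

CONSENSUS_THRESHOLD = 0.75  # assumed cutoff: accept a claim only if at least 75% of matches agree

def extract_claims(text):
    # Pull simple "a + b = c" claims out of free text.
    return re.findall(r"\d+\s*\+\s*\d+\s*=\s*\d+", text)

def normalize(claim):
    # Strip whitespace so "2+2=4" and "2 + 2 = 4" count as the same claim.
    return re.sub(r"\s+", "", claim)

claims = Counter(normalize(c) for p in postings for c in extract_claims(p))
top_claim, top_count = claims.most_common(1)[0]
share = top_count / sum(claims.values())

if share >= CONSENSUS_THRESHOLD:
    print(f"Treating '{top_claim}' as good information ({share:.0%} of observed claims).")
else:
    print(f"No clear consensus; best candidate '{top_claim}' has only {share:.0%} support.")

Real postings are messier than arithmetic strings, of course, but the count-and-threshold idea is the same.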
The left is setting up AI as the new left-leaning fact checker...
Many conflate AI with consciousness; it is not consciousness. It is a powerful software tool made possible by the massive computing power now available at low cost. For example, my home PC has a 24-core processor that can run at 5.8 GHz and a GPU with many thousands of small cores, and it is not an extremely expensive PC. I once sold computers, and I remember well 512 KB of RAM and a single 8-bit core running at less than 5 MHz... we have come a long way.
The tool referred to as AI is a game-changing technology. It has similarities with another powerful technology that has been with us for millennia: written language. Both written language and AI alter man’s relationship to time. Written language allows us to extend our thoughts and ideas into the future, far beyond our lifespan... this is a very, very powerful thing! AI allows us to do things we could do ourselves if we could only live forever... EVERYTHING AI does could be done by a single ordinary man IF he had forever to do it... the man could look at billions of terabytes of data and glean what was needed to arrive at a solution... if only he had the TIME.
The concept of the stored-program universal computing machine, first elegantly described by Alan Turing in his seminal paper of the 1930s, set us up for the eventual appearance of what is now referred to as AI.
Computers allow us to do things we could do ourselves if we just had the time, or could work at incredible speed... this is a wonderful thing.
AI and robotics will someday soon alter the relationship of surgeons with time and allow them to do amazing things that simply cannot be done now because a surgeon can only work so fast and for only so long... a good surgeon could repair what is now considered impossible to repair if he could only work very quickly, at a cellular level and not tire...
Consensus might be alright for math problems, but for any controversial subject, it’s not going to work.
If an AI followed the consensus on COVID, for example...
What we are sorely in need of is artificial common sense, and lots of it ;-)
I guess it depends on what “work” means. If it means that AI is able to identify areas of consensus that are in conflict and then determine which one is ultimately correct, then yeah it won’t work. That’s something that even people struggle to do well. But if it simply means that when a controversy is detected — i.e. that consensus has formed around opposing positions — AI is able to present both positions accurately, then maybe it could be said to work.
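One way to make that concrete (a toy sketch only; the positions, document counts, and 20% cutoff below are invented for illustration) is to flag a topic as controversial whenever more than one position clears a minimum support share, and then report every qualifying position with its share instead of picking a single winner:

from collections import Counter

# Hypothetical tallies of which position each scanned document supports.
position_counts = Counter({
    "Position A: the mainstream view": 720,
    "Position B: the dissenting view": 260,
    "Fringe variation": 20,
})

MIN_SHARE = 0.20  # assumed: a position counts as a "consensus" if at least 20% of documents support it

def summarize(counts, min_share):
    # Return each position clearing the threshold, with its share of all documents.
    total = sum(counts.values())
    positions = [(pos, n / total) for pos, n in counts.most_common() if n / total >= min_share]
    return len(positions) > 1, positions

controversial, positions = summarize(position_counts, MIN_SHARE)
print("Controversy detected; presenting all major positions:" if controversial
      else "Single dominant position:")
for pos, share in positions:
    print(f"  {pos} ({share:.0%} of scanned documents)")

Presenting each qualifying position, weighted roughly by how often it appears, is a much easier target than deciding which one is ultimately correct.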
That is probably already happening.
By that metric, "global warming" is a dire threat that must be combatted by entirely tearing civilization down and rebuilding it in the image of Mikhail Bakunin.
There’s a lot of published material out there opposing the climate change consensus. You might call it a counter-consensus. A well designed AI would detect it and include it in its results.
An AI well designed to YOUR requirements would detect and identify the competing consensuses.
An AI well designed to a leftist's requirements would ignore the "GW is a hoax" consensus.
Well, but when it came to COVID, it probably wouldn’t have detected that there were two different schools of “consensus,” because one side was being heavily censored and communicating mainly outside of the normal channels.
So it most likely would have surmised that the consensus position was the one not being censored, since that’s what it would have found everywhere, especially from the most “trusted” and “reliable” sources. Even though that consensus proved to be wrong in just about every imaginable way.
The same would hold true for other controversial positions, like climate change, transsexuals, etc.
That’s right — AI is just a tool and will reflect the intent of its designers, for better or for worse. That’s how all tools work.
And therein lies the problem: When the intent and requirements of the designers are hidden or presented dishonestly, the tool can generate chaos and confusion.
The consensus doesn’t have to be a totally binary thing. A well designed AI would detect and present opposing positions even if they were expressed less often on the internet. The results would be proportional to some degree rather than winner take all.
But yeah, with something like COVID where the establishment left totally flooded the zone, it’s inevitable that an AI using a statistical-linguistic model would give more weight to the establishment position. It’s just reflecting the available data. What more can it do?
I mean, AI isn’t going to fight our fights for us. If we lose the public communications battle, then AI will reflect that in its results.
Yes, of course. For AI to be a good tool, it needs to be well designed and presented in an open and honest way.
AI is only as good as the people behind it. If AI becomes a problem, it won’t be an AI problem; it will be a people problem. Everything comes down to people.