Posted on 05/17/2023 8:00:56 AM PDT by SeekAndFind
In the real world, the costs are all we know for sure; profits remain elusive and contingent.
No one knows how the flood of AI products will play out, but we do know it's unleashed a corporate frenzy to "get our own AI up and running." Corporate fads are one of the least discussed but most obvious dynamics in the economy. Corporations follow fads as avidly as any other heedless consumer, rushing headlong into whatever everyone else is doing.
Globalization is a recent example. Back in the early 2000s, I sat next to corporate employees on flights to China and other Asian destinations who described the travails and costly disasters created by their employers' mad rush to move production overseas: quality control cratered, proprietary technologies were stolen and quickly copied, costs soared rather than declined, and so on.
So let's talk about the costs of AI rather than just the benefits. Like many other heavily hyped technologies, Large Language Model (LLM) AI is presented as stand-alone and "free." But it's actually neither stand-alone nor free: it requires an army of humans toiling away to make it functional: "We Are Grunt Workers": The Lowly Humans Helping Run ChatGPT Make Just $15 Per Hour (Zero Hedge).
"We are grunt workers, but there would be no AI language systems without it. You can design all the neural networks you want, you can get all the researchers involved you want, but without labelers, you have no ChatGPT. You have nothing."
The tasks performed by this hidden army of human workers are euphemistically sanitized by corporate-speak as data enrichment work.
Then there are the stupendous costs of all the extra computing power needed to deliver AI to the masses: For tech giants, AI like Bing and Bard poses billion-dollar search problem
What makes this form of AI pricier than conventional search is the computing power involved. Such AI depends on billions of dollars of chips, a cost that has to be spread out over their useful life of several years, analysts said. Electricity likewise adds costs and pressure to companies with carbon-footprint goals.
Corporations are counting on the magic of the Waste Is Growth / Landfill Economy to generate higher margins from whatever AI touches--don't ask, it's magic--but few ask how all this magic will work in a global recession where consumers will have less income and credit to buy, buy, buy.
LLM-AI is riddled with errors, and nobody can tell what's semi-accurate, what's misleading and what's flat-out wrong. Despite wildly optimistic claims, locating the errors and semi-accuracies can't be fully automated. Errors are inconsequential in an AI-generated book report, but when patients' health is on the line, they become very consequential: I'm an ER doctor: Here's what I found when I asked ChatGPT to diagnose my patients.
This raises fundamental questions about precisely how much work LLM-AI can perform without human oversight, and about the all-too-breezy claims that tens of millions of jobs will be lost as this iteration of AI automates vast swaths of human labor.
AI excels at echo-chamber reinforcement of risky or error-prone suppositions and policies: Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous. What's the threshold for concern that the AI conclusions are riskier than presented? How do we calculate the possibilities that the AI conclusions are catastrophically misguided?
At what point will decision-makers realize that trusting AI is not worth the risk? If history is any guide, that realization will only arise from financial losses and bad decisions. For the rest of us, it might just be that the novelty wears off as the inadequacies pile up: Noam Chomsky: The False Promise of ChatGPT.
Since all this LLM-AI is "free," what AI-created goods and services will generate hundreds of billions of dollars in new revenues and tens of billions in new profits? The general answer is the profits will flow from firing millions of costly humans and replacing them with "nearly free" AI software.
But since all your competitors are rushing down the same frenzied path to AI, what competitive advantage will accrue to what is already a commodity (LLM-AI)? Nobody asks such questions because the euphoria of tech revolutions is so much fun.
The enthusiasm unleashed by new technologies is selectively euphoric: the benefits will prove immeasurable and the costs will soon be near-zero. But in the real world, the costs are all we know for sure and profits remain elusive and contingent.
Exactly what gets wiped out by the meteor strike is not yet known.
* * *
AI is just another innovation that in a few years will become ubiquitous, just like computers did in the 1990s.
IMHO, AI is good for various situations, but I agree that it’s overhyped. For example, an AI tool could help a professional financial planner as a kind of back-up tool to make sure he’s covered all the bases for his client’s specific needs.
AI will be good for some jobs, but it definitely has limitations.
For example, AI might seem perfect as a receptionist, as it can process human language, figure out what general category their issue is, and forward their call to the right department with a high degree of accuracy. But how would the AI handle the constant spam callers, claiming to be from the Social Security Administration, the IRS, or contacting you about your car’s extended warranty? And how would the AI handle the inevitable irate human callers who are just angry and not in any mood to cooperate with an AI that wants to help them? Even the least competent humans would probably handle those curveballs better than the most competent AI.
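The routing part of that receptionist job really is the easy part. A minimal sketch (the `ROUTES` table, keywords, and `route_call` function are all hypothetical, not from any real system) of how keyword-based call routing might work, including a fallback to a human for the curveball cases described above:

```python
# Hypothetical call-routing sketch -- departments and keywords are invented.
ROUTES = {
    "billing": ["invoice", "bill", "payment", "charge"],
    "support": ["broken", "error", "not working", "help"],
    "sales": ["buy", "price", "quote", "upgrade"],
}

def route_call(transcript: str) -> str:
    """Pick the department whose keywords best match the caller's words."""
    text = transcript.lower()
    # Count keyword hits per department.
    scores = {dept: sum(word in text for word in words)
              for dept, words in ROUTES.items()}
    best = max(scores, key=scores.get)
    # No keyword matched: spam, rants, and other curveballs go to a person.
    return best if scores[best] > 0 else "human operator"
```

The interesting failure mode is exactly the one the comment identifies: anything outside the keyword table, an extended-warranty robocall or an irate caller, scores zero everywhere and has to be punted to a human.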
“AI” is not just Large Language Models (LLMs). There are plenty of other techniques that can be used to create useless results ;-)
A handwritten checklist could perform the same service, could it not?
I believe ChatGPT passed the Bar Exam and a medical exam.
Many applications where there’s a process of “take into consideration a number of variables, weigh them up, and execute a fairly standard plan” are candidates. Both the legal and medical professions are full of them.
I’ve a friend who is a real estate agent. He believes home values can’t be appraised well by an algorithm. I’d agree, except neural-network-based AI isn’t really about an algorithm; it’s about how it has been trained. I think he needs to be careful; I wouldn’t be so certain.
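The "trained, not hand-coded" distinction can be shown with a toy example (all sale figures below are invented for illustration). Nobody writes an appraisal rule; a generic fitting procedure absorbs whatever pricing pattern is in the training data:

```python
# Toy sketch: the "algorithm" is just gradient descent; any appraisal
# knowledge comes entirely from the (invented) training data.
# (sq ft, sale price in $k) -- hypothetical past sales.
sales = [(1000, 200), (1500, 290), (2000, 410), (2500, 500)]

w, b = 0.0, 0.0          # model: price ≈ w * (sq ft / 1000) + b
lr = 0.1                 # learning rate
for _ in range(10_000):
    gw = gb = 0.0
    for sq_ft, price in sales:
        x = sq_ft / 1000                 # scale feature for stable training
        err = (w * x + b) - price        # prediction error on this sale
        gw += 2 * err * x / len(sales)   # gradient w.r.t. w
        gb += 2 * err / len(sales)       # gradient w.r.t. b
    w -= lr * gw
    b -= lr * gb

estimate = w * 1.8 + b   # appraise an unseen 1,800 sq ft home, in $k
```

Swap in different training sales and the same code learns a different market; that's the sense in which the real estate agent's "no algorithm can do this" claim may miss where the capability actually lives.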
My son does insurance underwriting. Same concepts can apply.
...and this stuff is in its infancy.
Whew! And that's just what I can rattle off from the top of my head as an amateur financial planner. A professional could use more than a checklist to help optimize a financial plan for each of his clients.
Whew indeed! Well, a checklist could still work, but it would obviously be an encyclopedia-sized checklist. So perhaps AI would help in such a case.
I don’t see it as “artificial intelligence,” though, that is just a wishful thinking term.
It’s a program, created and maintained by human beings.
I agree that what we today call AI is not real artificial intelligence, at least as the term was used in sci-fi for most of our lives (e.g., HAL, Data). But I’d say that today’s AI usually passes the classic Turing Test.
What I look forward to if AI becomes common...
* Tax preparation
* Coding (I needed a quick Arduino code and the free chatAI program pushed one out in minutes.)
* Generative Design (which I already use)
* Menu planning and shopping
* Inventory tracking and ordering
* Vacation planning and booking
* Medical and healthcare
I see it as a tool. I don’t really care if it replaces journalists and other useless jobs. The only bad part is it is all designed by liberal evil whackos.
“Humans doing the hard jobs on minimum wage while robots write poetry and paint is not the future I wanted.” -Karl Sharro @KarleMarks via Twitter