Posted on 04/29/2023 4:33:13 AM PDT by FarCenter
The astonishing performance of recent so-called “large language models” – first and foremost OpenAI’s ChatGPT series – has raised expectations that systems able to match the cognitive capabilities of human beings, or even possess “superhuman” intelligence, may soon become a reality.
At the same time, experts in artificial intelligence are sounding dire warnings about the dangers that a further, uncontrolled development of AI would pose to society, or even to the survival of the human race itself.
Is this mere hype, of the sort that has surrounded AI for over half a century? Or is there now an urgent need for measures to control the further development of AI, even at the cost of hampering progress in this revolutionary field?
On March 22, an open letter appeared, signed by experts in artificial intelligence as well as prominent personalities like Elon Musk and closing with the statement: “Therefore we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”
Justifying the need for such a moratorium, the open letter argues:
"Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
"[We] must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
Eliezer Yudkowsky, widely regarded as one of the founders of the field of AI alignment research, went much further in a Time article entitled “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down”:
"This 6-month moratorium would be better than no moratorium…. I refrained from signing because I think the letter is understating the seriousness of the situation.…"
"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”"
Logic dictates that AI would have to be more dangerous than the atomic bomb, since the danger of AI includes the danger of the atomic bomb.
AI is not intelligent, and AI is not human, let alone “superhuman.”
AI is a program that matches numerical patterns.
Elon Musk seems very concerned about it getting out of control, and he’s a lot smarter than me.
I agree. I see a lot of discussion about the dangers of computers becoming “aware” and having “consciousness” so that the machines will be “intelligent”.
We are not there. Probably won’t get to that point for quite some time.
Still, this current generation of decision machines will be highly transformative for society. A lot of good white-collar jobs can be done better and more cheaply by a solid algorithm that makes decisions using available inputs.
Financial markets, legal filings, media output, and political decision-making are being transformed, right now, by the use of these things.
We may get more efficiency (which is good) but I think we will see a lot less use for much human labor (which is bad).
Society today allows, even encourages, the dumbing down of children. Imagine people clueless enough to use a chatbot for depression... almost as hard to imagine as mothers turning their sons into drag queens.
The atomic bomb, or even the hydrogen bomb, is a firecracker you could set off while holding it in your fingertips, compared with the mischief that could be achieved by a very fast-acting and autonomous artificial intelligence. It is as if the human race were set on self-destruction.
Isaac Asimov described the potential for such a future. It is doubtful whether even the Three Laws of Robotics would curb the exponential rise of AI dominance.
The Three Laws of Robotics are suggestions for how robots should operate, ideally. They are:
1. A robot must never harm a human, or through inaction allow a human to come to harm.
2. A robot must always obey the orders of humans, except where to do so would conflict with the First Law.
3. A robot must protect its own existence, except where to do so would conflict with the First or Second Laws.
They are laws like the law against murder, not laws like the law of gravity. Therefore scientific credence is irrelevant. We choose to build robots which obey them, or not. It is up to us. - Simon Blake, Shrewsbury, England
Inner cities have taught us that paying people to do nothing gives them the free time to create great works of art, literature, music, and really acts as a catalyst for the pursuit of scientific inquiry and technological innovation.
AI can only do what it’s programmed to do, by humans. Therefore, AI cannot get smarter than humans. To get smarter than humans, somebody/something smarter than humans would have to program it. IOW, AI can never be smarter than humans.
AI combined with a Central Bank Digital Currency will be a gulag far beyond anything Stalin himself ever imagined.
As a small-time, but very active investor, I use ChatGPT many times during the day (mostly for help with writing routines I employ in Excel to analyze stocks and market performance). Things that used to take me days to accomplish now take a matter of minutes.
Just yesterday I downloaded the closing prices of stocks in the tech industry (chip and integrated circuit producers, mostly) and asked ChatGPT questions to help me write a routine to calculate the rates of return for each company over any choice of time period (going back 5 years) AND find the best combination of company stocks to produce the highest return and lowest volatility for any choice of time period.
Previously that would have taken so much time and work that I wouldn't have even considered it.
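For readers curious what such a routine might look like, here is a minimal sketch in Python rather than Excel. The file name "closing_prices.csv", the equal weighting, the three-stock combinations, and the 252-trading-day annualization are all assumptions made for illustration, not details of the actual routine ChatGPT produced:

```python
# Minimal sketch: rank equal-weight stock combinations by return per unit of volatility.
# Assumes a CSV with a Date column and one column of daily closing prices per ticker.
from itertools import combinations

import numpy as np
import pandas as pd

TRADING_DAYS = 252  # assumed number of trading days per year

# Load daily closing prices and convert them to daily percentage returns.
prices = pd.read_csv("closing_prices.csv", parse_dates=["Date"], index_col="Date")
returns = prices.pct_change().dropna()

def portfolio_stats(tickers):
    """Annualized return and volatility of an equal-weight portfolio of the given tickers."""
    port = returns[list(tickers)].mean(axis=1)      # equal-weight daily portfolio return
    ann_return = port.mean() * TRADING_DAYS
    ann_vol = port.std() * np.sqrt(TRADING_DAYS)
    return ann_return, ann_vol

# Scan every 3-stock combination and keep the one with the best return-to-volatility ratio.
best = None
for combo in combinations(returns.columns, 3):
    ann_return, ann_vol = portfolio_stats(combo)
    score = ann_return / ann_vol if ann_vol > 0 else float("-inf")
    if best is None or score > best[0]:
        best = (score, combo, ann_return, ann_vol)

score, combo, ann_return, ann_vol = best
print(f"Best 3-stock mix: {combo}")
print(f"Annualized return: {ann_return:.1%}, volatility: {ann_vol:.1%}")
```

Restricting the date range before computing returns gives the "any choice of time period" behavior, and the same scan works for pairs or larger baskets by changing the combination size.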
As soon as the AI used in chatbots is able to consume the information we feed it about our work, it will become an integral part of our lives. Imagine a CEO who can ask his chatbot assistant for any detail about his company's up-to-the-minute performance without having to ring up his CFO, operations managers, or marketing VP.
It boggles the mind.
>>A lot of good white-collar jobs can be done better and more cheaply by a solid algorithm that makes decisions using available inputs.
A lot of work from home jobs that consist of a person working at a PC, using text, mail, chat and collaborative tools will be eliminated.
Yes. People have been saying for years that “burger flippers” will be replaced by machines. And if they solve the problems with self-driving vehicles, then truck drivers will go away too.
Some people shrug and say, “Low level jobs! Who cares!”
But as you say, a huge number of jobs that were done from home during COVID can also go away. Many people with college degrees will be unemployable because they don’t have skills that are needed any more (the machines do the work better).
That’s transformative. It’s not Skynet, but it doesn’t have to be.
And tens of millions of newly unemployed people with no income stream just might get bored and cause trouble. That’s also transformative.
That’s funny right there, I don’t care who you are!
Yeah, I get it. I know cinema has given us The Terminator and I, Robot.
The latter one sent chills up my spine when I first saw it.
At least the ultimate villain, V.I.K.I.
It’s not impossible to imagine us going in that direction…
A computer in NYC and a computer in Salt Lake joined to make a decision. The server bank in Houston got wind of the effort and joined in. Within seconds, there were 379 other computers at the meeting.
The internet crashed as a result of the AI computers' failure to make a decision.
The cyber meeting accomplished nothing.
A BIG +1
You apparently haven't been using ChatGPT and other chatbots, have you?
Just a minute ago I asked ChatGPT the following questions and got the following answers. I'm trying to calculate the China exposure of various tech firms (like Advanced Micro Devices in this case). Not a bad response actually.
I'm going to go over AMD's annual report this afternoon.
Me: Where does AMD report its revenue from the Asia Pacific region?
ChatGPT
AMD reports its revenue from the Asia Pacific region, which includes China, in its financial statements under the segment "Enterprise, Embedded and Semi-Custom" (EESC). The EESC segment includes revenue from AMD's semi-custom products, embedded processors, server processors, and other enterprise products.
In its financial statements, AMD provides a breakdown of its revenue by geography and by segment. The Asia Pacific region is reported as a separate segment in AMD's financial statements, along with other regions such as Americas, EMEA (Europe, Middle East, and Africa), and Japan.
It is important to note that while the EESC segment includes revenue from the Asia Pacific region, it also includes revenue from other regions and product lines. Therefore, it is not possible to determine the exact percentage of AMD's revenue that comes from China or other countries in the Asia Pacific region from its financial statements alone.
Me:
What percentage of AMD's revenue comes from the Asia Pacific region?
ChatGPT
In AMD's fiscal year 2021, which ended on December 25, 2021, the company reported total revenue of $13.31 billion. In the same year, AMD's revenue from the Asia Pacific region, which includes China, accounted for approximately 31% of its total revenue.
It's important to note that revenue from the Asia Pacific region includes revenue from all of AMD's products, including CPUs, GPUs, semi-custom products, embedded processors, server processors, and other enterprise products.
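Taking the chatbot's figures at face value (they still need to be checked against the annual report), the implied dollar exposure is a quick calculation; this little sketch simply uses the two numbers quoted above:

```python
# Back-of-the-envelope check using the figures ChatGPT quoted above;
# both numbers are the chatbot's and should be verified against AMD's annual report.
total_revenue_fy2021 = 13.31e9   # reported FY2021 total revenue, per the chatbot
asia_pacific_share = 0.31        # claimed Asia Pacific share of revenue
implied = total_revenue_fy2021 * asia_pacific_share
print(f"Implied Asia Pacific revenue: ${implied / 1e9:.2f} billion")  # roughly $4.13 billion
```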