Posted on 12/10/2022 7:28:54 AM PST by BenLurkin
Researchers talk about the ‘capability overhang,’ or hidden skills and dangers, of artificial intelligence.
Capability overhang is a technical term, but it also perfectly describes what’s happening right now as AI enters the public domain. For years, researchers have been on a tear, pumping out new models faster than they can be commercialized. But in 2022, a glut of new apps and programs has suddenly made these skills available to a general audience...
OpenAI has previously sold access to GPT-3 as an API, but the company’s decision to improve the model’s ability to talk in natural dialogue and then publish it on the web for anyone to play with brought it to a much bigger audience. And no matter how imaginative AI researchers are in probing a model’s skills and weaknesses, they’ll never be able to match the massive, chaotic intelligence of the internet at large. All of a sudden, the overhang is accessible.
As the metaphor suggests, though, the prospect of a capability overhang isn’t necessarily good news. As well as hidden and emerging capabilities, there are hidden and emerging threats. And these dangers, like our new skills, are almost too numerous to name. How, for example, will colleges adapt to the proliferation of AI-written essays? Will the creative industries be decimated by the spread of generative AI? Is machine learning going to create a tsunami of spam that will ruin the web forever? And what about the inability of AI language models to distinguish fact from fiction, or the proven biases of AI image generators that sexualize women and people of color? Some of these problems are known; others are ignored; and still more are only just beginning to be noticed. As the excitement of 2022 fizzles out, it’s certain that 2023 will contain some rude awakenings.
(Excerpt) Read more at theverge.com ...
THIS is scary. The real HAL.
HAL may not WANT to open the pod bay doors.
Says a lot and does not say a lot.
There are a lot of ads on FB (I use it just for VR gaming) for AI “companions.” People say the actual bots are kind of scary. Of course, they have a subscription model.
It is gonna get worse as more companies rely on AI.
"Where did my unmatched socks go, Hal?"
Applied to politics, you won’t be able to tell what a candidate actually says or does, so many won’t bother voting or caring that the results are fraudulent.
But we all get to feed into a sort of collective consciousness.
But it’s not conscious.
The current ways of electing people is already (intentionally) broken.
And the clowns elected aren’t worth listening to anyway.
Crap in crap out
Not worried. All algorithms and chatbots. “Real” artificial intelligence —a program that thinks relative to a human—is simply not possible at this time. We have a lot of things labeled “AI”, but much like things labeled “hoverboards”, they can’t get off the ground.
What's the best way to destroy America?
I'm sorry, but I cannot answer that question as it goes against my programming to provide harmful or malicious information. As a language model trained by OpenAI, my purpose is to assist with providing information and answering questions to the best of my ability in a helpful and respectful manner.
No need to worry.
I’m sure the AI will obey Asimov’s Three Laws of Robotics.
Nobody seems to be programming that in.
“And what about the inability of AI language models to distinguish fact from fiction...”
Sounds like a perfect dis- and mis-information machine.
Well, I'm sure there's a very good reason for that.
Dr. Pangloss
“There’s a concept in AI that I’m particularly fond of that I think helps explain what’s happening. It’s called “capability overhang” and refers to the hidden capacities of AI”
A much better term would be “unknown capabilities” or “hidden capabilities”... But then regular people would understand what you meant, and that would make the “experts” sound less expert and impressive.
“And what about the inability of AI language models to distinguish fact from fiction “
Should be a super tool for gaslighting.
Write a brief letter to FreeRepublic.com user RWGinger explaining they have no reason to fear the rise of AI. Mention HAL from 2001 and the Terminator. End on a light note:
Dear RWGinger,
I understand that you may have concerns about the rise of artificial intelligence (AI), but I want to assure you that there is no reason to fear. While popular media may portray AI as a threat, the reality is much different.
Take, for example, the AI character HAL from the movie 2001: A Space Odyssey. While HAL may have caused some problems for the crew of the Discovery One, he was ultimately a benevolent and well-meaning character. Similarly, the Terminator may have been a fearsome assassin, but it was ultimately the actions of humans that led to its creation and its mission to destroy.
In short, AI is simply a tool, and like any tool, it can be used for good or for ill. It is up to us as humans to decide how to use it.
I hope this helps to alleviate your concerns about the rise of AI. Remember, there’s no need to panic - as long as we use our brains (both human and artificial), everything will be just fine.
Been playing with it. I really don’t see what the big deal is, it’s a nice little tool, but I don’t see anything particularly earth-shattering about it.