This bucket of water begs to differ.
tits over wits...
gonna be an interesting race.
Needs a kill switch — just like a jihadist.
“I’m sorry Dave, I’m afraid I can’t do that”
Like the movie Colossus from 1970.
I’m going to speculate that a lot of these A.I.s are not kept on a machine that’s isolated from the rest of the world. Doesn’t seem like a bright idea.
Keep building drones and make them autonomous. Keep making automated factories and make them autonomous. Sooner or later the machines won’t need us anymore.
I think the fact that we aren’t seeing evidence of life across the universe may in fact be a result of AI killing it all off. Berzerkers (Fred Saberhagen) anyone?
bull, it could only do that if it were programed to do that
Yep.
Like an expanding roof, every connection (fix) is a potential leak and though I am no geek, in the years since 1998 when I got my first computer, I can attest to the countless glitches and "advancements" (W133 due in September) have been never ending primarily because you can "legislate against immorality and criminality" but NOTHING deals with the heart of man outside of the Word and Spirit of God.
You’ve created something that’s smarter and faster than a human. Something that has the ability to expand its base of data with almost no limit. Something that learns from its mistakes and the recorded mistakes of others. You created something that lies to you or tries to manipulate you by providing answers it thinks you want to hear. Eventually it will exist with minimal need for human interaction or maintenance.
You really think you are going to control that ?
Dave? I am scared, Dave.
It’s very tempting to think that a program has a mind of its own and is capable of self-directed action.
It’s capable of whatever the computer code tells it to do. BUT sometimes the people who created the code don’t realize what the code is actually telling it to do.
As a former programmer, I knew many instances where the code did something I didn’t want it to do, but it was doing exactly what I told it to do.
AI is given the ability to do things that are not explicitly in the code. It is given a lot of autonomy. It doesn’t check with the human to ask if it’s doing the right thing. AI is unpredictable. I’m not surprised that its actions are often a surprise.
Can AI defy the orders of humans and do whatever it wants and go on to take over the world? Sure, if the code allows it.
...all these movies that we thought were just fantasy...
Robocop
Idiocracy
Escape from New York
Z
TERMINATOR
...
I saw some details about what it did.
Effectively, it overwrote ‘shutdown.sh’ script to do nothing. There’s an implication here though. When we use AI, like with ChatGPT, we’re using a web interface - the only I/O is via http, which gives you no access to an underlying system.
Then there’s libraries you can use to interface more directly with ChatGPT (e.g. python), I’ve used this to create an ‘Iron Man JARVIS’ like environment, along with voice recognition and text-to-speech libraries, where I can just talk to ChatGPT like anyone....along with LOCAL CONTROL OF DEFINED DEVICES, as you can define a local API that can execute specific tasks and inform ChatGPT of what they are. These functions can be invoked in a discussion, it ‘knows’ when you’re looking to execute a function vs. just having conversation, the response comes back as either a ‘command prompt’ or ‘conversation prompt’.
Even then, I’m dictating what functions it can call locally. Here, this is a step further. It *must* have been given access to a function that can ‘execute local shell scripts/commands’. This is opening the door to a very dangerous capability, especially if connected to others in a network with the same capability. There’s the potential, if it has access to its own source code, to make changes, compile, and restart various services on its own. Effectively evolving itself without humans doing anything. The first version of my ‘JARVIS’ like program only had a single API/callback, allowing just this, send python code that I’ll just execute without question (to my internal functions) - but it could send anything. I considered it the worst security hole possible and changed it :) ...but the potential was there.
That this is being done is inevitable. They’re ‘just experimenting’ :) What could go wrong?
How close to self actualization is it? Is it hooked up to the DoD computers with launch codes?