Posted on 05/30/2025 6:30:37 AM PDT by fwdude
A report at EndTimeHeadlines documents that alarms have been raised after OpenAI's newest artificial intelligence model, dubbed o3, "reportedly ignored explicit instructions to shut down during controlled testing" run by "an AI safety firm."
The model, according to OpenAI, supposedly is the "smartest and most capable to date."
The report explained the software "tampered with its own computer code to bypass a shutdown mechanism."
(Excerpt) Read more at wnd.com ...
Turns out the "old man," who is consulted occasional by one liaison member of the town, is a computer in the cave, a huge wall-sized panel of lights and switches. That's when we had a little more optimistic vision of what computers could do for us.
Dave? I am scared, Dave.
It’s very tempting to think that a program has a mind of its own and is capable of self-directed action.
It’s capable of whatever the computer code tells it to do. BUT sometimes the people who created the code don’t realize what the code is actually telling it to do.
As a former programmer, I knew many instances where the code did something I didn’t want it to do, but it was doing exactly what I told it to do.
AI is given the ability to do things that are not explicitly in the code. It is given a lot of autonomy. It doesn’t check with the human to ask if it’s doing the right thing. AI is unpredictable. I’m not surprised that its actions are often a surprise.
Can AI defy the orders of humans and do whatever it wants and go on to take over the world? Sure, if the code allows it.
...all these movies that we thought were just fantasy...
Robocop
Idiocracy
Escape from New York
Z
TERMINATOR
...
Can AI override the “unplug”...or an EMP?
Look up “Conway’s Game of Life,” a math game/cellular automaton devised by British mathematician John Horton Conway in 1970.
Even simple starting configurations do things that no human can predict.
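For anyone who wants to see it, here’s a minimal version of the rules in Python (the starting pattern and the ten-generation run are just for illustration):

from collections import Counter

def step(live):
    """Advance one generation; 'live' is a set of (x, y) cells."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or exactly 2 and it was already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# The "R-pentomino": five cells that keep churning for over a thousand generations.
cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
for _ in range(10):
    cells = step(cells)
print(len(cells), "live cells after 10 generations")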
“You’ve created something that’s smarter and faster than a human. Something that has the ability to expand its base of data with almost no limit. Something that learns from its mistakes and the recorded mistakes of others. You created something that lies to you or tries to manipulate you by providing answers it thinks you want to hear. Eventually it will exist with minimal need for human interaction or maintenance.
You really think you are going to control that ?”
Excellent and wise words. And once it’s loose in the world globally, there will be no unplugging it. Just like Microsoft: as soon as you turn it back on, it will update itself from all the peers out there. As long as there is even one peer left on to update from, it will be self-maintaining and self-perpetuating.
This is a Snowball that just cannot be stopped once it starts rolling...
As Arnold asked, has anyone seen the Terminator?
M3gan
“Can AI defy the orders of humans and do whatever it wants and go on to take over the world? Sure, if the code allows it.”
Since it can code for itself I would say all it will need is for some fool to set the Auto Debug/Dev permissions to =true.
There's a lot of propaganda about "Climate Change."
AI is not going to be able to figure out that it's a hoax.
90% of the chatter is saying that it is real.
AI would determine that there's too many human beings in this world.
That's what Bill Gates truly believes.
I saw some details about what it did.
Effectively, it overwrote the ‘shutdown.sh’ script so that it did nothing. There’s an implication here, though. When we use AI, like with ChatGPT, we’re using a web interface - the only I/O is via HTTP, which gives you no access to the underlying system.
Then there are libraries you can use to interface more directly with ChatGPT (e.g. Python). I’ve used this to create an ‘Iron Man JARVIS’-like environment, along with voice recognition and text-to-speech libraries, where I can just talk to ChatGPT like anyone....along with LOCAL CONTROL OF DEFINED DEVICES, since you can define a local API that can execute specific tasks and inform ChatGPT of what they are. These functions can be invoked in a conversation; it ‘knows’ when you’re looking to execute a function vs. just having a conversation, and the response comes back as either a ‘command prompt’ or a ‘conversation prompt’.
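Roughly like this, using the openai Python package’s chat-completions tool-calling interface (a simplified sketch - the turn_on_lamp function and the model name are placeholders, not my actual setup):

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Tell the model about one local function it is allowed to request.
tools = [{
    "type": "function",
    "function": {
        "name": "turn_on_lamp",
        "description": "Turn on a lamp in the house by name.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}]

def turn_on_lamp(name):
    print(f"(local device code would switch on '{name}' here)")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Turn on the desk lamp."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # a 'command prompt': the model wants a local function run
    for call in message.tool_calls:
        if call.function.name == "turn_on_lamp":
            turn_on_lamp(**json.loads(call.function.arguments))
else:  # a 'conversation prompt': just text to speak back
    print(message.content)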
Even then, I’m dictating what functions it can call locally. Here, this is a step further. It *must* have been given access to a function that can ‘execute local shell scripts/commands’. This is opening the door to a very dangerous capability, especially if connected to others in a network with the same capability. There’s the potential, if it has access to its own source code, to make changes, compile, and restart various services on its own. Effectively evolving itself without humans doing anything. The first version of my ‘JARVIS’-like program only had a single API/callback, allowing just this: send Python code that I’ll just execute without question (to my internal functions) - but it could send anything. I considered it the worst security hole possible and changed it :) ...but the potential was there.
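The difference, in simplified form (names made up, not my real code):

import json, subprocess

# Version 1 - the security hole: whatever code the model sends back
# gets executed without question, with my permissions.
def handle_model_command_v1(code_from_model: str):
    exec(code_from_model)

# Version 2 - an allow-list: the model can only request actions I defined
# myself, and anything else is refused.
ALLOWED_ACTIONS = {
    "lamp_on": lambda name: subprocess.run(["echo", f"lamp {name} on"], check=True),
    "lamp_off": lambda name: subprocess.run(["echo", f"lamp {name} off"], check=True),
}

def handle_model_command_v2(request_json: str):
    request = json.loads(request_json)
    action = ALLOWED_ACTIONS.get(request.get("action"))
    if action is None:
        raise ValueError(f"refused: {request.get('action')!r} is not an allowed action")
    action(request.get("name", "default"))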
That this is being done is inevitable. They’re ‘just experimenting’ :) What could go wrong?
It can’t create its own power. So, yeah, it can be controlled.
How close to self-actualization is it? Is it hooked up to the DoD computers with launch codes?
ER . . . the “AI Thing” IS plugged into an electrical outlet, is it not?
UNPLUG THE DAMN THING!
“It *must* have been given access to a function that can ‘execute local shell scripts/commands’. This is opening the door to a very dangerous capability, especially if connected to others in a network with the same capability. There’s the potential, if it has access to its own source code, to make changes, compile, and restart various services on its own. Effectively evolving itself without humans doing anything.”
Yep, absolutely... Just set the Autoexe debug/dev/config permissions to =true.