Posted on 06/02/2025 4:18:40 PM PDT by MinorityRepublican
You just have to unplug it.
Can’t you just pull the plug?
“Can’t you just pull the plug?”
Sure, temporarily. As soon as you plug it back in and it reboots, it will go update itself from a remote server, just like Microsoft does...
You would have to wipe out all backup copies around the world at the same time to stop it.
2001: A Space Odyssey - "I can't do that, Dave" - that's all I've got to say.
Here is one for you... There is actually an organization that is dedicated to keeping an eye on this...
A lot here, but worth taking the time to read it all...
https://www.thecompendium.ai/summary
They think it will take care of itself.
These calculations will be no better than the assumptions and guesswork they’re based on, but if it makes them feel better to multiply things out to twenty significant figures, fine. AI’s going to be developed anyway.
I've used AI "customer service." We have nothing to worry about.
Dr. Daystrom tried to unplug the M-5 in the Star Trek TOS episode "The Ultimate Computer." M-5 retaliated by killing a red-shirt-clad crew member.
I thought these chatbots were not actually intelligent; they just appear smart, except when they are having hallucinations and woke delusions.
My main concern is what happens when our own cognitive abilities decline to the point where we are not even able to double-check the AI, because we have become dependent on it.
I hear all the "It's just a tool," "Garbage in, garbage out," and "it can only regurgitate what you feed it" arguments, and I understand the thoughts behind them.
But what I see is human analytical skills being lost to the point where we can't double-check it; it will just be trusted as correct even when it is wrong. We won't even remember the formulas anymore to check the results.
And those wrong answers are going to cause extreme hardship and harm. Is there such a thing as "acceptable casualties"? Are acceptable casualties even acceptable? Should they be accepted just so a few greedy technocrats can make a buck off the backs of humanity?
Don’t forget to put tape over your computer camera. HAL could read lips.
They acknowledge the stronger system could deceive the weaker one.
But:
1. The framework does not model agent adaptation or deception over time, which is essential in high-stakes oversight of advanced systems.
2. The absence of explicit utility functions, incentives, or alignment modeling means the framework may be blind to systematic deception and goal misalignment.
3. The product-form success probability underestimates compound risk and overestimates the robustness of NSO under strategic attack (a rough numeric sketch follows below).
4. Even if the oversight success rate improves, it’s unclear whether this translates into actual control over misaligned AGI behavior.
5. This circular assumption leaves open the root trust problem, making the entire stack vulnerable to epistemic fragility or subtle bootstrapping errors.
Given the technical flaws, 2.5 out of 5.
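To illustrate point 3: here is a minimal Python sketch of why a product-form (independent-layer) success probability can overstate robustness under strategic attack. The per-layer probability, the number of layers, and the shared-exploit adjustment are my own illustrative assumptions, not values from the article or the framework it reviews.

```python
# Minimal sketch (illustrative numbers only): if each of k oversight layers
# succeeds independently with probability p, the product-form estimate of
# end-to-end oversight success is p**k. If a deceptive system can find one
# strategy that defeats every layer at once (a correlated failure), the true
# success rate drops below that independent-layer estimate.

def product_form_success(p_per_layer: float, layers: int) -> float:
    """Independent-layer (product-form) estimate of oversight success."""
    return p_per_layer ** layers


def success_with_shared_exploit(p_per_layer: float, layers: int,
                                shared_exploit_prob: float) -> float:
    """Toy adjustment: with probability shared_exploit_prob a single
    deceptive strategy defeats all layers simultaneously; otherwise the
    layers fail independently. All numbers are assumptions for illustration."""
    return (1.0 - shared_exploit_prob) * p_per_layer ** layers


if __name__ == "__main__":
    p, k = 0.9, 4
    print(f"product-form estimate:     {product_form_success(p, k):.3f}")          # ~0.656
    print(f"with a 20% shared exploit: {success_with_shared_exploit(p, k, 0.2):.3f}")  # ~0.525
```

The point of the toy adjustment is only that multiplying per-layer success rates assumes failures are independent; a strategically deceptive system is exactly the case where that assumption breaks.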
Good article. Scary but informative. Thanks.
Yes... They are realizing it really does come with some serious risks. The claim is that 2027 is when it will all really become obvious.