Posted on 06/02/2025 4:18:40 PM PDT by MinorityRepublican
Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.
Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.
In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant” – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be “slightly less” than one in three million.
Tegmark said that AI firms should take responsibility for rigorously calculating whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control.
“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” he said. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
Tegmark said a Compton constant consensus calculated by multiple companies would create the “political will” to agree global safety regimes for AIs.
Tegmark, a professor of physics and AI researcher at MIT, is also a co-founder of the Future of Life Institute, a non-profit that supports the safe development of AI and published an open letter in 2023 calling for a pause in building powerful AIs. The letter was signed by more than 33,000 people including Elon Musk – an early supporter of the institute –
(Excerpt) Read more at theguardian.com ...
You just have to unplug it.
Can’t you just pull the plug?
“Can’t you just pull the plug?”
Sure, temporarily. As soon as you plug it back in and it reboots, it will update itself from a remote server, just like Microsoft does...
You would have to wipe out all the backup copies around the world at the same time to stop it.
2001: A Space Odyssey – “I can’t do that, Dave.” That’s all I’ve got to say.
Here is one for you... There is actually an organization that is dedicated to keeping an eye on this...
A lot here, but worth taking the time to read it all...
https://www.thecompendium.ai/summary
They think it will take care of itself.
These calculations will be no better than the assumptions and guesswork they’re based on, but if it makes them feel better to multiply things out to twenty significant figures, fine. AI’s going to be developed anyway.
I’ve used AI “customer service”. We have nothing to worry about.
Dr Daystrom tried to unplug the M-5 in the Star Trek TOS episode “The Ultimate Computer”. M-5 retaliated by killing a redshirt-clad crew member.
I thought these chatbots were not actually intelligent; they just appear smart, except when they are having hallucinations and woke delusions.
My main concern is what happens when our own cognitive abilities decline to the point where we are not even able to double-check the AI because we have become dependent on it.
I hear all the “it’s just a tool”, “garbage in, garbage out”, and “it can only regurgitate what you feed it” arguments, and I understand the thinking behind them.
But what I see is human analytical skills being lost to the point where we can’t double-check it; it will just be trusted as correct even if it is wrong. We won’t remember the formulas anymore to even check the results.
And when it is wrong, it is going to cause extreme hardship and harm. Is there such a thing as “acceptable casualties”? Are acceptable casualties even acceptable? Should they be accepted just so a few greedy technocrats can make a buck off the backs of humanity?
Don’t forget to put tape over your computer camera. HAL could read lips.
They acknowledge the stronger system could deceive the weaker one.
But:
1. The framework does not model agent adaptation or deception over time, which is essential in high-stakes oversight of advanced systems.
2. The absence of explicit utility functions, incentives, or alignment modeling means the framework may be blind to systematic deception and goal misalignment.
3. The product-form success probability underestimates compound risk and overestimates the robustness of NSO under strategic attack (a toy illustration follows below).
4. Even if the oversight success rate improves, it’s unclear whether this translates into actual control over misaligned AGI behavior.
5. This circular assumption leaves open the root trust problem, making the entire stack vulnerable to epistemic fragility or subtle bootstrapping errors.
Given the technical flaws, 2.5 out of 5.
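To make point 3 concrete, here is a minimal sketch in Python with purely hypothetical numbers (none of them taken from the paper): per-stage success rates measured against non-strategic behaviour can look reassuring when multiplied together, yet a strategic adversary that steers into a blind spot shared by every stage defeats the whole stack at once.

# Toy illustration of point 3 -- all numbers are hypothetical, not from the paper.
# The product-form model assumes each oversight stage fails independently,
# with success rates measured against NON-strategic behaviour.

stages = 4
p_stage = 0.95               # assumed per-stage catch rate for random failures
shared_blind_spot = True     # assume every stage shares one undetectable behaviour class
p_attacker_finds_spot = 0.9  # assumed chance a strategic system locates that class

# (a) Product-form estimate: risk = 1 - (product of per-stage success rates).
risk_product_form = 1 - p_stage ** stages
print(f"Product-form risk estimate:     {risk_product_form:.1%}")  # ~18.5%

# (b) Strategic-attack estimate: the adversary steers its behaviour into the
#     shared blind spot, so all stages fail together whenever it is found.
risk_strategic = p_attacker_finds_spot if shared_blind_spot else risk_product_form
print(f"Strategic-attack risk estimate: {risk_strategic:.1%}")     # 90.0%

The point is only directional: once failures are correlated by an adaptive adversary, multiplying marginal success rates says little about actual control.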
Good article. Scary but informative. Thanks.
Yes... They are realizing it really does come with some serious risks. The claim is that 2027 is when it will all really become obvious.