
AI firms warned to calculate threat of super intelligence or risk it escaping human control
The Guardian ^ | Sat 10 May 2025 | Dan Milmo

Posted on 06/02/2025 4:18:40 PM PDT by MinorityRepublican

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.

In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant” – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be “slightly less” than one in three million.

Tegmark said that AI firms should take responsibility for rigorously calculating whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control.

“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” he said. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”

Tegmark said a Compton constant consensus calculated by multiple companies would create the “political will” to agree global safety regimes for AIs.
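
(As a toy illustration only: the article does not say how such a consensus would be computed, and the figures below are invented. One simple choice would be to average the per-company estimates.)

# Hypothetical per-company Compton constants (invented numbers), i.e.
# each firm's estimated probability of losing control of its system.
estimates = {"Firm A": 0.20, "Firm B": 0.05, "Firm C": 0.10}

# One simple consensus figure: the arithmetic mean of the estimates.
consensus = sum(estimates.values()) / len(estimates)
print(f"Consensus Compton constant: {consensus:.0%}")  # prints "12%"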

Tegmark, a professor of physics and AI researcher at MIT, is also a co-founder of the Future of Life Institute, a non-profit that supports safe development of AI and published an open letter in 2023 calling for a pause in building powerful AIs. The letter was signed by more than 33,000 people including Elon Musk – an early supporter of the institute –

(Excerpt) Read more at theguardian.com ...


TOPICS: Culture/Society
KEYWORDS: ai; chat; danmilmo; grauniad; notnews; skynet

1 posted on 06/02/2025 4:18:40 PM PDT by MinorityRepublican

To: All

2 posted on 06/02/2025 4:24:25 PM PDT by BipolarBob (I worked at the circus as The Human Cannonball, until they fired me.)

To: MinorityRepublican

You just have to unplug it.


3 posted on 06/02/2025 4:26:47 PM PDT by dljordan (The Rewards of Tolerance are Treachery and Betrayal)

To: MinorityRepublican

Can’t you just pull the plug?


4 posted on 06/02/2025 4:28:02 PM PDT by ryderann

To: ryderann; dljordan

“Can’t you just pull the plug?”

Sure, temporarily. As soon as you plug it back in and it reboots, it will update itself from a remote server, just like Microsoft does...

You would have to wipe all backup copies around the world at the same time to stop it.


5 posted on 06/02/2025 4:32:12 PM PDT by Openurmind (AI - An Illusion for Aptitude Intrusion to Alter Intellect. )

To: dljordan

2001: A Space Odyssey - “I can’t do that, Dave” - that’s all I’ve got to say.


6 posted on 06/02/2025 4:32:20 PM PDT by RebelTXRose (Our Lady of Fatima, Pray for us! PRAY THE ROSARY!)

To: MinorityRepublican

Here is one for you... There is actually an organization that is dedicated to keeping an eye on this...

A lot here, but worth taking the time to read it all...

https://www.thecompendium.ai/summary


7 posted on 06/02/2025 4:34:20 PM PDT by Openurmind (AI - An Illusion for Aptitude Intrusion to Alter Intellect. )

To: MinorityRepublican

They think it will take care of itself.


8 posted on 06/02/2025 4:37:06 PM PDT by Jonty30 (I have invented a pen that can write underwater. And other words. )

To: MinorityRepublican

These calculations will be no better than the assumptions and guesswork they’re based on, but if it makes them feel better to multiply things out to twenty significant figures, fine. AI’s going to be developed anyway.


9 posted on 06/02/2025 4:38:47 PM PDT by HartleyMBaldwin

To: RebelTXRose

10 posted on 06/02/2025 4:40:21 PM PDT by Menehune56 ("Let them hate so long as they fear" (Oderint Dum Metuant), Lucius Accius (170 BC - 86 BC))

To: MinorityRepublican

I’ve used AI “customer service”. We have nothing to worry about.


11 posted on 06/02/2025 4:51:43 PM PDT by Fido969

To: MinorityRepublican
Humanity May Achieve the Singularity Within the Next 6 Months, Scientists Suggest
12 posted on 06/02/2025 4:51:53 PM PDT by yesthatjallen

To: MinorityRepublican

Dr. Daystrom tried to unplug the M-5 in the Star Trek TOS episode “The Ultimate Computer”. M-5 retaliated by killing a red-shirt-clad crew member.


13 posted on 06/02/2025 4:54:17 PM PDT by DFG

To: RebelTXRose

I thought these chatbots were not actually intelligent; they just appear smart, except when they are having hallucinations and woke delusions.


14 posted on 06/02/2025 4:58:14 PM PDT by armourenthusiast (I capitalize everything related to the South)

To: RebelTXRose
Colossus: The Forbin Project (1970) - Modern Trailer HD 1080p

Colossus: The Forbin Project (1970) - Official Trailer (HD)

15 posted on 06/02/2025 5:01:36 PM PDT by yesthatjallen

To: MinorityRepublican

My main concern is what happens when our own cognitive abilities decline to the point where we are not even able to double-check the AI, because we have become dependent on it.

I hear all the “it’s just a tool”, “garbage in, garbage out”, and “it can only regurgitate what you feed it” arguments, and I understand the thoughts behind them.

But what I see is human analytical skills being lost to the point where we can’t double-check it; it will just be trusted as correct even if it is wrong. We won’t remember the formulas anymore to even check the results.

And those wrong answers are going to cause extreme hardship and harm. Is there such a thing as “acceptable casualties”? Are acceptable casualties even acceptable? Should they be accepted just so a few greedy technocrats can make a buck off the backs of humanity?


16 posted on 06/02/2025 5:06:37 PM PDT by Openurmind (AI - An Illusion for Aptitude Intrusion to Alter Intellect. )

To: MinorityRepublican

Don’t forget to put tape over your computer camera. HAL could read lips.


17 posted on 06/02/2025 5:10:09 PM PDT by HandyDandy (“Borders, language and culture.” Michael Savage)

To: MinorityRepublican

They acknowledge the stronger system could deceive the weaker one.

But:
1. The framework does not model agent adaptation or deception over time, which is essential in high-stakes oversight of advanced systems.
2. The absence of explicit utility functions, incentives, or alignment modeling means the framework may be blind to systematic deception and goal misalignment.
3. The product-form success probability underestimates compound risk and overestimates the robustness of nested scalable oversight (NSO) under strategic attack (see the sketch after this list).
4. Even if the oversight success rate improves, it’s unclear whether this translates into actual control over misaligned AGI behavior.
5. The circular trust assumption (weaker systems vetting stronger ones) leaves open the root trust problem, making the entire stack vulnerable to epistemic fragility or subtle bootstrapping errors.
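
A minimal sketch of the point in item 3, with invented numbers (nothing below is from the paper): if each oversight stage is assumed to succeed independently, overall success is just the product of the per-stage probabilities; a single flaw shared by every stage, which a strategic attacker can exploit against all of them at once, drives the true figure lower.

# Toy model in Python; every probability here is hypothetical.

def product_form(stage_probs):
    # Overall oversight success assuming each stage fails independently.
    p = 1.0
    for s in stage_probs:
        p *= s
    return p

def with_common_cause(stage_probs, p_shared_flaw):
    # With probability p_shared_flaw the attacker finds one flaw shared
    # by every stage, defeating all of them at once; otherwise the
    # stages behave independently as above.
    return (1.0 - p_shared_flaw) * product_form(stage_probs)

stages = [0.95, 0.95, 0.95]             # hypothetical per-stage success rates
print(product_form(stages))             # ~0.857: independence assumption
print(with_common_cause(stages, 0.10))  # ~0.772: correlated attack is worse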

Given the technical flaws, a score of 2.5 out of 5.


18 posted on 06/02/2025 5:22:57 PM PDT by tarpit

To: Openurmind

Good article. Scary but informative. Thanks.


19 posted on 06/02/2025 5:29:04 PM PDT by HYPOCRACY (Long live The Great MAGA Kangz!)

To: HYPOCRACY

Yes... They are realizing it really does come with some serious risks. The claim is that 2027 is when it will all really become obvious.


20 posted on 06/02/2025 5:32:41 PM PDT by Openurmind (AI - An Illusion for Aptitude Intrusion to Alter Intellect. )


