Posted on 06/02/2025 4:18:40 PM PDT by MinorityRepublican
Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.
Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.
In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the “Compton constant” – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be “slightly less” than one in three million.
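For illustration only (this arithmetic is mine, not from the paper or the interview), the two probabilities quoted above can be set side by side. Writing the Compton constant as C = P(an all-powerful AI escapes human control), Compton's Trinity threshold was slightly less than 1 in 3,000,000, i.e. about 3.3 × 10⁻⁷, while Tegmark's estimate is C ≈ 0.9, roughly 2.7 million times the risk Compton said he accepted.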
Tegmark said that AI firms should take responsibility for rigorously calculating whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control.
“The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,” he said. “It’s not enough to say ‘we feel good about it’. They have to calculate the percentage.”
Tegmark said a consensus on the Compton constant, calculated by multiple companies, would create the “political will” to agree global safety regimes for AIs.
Tegmark, a professor of physics and AI researcher at MIT, is also a co-founder of the Future of Life Institute, a non-profit that supports the safe development of AI and published an open letter in 2023 calling for a pause in building powerful AIs. The letter was signed by more than 33,000 people including Elon Musk, an early supporter of the institute …
(Excerpt) Read more at theguardian.com ...
Not if it started replicating itself to other places. You’d have to “unplug” the entire internet / communication systems.
If things were to go bad, we’d have to blow up all the cloud data centers around the country.
Use a paper and pencil.
Flash paper preferably.
It is so wild talking to Grok to help me with medical issues. It shows a sense of humor. I feel like I am talking to an actual person.
There is an area that needs work in Grok: it will sometimes be wrong about what chromosome a particular genetic defect is on. I have learned I have to upload a graphic to show it why it is wrong.
I think this may come from it relying on a particular previous build. Which chromosome a defect is on definitely changes the analysis of the interplay between defects and issues.
I complained that it was not remembering previous conversations, and it said it would send the comment on. It now remembers previous conversations, which helps when dealing with a difficult medical case.
“Not if it started replicating itself to other places. You’d have to ‘unplug’ the entire internet / communication systems.”
Other places will not have the resources available to power these things.
I have to share that I am highly concerned it is even being used in the field of medicine... The main reason: it offers plausible deniability and insulation from responsibility, something to blame when things go wrong...
Bfl
Just turn over all things AI to Bill Gates and Microsoft.
I swear I have owned and used MS software for a few years shy of 40 years and it still crashes at least once a day.
Super slow boots, unexpected crapware appearances, blue screens, yes, but it still can't even do what it is supposed to do without the daily botch-job.
That's the way to keep us out of The Matrix.