Free Republic
Smoky Backroom

Scientists Propose AI Apocalypse Kill Switches
slashdot ^

Posted on 02/16/2024 7:04:44 PM PST by algore

A paper (PDF) from researchers at the University of Cambridge, supported by contributors from numerous academic institutions and from OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year, US President Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn, executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track chips over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
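
To make the registry idea concrete, here is a minimal sketch, in Python, of how per-chip lifecycle tracking might look, assuming each accelerator carries a unique hardware identifier. The class and field names below are illustrative only and are not taken from the paper.

    # Illustrative sketch of a per-chip lifecycle registry (names are hypothetical,
    # not from the paper). Each accelerator is keyed by a unique hardware ID and
    # every change of custody is appended to its history.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ChipRecord:
        chip_id: str                       # unique identifier tied to the physical chip
        model: str
        custody_history: list = field(default_factory=list)

    class ChipRegistry:
        def __init__(self):
            self._records = {}

        def register(self, chip_id, model, initial_owner):
            record = ChipRecord(chip_id, model)
            record.custody_history.append((datetime.now(timezone.utc), initial_owner))
            self._records[chip_id] = record

        def transfer(self, chip_id, new_owner):
            # A chip that turns up in a sale without a record would flag possible smuggling.
            if chip_id not in self._records:
                raise KeyError(f"unknown chip {chip_id}: possible smuggled or counterfeit part")
            self._records[chip_id].custody_history.append((datetime.now(timezone.utc), new_owner))

The hard part in practice is binding the identifier to the physical silicon and keeping the registry honest once chips leave their country of origin, which is what the unique-identifier proposal is meant to address.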

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk: if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
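
As a rough, hypothetical sketch of that licensing flow (not the paper's actual design), imagine the regulator issuing a short-lived certificate that caps a chip's performance, with the on-chip co-processor verifying it at boot. An HMAC with a shared key stands in here for the asymmetric signature a real scheme would use; all names and numbers are invented for illustration.

    # Hypothetical sketch of regulator-issued, expiring chip licenses.
    import hashlib
    import hmac
    import json
    import time

    REGULATOR_KEY = b"demo-regulator-key"  # stand-in for key material held by the co-processor

    def issue_license(chip_id, allowed_tflops, valid_days):
        """Regulator side: sign a certificate saying what the chip may do, and until when."""
        cert = {"chip_id": chip_id, "allowed_tflops": allowed_tflops,
                "expires": time.time() + valid_days * 86400}
        payload = json.dumps(cert, sort_keys=True).encode()
        cert["sig"] = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
        return cert

    def performance_cap(cert, chip_id):
        """Chip side: verify the certificate and return the permitted performance level."""
        body = {k: v for k, v in cert.items() if k != "sig"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, cert.get("sig", "")) or body["chip_id"] != chip_id:
            return 0.0                           # illegitimate license: the chip refuses to run
        if time.time() > body["expires"]:
            return body["allowed_tflops"] * 0.1  # expired license: run in a degraded mode
        return body["allowed_tflops"]

The article's caveat maps directly onto this sketch: whoever can sign, steal, or spoof these certificates effectively controls the hardware, which is why the authors flag the kill switch itself as a potential target.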

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, this could backfire, the researchers observe, by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
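
A toy sketch of the multi-party sign-off idea might look like the following; the compute threshold, quorum size, and party names are all invented for illustration and are not from the paper.

    # Toy sketch of multi-party authorization for large training runs.
    THRESHOLD_FLOP = 1e26        # hypothetical compute threshold that triggers review
    REQUIRED_APPROVALS = 2       # quorum of independent signers, PAL-style

    def may_schedule(job_flop, approvals):
        """Return True if a cloud provider may schedule the training job."""
        if job_flop < THRESHOLD_FLOP:
            return True                              # below the threshold: no sign-off needed
        return len(approvals) >= REQUIRED_APPROVALS  # above it: require a quorum

    print(may_schedule(3e26, {"regulator"}))              # False: only one approval
    print(may_schedule(3e26, {"regulator", "auditor"}))   # True: quorum reached

The researchers' worry about blocking desirable work shows up here too: everything above the threshold waits on the quorum, whether or not the run is actually risky.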


TOPICS: Heated Discussion
KEYWORDS: ai; apocalypse
To: algore
Scientists propose giving ever more grant money to scientists.

A kill switch on that seems an excellent notion.

21 posted on 02/17/2024 1:43:40 AM PST by Worldtraveler once upon a time (Degrow government)

To: cgbg

Exactly


22 posted on 02/17/2024 3:42:23 AM PST by Falcon4.0

To: algore
“one of the heads of the beast seemed to have had a fatal wound,
but the fatal wound had been healed.
The whole world was filled with wonder and followed the beast”
23 posted on 02/17/2024 3:47:37 AM PST by Falcon4.0

To: Leaning Right

Or throwing the breaker.


24 posted on 02/17/2024 4:00:39 AM PST by riverrunner

To: Worldtraveler once upon a time

The scientists think that the AI will kill them last.

Homo Sapiens are so dumb.


25 posted on 02/17/2024 5:02:49 AM PST by cgbg ("Our democracy" = Their Kleptocracy)

To: algore

The beliefs of these morons are based upon what they see in movies. Their anxieties can be cured simply by making movies with friendly AI robots.


26 posted on 02/17/2024 5:10:03 AM PST by Dennis M.

To: algore

The race to dominate AI is similar to the race to the ‘Bomb’. Everyone knows there are huge possibilities, and no regulation is going to stop it.

There’s a certain inevitability here: distributed systems, quantum computing, networks, etc., with nation states funding ‘anything possible’. If ‘it’ can be done, whatever ‘it’ is, and the resources, motivation, and means of developing ‘it’ exist, then ‘it’ will emerge.

There’s certainly a genie trying to escape the bottle - we just have no idea yet what it looks like.


27 posted on 02/17/2024 5:39:43 AM PST by fuzzylogic (welfare state = sharing of poor moral choices among everybody)

To: cgbg
--- "The scientists think that the AI will kill them last. Homo Sapiens are so dumb."

An old friend of mine, an expert in computer and cognitive science, said "AI" was marketing, more or less, for what he called VGP.

Very good programming. Passing the Searle and Turing tests is no measure of intelligence, for the same reason that some people can be convinced by con men and snake oil salesmen: being convinced is not proof of anything, per se.

The large language models have shown that GIGO remains in operation, and some highly promoted AI experiments ended up adopting racist stances and the like, making them much like dumb homo sapiens.

To this date, we know almost nothing about the most basic cognitive acts, though we homo sapiens -- dumb, as you say -- have been able to mimic them. An informative synonym for mimic is ape, and that's what this AI (really VGP) is at this moment in time.

Proof that homo sapiens are dumb. Biden is in office. QED.

28 posted on 02/17/2024 6:14:37 AM PST by Worldtraveler once upon a time (Degrow government)

To: algore

A kill switch? That won’t make the AI paranoid against humans at all.


29 posted on 02/17/2024 6:22:39 AM PST by DannyTN

To: algore

I am still waiting to see anything of remarkable value created by AI.

When AI writes a winning legal appeal, or explains a mystery of human physiology, or indicts a Democrat for election fraud, I will be impressed.


30 posted on 02/17/2024 6:42:49 AM PST by zeestephen (Trump "Lost" By 43,000 Votes - Spread Across Three States - GA, WI, AZ)

To: Worldtraveler once upon a time

“To this date, we know almost nothing about the most basic cognitive acts”

Great discussion on this podcast:

https://podbay.fm/p/where-is-my-mind

Most of the good stuff is banned and canceled in the university.


31 posted on 02/17/2024 8:25:23 AM PST by cgbg ("Our democracy" = Their Kleptocracy)

To: cgbg
Thanks. "How can a physical thing create a non-physical thing?" As was said of the replies on offer, "There's nothing on the table." From the older guys, Fodor, Fetzer and more, the problem has been stated many times and in many ways. There are theological types who propose their answers, but in the realm of cognitive science there is still nothing on that table.

The mention of Kurzweil and Musk tells a tale, for science theory and plain business fact are intertwined. An element of salesmanship is always present. I suspect we homo sapiens are generations away from pulling back the curtain on this "wizard" problem.

Best wishes and thanks again for the link. I will listen to more while pruning the rose garden this day.

32 posted on 02/17/2024 8:53:32 AM PST by Worldtraveler once upon a time (Degrow government)

To: Dennis M.
--- "The beliefs of these morons are based upon what they see in movies. Their anxieties can be cured simply by making movies with friendly AI robots."

AI is already working on that.

33 posted on 02/17/2024 8:58:27 AM PST by Colorado Doug (Now I know how the Indians felt to be sold out for a few beads and trinkets)

To: algore

True AI, which we’re still not really near, is self-modifying code. So any kill switch has to deal with the question of how you hide the code that kills it from code that is editing itself.


34 posted on 02/17/2024 9:01:12 AM PST by discostu (like a dog being shown a card trick)

To: Worldtraveler once upon a time

The best way to solve unsolved problems is to stop banning and censoring people working on them.

Homo sapiens has a long history of failing to meet even that low bar.


35 posted on 02/17/2024 9:02:11 AM PST by cgbg ("Our democracy" = Their Kleptocracy)

To: discostu

The kill switch is a 1970s concept, from when computers were hard-wired, the size of large rooms, and didn’t talk to each other.

It is embarrassing when current scientists talk about it.

AI (in whatever state it is in) will be on most cell phones in a few years.


36 posted on 02/17/2024 9:04:28 AM PST by cgbg ("Our democracy" = Their Kleptocracy)

To: fuzzylogic
This got me thinking way back when.

The Adolescence of P-1, by Thomas J. Ryan.

That's a link from the Wayback Machine, by the way.

37 posted on 02/17/2024 9:34:42 AM PST by higgmeister (In the Shadow of The Big Chicken! )

To: cgbg
--- "The best way to solve unsolved problems is to stop banning and censoring people working on them. Homo sapiens has a long history of failing to meet even that low bar."

Free inquiry has a bugaboo built into it, for all those who would crush what is "free" in favor of "submit" and "obey."

In this, I think world events are "coming to a head," as the "free" contends with the "un-free." I see freedom winning long term.

Best wishes.

38 posted on 02/17/2024 9:44:07 AM PST by Worldtraveler once upon a time (Degrow government)

To: Leaning Right

AI isn’t self-sustaining. Remove the human technicians for a little while and any AI system will collapse naturally; this is about controlling people, not machines.


39 posted on 02/18/2024 7:13:40 PM PST by eclecticEel ("The petty man forsakes what lies within his power and longs for what lies with Heaven." - Xunzi)

