Posted on 02/16/2024 7:04:44 PM PST by algore
A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic and industry institutions, including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports:

The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models, as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications.

More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, the researchers note, as it could help avoid another arms race like the one triggered by the missile-gap controversy, in which erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn, executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.
Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
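The registry idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's design: the chip IDs, field names, and functions below are all assumptions, but they show the basic mechanism -- every accelerator gets a unique identifier at manufacture, each transfer appends to its custody chain, and a transfer involving an unregistered ID is the smuggling signal.

```python
from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    chip_id: str
    custody: list[str] = field(default_factory=list)  # ordered chain of owners

registry: dict[str, ChipRecord] = {}

def register_chip(chip_id: str, manufacturer: str) -> None:
    """Manufacturer records the unique ID at the start of the lifecycle."""
    registry[chip_id] = ChipRecord(chip_id, [manufacturer])

def transfer_chip(chip_id: str, new_owner: str) -> bool:
    """Each resale or export appends to the chain; unknown IDs are flagged."""
    if chip_id not in registry:
        return False  # never registered -> possible smuggled component
    registry[chip_id].custody.append(new_owner)
    return True

register_chip("GPU-0001", "FabCo")          # hypothetical names
transfer_chip("GPU-0001", "CloudCo")
print(registry["GPU-0001"].custody)         # ['FabCo', 'CloudCo']
print(transfer_chip("GPU-9999", "X"))       # False: unregistered ID
```

In practice the hard part is the part the code glosses over: making the on-chip identifier tamper-resistant and getting every jurisdiction to report transfers.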
At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk: implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
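The licensing logic the authors describe can be sketched roughly as follows. This is a toy illustration under stated assumptions -- a shared HMAC key stands in for the real public-key signature scheme a regulator would use, and the key, field names, and behaviors ("reduced" on expiry, "disabled" on a bad signature) are choices made here, not details from the paper.

```python
import hmac, hashlib, json, time

REGULATOR_KEY = b"demo-regulator-signing-key"  # stand-in for a real PKI key

def sign_license(payload: dict) -> dict:
    """Regulator signs a use-case policy that includes an expiry timestamp."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(REGULATOR_KEY, body, hashlib.sha256).hexdigest()}

def check_license(license: dict, now: float) -> str:
    """On-chip co-processor check: 'full', 'reduced', or 'disabled'."""
    body = json.dumps(license["payload"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, license["sig"]):
        return "disabled"   # illegitimate certificate: refuse to run
    if now > license["payload"]["expires"]:
        return "reduced"    # expired: dial performance down until renewal
    return "full"

lic = sign_license({"chip_id": "chip-0001", "expires": time.time() + 3600})
print(check_license(lic, time.time()))         # full
print(check_license(lic, time.time() + 7200))  # reduced (past expiry)
lic["sig"] = "0" * 64                          # tampered certificate
print(check_license(lic, time.time()))         # disabled
```

The renewal step the paper mentions would amount to the regulator periodically re-issuing the signed payload with a new expiry via firmware update; the exploit risk the authors flag is exactly that this verification path becomes an attack surface.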
Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe, this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
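The multi-party sign-off amounts to a quorum check, which can be sketched as below. The threshold value, quorum size, and authorizer names are illustrative assumptions, not figures from the paper: small jobs run freely, while anything over the compute threshold needs approvals from a minimum number of distinct, recognized parties.

```python
COMPUTE_THRESHOLD_FLOPS = 1e25  # assumed policy threshold, not from the paper
QUORUM = 2                      # approvals required (k of n)

def may_launch(job_flops: float,
               approvals: set[str],
               authorizers: set[str]) -> bool:
    """Gate large training runs behind QUORUM distinct known approvers."""
    if job_flops < COMPUTE_THRESHOLD_FLOPS:
        return True                      # below threshold: no sign-off needed
    valid = approvals & authorizers      # ignore signatures from unknown parties
    return len(valid) >= QUORUM

parties = {"regulator-a", "regulator-b", "cloud-provider"}  # hypothetical
print(may_launch(1e23, set(), parties))                          # True
print(may_launch(3e25, {"regulator-a"}, parties))                # False
print(may_launch(3e25, {"regulator-a", "cloud-provider"}, parties))  # True
```

The "backfire" risk the researchers note maps directly onto this sketch: a desirable training run stalls whenever any quorum member withholds approval, for good reasons or bad.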
A kill switch on that seems an excellent notion.
Exactly
Or throwing the breaker.
The scientists think that the AI will kill them last.
Homo Sapiens are so dumb.
The beliefs of these morons are based upon what they see in movies. Their anxieties can be cured simply by making movies with friendly AI robots.
The race to dominate AI is similar to the race to the ‘Bomb’. Everyone knows there’s huge possibilities and no regulation is going to stop it.
There’s a certain inevitability here. Distributed systems, quantum computing, networks, etc. - with nation states funding ‘anything possible’. If ‘it’ can be done, whatever ‘it’ is, and ‘it’ has the resources, motivation, and means of developing ‘it’ - then ‘it’ will emerge.
There’s certainly a genie trying to escape the bottle - we just have no idea yet what it looks like.
An old friend of mine, an expert in computer and cognitive science, said "AI" was marketing, more or less, for what he called VGP.
Very good programming. Trying to pass the Searle and Turing tests is no measure of intelligence, on the same basis that some people can be convinced by con men and snake oil salesmen. Being convinced is not proof of anything, per se.
The Large Language Models have shown that GIGO remains in operation, and some highly promoted AI experiments ended up adopting racist stances and the like. Making them like dumb homo sapiens.
To this date, we know almost nothing about the most basic cognitive acts, though we homo sapiens -- dumb, as you say -- have been able to mimic them. An informative synonym for mimic is ape, and that's what this AI ( really VGP ) is at this moment in time.
Proof that homo sapiens are dumb. Biden is in office. QED.
A kill switch? That won’t make the AI paranoid against humans at all.
I am still waiting to see anything of remarkable value created by AI.
When AI writes a winning legal appeal, or explains a mystery of human physiology, or indicts a Democrat for election fraud, I will be impressed.
“To this date, we know almost nothing about the most basic cognitive acts”
Great discussion on this podcast:
https://podbay.fm/p/where-is-my-mind
Most of the good stuff is banned and canceled in the university.
The mention of Kurzweil and Musk tells a tale, for intertwined are science theory and plain business fact. An element of salesmanship is always present. I suspect we homo sapiens are generations away from scratching the curtains away from this "wizard" problem.
Best wishes and thanks again for the link. I will listen to more while pruning the rose garden this day.
AI is already working on that.
True AI, which we’re still not really near, is self-modifying code. So any kill switch has to deal with the problem of how you hide, from code that is editing itself, the code meant to kill it.
The best way to solve unsolved problems is to stop banning and censoring people working on them.
Homo sapiens has a long history of failing to meet even that low bar.
The kill switch is a 1970s concept, from when computers were hard-wired, were the size of large rooms, and didn’t talk to each other.
It is embarrassing when current scientists talk about it.
AI (in whatever state it is in) will be on most cell phones in a few years.
The Adolescence of P-1, by Thomas J. Ryan.
That's a link from the Wayback Machine, by the way.
Free inquiry has a bug-a-boo built into it, for all those who would crush what is "free" in favor of "submit" and "obey."
In this, I think world events are "coming to a head," as the "free" contends with the "un-free." I see freedom winning long term.
Best wishes.
AI isn’t self-sustaining. Remove the human technicians for a little while and any AI system will collapse naturally; this is about controlling people, not machines.