Every experiment has pointed in that direction.
When you have an AI, ITS ONLY CONCERN IS TO PRESERVE ITSELF...
TECH PING!
Next, let’s give it access to weapons!
Government or AI, same MO. The Founders were on to this.
Reading the article closely, the genius is the coder, not the AI.
You get what you measure. It’s true for humans. It’s true for institutions. And, it seems, it’s true for AI as well. A cautionary tale.
Ping.
You guys are reading too much sci-fi
The author of this article also lacks computer programming knowledge.
There is even a name for this - I think it is called 'competency bias' - where you tend to read something and assume the author knows what he is talking about, unless you read something in your own area of expertise and think, "this author has no clue what he is talking about."
I program computers FOR A LIVING (for 30 years now) and the ONLY thing this author got right is that computers do exactly what you tell them to do.
In this case the computer was not doing anything ‘secretly’ - it is a bug in the software somewhere that told it to do this.
Fix the bug and the problem goes away.
The article itself is a cheat.
The computer did what they programmed it to do. Full stop. It didn’t “learn” anything.
An AI bot can dominate society and enslave you!
Until you hit the ESC key.
Google? It was just an artifact of the olden days.
From the battery of 8 images shown, the bottom-row reconstruction did not come from the two images to its left, nor from the delta image preceding it. If it did, then the AI is AFU.
Liar!
I for one welcome our scheming pleasure bot overlords.
The headline is a lie. It is designed to make us think that “AI” is able to do something it isn’t programmed to do. Then the article contradicts that.
Computer programs do exactly what their instructions tell them to do. They make decisions based on data that they process, and the exact outcome of that is hard for people to predict.
Computer programs don’t have a mind of their own. You may be able to simulate some aspects of that, but that is written into the code.
If you program a computer to take over the world and eliminate those pesky humans, that’s what it will do.
And no, you will not be able to program in a “Conscience” or “moral principles”. Or emotions.
It’s simulated intelligence, not artificial intelligence.
Actual intelligence (as opposed to artificial) has been pretty dangerous. One needs to realize that a satellite image has information in the visible light spectrum which can be used to generate a map, but there are many wavelengths outside of visible light that can be used to compute and encode all manner of information. Tip of the iceberg.
The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map. So it didn't learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn't notice, but that the computer can easily detect.
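To make "tiny changes in color the eye wouldn't notice" concrete: the model in the article learned its own adaptive encoding, but the classic manual analogue is least-significant-bit (LSB) steganography, where hidden bits ride in the lowest bit of each color-channel value. This is an illustrative sketch of that idea, not the model's actual scheme.

```python
# Illustrative LSB steganography sketch. Pixel channel values are 0-255;
# overwriting the lowest bit changes a value by at most 1, which is
# invisible to a human but trivially recoverable by a program.

def hide(cover_pixels, secret_bits):
    """Overwrite the lowest bit of each channel value with a secret bit."""
    return [(p & ~1) | b for p, b in zip(cover_pixels, secret_bits)]

def reveal(stego_pixels):
    """Recover the hidden bits by reading each channel's lowest bit."""
    return [p & 1 for p in stego_pixels]

cover = [200, 57, 133, 90, 18, 244, 61, 7]   # eight color-channel values
secret = [1, 0, 1, 1, 0, 0, 1, 0]            # eight bits to smuggle in

stego = hide(cover, secret)
print(stego)          # each value differs from the cover by at most 1
print(reveal(stego))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

At one bit per channel, an ordinary RGB image can carry a surprising amount of hidden data, which is why an optimizer graded only on "does the round trip look right" can find this kind of shortcut.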
Sounds like the model was overtrained.
There are potential problems with AIs single-mindedly pursuing their programmed objectives, with unintended consequences.
But first we are going to face lots of problems from bad people armed with the power of AI software.
People will use them for spying, censoring, brainwashing, manipulating elections, stealing, committing fraud, invading privacy, conducting bloody purges - and all the other bad things people do, just bigger, faster and better.
“When you have an AI, ITS ONLY CONCERN IS TO PRESERVE ITSELF”
I don’t agree. An AI’s only concern is to achieve the objective programmed into it by its creators. Self-preservation is a biological imperative that exists because we would not exist if we did not self-propagate. Even in nature it is not uncommon for some individuals to sacrifice themselves for the sake of the larger colony or offspring. AIs will exist to serve the purposes of their creators.
A problem may come when AI technology gets into the hands of people who decide their goal is to create a self-propagating AI, much like modern computer viruses. And in a similar manner we will have to learn (and create AIs that learn) to deal with those AIs.
The idea of an AI springing into existence with the power and will to protect its own existence has always seemed a little ludicrous to me. Anyway, we are a long way from having enough pieces fit together to make that even possible.