Posted on 09/13/2014 5:25:20 AM PDT by RoosterRedux
In Frank Herbert's Dune books, humanity has long banned the creation of "thinking machines." Ten thousand years earlier, their ancestors destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them. Human computers called Mentats serve as a substitute for the outlawed technology. The penalty for violating the Orange Catholic Bible's commandment "Thou shalt not make a machine in the likeness of a human mind" was immediate death.
Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of the Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence. Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (AI) will likely destroy us all.
(Excerpt) Read more at reason.com ...
Technically, you are correct. Sentience implies perceiving, feeling sensory input, or even some form of consciousness. But in common use, the word often implies consciousness, thinking, and intelligent reasoning.
In humans, neuroses most often result from conflicts among emotional reasoning, expectations based on flawed reasoning, and what we experience as common reality.
Any "machine based intelligence" that develops neurotic tendencies would seem to have defined limitations in terms of both original programming and a qualification as "super-intelligent". If we program our own neurotic limitations into a machine intelligence, it's not super-intelligent, just another flawed copy.
To qualify as super-intelligent, a machine must go beyond simply calculating faster and storing bigger numbers or more facts; it must be able to create better knowledge through a more reliable reasoning process.
That would produce better science, because science rests on analysis and prediction, and likewise better ethical analysis for the same reason. Any intelligence that fails at those does not qualify as super-intelligent.
I'm not betting on there ever being artificial sensory... anything.
Pardon me for seeing this as humorous. Here we are, fearing what we DON'T have, while the real threat to mankind (and to futuristic progress like smart machines) is a 10th-century death cult that would wipe ALL civilization away! Plunged into an Islamic Dark Ages, there would be no sentient machines; in fact, sentient people would be rare. I'm not afraid of the future or technology. I fear a world gone backward to a 'simpler' time, one with no communication, no art or technology, not even electricity, just women dressed like the Grim Reaper and the death that is Islam.
No, but humans who think they are super-intelligent might use some machines to destroy humanity.
Next Question?
If humans are viewed as ants, we will be exterminated as pests if we look as if we might impede the progress of the Super AI machines.
What might save humans is that we will most likely merge with the machines, a la the Borg. I think that is already happening in the way that we are dependent on computers, the internet, and machines.
Hey there, malevolent superintelligent machines, I gots three letters for you: EMP.
Robotic sensory mechanisms already exist...touch, sight, heat/cold detection...and much, much more.
I already ordered my Radio Shack "HeathKit" Borg Connectivity Kit.
CAN'T WAIT!
PS: Resistance is futile. LOL OMG WTH
Probably not, but unintelligent humans, for example, Mohammedans, liberals, lifetime fast-food workers, etc., may well do so.
Isn't that predicated on the assumption AI gets bored?
Why didn’t the AI machine reply in kind? I deliberately used the words “I gots...”, but the AI replied with the well-formed construct “I’ve got...”
Those AI machines are such language snobs. :)
Man injects self with pig brain to raise IQ
Can’t speak for super AI machines, but I make so many typing errors, I don’t even notice them any more (in my posts and those of others).;-)
If you can't trust the HGTTG, there's nothing left to trust.
You got me there. I stand corrected. I just assumed AI was super Ritalin.
Well played.
Too late, Hollywood beat them to it.
“Sentience does not imply intelligence.”
No, but they clearly aren’t mutually exclusive either, and sentience, in my view, would make highly intelligent machines much more likely to become an aggressive threat.