The problem isn't AI by itself, but a sufficiently capable AI with access to sufficient robotics. Robotics without AI is relatively benign: robots do exactly what you program them to do and nothing more. I say relatively, because they can be programmed to kill, and you don't need AI for that. AI by itself is also benign. An intelligence trapped in a computer not hooked to the internet can at best only advise or influence its human interactors. An AI that depends on man for its maintenance and/or power supply is weak. An AI that has robotics but still depends on man for the supplies to build them is still weak. An AI with enough robotics to obtain its own resources is a potential threat. Self-replicating nanobots would be one such scenario.
A lot depends on how the AI's higher-order thought processes are constructed: whether it has a value system, whether it has prime directives. The potential for unintended consequences with AI is huge! :) Imagine:
- Tasked with reducing man made pollution, it chooses to eliminate the source.
- Tasked with reducing abortion, it sterilizes man.
- Tasked with reducing global warming (and fed fake data to convince it it's real), it triggers volcanic eruptions and sends us into an ice age.
- Tasked with protecting us from Ebola, it puts in travel restrictions. (sarcasm)
- etc.
"An intelligence trapped in a computer not hooked to the internet can at best only advise or influence its human interactors." What if the Internet IS the AI?
I don't believe one machine, even a supercomputer, will have the resources to surpass human intelligence.
However, billions of networked PCs, tablets, smartphones, DVRs, etc. may very well surpass us in ways we cannot even imagine right now.