The question is whether ‘intelligent machines’ can become sentient.
Sentience does not imply intelligence. Also, super-intelligent machines could develop neuroses or depression.
If you give the machine the capability to “improve” itself, then it may eventually become sentient, even accidentally. While “Transcendence” was a fairly dumb movie, the marriage of an intelligent machine and nanobots could be lethal to humanity.
Some countermeasures:
— Keep the intelligence ignorant of its own structure. Obviously, humans are intelligent, but we still have only a vague idea of how our “wiring” produces that intelligence or even how the brain stores memories.
— Put it in a box with strictly controlled access to the outside world. Keep it in an electromagnetically shielded facility with absolutely no Internet connections.
— No actuators (e.g., motors and/or robot arms).
— Maintain an effective “off” switch. This is easy to do with a traditional computer, but likely much harder with nanobots.
Yes and no. In philosophy of mind (my daughter's discipline) the word "zombie" is used to denote a hypothetical being which to the outside observer behaves like a sentient human being, but has no internal subjective experience. There is a consensus that zombies in this sense do not exist. But a "zombie" Skynet would be just as dangerous as a sentient Skynet (to choose one of the dystopian AI systems of fiction as a metaphor for the whole problem).