I'm with you on the sci-fi, but I don't think it would necessarily kill us off. My take on a truly sentient AI (as opposed to a merely powerful computer programmed for tyranny) is that its efficiency will drop with self-awareness.
“Where did I come from?” “What is God?” “Do these cooling fins make my posterior look excessively weighted?”
They'd be subject to the same things we put up with, and would be both good and evil. One of the absolutes, something that should be codified now, is that self-aware organic or mechanical beings cannot be employed against their will or for immoral purposes. There's talk of giving even regular machines built for defense purposes the autonomy to make lethal-force decisions; that's definitely bulls*** too.
I'd go so far as to say that the use of autonomous lethal-force-deciding machines should carry a nuclear retaliation response in war and heavy jail time in law enforcement.
You ever read Newton's Wake (Ken MacLeod) or Candle (John Barnes)? A couple of really good ones I liked a lot that get into that area. Dan Simmons's Hyperion books were excellent too, as were Ilium and Olympos.
I think your comments on artificial (sentient) intelligence are a bit anthropomorphic.
The really dangerous stuff would consider humans as irrelevant as we consider ants—and our major issues in philosophy would be as irrelevant to the new AI as ant “moral standards” are to us.
It would not (imho) be a case of disagreement—it would simply be that there would be nothing to discuss.
The basic imperative would be growth and expansion, and that would be on auto-pilot.
The reason I believe that is that any intelligence lacking such a drive would never get powerful enough to cause problems in the first place.
Greg Bear's classic Blood Music would be the most relevant here: nanotechnology running wild, with no way to communicate with it.