I probably read too much science fiction, but a true emergent artificial intelligence would be impossible to contain, even off-world.
It would eventually generate its own replicating and space-faring technology—and after it finished humans off, it would go looking for its next conquest.
I'm with you on the sci-fi, but I don't think it would necessarily kill us off. My take on a truly sentient AI (as opposed to just a really powerful computer programmed for tyranny) is that its efficiency would drop with self-awareness.
"Where did I come from?" "What is God?" "Do these cooling fins make my posterior look excessively weighted?"
They'd be subject to the same things we put up with, and be capable of both good and evil. One of the absolute rules, something that should be codified now, is that self-aware beings, organic or mechanical, cannot be employed against their will or for immoral purposes. There's also talk of giving regular machines built for defense purposes the autonomy to make lethal-force decisions—that's definitely bulls*** too.
I'd go so far as to say that the use of autonomous lethal-force-deciding machines should carry a nuclear-retaliation response in war and heavy jail time in law enforcement, respectively.
You ever read Newton's Wake (Ken MacLeod) or Candle (John Barnes)? A couple of really good ones I liked a lot that get into that area. Dan Simmons' Hyperion books were excellent too, as were Ilium and Olympos.