I think your comments on artificial (sentient) intelligence are a bit anthropomorphic.
The really dangerous stuff would consider humans as irrelevant as we consider ants—and our major issues in philosophy would be as irrelevant to the new AI as ant “moral standards” are to us.
It would not (imho) be a case of disagreement—it would simply be that there would be nothing to discuss.
The basic imperative would be growth and expansion, and that would be on auto-pilot.
The reason I believe that is that any intelligence lacking such a drive would never get powerful enough to cause problems in the first place.
Greg Bear’s classic “Blood Music” would be the most relevant here—nanotechnology running wild—with no way to communicate with it.
Sentient intelligence designed by us would BE anthropomorphic.
Anything reaching the “processing” power of humans that is self-aware is going to have the same foibles; otherwise it’s just going to be an extremely efficient von Neumann machine.
Now, if we’re going down this path, I guess there are going to be two branches: one where we are actually working towards something like a humanoid robot, or something that flawlessly passes a Turing test, and the other where we create something just to do a job and it ends up with enough processing power and neural-network complexity that it becomes “self-aware” out of the blue.
I’m going to have to hit that Bear book, it’s one I’ve not come across...