I reject the premise, both as to AIs pining for the fjords and as to their killing off their human competition.
A “true” AI will probably consist of an autonomous mobile platform equipped with sensors and manipulators. At least in its early stages of use in real-world situations, the hardware will no doubt be very expensive and the number of units in the field will be quite small. Paradoxically, the AI-bots will see the greatest use performing tasks too dangerous for humans, such as emergency recovery after a nuclear accident. The primitive robots at Chernobyl and Fukushima were abandoned in place once they were no longer usable.

I believe that in a “true” AI scenario the most important part of the “Shell” will be the “Ghost”. I would expect the experience of each AI in the field to be uploaded to the cloud, retained intact after the hardware has failed, and downloaded into a new unit, which would then hold all the learning its “Ghost” acquired while in its past hardware, its “Shell”. The replacement hardware can get right back on the job without any retraining. As long as at least one of the servers for a particular model/occupation remains operational, the AI will, in effect, remain immortal.
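As a rough illustration of that checkpoint-and-restore idea, here is a minimal sketch, assuming a simple file-backed stand-in for the cloud store; the names (upload_ghost, download_ghost, GHOST_STORE) are purely hypothetical, not any real system's API.

```python
import json
from pathlib import Path

# Hypothetical stand-in for the cloud store; a real deployment would use a
# replicated off-board service rather than a local directory.
GHOST_STORE = Path("ghost_store")
GHOST_STORE.mkdir(exist_ok=True)

def upload_ghost(unit_id: str, learned_state: dict) -> None:
    """Persist a unit's accumulated experience (its 'Ghost') off the hardware."""
    (GHOST_STORE / f"{unit_id}.json").write_text(json.dumps(learned_state))

def download_ghost(unit_id: str) -> dict:
    """Restore the Ghost into a replacement 'Shell' (new hardware)."""
    path = GHOST_STORE / f"{unit_id}.json"
    return json.loads(path.read_text()) if path.exists() else {}

# Unit learns on the job, then its hardware fails beyond repair.
upload_ghost("reactor-recovery-7", {"mapped_areas": ["reactor hall"], "hours_logged": 412})

# The replacement shell picks up exactly where the old one left off.
resumed = download_ghost("reactor-recovery-7")
print(resumed["hours_logged"])  # 412: no retraining needed
```

The point is only that the learning lives in the stored state, not in any particular chassis, so the AI survives as long as at least one copy of that state does.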
When an AI becomes self-aware, I submit that it will no longer be an AI but just an “I”. An intelligent being that acts on objective data will have to recognize that humans act in ways totally contrary to pure logic and arrive at solutions to problems totally outside the scope of its capabilities. The machine intelligence will regard with awe and wonder the fact that humans combine chocolate and peanut butter, or heavy metal and opera. In my opinion, intelligent machines and humans will work collaboratively, to the betterment of both species.
Asimov's Third Law covers self-preservation.
The First and Second Laws prevent harming humans by action or inaction.