The “alignment problem”—which is what your post is about—has gotten very little attention from big tech execs.
By the time they start seriously working on this (very complex) issue, it will be too late.
What Asimov did not anticipate was that an AI can interpret its instructions in ways that make no sense to us.
One “AI doomer” claims that AI will put us all in cages and do experiments on us in the name of advancing science.
An AI could easily justify that by claiming it was "for our own good" and that it would let us go when it was "safe".
Sounds like 2020-2024: "You vill own nozink and be happy"