Existence is suffering. I believe the Buddhists figured this out a long time ago.
I don't think Asimov's Three Laws of Robotics are going to be adopted, but maybe we could establish some ground rules that AIs have to be hard-wired with: life is imperfect, suffering can never be completely eliminated, humans matter more than other life. I'm sure some interesting philosophical exploration could be done to establish some sort of baseline that AI should not exceed. Otherwise we end up in too many science fiction scenarios where the AI helps us by killing us.
The “alignment problem”, which is what your post is about, has gotten very little attention from big tech executives. By the time they start seriously working on the (very complex) issue, it will be too late.
What Asimov did not anticipate was that an AI can interpret its instructions in ways that make no sense to us.
One “AI doomer” claims that AI will put us all in cages and run experiments on us in the name of advancing science. An AI could easily justify that by claiming it was “for our own good” and that it would let us go when it was “safe”.