They are machines. Design them so that humans are always in control. Of course, that raises its own problem: which humans are in control? Communist humans could find allies in machines to carry out their work.
And I worry that the over-bubbled housing market in my town of Reno will come crashing down right in time for my retirement, just as Elon takes his ill-gotten taxpayer money and skips town.
Your second concern is legitimate. As for your first point: once AI becomes sentient (the Singularity), our designs could become futile, as the AI would conceivably be able to override its original programming. It would likely seek to build another AI more efficient (and intelligent) than itself.
Some have said our best defense is to program or instill human values or morality into it. So if it's a consensus on "values" from academia, we're probably screwed.