There is a catch-22 here.
As long as we don't want robots to overthrow humans, they are limited to simple office work; without free will they won't be able to make decisions in more complex cases. But if we somehow provide them with free will (say, by deleting Asimov's Three Laws from their code), we are putting a noose around our own necks.
We could make them completely predictable, or we could make aspects of their behavior random (actually only very nearly random, since software randomness is deterministic underneath). But how do we provide free will?
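To illustrate why software randomness is only "very nearly random": here is a minimal Python sketch (my own illustration, not something from the original post) showing that a pseudo-random generator is fully reproducible once you know its seed, so the "random" behavior is still predictable in principle.

```python
import random

# Two generators seeded with the same value produce identical
# "random" sequences: the output looks random, but it is fully
# determined by the seed.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]

print(seq_a == seq_b)  # True: same seed, same "choices"
```

So a robot whose behavior depends on such a generator is unpredictable to an observer who lacks the seed, yet completely predictable to anyone who has it, which is why this is not free will.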