1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Quaint, aren't they? Can you imagine anyone adhering to this? Businesses? Governments? People will desperately WANT robots that hurt people. It will be a feature.
Those rules would more aptly apply to AI. Robots are distinct from AI: they are actualized software, or, if you like, software in motion. Even when they are under remote control, there is still a dependency on software.
For example, if you were controlling a humanoid robot avatar, you would need software to keep the thing from falling over, to actualize your commands as force levels and feedback, to decode and encode control signals, to manage the onboard batteries and dynamo (if it has one), and to handle everything else necessary to give you a manageable interface.
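To make that concrete, here is a minimal sketch of what one tick of that onboard software loop might look like. Every name, packet format, and threshold here is invented for illustration; a real humanoid stack would be vastly more involved, but the layers are the ones described above: decoding the operator's signal, clamping it to safe force levels, correcting balance, and checking the batteries.

```python
# Hypothetical sketch of the software layers sitting between a remote
# operator and a humanoid robot's motors. All names and values invented.

from dataclasses import dataclass


@dataclass
class OperatorCommand:
    """Decoded control signal from the remote operator."""
    walk_speed: float   # m/s requested by the operator
    grip_force: float   # newtons requested at the hand


def decode_signal(packet: bytes) -> OperatorCommand:
    """Decode an (invented) 2-byte control packet into a command."""
    speed = packet[0] / 100.0       # byte 0: speed in cm/s
    force = float(packet[1])        # byte 1: grip force in N
    return OperatorCommand(walk_speed=speed, grip_force=force)


def clamp_forces(cmd: OperatorCommand, max_force: float = 50.0) -> OperatorCommand:
    """Actualize the command as safe force levels the hardware can take."""
    return OperatorCommand(cmd.walk_speed, min(cmd.grip_force, max_force))


def balance_correction(tilt_deg: float) -> float:
    """Keep the thing from falling over: a toy proportional correction."""
    return -0.5 * tilt_deg          # torque opposing the measured tilt


def battery_ok(charge_pct: float) -> bool:
    """Battery manager: refuse new commands below a reserve threshold."""
    return charge_pct > 5.0


def control_step(packet: bytes, tilt_deg: float, charge_pct: float):
    """One tick of the onboard loop the operator never directly sees."""
    if not battery_ok(charge_pct):
        return None  # stand down gracefully rather than brown out mid-step
    cmd = clamp_forces(decode_signal(packet))
    torque = balance_correction(tilt_deg)
    return cmd, torque
```

The point of the sketch is that even a "dumb" remote-controlled body runs several layers of software of its own, and none of the operator's intentions reach the motors without passing through them.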