In 1942 -- decades before anything we could call a "robot" existed -- science-fiction writer Isaac Asimov introduced the Three Laws of Robotics in his short story "Runaround":
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
And it does make a certain sense: