Fans of science fiction master Isaac Asimov’s classic robot stories (collected in “I, Robot”) and Foundation series are familiar with his “Three Laws of Robotics.”
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But as we get closer to having actual artificially intelligent robots, would these laws really work or even be practical?
The Three Laws were a fail-safe built into robots in Asimov’s fiction. Hard-wired into every robot, the laws protected humans from harm and guaranteed obedience. This concept helped form the real-world belief among robotics engineers that they could create intelligent machines that would coexist peacefully with humanity.
Oh, wait, wrong author.
Even in the fiction, though, the laws operated on a sliding scale. In some stories, robots seemed willing to injure one human to save others. In another, a robot ordered toward danger advanced until the risk of damaging itself reached parity with its drive to follow orders, at which point it froze in place, leaving the humans in danger.
For example, suppose a robot sees someone about to jump out a window. Without knowing the context, the robot has no way of determining whether preventing the jump would prevent injury or cause it, to that human or to others.
This problem mirrors human decision making: we learn by example to make the best guess available in any situation, based on available knowledge, and we back that up with a judicial system that applies a more generalized set of ethics and rules to encourage behavior broadly in line with society's expectations.
It might be possible to build a neural network or equivalent 'brain' that could learn the same things we ourselves learn. But just as with humans, such a 'brain' would also be subject to mistakes, insanity, and even venality.
Therefore, the Three Laws can only be considered laws in the same sense as judicial laws, not in the sense of mathematical or natural laws. They are ethical and moral rules that a human, or a robot, must learn to apply in context, doing its best to follow them or expecting to incur some form of negative reinforcement.
1. A robot must protect its own existence.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot may not injure a human being or, through inaction, allow a human being to come to harm, as long as such protection does not conflict with the First or Second Law.
Imagine the consequences of a programmer not double-checking their work: we would get one very interesting robot.
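To see how much the ordering matters, here is a minimal sketch of one way law priority might be encoded. All names and the action encoding are hypothetical, not from Asimov or any real robotics system: each law is a predicate that flags a violation, and the robot picks the action whose worst violation sits lowest in the priority list.

```python
# Hypothetical sketch of law-priority resolution. Each predicate returns
# True if the given action would violate that law.

def violates_first(action):   # injures a human (or allows harm)
    return action["harms_human"]

def violates_second(action):  # disobeys a human order
    return action["disobeys_order"]

def violates_third(action):   # endangers the robot itself
    return action["harms_self"]

def choose(actions, laws):
    """Pick the action whose highest-priority violated law is least
    important. `laws` is ordered from highest to lowest priority;
    a lower rank means a worse violation."""
    def worst(name):
        for rank, law in enumerate(laws):
            if law(actions[name]):
                return rank
        return len(laws)  # violates nothing at all
    return max(actions, key=worst)

# Two candidate actions for a robot ordered into a dangerous area:
actions = {
    "obey_and_enter_danger":
        {"harms_human": False, "disobeys_order": False, "harms_self": True},
    "refuse_and_stay_safe":
        {"harms_human": False, "disobeys_order": True,  "harms_self": False},
}

asimov    = [violates_first, violates_second, violates_third]
scrambled = [violates_third, violates_second, violates_first]

print(choose(actions, asimov))     # obeys: the Second Law outranks the Third
print(choose(actions, scrambled))  # refuses: self-preservation now wins
```

With Asimov's ordering the robot sacrifices itself to obey; with the scrambled list above, the very same situation produces a robot that protects itself and ignores the order. Nothing in the code is "wrong" in either case, which is exactly why the ordering is the part that needs double-checking.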