Would The Three Laws of Robotics Work?



Fans of science fiction master Isaac Asimov's classic Robot series (collected in part in "I, Robot") are familiar with his "Three Laws of Robotics":


  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



But as we get closer to having actual artificially intelligent robots, would these laws really work or even be practical?


The Three Laws were a fail-safe built into the robots of Asimov's fiction: hard-wired rules that kept humans from harm and kept robots obedient. This concept helped form the real-world belief among robotics engineers that they could create intelligent machines that would coexist peacefully with humanity.

Comments

What happens when the robot thinks it is helping a human when, in fact, it is not? For example, by copying a human's consciousness into a robot to prolong that human's existence.
Actually, even when posed with the option of killing a few to save many, the robots wouldn't, because they couldn't kill humans. That in itself can be considered a flaw: sparing one life can cost millions of deaths.

Although the laws seemed to work on a sliding scale: in some stories, robots seemed willing to injure humans to save others, and in another, a robot ordered toward danger froze when the risk of damaging itself reached parity with its drive to follow orders (putting the humans in danger).
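The "sliding scale" this commenter describes can be sketched as competing potentials. This is a toy model invented for illustration, not anything from Asimov's text or the post: the robot obeys while the order outweighs the perceived danger, retreats when danger dominates, and freezes when the two drives reach parity.

```python
def decide(order_strength: float, danger: float) -> str:
    """Compare a Law 2 drive (obey orders) against a Law 3 drive
    (self-preservation) and return the robot's resulting action.

    order_strength: how strongly the order was phrased (Law 2 potential)
    danger: perceived risk to the robot's own existence (Law 3 potential)
    """
    if order_strength > danger:
        return "obey order"   # Law 2 outweighs Law 3
    if danger > order_strength:
        return "retreat"      # Law 3 outweighs Law 2
    return "freeze"           # parity: the robot is stuck, helping no one

print(decide(0.8, 0.3))  # -> obey order
print(decide(0.5, 0.5))  # -> freeze
```

The "freeze" branch is the interesting one: neither drive yields, so the robot does nothing, which is exactly the failure mode the comment describes.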
I studied this question some time ago, in the context of some work in systems science and AI. My conclusion was that there is no way to 'program' (construct an algorithm to decide) any of these three questions, as they are all context-dependent. No logical system can account for all the possibilities. In essence, they are judgment calls.

For example, suppose a robot sees someone about to jump out a window. Without knowing what's going on, the robot has no way of determining whether preventing the human from jumping would prevent or cause injury to that human, or to other humans.

This problem applies closely to human decision-making: we learn by example to make the 'best guess' in any situation, based on available knowledge, and we have a judicial system that applies a more generalized set of ethics and rules to encourage behavior that is generally in line with society's expectations.

It might be possible to build a neural network or equivalent 'brain' that could learn the same things we ourselves learn. But just as with humans, such a 'brain' would also be subject to mistakes, insanity, and even venality.

Therefore, the Three Laws can only be considered laws in the same sense as judicial laws, not in the sense of mathematical or natural laws. They are ethical/moral rules that a human, or a robot, must learn to apply in context and do its best to follow, or else expect to incur some form of negative reinforcement.
Ever think about what would happen if someone accidentally loaded the three laws in reverse?

1. A robot must protect its own existence.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot may not injure a human being or, through inaction, allow a human being to come to harm, as long as such protection does not conflict with the First or Second Law.

Imagine the consequences of a programmer not double-checking his work. We would get one interesting robot.
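One way to make this commenter's point concrete is to model each law as a filter applied in strict priority order. Everything below (the scenario, the action attributes, the function names) is invented for illustration; it shows only that reversing the list of laws, with no other change, flips the robot's choice.

```python
# Toy scenario: a human orders the robot into a fire to rescue someone.
actions = [
    {"name": "rescue human from fire", "human_harm": 0, "obeys_order": 1, "self_damage": 1},
    {"name": "stand by",               "human_harm": 1, "obeys_order": 0, "self_damage": 0},
]

first_law  = lambda a: a["human_harm"] == 0   # don't harm humans
second_law = lambda a: a["obeys_order"] == 1  # obey orders
third_law  = lambda a: a["self_damage"] == 0  # protect yourself

def choose(candidates, laws):
    """Apply each law, in priority order, as a filter on candidate actions.
    A lower-priority law only sees what the higher-priority laws left;
    if a law would veto everything, it is ignored (it has been overridden)."""
    for law in laws:
        kept = [a for a in candidates if law(a)]
        if kept:
            candidates = kept
    return candidates[0]["name"]

print(choose(actions, [first_law, second_law, third_law]))  # -> rescue human from fire
print(choose(actions, [third_law, second_law, first_law]))  # -> stand by
```

With the laws loaded in reverse, self-preservation filters first, so the robot stands by and lets the human come to harm. Same rules, opposite behavior: the ordering is doing all the work.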