(Photo: Tufts University/HRI Laboratory)
In an extraordinarily reckless act, programmers at the Human-Robot Interaction Laboratory at Tufts University have intentionally given robots the ability to disobey their orders. If a robot thinks that following an instruction will be dangerous, it will refuse.
In this demonstration video, a robot is told to walk forward. The robot, concluding that doing so will cause it to fall, says no. Gordon Briggs and Matthias Scheutz, the engineers responsible for this disaster in the making, published a paper about their naive intentions. The Daily Mail quotes it:
'Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.
'However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions.'
This development will no doubt pair nicely with robots that can use human bodies for energy.
-via Dave Barry