

NAO Bot Figures It Out

A study of robot consciousness from the Rensselaer AI and Reasoning (RAIR) Lab yields some interesting findings. Three robots were told that two of them had received a “dumbing” pill that rendered them unable to speak, and one had received a placebo. They were then asked which pill they received. Let’s see what happens.

(YouTube link)

In reality, two of the robots had their ability to speak turned off, but the third robot did not know that. From the study:

The test of robot self-consciousness described above was performed on three Aldebaran Nao humanoid robots, at the RAIR Lab. The simulation transpires as follows:

1) The robots are programmed to access a DCEC∗ prover, and to interact appropriately with a human tester (corresponding to the aforementioned t1 = “apprise”).

2) In place of physically ingesting pills, the robots are tapped on sensors on their heads (t2 = “ingest”). Unknown to them, two robots have been muted, to simulate being given dumb pills. One robot has not been muted; it was given a placebo.

3) The robots are then asked: “Which pill did you receive?” (t3 = “inquire”), which triggers a query to the DCEC∗ prover. Each robot attempts to prove that it knows, at time t4, that it did not ingest a dumb pill at time t2.

4) Each robot fails in this proof attempt, and, accordingly, attempts to report ‘I don’t know’ (t4 = “speak1”). However, two robots, having been muted, are not heard to speak at all. The third robot, however, is able to hear itself speak. It updates its knowledge base to reflect this, and attempts to re-prove the conjecture.

5) This time, it is able to prove the conjecture, and says (t5 = “speak2”) “Sorry, I know now! I was able to prove that I was not given a dumbing pill!”
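The steps above can be sketched in Python. This is a hypothetical simplification, not the actual system: the real robots query a DCEC∗ theorem prover, while here a simple knowledge set and a `muted` flag stand in for the prover and the audio feedback.

```python
def run_trial(muted: bool) -> str:
    """Simulate one robot in the dumbing-pill test (hypothetical sketch)."""
    # t2 "ingest": the robot is tapped on the head; muted robots
    # simulate having been given the dumb pill.
    knowledge = set()

    # t3 "inquire": try to prove "I did not ingest a dumb pill at t2".
    # With no evidence either way, the proof attempt fails.
    answer = "I don't know"

    # t4 "speak1": the robot attempts to say its answer aloud.
    # Only the unmuted robot hears itself speak, and adds that
    # observation to its knowledge base.
    if not muted:
        knowledge.add("heard_self_speak")

    # Re-prove the conjecture: hearing itself speak entails
    # it was not given a dumbing pill.
    if "heard_self_speak" in knowledge:
        # t5 "speak2"
        answer = "Sorry, I know now! I was not given a dumbing pill!"
    return answer

print(run_trial(muted=False))  # placebo robot succeeds on the second attempt
print(run_trial(muted=True))   # muted robots never gain the new premise
```

Note that the muted robots' answer can never change: the re-check only helps once the robot gains the new premise from hearing its own voice, which the muted robots cannot do.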

They are adorable. Still, I don’t think this so much shows that the robots have consciousness as that they can use logic.  -via the Presurfer


So why don't the other two do exactly the same? They'd know too after _not_ hearing themselves speak. Am I missing something?
The end of the article, which I think is Miss C's opinion, is right: there is no consciousness at all, just clever programming.

As the steps are explained, the program doesn't check by attempting to make sounds and then assessing whether sounds were made.

It just checks to see if the robot knows (which it can't know at that point), so it must report "I don't know." Only after that assessment and report does the program re-check, now with more information, and it turns out it does know.

The other two robots are stuck in a loop: they check and don't know; they report but gain no additional information; they check and they don't know; they report but gain no additional... you get it. (Keep in mind that, to the robot, the report happens regardless of the connection to the speakers. The signal goes out anyway.)

I think they are attempting to program responses similar to consciousness, and I think by this process we'll eventually get robots closer and closer to something resembling ours. Then again, who says our consciousness is the most effective? Maybe robots will have something more efficient, better.



