
Monday 3 August 2015

A robot in New York has just passed a classic self-awareness test called the “wise men puzzle” for the first time

One of the three Nao robots that participated in the assessment has managed to solve the “wise men puzzle”

A researcher at Rensselaer Polytechnic Institute in the US has given three Nao robots an updated version of the classic 'wise men puzzle' self-awareness test, and one of them has managed to solve it.

A King invites the three wisest men in the country to compete for the position of the King’s advisor. He tells them that he will put either a white or a blue hat on each of their heads, and that at least one of the hats will be blue. There are no mirrors in the room, so each man can see the others’ hats but not his own, and they are not allowed to talk to each other. The task is to work out the color of the hat they are wearing using only this information.

I don’t know if you managed to find the solution, but one of the three Nao robots that participated in the test certainly did. Of course, it was an adapted version of the puzzle, customized to artificial intelligence and its limitations.

The robots were told that two of them had been given a “dumbing pill” that rendered them unable to speak (in fact, their volume switches had simply been turned off), while the third had been given a “placebo”. The tester then asked them which pill they had received. Only one of the robots was able to respond, saying: “I don’t know” – the other two stayed silent. On hearing its own voice, it worked out that it must have been the one given the placebo, raised its hand and said: “Sorry, I know now. I was able to prove that I was not given a dumbing pill.”
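Stripped of the hard perception problems, the inference itself is small. Below is a toy Python sketch of it – purely illustrative, not the RPI team's actual system, and the function name and arguments are my own invention. The only rule it encodes is the one the robots were given: a robot that swallowed the dumbing pill cannot produce sound, so a robot that tries to answer and then perceives its own voice can rule that pill out.

# Toy sketch of the "dumbing pill" inference (illustrative only;
# not the researchers' actual system).

def infer_own_pill(tried_to_speak: bool, heard_own_voice: bool) -> str:
    """Rule given to the robots: a robot dosed with the dumbing pill cannot
    make any sound. Hearing your own voice after trying to speak therefore
    rules the dumbing pill out."""
    if tried_to_speak and heard_own_voice:
        return "placebo"   # speech is impossible under the dumbing pill
    return "unknown"       # this sketch treats silence as inconclusive

# The robot that could still speak says "I don't know", perceives its own
# voice, and revises its answer -- the "Sorry, I know now" moment.
print(infer_own_pill(tried_to_speak=True, heard_own_voice=True))   # placebo
# The two muted robots get no confirming evidence and stay stuck.
print(infer_own_pill(tried_to_speak=True, heard_own_voice=False))  # unknown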

You may think this is not the hardest puzzle to solve, but in reality it is extremely difficult for robots. Apart from listening to and understanding the question (which many robots can do), they need to hear their own voice and distinguish it from the others’. Moreover, they need to work out the answer from the very fact that they heard themselves speak.

Self-aware robots may sound like a scary prospect, but the researchers who conducted this test see the results in a positive light.

Selmer Bringsjord of Rensselaer Polytechnic Institute in New York, who led the test, says that by passing many tests of this kind – however narrow – robots will build up a repertoire of abilities that start to become useful. Instead of worrying about whether machines can ever be conscious like humans, he aims to reveal specific, limited examples of consciousness.

John Sullins, a philosopher of technology at Sonoma State University, says: “They try to find some interesting philosophical problem, then engineer a robot that can solve that problem. They’re barking up the right tree.”

Instead, the robots have been programmed to be self-conscious in a specific situation. But it is still an important step towards creating robots that understand their role in society, which will be crucial to turning them into more useful citizens. "We’re talking about a logical and a mathematical correlate to self-consciousness, and we’re saying that we’re making progress on that," Bringsjord told Jordan Pearson at Motherboard.

At the same time, it should be noted that the test was rather limited, so the results only indicate that the robots were self-aware in this specific situation. Just as with the computer that passed the Turing test last year by pretending to be a 13-year-old boy, the results are not definitive, but they are quite promising for the development of smarter robots.

The work, which will be presented at the RO-MAN conference in Kobe, Japan, later this month, highlights the murky waters of artificial consciousness. The wise-men test requires some very human traits.

In any case, artificial intelligence at this stage is very far from being truly self-conscious and human-like, because robots simply cannot handle the volume of information that the human brain can process. But who knows where future advancements in technology and robotics could lead us.

Bringsjord says one reason robots can’t have broader consciousness is that they just can’t crunch enough data. Even though cameras can capture more data about a scene than the human eye, roboticists are at a loss as to how to stitch all that information together to build a cohesive picture of the world.

"This is a basic question that I hope people increasingly understand about dangerous machines," said Bringsjord. "All the structures and all the processes that are linked with performing actions out of nastiness could be present in the robot."

The conference – IEEE RO-MAN – is focused this year on "interactions with socially embedded robots".

Oh, and in case you were wondering, the answer to the King's original 'wise men test' is that all the participants must have been given blue hats, otherwise it wouldn't have been a fair contest. Or at least, that's one of the solutions.
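For the curious, that answer can be checked by brute force. Here is a small Python sketch – my own illustration, nothing to do with the research above – using the usual possible-worlds treatment of the puzzle: enumerate every hat assignment with at least one blue hat, and assume that in each round any wise man who can already work out his own hat colour announces it, with the announcements and silences visible to everyone (the article's wording has the men stay silent, but the fairness argument rests on the same reasoning about what the others could or could not have deduced). The all-blue assignment turns out to be the only one in which all three men crack it in the same round, which is the sense in which it is the only fair set-up.

from itertools import product

COLORS = ("blue", "white")
N = 3  # three wise men

# Possible worlds: every hat assignment consistent with the King's
# announcement that at least one hat is blue.
WORLDS = [w for w in product(COLORS, repeat=N) if "blue" in w]

def knows_own_hat(agent, world, candidates):
    """True if, in `world`, `agent` can deduce his own hat colour: every
    candidate world showing him the same two other hats agrees on his hat."""
    seen = [w for w in candidates
            if all(w[i] == world[i] for i in range(N) if i != agent)]
    return len({w[agent] for w in seen}) == 1

def rounds_to_solve(world, max_rounds=4):
    """Simulate public rounds of reasoning. Each round, note who could
    announce his own hat colour; that pattern of announcements and silences
    becomes common knowledge and prunes everyone's candidate worlds."""
    candidates = list(WORLDS)
    solved_at = {}
    for r in range(1, max_rounds + 1):
        pattern = {a: knows_own_hat(a, world, candidates) for a in range(N)}
        for a, knows in pattern.items():
            if knows:
                solved_at.setdefault(a, r)
        if len(solved_at) == N:
            break
        # Keep only the worlds in which everyone would have behaved the same way.
        candidates = [w for w in candidates
                      if all(knows_own_hat(a, w, candidates) == pattern[a]
                             for a in range(N))]
    return solved_at

for w in WORLDS:
    print(w, rounds_to_solve(w))
# Only ('blue', 'blue', 'blue') has all three wise men solving the puzzle in
# the same round (round 3) -- the configuration in which the contest is fair.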

