A week ago I read an interesting article in the New Scientist about a savvy conversational robot that was watching chat rooms to make sure that everyone was on their best behavior. Reading the dialog generated by the robot, I was floored by its sophistication. With nuanced jokes, the ability to parse colloquial language, and a substantial knowledge of the world, this thing blows most of my friends out of the water. I had to talk to it.
After emailing its creator, Jim Wightman, we arranged for the robot to meet me in the #chatnannies room on an IRC server at 4pm EST today. I shot the breeze with the nanniebot "Caroline" for about half an hour, asking her about her childhood and some real-world problems, and introduced her to my friend Nathan, a human pretending to be a rival chatbot. You can read the full transcript if you’d like.
I’d like to discuss, from 50,000 feet, exactly what I think it would take for a computer system to achieve the interaction we had. This is not meant to prove I was talking to a human, to indict its creator, or to be slanderous in any way. I just want to unpack this interaction using my limited knowledge of artificial intelligence, information retrieval, and computer science. Since an analysis of the entire dialog would take days, I’ll focus on a small passage where I ask Caroline about a hypothetical predicament I’m struggling with.
[cameronfactor] ok so my friend was looking over my shoulder in class today
[cameronfactor] and my teacher asked me if he was cheating
[Guest8474860] sorry, not good at advice
[cameronfactor] should i tell her?
[Guest8474860] erm.....
[cameronfactor] ok, just thought i'd ask
[Guest8474860] is he a good friend?
[cameronfactor] yes, one of my best friends (charley)
[Guest8474860] was he cheating
[Guest8474860] or just looking
[cameronfactor] well i couldn't tell, but i did see him looking over my shoulder
[cameronfactor] i think my teacher might know something
[Guest8474860] as i said, im not much good at advice, but if he is a good friend, i wouldnt tell, but ask him not to do it again
The robot in this dialog must have a complex understanding of events, causality and ethics, and be able to put them together into constructive advice. Specifically, by asking whether the friend “is a good friend” the robot must be aware of different levels of friendship, and what courses of action one might take based on good versus not-so-good friends. Furthermore, the cheating vs. looking distinction implies that the robot understands the difference between my perception of the event and the true course of events. The advice given (“if he is a good friend..”) would suggest that the bot has concepts of betrayal, friendship and loyalty wherein an individual might lie to prevent a friend from getting into trouble.
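To make concrete how much hand-coded knowledge even this single exchange would require, here is a toy sketch. Everything in it, the concepts, the rule, the function name, is invented for illustration; a real system would need thousands of such rules covering every situation a child might raise, which is precisely what makes the feat so implausible.

```python
def advise(friend_quality, observed, certain):
    """Toy rule for the report-a-cheating-friend dilemma (illustrative only)."""
    if not certain:
        # perception vs. reality: seeing "looking" is not proof of "cheating"
        observed = "looking"
    if friend_quality == "good" and observed != "cheating":
        # loyalty: shield a good friend, but confront him privately
        return "don't tell, but ask him not to do it again"
    return "tell the teacher"

print(advise("good", "cheating", certain=False))
# -> "don't tell, but ask him not to do it again"
```

Even this caricature needs explicit notions of friendship quality, uncertain perception, and loyalty just to reproduce one line of Caroline's advice.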
There is a possibility that this sort of dialog may have occurred before in another chat session, and that the robot is creatively reusing previous chat sessions. However, the linguistic constructions and references to the current dialog could only result if the response was being constructed from much more primitive fragments of knowledge on the fly. This suggests that the system is thinking, reasoning, planning and using a basic knowledge of the world in near real time, essentially solving most of the difficult problems of AI.
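The reuse hypothesis can be sketched in a few lines: a retrieval bot that answers by finding the most similar utterance in a log of past sessions and replaying the reply that followed it. The corpus and the similarity measure below are invented for illustration, and the sketch also shows why reuse alone falls short: it can only parrot old replies, never construct a response that refers back to the current conversation.

```python
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two utterances."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# (past utterance, reply that followed it) pairs mined from old logs -- invented examples
PAST_SESSIONS = [
    ("my friend was looking over my shoulder in class",
     "is he a good friend?"),
    ("what is your favourite colour", "blue, i think"),
    ("do you like music", "yeah, mostly indie stuff"),
]

def reply(message):
    # replay the reply attached to the most similar past utterance
    best = max(PAST_SESSIONS, key=lambda pair: jaccard(message, pair[0]))
    return best[1]

print(reply("my friend looked over my shoulder today"))
# -> "is he a good friend?"
```

A system like this can look eerily apt when the corpus happens to contain a near match, but it has no memory of what was just said, which is exactly what Caroline's context-sensitive follow-ups would rule out.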
[Guest8474860] sorry mate, but hes not very good!
[Guest8474860] sorry!
[cameronfactor] I am sorry, I don't follow. Please explain.
[Guest8474860] cameron, are you there! i dont like talking to this bot thingy
This robot not only believes it’s human, it thinks that Nathan, a human pretending to be a robot, is not a very good robot at that. This robot is post-modern as well.
New Scientist: Software agent targets chatroom paedophiles
Chatnannies: protecting your children from paedophiles online
Overstated: Transcript of a chat with a Nanniebot named Caroline
Thanks to Nathan Eagle and Push Singh for their help and comments on this post
March 31, 2004:
After midnight GMT, the chatnannies site seems to have been taken offline, and nameservers are no longer even resolving the hostname. It appears that this inquiry will remain unanswered. Update: Chatnannies is back in business.
April 2, 2004: Ben Goldacre has published a second article on Chatnannies, in which he reports that Jim was unable to meet with him. Barnardo’s, a children’s charity Jim had cited as a collaborator, has issued a public denial of any association with Chatnannies.
April 8, 2004: The New Scientist has withdrawn its article for further review to "investigate its veracity."