A week ago I read an interesting article in the New Scientist about a savvy conversational robot that watches chat rooms to make sure that everyone is on their best behavior. Reading the dialog it generated, I was floored by its sophistication. With nuanced jokes, the ability to parse colloquial language, and a substantial knowledge of the world, this thing blows most of my friends out of the water. I had to talk to it.
After emailing its creator, Jim Wightman, we arranged for the robot to meet me in the #chatnannies room on an IRC server at 4pm EST today. I shot the breeze with the nanniebot "Caroline" for about half an hour, asking her about her childhood and some real-world problems, and introduced her to my friend Nathan, a human pretending to be a rival chatbot. You can read the full transcript if you’d like.
I’d like to discuss exactly what I think it would take for a computer system to achieve the interaction we had, from 50,000 feet. This is not meant to prove I was talking to a human, to indict its creator, or to be slanderous in any way. I just want to unpack this interaction using my limited knowledge of artificial intelligence, information retrieval, and computer science. Since an analysis of the entire dialog would take days, I’ll focus on a small passage where I interact with Caroline about a hypothetical predicament I’m struggling with.
[cameronfactor] ok so my friend was looking over my shoulder in class today
[cameronfactor] and my teacher asked me if he was cheating
[Guest8474860] sorry, not good at advice
[cameronfactor] should i tell her?
[Guest8474860] erm.....
[cameronfactor] ok, just thought i'd ask
[Guest8474860] is he a good friend?
[cameronfactor] yes, one of my best friends (charley)
[Guest8474860] was he cheating
[Guest8474860] or just looking
[cameronfactor] well i couldn't tell, but i did see him looking over my shoulder
[cameronfactor] i think my teacher might know something
[Guest8474860] as i said, im not much good at advice, but if he is a good friend, i wouldnt tell, but ask him not to do it again
The robot in this dialog must have a complex understanding of events, causality and ethics, and be able to put them together into constructive advice. Specifically, by asking whether the friend “is a good friend” the robot must be aware of different levels of friendship, and what courses of action one might take based on good vs. not-so-good friends. Furthermore, the cheating vs. looking distinction implies that the robot understands the difference between my perception of the event and the true course of events. The advice given (“if he is a good friend…”) suggests that the bot has concepts of betrayal, friendship and loyalty, wherein an individual might lie to prevent a friend from getting into trouble.
There is a possibility that this sort of dialog may have occurred before in another chat session, and that the robot is creatively reusing previous chat sessions. However, the linguistic constructions and references to the current dialog could only result if the response was being constructed on the fly from much more primitive fragments of knowledge. This suggests that the system is thinking, reasoning, planning and using a basic knowledge of the world in near real time, essentially solving most of the difficult problems of AI.
[Guest8474860] sorry mate, but hes not very good!
[Guest8474860] sorry!
[cameronfactor] I am sorry, I don't follow. Please explain.
[Guest8474860] cameron, are you there! i dont like talking to this bot thingy
This robot not only believes it’s human; it thinks that Nathan, a human pretending to be a robot, is not a very good robot at that. This robot is post-modern as well.
New Scientist: Software agent targets chatroom paedophiles
Chatnannies: protecting your children from paedophiles online
Overstated: Transcript of a chat with a Nanniebot named Caroline
Thanks to Nathan Eagle and Push Singh for their help and comments on this post
March 31, 2004:
After midnight GMT, the chatnannies site seems to have been taken offline, and nameservers are no longer even resolving the hostname. It appears that this inquiry will remain unanswered. Update: Chatnannies is back in business.
April 2, 2004: Ben Goldacre has published a second article on Chatnannies, in which he says Jim was unable to meet with him. Barnardo’s, a children’s rights group that Jim cited as a collaborator, has issued a public denial of any association with Chatnannies.
April 8, 2004: The New Scientist has retracted its article for further review, to "investigate its veracity."
105 thoughts on “My chat with a Nanniebot”
i mean, wow
surely it’s faked? and if not faked, it’s got to be parroting from a huge database of canned responses. if it’s for real, then AI is about a thousand times further ahead than we were previously aware…
Ah, yet another bot hoax. What I want to know is how a bot is supposed to know that the person talking to it is on the other side of the ocean, especially in the following context:
[Guest8474860] where are you from?
[cameronfactor] i live in boston
[cameronfactor] how about you?
[Guest8474860] sunny uk, worcester
Boston after all, is a British town as well as an American city. A British *person* might assume that the other person is in fact in the UK, or at the very least have difficulty disambiguating the two locations in the context of an IRC chat, unless they knew something about the other person in advance. Then, if they are posing as a bot, they might forget they are supposed to have difficulty disambiguating 🙂
Actually the bot looked up the source of Camerons connection and knew it was in the USA – therefore knowing it was Boston, Mass. Which, while nice to have, is one of the reasons why the coding isn’t finished yet.
The difference with the approach I’ve taken to create this tech and with the academic approach is (I feel) primarily down to me treating this as a programming problem rather than an academic one. At the end of the day, its smoke and mirrors time isn’t it? Tricking humans into believing they are talking to a human?
Also, and I think this is a big factor too, I am a diagnosed Obsessive Compulsive and Social-Phobic who is undergoing treatment. I’ve had both of these things all my life – which means my commitment to programming AI stems from not only an interest but an ingrained, mentally deficient need for friendship, companionship and conversation. I have no friends, I never have had. When I wrote an AI construct when I was 13 it was so I had a friend…since then I’ve developed and developed until I have this bot.
Its not perfect – its far from it. At times its too ‘smart’, there are lots of bugs, it crashes a lot, its slow…but with the support of people like Cameron (whether he believes it to be a real bot or not) we can move forward and try and get this into the public domain without treading on too many toes.
The only thing that would seriously cause me to think it was fraudulent was the fact that the error message was sent through the chat.
With the purported intention of the program to be used for the purpose of detecting child molesters, such a message would be a dead giveaway.
Why would a message like that be sent to the chat parser instead of some behind the scenes system (like a console, or an event log)
I find it really far-fetched to believe that someone could program such sophisticated AI but make such a grievous error.
I suppose it could have been in a debug mode. I’d like to see what happens when you put 10 nannie bots in the same chatroom. Do they converse with each other?
I had to put the AI into debug mode to get it to visit a specific chatroom at a specific time – the errors in reality get logged to the NT Event log.
If you put a load of Nanniebots in the same room you get them talking with each other, yes.
its IMBOT day:
Hey JimW, I still don’t buy it. Very strong claims deserve a very strong level of scrutiny.
Take this precise point. Reverse lookup of IP address would probably give you country of origin, and city if you’re lucky. Cameron may have given you an accurate indication of his location, but this isn’t always the case. You might ask him if in fact he was in Cambridge or one of the near burbs when he wrote this, which would make him somewhat of a liar…and make it even more difficult to actually use reverse DNS.
Many chatters would actually lie intentionally about location, saying that they are somewhere they are not…somewhere close to you for instance, if they know you are in the UK.
If you claim you solved this as a programming problem, what method do you use to reconcile 1) independent data you claim you got from IP addresses, 2) geographically underspecified place names, 3) the fact that place names have no defined boundary but are somehow contextual, and 4) the fact that people lie frequently online…to determine that when cameron says “i am in boston” he means Boston, Massachusetts and not the UK, that he is telling the truth, and to present the bot’s location in a contrast between US and UK?
I’d point out that there are several towns named Boston in the United States as well…NY, GA, IN, etc. Consistently mapping the data from a reverse DNS to identifying the Boston where Cheers was filmed AND contrasting it with a location in the UK in ITSELF might be enough to earn you a PhD at any AI program in the world.
Other bots step around claims of complex reasoning (here geographical relations and conversational intent), but this doesn’t seem to be one of them. This only raises the bar.
If you really wanted to prove the merits of this bot, you might agree to run it on a non-networked machine (even better if it’s someone else’s machine) through a terminal, not an IRC chat.
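To make the commenters’ point about IP-based location concrete, here is a minimal sketch of how little a reverse-DNS hostname alone tells you. Everything here is illustrative: the function name, the hostnames, and the tiny ccTLD table are assumptions, not anything Jim has described. Only a country-code TLD carries a location hint, and a typical US cable-modem hostname ends in a generic TLD that yields nothing.

```python
def guess_country(rdns_hostname):
    """Guess a country from a reverse-DNS hostname's top-level domain.

    Only country-code TLDs (e.g. .uk, .fr) carry any location hint;
    generic TLDs like .com or .net tell you nothing at all, and even
    a ccTLD can mislead (a .co.uk host could be anywhere).
    """
    CC_TLDS = {  # tiny illustrative subset, not a complete registry
        "uk": "United Kingdom",
        "fr": "France",
        "de": "Germany",
        "us": "United States",
    }
    tld = rdns_hostname.rstrip(".").rsplit(".", 1)[-1].lower()
    return CC_TLDS.get(tld)  # None for .com/.net/.edu etc.

# A hypothetical Boston cable-modem host resolves under a generic TLD,
# so the heuristic returns nothing:
print(guess_country("c-24-91-0-1.hsd1.ma.comcast.net"))   # None
print(guess_country("host81-2-3-4.btopenworld.co.uk"))    # United Kingdom
```

Real geolocation would have to fall back on whois/netblock registration data, which, as later comments note, is itself coarse and often wrong about end-user location.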
Jim, what concrete evidence can you offer that your NannieBot is not a hoax? I.e. not transcripts?
jim has very kindly said he will show me the nanniebot on a non-networked machine in the next few days. i write the sceptical column “bad science” in the guardian newspaper, uk.
if there is an AI academic in the UK who wants to come along (in london) then do please get in touch.
Since the story first arose, I’ve also noticed that the ChatNannies website has removed some details and claims, including how his system needed a cluster of Dell servers with terabytes of (disk? RAM?). I can’t prove that those pages were there but I did see them – as well as a paragraph encouraging academics to ask about the technology. I wrote a brief piece here: http://caseyporn.com/blog/archives/000087.html
NannieNot-> ok for a start Boston uk…I’ve never heard of the place and I live in the UK, so there is a good chance the internet isn’t bursting with information about it.
So the bot took a stab and assumed it was Boston, Mass, USA. Its a reasonable assumption since the connection to the chatroom was from the USA, and Boston, Mass is the best known Boston in the world.
So upon deciding this, the bot could have said ‘hey, what about those celtics’ or some other random factoid about Boston. Its not hard to generate. It really isn’t as hard as you are making out. And anyway, so what if it was wrong? I might have made the same assumption, since its the only Boston I’m aware of – and Cameron could have said ‘no, Boston, xyz’ and the conversation would have gone on from there.
Alex-> we’ve said all along to anyone that has asked, anyone is welcome to test the bots, in any circumstances, way/shape/form. That includes any special requirements to convince the person setting the test.
Casey-> a) have you never seen a website change before? b) whats the big deal about removing info about Dell Servers???? c) the paragraph you refer to is now on the popup page at the entry to the site…and again, so what if its moved? d) you are a nasty little troll – go back to the newsgroups
You can definitely determine someone’s geographic location, or the location of the machine they’re on anyway, from IP address. That’s uninteresting.
What’s interesting is the claim to have made a bot that convincingly passes the Turing test. I’m afraid Occam’s Razor compels me to believe there are humans behind the Nanniebots. It’s still by far the most reasonable and simplest explanation, OCD aside.
This level of AI is something that, in my opinion, would have been attained not by just one person who then keeps it private, but by multiple people across the world including the highly-funded groups who have been researching it for decades. AI isn’t based on “one amazing breakthrough”, but rather computing power, neural network emulation and cleverness, and you don’t just magically get the first two by being smart (or the last one, either).
So I’m not saying it’s not possible, but I find it unlikely. If Jim can make his service scale, regardless of what drives it, then he becomes successful in the long run even if it’s humans. What’s interesting to me is the application of this supposed level of AI in hundreds of other areas. Jim, are you planning on releasing your AI software to the public anytime soon?
Hey crankysysadmin 🙂
Actually I’m trying to get in touch with some of the AILab boffins at MIT, thru Cameron, with the plan of releasing the code to them and (hopefully) us working together to iron out a few of the remaining problems.
So if everyone is just a little bit patient, we can end the speculation and controversy in no time and go back to our calm lives where nothing surprises us and everyone does things not for the furtherance of a good cause, but for money. Bill Hicks would hate to be alive right now.
http://mind.sourceforge.net/aisteps.html is a source of AI algorithms for top-down artificial intelligence based on a linguistic theory of mind — for coding truly intelligent chatbots.
drben: take along a Faraday cage.
Guys, sorry, Jim Wightman seems to be a usenet troll who goes back a long way.
Here’s his PICTURE
Dude looks like he’s straight out of American Chopper.
A little Usenet searching will also show he’s a Holocaust revisionist and a fairly novice .NET and Visual Basic programmer who has repeatedly lied about making software before.
Sorry the game is up on this guy.
Just because someone is an idiotic Holocaust revisionist doesn’t necessarily make them unable to write software. I think there’s plenty of actual evidence one can gather (with a Faraday cage) to prove or disprove such a claim, if one wants to go to all that trouble, without the use of fallacies. I’m fond of critical thinking, even when you’re 99% convinced your “opponent” is evil, because sometimes they’re not.
Me, I’m going to wait six months because I don’t even care, and then I’ll look at my del.icio.us bookmarks and re-research it. Six months is plenty of time for the truth to come out, and I wouldn’t be benefiting from this supposed AI for years anyway, whether I get all excited in blog comments about it or not.
Repeatedly lied about making programs before?
Are you looking at the same newsgroups, dude?
Whatever. Next time let your mommy use the PC and maybe we’ll get some sense.
I think Cameron himself is behind this whole thing in an attempt to one-up his pals at Eyebeam in the meme-making department. He’s spent years cultivating the Jim Wightman persona on Usenet and now he’s sprung his highly contagious meme on us all with this Nanniebot thing. For shame, Cameron. Haven’t you anything better to do than have conversations with yourself on IRC?
I think you folks should take a look at this:
Jim will be unable to scale his service up, primarily because he has no service.
The attention he is getting now is the point of the exercise. When the attention turns sour and specific, the exercise will be over.
If someone actually gets to sit in a room with this thing, bring a wireless jammer, and explicitly disable any networks on the XP box (if that’s what it is). Of course, knowing in advance that you’ll be doing these things will undoubtedly make the demonstration suddenly unavailable due to mysterious PC illness etc.
you lot are sad.
anser – how can you say ‘there is no service’ when you have no evidence to back it up? All you’ve seen so far are a number of chat transcripts and a number of articles from people that have been in the same room as the servers upon which the AI is housed…
do you realise you sound like a dick? As do all of you who are going on about this – its pure speculation on your part.
A long time ago people thought the world was flat – and shouted it from the rooftops until they were proven wrong. Guess what? The same is going to happen here, and I’ll personally enjoy seeing your responses then (though of course you will have moved on to moan and bleat about something else, hiding your shame at being so narrow minded).
Answer me this question – what have I to gain by saying this is true? Did I ask the press to get involved? Did I advertise the AI anywhere? Did I tell anyone about it before the news broke?
The truth is that a big reason I aimed using the technology at saving children was because at least they aren’t tainted by the same bigotry and social cancer that people like YOU are.
Whoa whoa whoa.. let’s all simmer down here. There is no reason to make personal attack about something that is so clearly a resolvable issue. We have a proposed plan for Dr. Ben to try the chatnannie without any outside intervention, and this is the best course of action for either proving or disproving the existence of the system.
There’s no reason to get personal when you can clearly resolve this thing on a fair playing field.
And yes Jason, that was a low blow.
Bring the jammer, boys, and remember my prediction. Also keep Jim in the room with you, on the other side of the screen – and bring a boombox and play it during the demo. Remember that a confederate could easily be at the ‘jukebox’ down the hall, so you want to try to isolate the room.
Also practice creative misspelling on words – tests have shown that humans can read misspelled text easily, but computers have difficulty. If you trigger an “I didn’t understand that response” from the jukebox operator who’s on to what you’re doing (since I’ve spilled it here), make sure you use the exactly same misspelling some time later in the session when things are hot and heavy. The operator will forget and answer you.
In the unlikely event that you do have a conversation with this thing, insist on a printed transcript which you take with you immediately on conclusion of the demo. No delay must be permitted.
There are other suggestions which would better be submitted privately to the tester (or would be if any test were actually going to take place).
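The creative-misspelling trick above can be sketched with a toy canned-response bot. This is a hypothetical stand-in, not a claim about how Nanniebot works: an exact-match bot fails identically every time it sees the same misspelling, whereas a human operator covering for a "bot" will eventually forget and answer.

```python
# A toy canned-response bot: exact-match lookup over normalized input.
# The phrase table is invented for illustration.
CANNED = {
    "how are you": "im fine thanks!",
    "where are you from": "sunny uk, worcester",
}

def bot_reply(message):
    """Normalize the message and look it up; fail on anything unknown."""
    key = message.lower().strip("?!. ")
    return CANNED.get(key, "I didn't understand that response")

print(bot_reply("How are you?"))   # canned answer
print(bot_reply("Hwo are yuo?"))   # fails
print(bot_reply("Hwo are yuo?"))   # fails again, identically
```

The point of repeating the exact misspelling later in the session is that a deterministic program must fail the same way twice; only a tiring human answers the second time.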
cameron: agreed. However, I go along with Anser about the need to stringently exclude the possibility of external communication in any demo – not because of mistrust, but because it’s a necessary test to reject an alternative explanation.
Jim, can’t you just set up a public chat room that is known to contain NannieBots that anybody can come into at any time, rather than having to presetup an appointment. If it’s all true, then you won’t have any problem with it. If it’s false, then you’ll have a lot of work to do pretending to be a bot, and would therefore decline this.
Have I at any point said I won’t submit the tech to any scientifically constructed test? In fact I’ve said the opposite – and welcomed it.
Here is what happens in the future: the AI will win the Loebner prize. The code will be handed to MIT, free, along with the prize money, for them to do what they like with it.
This is the first and last time I EVER bother trying to make a difference to the world, especially with sarcastic pricks like anser in it.
I won’t believe it till I see the sourcecode. No other demonstration will convince me.
If I’m wrong, I’ll be happy to apologize.
Michael – fat chance you’ll ever see the source code mate…since you don’t go to MIT and obviously don’t have the wit to go there either. Oh I’m so sad you’ll never believe me! LOL!
“…This is the first and last time I EVER bother trying to make a difference to the world, especially with sarcastic pricks like anser in it.”
This, by the way, is the setup for what will likely be his final excuse, which is that people were so mean to him that he’s going to keep it all to himself and not show anybody any more.
As I say, when the attention becomes critical, the endgame begins.
Oh goodness. I don’t have the time or interest to argue with you on two separate forums, or to respond to such nonsensical personal attacks.
Jim: How about Sam’s suggestion? Point us at a public chat site where one of your hundreds of thousands of NannieBots is operating. Let us chat with it. If it breaks (more modRecover problems?), then that’s fine too.
Can you understand where the skepticism is coming from? It’s healthy to demand evidence to back up claims. Put personal slurs and agendas aside and concentrate on the science: you claim you have this AI, we would like to test that claim.
While it is far from an ideal test, a public and ongoing demonstration would at least give us something to be going on with.
If it’s all going to be for free, how about you talk about the technology a bit? For example, what approach do you use to do your input parsing?
Is anyone in the UK able to go with this journalist to meet the nanniebot “unplugged” and debunk it?
Got to agree with anser on this, from now on it’s all a setup to the ‘i gave everything to you and you blew it’ routine.
This is just like that bad girlfriend I had when I was a teenager
Whether or not this bot is a hoax, those of you claiming that you can recover accurate geographic location information from Reverse DNS are simply full of it. Even coupling together DNS, netblock assignments, BGP, traceroute, and related-website screen scrapes, accuracy is abysmal. For several of you to declare, flippantly, that it is simple is strong evidence against your credibility, making your worthiness to question the veracity of Jim’s claims questionable at best.
What about those of us who know all of what you (Matthew) said, and didn’t make any statements on the accuracy of geographic location using IP addresses? Are we worthy enough to question his veracity? Why should we have to meet some standard to doubt the extraordinary claims of some nobody who doesn’t have any verifiable accomplishments in the field he’s now claiming to be an expert in? Geez.
Actually it was the hoaxer himself who posted “the bot looked up the source of Camerons connection and knew it was in the USA – therefore knowing it was Boston, Mass.” So spilce, you agree with us that he’s full of it.
Argh, SPLICE, sorry no edit function 🙂
And just to egg my face further, it was Matthew, not splice, who made the comment I addressed. Not my day, sorry…
Actually, it’s NannieNot that brought up “Reverse DNS” in here (which is actually nonsensical, perhaps he/she/it meant netblock assignments looked up through whois). Yes, JimW brought up the issue of identifying the location of people through their network information, and he’s certainly full of it. Regardless, I think Matthew was referring to NannieNot and others, but it’s still not a valid argument, especially since JimW is the one that brought up the whole location thing. I think I saw one place where he claimed he could track the people down to their street addresses with the technology. That’s definitely a hoax, unless he’s referring to police/some kind of investigator who obtains/subpeonas (sp?) the information from the ISP of the person. Definitely not doable (not that accurately) using just an IP and a program.
Ah, egg on my face too then :). Uh yeah, so, I agree with you. There. Hehe. Hey, isn’t it funny that JimW just disappeared when we started getting people who actually worked in AI and asked for details? My take on it is he’s put together this hoax to get free publicity for the software he’s going to make available on April 2nd (LiveNannies). I’d say it’s all for the money, but he says the software is free. Maybe he just needs the attention. Maybe he actually did come up with a breakthrough in AI, but he’s not acting in accordance to that. Have you read about the secure facility where he stores all his data and program for ChatNannies? Hey, it’s so secure they might not let other people in to test his claims. And so begins his backpedalling.
My comments regarded those who claimed the ability to locate an IP address based upon DNS. I’m quite sure I made that very clear. JimW’s credibility and the soundness of argument made by those who doubt him, in general, were not an issue in my post.
get down everybody, he’s got a gun!!!
That bot is freaking awesome. I assume it’s not available to use on your own, but how can I chat with it? Is that possible?
I am amazed.
Zach > freaking amazing
Use a more imaginative pseudonym for your sock puppets, Jim. Central News: “the Wightmans … want 18 month old Zach to benefit from the internet”.
Matthew – what you actually said was “those of you claiming that you can recover accurate geographic location information from Reverse DNS are simply full of it.”
I then pointed out that Wightman had brought the topic up by posting that “the bot looked up the source of Camerons connection and knew it was in the USA – therefore knowing it was Boston, Mass.”
Now “looked up the source of Camerons [IRC] connection” has to mean one of two things: either the IP was mapped back to a registered block in the ARIN database, or to a specific reverse record in the global DNS space and, in either case, thence to a physical location of a registrant.
After Wightman made this statement, there were two subsequent comments (before yours) that addressed it. One said that the reverse DNS method was *not* simple or reliable; the other implied that the ARIN block method was simple, which it is, although as you say it doesn’t allow for proxies (there are ways to tell a proxy from a client PC if you work at it, but if you do detect a proxy that’s usually the end of the trail).
So the only person who explicitly discussed Reverse DNS actually agreed with you. The two people who seemed to believe in the “power of location” were the ARIN block poster and Wightman himself.
I’m sure you intended to make yourself “quite clear” but what you actually wrote didn’t bespeak a careful enough reading of the thread to sustain your conclusions about our “worthiness to question the veracity of Jim’s claims.” (whatever that means anyway)
What I find worrying is that someone with Mr Wightman’s violent tendencies (see Usenet; I know that saying something doesn’t equate to doing something, but it’s a good indication of how people resolve issues) was able to pass vetting in the UK to get a gun licence.
There’s a simple way to resolve this situation: make the bot available at a known address (perhaps as a telnet or IRC client) and arrange for tens or hundreds of people to access it simultaneously. If there really is a chatbot capable of 25,000 simultaneous conversations at the other end it will hardly notice. If it’s actually a human (as, having read the transcripts, it must be) he/she would, I imagine, find it rather hard to keep up.
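The load test proposed above can be sketched in a few lines. Everything here is a stand-in: `ask_bot` is a placeholder for whatever transport (IRC, telnet) a real test would use, with a fixed sleep in place of a network round-trip. The idea is simply that a genuine program's response latency stays flat as sessions multiply, while a human typist's would explode long before 100 simultaneous conversations.

```python
# Sketch of the proposed test: open many simultaneous "chats" and time them.
import concurrent.futures
import time

def ask_bot(session_id):
    """Placeholder for one chat round-trip; returns the observed latency."""
    start = time.monotonic()
    time.sleep(0.01)  # stand-in for sending a line and awaiting a reply
    return time.monotonic() - start

# 100 concurrent sessions; a real bot should barely notice.
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    latencies = list(pool.map(ask_bot, range(100)))

print(f"sessions: {len(latencies)}, max latency: {max(latencies):.2f}s")
```

If the per-session latency grows roughly linearly with the number of open sessions, that is the signature of a single human (or small team) multiplexing the conversations by hand.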
The whole thing is a farrago of contradictions anyway.
The “Nanniebot” is tasked, according to Wightman’s website, with *monitoring* chatrooms for signs of paedophilic content, much as the recruited human “Nannie” volunteers are supposed to do, and to file reports or warnings on his central database depending on what they see.
There is no role-playing or impersonation (in fact no active conversation) required on the part of the human Nannies in order to perform their monitoring function, so there is no particular reason why the equivalent “Nanniebot” should need to know how to impersonate a human in channel, or to speak at all.
Natural-language processing of a certain type and level would of course be necessary to suss out the bad channels, but it probably does not need to be very sophisticated in nature – certainly not beyond the means of current AI research.
At most, such an AI bot would act as a “pre-filter” to bring certain channels to the attention of human monitors, who could then visit and render a more nuanced judgment on the contents of the channel.
Still, advances in AI do occur, and it would be possible to imagine a (slightly controversial) IRC bot that was capable of impersonating an at-risk child in a chatroom, hearing and responding in kind to “chatspeak” in its arcane and compressed form, and luring a paedophile into revealing stuff that could be used to nail him.
But there is nothing like this. Instead, we are treated to the spectacle of a “bot” that wants to conduct a quiet, pleasant, witty conversation in the persona of a 26 year old adult named CJ. (Computer James?) It doesn’t speak chat lingo and isn’t required to parse any either. It doesn’t file any reports or give any indication that it is capable of performing the function described for it on the ChatNannies website.
In fact, the “CJ” persona is remarkably close to the image that a paedophile presents to his victims: kind, funny, grown-up but not old – eager for your trust. If the AI actually worked, this construct would be great for ensnaring kids.
Then you have to ask yourself what it all means given that the “AI” is rubbish and this is Wightman himself typing this stuff and denying it afterwards.
I think that this might indeed rise to the level where police enquiry is justified.
The most interesting thing is that the “bot” can make the connection between someone looking and cheating.
One could come up with a very different scenario with a very similar set of statements:
] So I’ve got this teacher friend.
] This guy paid me to make little marks in the desk so that he would know the answers to the test.
] ok so my friend was looking over my shoulder in class today
] and my teacher asked me if he was cheating
In this situation, the response that the “bot” gives would be totally wrong. How was this “bot” able to connect those two verbs, in two separate exchanges, with hundreds of possible meanings, to the ones that we would commonly associate with the exchange?
It would probably be worth your while checking out this Jim Wightman hoax here:
Have a read of the whole thread (if you can stand the parallels). This guy is sick.
You know what? Maybe we’re overanalyzing this.
I mean, even if he’s lying and he does this all by himself, there’s still a lonely, socially-isolated middle-aged man who’s willing to spend every wanking hour talking to underage kids in internet chat rooms.
You know, that sounded a whole lot better before I spelled it out.
Jim has setup a forum for the discussion of Chatnannies on the chatnannies site:
I think he was fed up with the personal attacks (understandably so) and wanted to move onto turf he has more control over.
That’s nice Cameron/Jim. It seems to be buzzing at the moment.
What is the definition of ‘buzzing’? There is a total of one post, from Jim, basically explaining that there’s nothing to see.
The division into ‘Open Minded Posts’ and ‘Closed Minded Posts’ is rather nice – reminiscent of Alice’s ‘Guests/Servants’ dichotomy. Poor Alice, which door to take!
I stumbled upon this page quite accidentally and I thank God for it. At the risk of sounding redundant, I must say, this is really a great breakthrough in AI, if it really is. C’mon this sounds like ..god damn it, I don’t have words, That Bot is exactly as humane as AI can be. Atleast IMHO.
But if this is a hoax, then too its got to be great! I mean, really, some people must have way too free time….
I hope we get some news soon. April 1st I await.
— Knight Samar.
Whoa, on second thoughts…are the posts and comments on this page made by some BOT ? Scary world….
Sorry, I don’t buy. I’ve been making chatterbots for over 8 years now. I’ve won the Chatterbox Challenge at http://www.chatterbotchallenge.com twice and finished second in the Loebner contest and my bot is still dumb as a brick. Enter the Chatterbox Challenge and win the prize money and then take Dr. Loebner $100,000 in his contest and then I will become a believer.
Knight Samar is quite obviously Jim Wightman, again. Check his comments on waxy (and my site) if you need to.
Jim, it’s getting pretty hard to maintain my skepticism in view of your behaviour. Ah well.
I suspect John Titor is behind all this!
It’s 00:22 GMT April 1st, and it seems that the ChatNannies site has disappeared….
Not as far as I can see. Seems to work ok, and contains material posted today. We have an interesting problem of logic here. It’s not possible to use the basic scientific criterion of falsifiability, due to the possibility of fraud. Until we get an open trial with independent monitoring of the system to exclude human intervention, our judgement based on the transcripts will be dodgy. For example – it’s been argued that the lack of damage control phrases proves it’s not a machine. So the better the performance, the lower the credibility. The only thing we could prove at the moment would be failure – if it was to pratfall and give itself away. Usually, a hypothesis that cannot be refuted holds. But here we can’t do that because we don’t know if the test is fair.
I propose a test running the bot on a machine supplied and watched by an impartial authority – say the W3C or the British Computer Society – in a double blind trial. I volunteer!
I just have to say this is the most fascinating story I have read for a long time.
I came to this story through Ben Goldacre’s columns in the Guardian, and have gone through both this and the waxy.org threads with the finest of fine-tooth combs.
While this is all clearly a hoax, it also seems to go further, and be the product of the mind of someone who is in serious need of some help and attention. I believe he might be suffering from a form of Munchausen’s Syndrome.
And why has he suddenly rejected Dr Ben’s offer?
Please keep holding this man to account. Any true scientist would be more than happy to have his work dissected and contested. As Casey says, this is healthy.
And to repeat Casey’s question: just what approach do you use to do your input parsing?
Jimw/Death’s Head/Cameron/Zach worries me almost as much as those who he is trying to “catch out” with his “technology”.
such an obvious fake.
I don’t know what to say, except that I am Knight Samar — a totally separate individual, in no way related to this Jim guy.
I am myself, thank you!
— Knight Samar
A Google search confirms that Samar’s real; he’s from India, and has posted in Harry Potter fanfic forums; the hotmail address matches.
I’ve been reading the site – the guy is clearly not right in the head – he’s been “debunked” on about ten sites – you notice you never get a single straight answer out of him – always excuses or delays.
Hi room! Can I suggest that we put this discourse on hold for a while? It’s getting boring and personal. If you’re concerned that people might take him seriously, write or email your local and national papers, radio stations and TV so they’ll be primed to check it out more carefully than they might otherwise. If you’re outraged, intrigued, furious or just curious about Jim’s netnannies claim, read on.
We can be pretty sure that there’s precious little substance behind JW’s outrageous claims for a computer system that can hold text conversations with humans and identify potential paedophile conversational behaviour.
We can also be pretty sure that there’s no way anyone is going to get a chance to verify it adequately.
I predict that JW will carry on as he does until he chooses to stop.
Now I’m sure there are people out there who know a lot more about most of what might be involved than I do, so they must be able to pose questions that JW would find hard to answer.
He’s got a public forum at his website where you can post questions and make comments. It demands an email address and you have to register by clicking on a web link in an automated email from the forum.
I’m not suggesting that he be bombarded. That would let him play the ‘too busy, too many questions’ card. What we need are a few hard questions that will probe the depths of his ignorance. Sooner or later he’ll trip up. And don’t let’s get personal. Guys like him get off on flaming – it means they can dodge the issue. Same with unverifiable details – see the stuff on the forum about his claims to be working with the police. The same would go for his claim to be writing a paper for someone at Edinburgh University. And you have to admit that he did a nice job with the ‘reverse DNS lookup’ geographical identification of Boston, Mass. We need more of the same. I’d be very interested, for example, to know how on earth he ‘scrapes’ unstructured data from internet pages.
So, let’s keep it technical, direct, to the point and above all, verifiable and factual.
See you in a month or so, I’m now suspending this interest…
The forum is exhausting to read for its constant revisions to avoid the point. A few days back JW posted an e-mail purporting to come from a New Scientist writer arranging a test. It looked fake, as it’s hard to believe that an experienced science journalist would have sent a message full of grammatical mistakes and referring to a Faraday cage as a “Fourier cage”. That disappeared, so it’s probable that the New Scientist lawyers leaned on him. JW now claims to have started a PhD in 1996 at “a University in Germany”, conveniently getting around the non-existence of a dissertation at the British Library. He also posted a link on cyberbullies, claiming persecution; ironically (and in fact characteristically) the typical character sketch cited matches Wightman himself (abuse, trolling, racism, no straight answers, pleading confidentiality to get out of producing evidence, etc). The advice on the cyberbullying page – essentially, not to feed the trolls but quietly report them if they come out with anything legally actionable – is pretty wise.
P.S. For your amusement, I just found a picture of Wolverhampton’s computing genius.
… which is now gone, confirming the identification.
you can tell it is fake
this is cool, i have a great friend that cant yell back. this is so sweet, i like this a lot and will tell all my friends about this later
whos out there thats a freak
i want to get into the chat
hey im on the internet
hi, how’s it going? you can contact me on my email
hi how r u today yo yo yo
do you know a robot called smart child, the one that helps us with our homework? can you send me his addy please, thankyou xxxxxxxxxxbeth+mollxxxxxxxx