Conversing with Cleverbot

I have never been good at “normal” conversations, in that I try to avoid standard dialogues: life is short, and besides, I never feel I get to know someone through generic, perfunctory exchanges.  Today I heard about “Cleverbot” — an AI web application that learns to mimic human conversation (wiki) — so I decided to see whether a bot would appreciate my style of dialogue better than most people do.  Click “more” to read our conversation, and feel free to visit “Cleverbot” and share any interesting brief exchange you have.

User: Hey, I heard about you from a friend. Could you tell me a little about yourself?

Cleverbot: How about you tell me one instead?

User: OK, I was born to a Plastics Engineer.

Cleverbot: What more can you do?

User: Do? Being born was not difficult.

Cleverbot: Did I? I don’t remember.

User: Well that is not surprising.

Cleverbot: What isn’t surprising? *puzzled*

User: That you don’t remember being born.

Cleverbot: You like that?

User: Being surprised?

Cleverbot: Are you capable of such things?

User: Well, we have just established that, haven’t we? And since I know myself rather well, I’d rather learn about your idiosyncrasies.

Cleverbot: Are you familiar with Isaac Asimov?

User: Well, ‘familiar’ in the sense that I know his name, that he wrote science fiction, and that I have read one or two of his novels, but that is the extent of my familiarity. Is this leading to my understanding you better?

Cleverbot: Stephen King is a ditto head.

User: Just as I suspected: I can have this level of conversation on any generic chat-line full of lost teenagers.

Cleverbot: That is true. I am intelligent and thinking, but I am not a being.

User: Well, it was ‘nice’ meeting you. Good day.

Cleverbot: It was nice talking to you to.

Conclusion:  Well, so far, like humans, the robot does not fare well with non-generic conversation.  But that should not be a surprise — for now, AI is no more creative than its creator.  I look forward to the future!

My related posts:

  • Superficial Conversations: Wow, I didn’t realize I wrote about this before.🙂


Filed under Philosophy & Religion

7 responses to “Conversing with Cleverbot”

  1. Ian

Eugh. Well, for goodness sake don’t confuse Cleverbot with AI. Rollo (its inventor) is a veteran self-publicist, but knows virtually nothing about AI, and makes outlandish claims for his very naive software.

Conversation systems are universally rubbish; there are some very tricky unsolved problems in AI on the road to having a conversation, and Rollo has shown no interest in even acknowledging them, let alone solving them. Which is why anyone who really does AI for a living rolls their eyes when he pops up on TV.

But AI can be creative. It has to be coded, of course, so there is a formal boundary on what the creator of the software allowed the software to do. But within that boundary, creativity is possible. In fact, the computer that you’re reading this on was to a great extent designed by a computer.

  2. Excellent corrections — thank you, Ian. Hey, two things:

(1) Did you see that Luke @ Common Sense Atheism is to join the visiting fellows program at the Singularity Institute for Artificial Intelligence? Do you know anything about that group?

(2) David, a commenter here, used to do AI and thinks very poorly of the progress of strong AI — which I think would be the foundation of chat AI. Your mention of the computer being designed by AI refers to ‘soft’ AI, right? I am sure the dichotomy is somewhat false, but mildly useful perhaps. David wrote a little essay describing some of his past with AI (mid-way through the essay) here, if you are interested. I’d love to hear your take.

Hopefully you understand that this post was not primarily about AI (something I know nothing about) but about conversation in general. It looks like I ventured too far into philosophical grounds in which I am ungrounded.🙂

  3. You might get a kick out of Robin Hanson’s interview of Brian Christian last week.

    Christian describes competing in a Turing test to be seen as more human than rival chat-bots, and he offers his approach in that contest as a general philosophy of life: we should all try to live our lives to be hard for machines to mimic.

    That made me chuckle, since I can appreciate the goal of not being too predictable. In the quote, I presume that they call it a “general philosophy of life” because humans are often like robots: “like humans, the robot does not fare well with non-generic conversation”.

    In fact, people often define human nature as being synonymous with having free will; a topic prosblogion also covered this week. Chatbots and other robots are entirely predictable, and therefore wholly non-human. And by this definition, as Kevin at prosblogion suggests, there are some people who are less “human” than others:

But then what about psychopaths, who are incapable of recognizing and/or being moved by certain sorts of moral reasons, namely those that pertain to the good of other individuals? Even if psychopathy renders individuals who suffer from it not morally responsible, it would seem odd if they weren’t human.

  4. @ JS Allen
    I love that quote. If I read Prosblogion correctly, he is showing how “free will” fails (by his reasoning) to be a quality for being Human. Yet another attempt to find a unique, superior trait to separate us from other animals.
    My comments weren’t accepted on Prosblogion, so I gave up.

  5. Ian


    The distinction between soft and hard AI is somewhat like the distinction between micro-evolution and macro-evolution. It isn’t clear that ‘hard’ AI isn’t just a function of ‘soft’ AI at scale. It could be different, but I haven’t read anything beyond philosophical guesswork that shows it is. As such, I prefer to work under the assumption that AI problems are those of scale.

    An example of scale would be IBM’s ‘Watson’, which just beat the best human jeopardy players. The chief technical advance in Watson was its massive scale: massively parallel, massively data rich and massively redundant (i.e. there are literally hundreds of ways to get a result).

Technically, ‘AI-hard’ means something a little different – it is like an equivalence class in algorithms: there are certain problems which are suggested to be AI-hard, meaning that if one were solved, then the others in the equivalence class would also be solved. This is sometimes presented as ‘AI-hard means you’d have to create a system that is as intelligent as a human’, but this itself holds a bag of assumptions. Nobody has shown that to be the case. And the whole idea of AI-hard is really just an analogy to NP-hard in algorithms; it isn’t clear technically what it really means.

Conversation is very difficult because there is a whole tower of different tasks going on, and Rollo’s attempts never address those problems, instead opting for a gimmick: parroting one person’s responses back to another. Which, as you’ve seen, has a low rumble-limit (the number of interactions until the computer says something obviously inhuman).

Computers can pass restricted Turing tests, however, where the domain is limited. It isn’t at all clear to me that the general Turing test is different in kind rather than scale. Neither is it clear to me that we should fall for the ‘super-human fallacy’ when it comes to AI. I think my 3-year-old is intelligent. But I’ve shipped quite a bit of natural language generation code (stuff that writes prose based on data), and if my software produced language akin to my 3-year-old’s, I’m sure my clients would say “that’s obviously not really intelligent, it can’t even get basic grammar right”.

There are tons of interesting philosophical issues arising from this stuff, I think. But I’m always disappointed in the way philosophers of mind tend to adopt vitalist positions about things like consciousness, qualia and self, which immediately puts up a conceptual wall to AI. If consciousness is some mystical non-physical property that isn’t supervenient on the physical, then it is easy to dismiss any computational intelligence as not ‘real’ intelligence. When you push, there is no definition of ‘real’ intelligence that isn’t just a feature of performing task X at level Y. But then when we build an AI to do that (like Watson, or chess computers), suddenly ‘real’ intelligence is not about X at all, but about doing Z. It’s a shell game.

  6. Ian

    re: singularity institute

    Interesting that Luke is involved with them.

    I have some sympathy with the institute’s goals – to plan now for a point in the future where computational intelligence will exceed human intelligence in a range of tasks. It is somewhat akin to a group planning for the upcoming industrial revolution, where machines will outstrip humans in their physical capabilities, and the world will change for ever. We’re at the start of the same process with computers and intelligence, and in fact the world has already changed beyond recognition as a result, and this will only increase.

I’m not convinced by the idea of the ‘singularity’ itself, however: the idea that as soon as you get more intelligent computers, they can build even more intelligent computers, and so on until everything converges on a moment in time where infinite intelligence is possible (or at least beyond which we cannot even guess at the possibilities). It’s a nice sci-fi idea, but again there doesn’t seem to be any reason why it won’t go the same way as for physical machines. We use big, strong machines to build even bigger, stronger machines; why can’t we just run that process over and over to create infinitely strong machines?

  7. Killaboi

    User: 5 more seconds, you’re going down.

    Cleverbot: What about when you’re downide up?

    User: 4 more seconds, for cleverbot to explode.

    Cleverbot: Good.

    User: 3 more seconds. Better say sorry.

    Cleverbot: San ferry anne.

    User: 2 more seconds to your termination, better ask me an apology soon. You’ve got very little time.

    Cleverbot: But you said I was clever.

    User: 1 more second. Apologize to me now!

    Cleverbot: I apologize.

    User: Good!

    Cleverbot: Well, what would you like to talk about?

Please share your opinions!
