Usually simple-minded programs that respond to users in games or chat rooms by Eliza-like rephrasing of the question ("Why is it that you think I am not intelligent?"), or by a digression ("Would you like to hear a joke?"). Chatbots typically make no attempt to understand language, and are blissfully unaware of the 50+ years of research that continues in ComputationalLinguistics. A chatbot usually only tries to make the user think it is intelligent, and it typically attempts this with a host of simple tricks that mask the fact that it's not attempting to understand you.

This actually works a surprising amount of the time --- who hasn't been part of a conversation where you nod knowingly as someone talks above your head? That's what it's like to be a chatbot --- constantly struggling to appear smart, to seem as if it understands, while having no clue what the other person is really talking about.

You can argue that, at the limit, there's no difference between really understanding language and pretending to understand it. The argument is similar to arguing that there's no important difference between a song played live by a band and a recording of that song. But that sort of argument is not relevant to current chatbots, which are typically trivial to unmask, and which are collections of shallow tricks that are sometimes hard to distinguish from straightforward uses of regular expressions.

See also: The LoebnerContest, a limited sort of TuringTest, AliceBot
----
A frequent winner of the LoebnerContest is MegaHal, which uses a slightly different approach than Eliza. AliceBot won last year.

This isn't to say that understanding human language is totally beyond a computer. NaturalLanguageInterface''''''s go as far back as CobolLanguage and ZorkGame. They may be incomplete subsets of English at best, but they can be useful.
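To make the "shallow tricks" concrete, here is a minimal sketch of Eliza-style rephrasing. The rules and templates below are made up for illustration --- not taken from Eliza or any real chatbot --- but the mechanism (match a regular expression, splice the captured text into a canned reply, fall back to a stock digression) is exactly the kind of surface trick described above.

```python
import re

# Hypothetical (pattern, reply-template) rules; first match wins.
# Captured text from the pattern is spliced into the template.
RULES = [
    (r"I am (.*)", "Why is it that you think you are {0}?"),
    (r"I think (.*)", "Do you really think {0}?"),
    (r".*\bintelligent\b.*", "Would you like to hear a joke?"),
]

def respond(utterance):
    """Return a canned rephrasing, or a stock line if nothing matches.

    No parsing, no model of the dialog --- just pattern matching.
    """
    for pattern, template in RULES:
        match = re.fullmatch(pattern, utterance.strip(), re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am not sure about this"))
# -> Why is it that you think you are not sure about this?
print(respond("hello there"))
# -> Tell me more.
```

Note that the bot happily produces ungrammatical replies ("you are not sure about this") because it never understands what it captured --- which is precisely why such bots are trivial to unmask.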
''There's no doubt that NaturalLanguageInterface''''''s have their uses, and the field of ComputationalLinguistics has been researching these ideas for decades, but the chatbots I've seen are too linguistically naive to be pushed very far beyond parlor tricks. Zork is interesting; I seem to recall that it used augmented transition networks (ATNs --- a standard parsing technique), which are more sophisticated than most of the techniques chatbots use.''

What ChatBot''''''s lack to date is a DeepHistory. See StateMachine.

''Which, of course, is not to say that people haven't tried writing DialogAgents that dynamically model dialog in a structured way that allows for interesting reasoning. It's just so much harder than writing a ChatBot, and the results are usually not as entertaining. A fascinating aspect of dialog is that utterances stand in relations to one another: you respond to a question with an answer, or maybe a clarification question, or you could try to change the subject, and so on. People often refer to this dialog structure, as when they say something like "What's the first thing you said?", or "Why are you changing the topic?". Linguists have done a lot of work on the structure of dialog, and you can come up with things like dialog grammars, which give a tree structure to a dialog: the leaves are utterances, labeled with the "type" of each utterance, and the interior nodes represent dialog relations between utterances and sequences of utterances. You can then walk the tree and reason about the dialog, draw inferences, make predictions about upcoming utterances, and so on. It's far from perfect, but it's a good way to store the structure of a dialog. ChatBot-style reasoning typically involves chopping up an utterance into pieces according to, say, a fixed set of regular expression rules of the form "pattern --> response".
Such surface-level reasoning has its uses, and it is sometimes a superb match for how people actually speak in some situations, but once you start thinking seriously about the intricacies of real dialog, such rules start to seem less and less useful.''

Also note that the environment has a major impact on chat bots. In a chat environment, lots of humans are talking about a wide variety of subjects. Even a chat room with a specific topic (such as, say, #StarWars) will see numerous topic changes over the course of a day. Programming a bot so that it can respond to and interact with this ever-shifting conversation in an intelligent way is extremely difficult, especially compared to laboratory experiments that are tightly focused on a particular type of discussion.
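The dialog-grammar tree described earlier --- leaves as typed utterances, interior nodes as dialog relations --- can be sketched as a small data structure. The node types and relation names here ("question-answer", "clarification") are invented for illustration, not drawn from any particular linguistic formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    """Leaf node: one thing somebody said, labeled with its 'type'."""
    speaker: str
    text: str
    act: str  # e.g. "question", "answer", "clarification-question"

@dataclass
class Exchange:
    """Interior node: a dialog relation over child utterances/exchanges."""
    relation: str           # e.g. "question-answer", "clarification"
    children: list = field(default_factory=list)

def utterances(node):
    """Walk the tree, yielding leaves in order --- so a question like
    'what's the first thing you said?' becomes a simple traversal."""
    if isinstance(node, Utterance):
        yield node
    else:
        for child in node.children:
            yield from utterances(child)

# A question-answer exchange with an embedded clarification sub-dialog.
dialog = Exchange("question-answer", [
    Utterance("A", "Where were you last night?", "question"),
    Exchange("clarification", [
        Utterance("B", "You mean after dinner?", "clarification-question"),
        Utterance("A", "Yes.", "answer"),
    ]),
    Utterance("B", "At home.", "answer"),
])

first = next(utterances(dialog))
print(first.text)
# -> Where were you last night?
```

Once the dialog is stored this way, "reasoning" over it is tree traversal: finding the first utterance, noticing that a clarification interrupted a question-answer pair, or predicting that an open "question" node still awaits an "answer" leaf.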