This is a question I've been toying with in the last week or so, as all of this LojbanLanguage discussion has popped up here. Is language ''really'' the barrier for ArtificialIntelligence applications? I can see how one would write a ProgrammingLanguage that used Lojban as the syntax. It would be terribly verbose, though, and the semantics would not magically improve over the semantics of present-day languages. And just because a Lojban parser can parse a statement equivalent to "Computer, get me two tickets to the Cubs game next week and book me a flight to New York for Thursday" doesn't mean it'd understand what you're asking it to do. Or am I missing something?

 // pseudocode; BaseballGame and friends are imaginary
 GregorianCalendar dt = new GregorianCalendar();
 dt.add(Calendar.DATE, 7);    // "next week"
 BaseballGame bg = new BaseballGame(BaseballTeams.CUBS, dt);
 bg.tickets(2);
 // flight to JFK/LGA left as exercise to reader

Let me put the question another way. Would the TuringTest be trivial if it were conducted in Lojban? Has the CobolFallacy reared its ugly head again?

----

From what I've read of Lojban so far (19-Jun-01), it looks as if there are certain aspects of the language that might make it a little more conducive to bridging the human-computer gap, for two reasons:

First, a human speaking it needs to state things a certain way. In English, if you need to issue a command, there is more than one way to do it: directly, indirectly, dropping hints, etc. In Lojban, the "ko" pronoun is used in place of the "you" in a declarative sentence to produce a state which must be brought about. This could be "Ko gets me two tickets to the Cubs game" or "Three gallons of fuel is put into Ko's car." This could allow the AI to break down any problem to be solved into statements using the "ko" pronoun: "The cashier gives ko the tickets." "Ko pays the cashier fifty dollars."

Second, the Lojban language, unlike most ProgrammingLanguage''''''s, allows declarative statements. This enables an AI to absorb the kind of cultural information that AIs have lacked for so long, from the same kinds of sources we humans learn from. Instead of writing a list of commands specifying how to buy a baseball game ticket, you could explain how it's done, with statements like "When the stadium is out of tickets, sometimes there are scalpers selling them on eBay." The AI that confronts a sold-out Cubs game would know that the game isn't over yet, unless it knows that "Master doesn't like to use eBay" or something to that effect. It also allows one to ask, and answer, questions, meaning it might do a little better in the Turing test. Of course, if you ask it "What color is grass?" it might not know unless someone had been so kind as to tell it.

Of course, Lojban only addresses this problem as far as parsing the information goes. The logical processes of inference, deduction, etc., still need to be addressed. Perhaps Lojban could allow that to happen. -- NickBensema

----

Well, I would have it figure out whether you were asking, telling, or ordering it to do something, and it would have a long list of sentences at its disposal. It would then use its parsing and knowledge of gismu to figure out what to do about it. Knowing that would, of course, be a far cry from having a computer pass the TuringTest, depending on how you set up the test. My understanding was that Turing was vague, and deliberately so. If it had the option of not responding without attracting suspicion, it might work.
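To make that a little more concrete, and to tie it back to the ko-resolving idea above: here is a minimal sketch, in the same Java-ish spirit as the ticket example at the top of the page, of what "use its parsing and knowledge of gismu to figure out what to do" might amount to. Every name in it (KoSketch, Bridi, absorb, and so on) is invented for illustration, the three gismu used (dunda "give", pleji "pay", vecnu "sell") stand in for a real lexicon, and the parsing is assumed to have happened already; it is nowhere near a parser, let alone an AI.

 import java.util.*;

 // Hypothetical names throughout; this only illustrates the idea, not a real system.
 public class KoSketch {

     // One parsed statement (bridi): a gismu (relation word) plus its argument places.
     record Bridi(String gismu, List<String> places) {}

     private final List<Bridi> facts = new ArrayList<>();   // declarative knowledge
     private final Deque<Bridi> goals = new ArrayDeque<>(); // states to bring about

     // Absorb a statement: "ko" in any place marks it as imperative.
     void absorb(Bridi b) {
         if (b.places().contains("ko")) {
             goals.push(b);   // something the machine must make true
         } else {
             facts.add(b);    // something the machine merely "knows"
         }
     }

     // Crude gismu dispatch: look the relation word up and pretend to act on it.
     void run() {
         while (!goals.isEmpty()) {
             Bridi goal = goals.pop();
             switch (goal.gismu()) {
                 case "dunda" -> System.out.println("bring about a giving: " + goal.places());
                 case "pleji" -> System.out.println("bring about a payment: " + goal.places());
                 default      -> System.out.println("no handler for gismu: " + goal.gismu());
             }
         }
     }

     public static void main(String[] args) {
         KoSketch ai = new KoSketch();
         // "The cashier gives ko the tickets."   (dunda: x1 gives x2 to x3)
         ai.absorb(new Bridi("dunda", List.of("cashier", "tickets", "ko")));
         // "Ko pays the cashier fifty dollars."  (pleji: x1 pays amount x2 to x3)
         ai.absorb(new Bridi("pleji", List.of("ko", "fifty dollars", "cashier")));
         // "Scalpers sell tickets." (no "ko", so it just becomes a stored fact)
         ai.absorb(new Bridi("vecnu", List.of("scalpers", "tickets")));
         ai.run();
     }
 }

The only design point this illustrates is the one made above: a statement containing "ko" is a goal to bring about, anything else is a fact to remember, and the gismu is the hook the program dispatches on. All the hard parts (inference, planning, knowing what "cashier" refers to) are still missing.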
It could be undetectable on a LojbanLanguage wiki, but not in an instant message, where if you don't respond, people wonder. Combine that with an EnglishLanguage/LojbanLanguage translator, and it could already exist.

''Okay, you've caught me.''

----

Lojban seems to have two advantages for simplifying computer understanding: a simplified grammar, and a system of affixes that allows each word to essentially contain its own definition. The downside is that you have to learn a whole new vocabulary. I would think that it would be possible to capture the same benefits using a simplified English vocabulary with a formal grammar. I mean, considering the effort it takes to learn Lojban, you might be better off speaking Prolog. :) -- MichalWallace

----

Another possibility is that, rather than the CobolFallacy, or in addition to it, the idea that Lojban would enable AI fails due to the MythOfMetadata. Sure, Lojban allows a computer to pick out all the words and identify their parts of speech, but what beyond that? Sure, there's the idea of ko-resolving I described earlier, but there are all those words other than "ko" that the computer has to know what to do with, if anything. And it all seems kind of pointless when ZorkGame already understands imperative sentences in plain English. -- NickBensema

No, it does not, any more than a car "understands" when you "tell" it to turn right via the steering wheel. This is a deep, deep misunderstanding of language, albeit a very common one. I don't know how to correct such misunderstandings short of studying AI and linguistics. Take my word for it: almost any computational linguist would say that it does not "understand" anything. -- DougMerritt

''OK, so Nick used language loosely to convey an idea; I suspect he meant something like "ZorkGame reacts to imperative sentences in plain English in a manner that is both predictable and intended." Were you trying to convey a serious point about ambiguity in language?''

A serious point, yes. And I'm not unwitting about Loglan or Lojban (in response to a deleted comment). It's just that this involves a lot of complex issues that are unintuitive and misunderstood by essentially everyone except specialists (merely meaning people who have put serious time into unlearning their misconceptions, not "specialists" in the logically fallacious appeal-to-authority sense) in the relevant fields (linguistics, AI, etc.). Nick is correct that merely using Lojban isn't enough to enable strong AI, else we'd have had strong AI as soon as a Loglan/Lojban parser was implemented (roughly 1980 for Loglan, early 90s at latest for Lojban, skipping issues of degree of completion). Nick deeply misunderstands the issue in even bringing up the Zork parser, however, which was cool in one sense but just a small hack in another, and did not by any stretch of the imagination parse anything more than a toy subset of English. His comparison makes as much sense as pointing to C or Java parsers; it has very little to do with parsing full-fledged natural language. Loglan/Lojban makes some things easier, like picking out parts of speech, which is extraordinarily hard to do for English, despite the exaggerated claims one sometimes sees. That's not the hard part of strong AI, though; it's merely an incompletely solved research problem for English, whereas the heart of strong AI is a completely unsolved research problem.
-- DougMerritt

''I agree completely that there are many complex issues here, but I observe that people naturally anthropomorphise, even when they know that what they are saying is not literally true. I agree that Nick seems to have over-stated his case by bringing up Zork, and personally despair of machines ever understanding human languages.''

I agree that anthropomorphisation has value in general; it's just that it interferes with understanding cognitive-science issues when taken too literally, which, alas, is more typical than not. In particular, toy "English parsers" such as Zork, Cobol, SQL, and even the impressive TerryWinograd SHRDLU English parser all tend to give the impression that natural language parsing is easy and a solved problem -- none of which is true, by any stretch of the imagination, even though there are respects in which they are interesting in that regard. -- Doug

----

Lojban would help AI by making it easier to write a natural-language parser, allowing the designers to worry more about other things. It might be a problem, however, if it forces testers to worry about translations themselves.

----

AugustZeroFive

CategoryArtificialIntelligence CategoryLojban