This is simply the proposition that, at least up to some (hard-to-guess-at) point, no software will be considered "ArtificialIntelligence" once it has been written and works. For example, DeepBlue "merely" used BruteForce calculation; this does not fit with people's image of what AI is supposed to be, so they don't consider it such. This is true despite the fact that game-playing algorithms are still described in AI textbooks.

''I don't think I would call DeepBlue "brute force". As far as we know, the human mind also uses forms of brute force. It is true that DeepBlue probably went deeper in the search tree and used fewer pruning heuristics than most human minds, but it is a matter of degree in both. It is normal in humans to make up for a deficit in one area with improvement in another, such as the blind using more sound cues than the sighted. Similarly, DeepBlue goes deeper because it is not as good at pruning heuristics as humans. But it still uses heuristics.''

For another example, OpticalCharacterRecognition exists and works (sort of...); systems implementing it often incorporate ArtificialNeuralNetwork''''''s, which are an AI technique, but OCR is popularly considered trivial. An optimistic way of looking at this would be to suppose that researchers are very forward-looking and are less concerned with classifying what exists than with deciding what to invent.

This also ties into the debate(s) about StrongArtificialIntelligence and WeakArtificialIntelligence. If a task which is generally associated with humans can be performed reliably and reproducibly by a program, that program could someday become a component of a strong AI; people who do not believe in strong AI might therefore disparage it (not to ruffle anyone's feathers). -- DanielKnapp (but feel free to delete this signature if you add anything)

[I find that anything that computers end up being able to do is often reclassified as "not AI". Chess, as you said above, is a good example - it was the holy grail of AI (at least as popularized) for a long time. Learning is another one - "learning" was considered a human trait, but now that we have trainable computers, and even software that can learn emergent behavior from basic concepts (like navigation), it's not "AI" anymore]

''DeepBlue does not fit with my image of "intelligence" because it is hardcoded for a particular problem (chess). It excels within its domain but offers little insight beyond it. The same goes for computer vision, speech recognition, expert systems, and the rest: impressive work, but how do you generalize them? Even systems that learn or develop emergent behaviours are still bound within a limited domain (though it is promising when the shape of the domain cannot be entirely predicted in advance, or when techniques are effective across seemingly unrelated domains). Whereas intelligence in a deeper sense means being able to solve something new, something that wasn't already in your repertoire. You can't get such adaptability just by building up an arsenal of special cases: the arsenal is finite, the problem supply infinite. Other people may be moving the goalposts, but personally I think it was naive to put them anywhere short of actual adaptability in the first place. I do not think anything less than Strong AI deserves the 'intelligent' adjective, except superficially.''

* so a human that cannot solve all possible solvable problems set before them is not intelligent?
Deep Blue is an idiot-savant for chess - not necessarily intelligent, but most of the way there for that domain. And most of the creatures we consider intelligent have issues solving many of the problems they are faced with.

: ''I didn't mean to imply that an intelligence should be able to solve all solvable problems. That would be a tall order! I just meant that some of the problems an intelligence can solve should be unexpected, i.e. problems outside of the domain the designer had in mind. That's why emergent behaviours are promising (and perhaps I was too dismissive of them above; by my own definition I must admit they do show some intelligence).''

: ''So are humans intelligent by this definition? Humans have solved many new kinds of problems in the past century, things like general relativity and quantum mechanics that could not specifically have been hardwired in by evolution. I think evolution long ago produced a more general capacity to reason as an adaptation for solving very different problems (hunting, gathering, social). This capacity turned out to be general enough to do unpredictable, seemingly unrelated things, and that makes it intelligence.''

** but those solutions were produced by different individuals at different times - possibly by individuals that would have issues solving other problems (Paul Erdos is famous for a number of quirks that would suggest he would not solve social problems well) - so perhaps AlbertEinstein is only slightly more general than DeepBlue?

: ''There's an important difference. DeepBlue was specifically designed to play chess; AlbertEinstein was not specifically designed to discover relativity. (OK, evolution doesn't literally design things, but you know what I mean, right?) Even if their accomplishments are of similar scope, Einstein's work has the distinction of being unpredictable with respect to the nature of his mental hardware.''

*** Certainly there is a difference in "intent" for a computer, because we are designing it, as opposed to organic intelligence, which is serendipitous. But as recent events show (IBM's Watson on Jeopardy, 2/14/2011), it's not inconceivable to get from a very specific task to a more general one, with sometimes oddly unexpected results. From there, it may not be far to true AI. To be fair, we have yet to show a good example of a program learning from its experiences.

----

You have a point, but it's not what you put above :). I would say that ''(a/the ?)'' primary goal of AI is in fact an "Artificial Intelligence". One day we will have software that another human will recognize as an intelligent being. Something that (at the very least) passes the TuringTest.

[Real humans consistently fail the TuringTest in real-life tests. It's useful as terminology, but as an actual "test" that a machine would "pass", it fails.]

This raises a few questions:
* What exactly is intelligence?
* Are all humans by definition intelligent?
* Are any humans in fact intelligent?

But on a more serious note... :). Perhaps the ultimate expression of a working AI would be a robot capable of interpreting the ThreeLawsOfRobotics. Along the way there are a number of hard sub-problems. Because those problems were "hard" and originally far from easy practical solution, they came under the category of AI. Those that come to mind:
* Vision
* OCR
* Speech generation
* Speech comprehension
* Complex goal oriented behaviour (chess - see the search sketch below)
* Do what I mean, not what I say
* Write a popular series of novels

(Doubtless there are other sub-problems worth mentioning.)
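''As a toy illustration of the chess sub-problem and of the "depth versus pruning heuristics" point about DeepBlue near the top of the page, here is a minimal sketch of depth-limited game-tree search with alpha-beta pruning. This is not DeepBlue's actual code; the game, function names, and evaluation below are made up for illustration. The point it shows: the more safely a program can prune, the deeper it can look for the same amount of work.''

```python
# A minimal sketch, not DeepBlue's actual algorithm: depth-limited
# game-tree search (minimax) with alpha-beta pruning. The toy game,
# function names, and evaluation are all made up for illustration.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Best score reachable from `state` looking `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)          # heuristic leaf evaluation
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:           # prune: the opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:           # prune
                break
        return best

# Toy game: players alternately add 1 or 2 to a running total (stopping at 10);
# the "evaluation" is just the total, so the maximizer wants it high and the
# minimizer wants it low within the search horizon.
if __name__ == "__main__":
    moves = lambda total: [1, 2] if total < 10 else []
    apply_move = lambda total, m: total + m
    evaluate = lambda total: total
    print(alphabeta(0, 4, float("-inf"), float("inf"), True,
                    moves, apply_move, evaluate))   # prints 6
```

''A chess program plugs a board representation, move generator, and evaluation heuristic into a search like this; how well it plays is largely a question of how good those heuristics are - which is the "matter of degree" argument made above.''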
Some of those problems are now solved (to varying degrees), and perhaps are no longer really a part of AI research. It is important to distinguish between the main goal - a real working ArtificialIntelligence - and the sub-problems that need to be solved along the way.

----

Someone once defined AI as that which we haven't done yet. They were being humorous, but it is true that nothing we've done so far is good enough. And ''this is in the judgement of the people doing it''. Those who are actually trying to do these things often have the best understanding of just how hard it is.

You're too generous. With AI, only those who are actually trying to do these things have ANY understanding of what they even are, much less how difficult they are. This opinion of mine is based on many conversations with programmers, system administrators (our industry's version of educated laymen), and non-computer-types. -- DanielKnapp

Artificial Intelligence is simply anything we don't know how to do yet. Definition. -- JonGrover

----

The subject of this page is more about the human tendency for MovingGoalPosts when it comes to things like DefinitionOfIntelligence, DefinitionOfLife, etc. Having good definitions is the best defense against MovingGoalPosts, but definitions of intelligence and life and whatnot are notoriously hard to get a consensus on. -- RobHarwood

----

Suggestion: Call what most people term Artificial Intelligence "Automated Information" - more descriptive of what is implemented under the name!

''Why bother? Just discard the term as meaningless MIT advertising hype and get on with solving problems.''

The term is used by people other than MIT advertisers, and giving it a more appropriate name removes the implication, first, that it is "artificial", and second, that it is "intelligence". Intelligence is real and derived, not extracted and compiled.

----

A columnist for ''Byte'' once quipped that he believed "''All'' intelligence is artificial."

'''Q:''' Why does the military invest so much in Artificial Intelligence?

'''A:''' Because it has so little of the real thing.

----

I don't think the TuringTest is a good definition of AI. An actual AI may be "smart" by most standards but still fail the TuringTest, because it might not know or care about human issues like TV sitcoms or romance, for example. It might have a distinct thinking and output pattern that gives away its identity, yet be able to answer brain puzzles as well as any human.

''This is very true - not all intelligent creatures could necessarily imitate a human perfectly, so the Turing test is a fairly arbitrary standard for intelligence. Turing knew this. The Turing test is more a philosophical standard (if it acts exactly like a human, how can you say it isn't intelligent?) than a practical test. One could easily imagine an alien race far smarter than humans which would still have trouble imitating us over a teletype.''

Anyone who has seen typical chatroom talk will agree that being smart and being human are not necessarily related. (Except maybe there is a "social IQ", but that is bone-hard to measure. Otherwise, nerds would have more dates. :-)

----

When we are observing artificial intelligence we are observing our own perceptions of AI. Therefore, all AI does is confirm our ability to perceive, and our conscious awareness of ourselves and exteriority. -- Simon "crescent"
----

Rather than the TuringTest, perhaps the gold standard for AI should be the StrossTest (based on some of the stuff CharlesStross has written) - when it takes over the world/universe/very fabric of reality, then ''that's'' AI. Note that this is also known as the SkynetTest.

----

'''Common-Man Standard'''

When it can do everything a B-plus-grade '''maid''' can do (except for side hanky-panky), then AI is "obtained". There. Done. --top

http://www.jeffbots.com/rosie4.jpg

''That should be '''including''' side hanky-panky, dude. Including.''

Boink a Roomba if you want it that bad. Put a wig and lipstick on it.

----

If winning at chess is not sufficient for "true AI", how about winning at Jeopardy? Voice recognition, language understanding in not only syntactic but semantic terms...? http://www.pcmag.com/article2/0,2817,2375791,00.asp -- MikeSmith

''Guess I was wrong about the voice recognition, but still impressive...''

----

Compare the belief that AllIntelligenceIsArtificial.

----

FebruaryEleven

CategoryArtificialIntelligence