Latin for "appeal to ignorance". A subtle but all-important fallacy. An ''ad ignorantiam'' argument is a positive claim advanced because it cannot or has not been disproven. For example, "You can't prove that Stonehenge wasn't built by aliens, so that's just as good as your theory that it was built by druids." A positive claim is one that asserts real, material existence, like the existence of a certain physical object, the occurrence of a certain deed, etc. (This is not about math.)

The best-known practical example of the importance of this fallacy is the principle, in civilized courts, that a defendant need not prove his innocence in order to be acquitted. It would be ''ad ignorantiam'' to interpret a lack of proof that the defendant ''didn't'' do something as reason to think he did.

----

Surely, equally, it is ''ad ignorantiam'' to interpret a lack of proof that the defendant ''did'' do something as reason to think that he ''didn't'', yes? Your example is illustrative to a point, but perhaps the following person's comment is a more accurate explanation...ultimately your example confuses the issue with BurdenOfProof and our legal assertion of "innocent until proven guilty". --DanKane

Nope, it isn't equally ''ad ignorantiam'' to take a lack of evidence as sufficient to judge "it didn't happen". The ''ad ignorantiam'' is indeed a fallacy of BurdenOfProof, and the legal principle is a practical application of it. Demanding evidence to prove non-existence is the ''ad ignorantiam'' in a nutshell; it directly violates the basic BurdenOfProof principle that only positive claims need be justified, and only by positive evidence. This doesn't mean that new evidence can't change your mind, of course; quite the contrary. Note that this principle is relevant only when you need to make a decision within a well-defined framework, such as the legal system's decision about whether to imprison someone.
Another way to put this: some people say that if you don't have evidence as to whether something exists or not, you should give it a 50% probability of being true, until evidence comes along pushing you either way. But that is ''ad ignorantiam'', since it takes a lack of evidence as a basis for 50% probability. BurdenOfProof says you should give it a 0% probability, and that no evidence can possibly prove its non-existence. If you want to see why this is so, just try giving non-zero probability to any and all assertions of existence that are outside your present knowledge and see what implications that would have for how you should respond. Pascal's Wager would become valid reasoning, proving that you should obey ''all'' religious ideas (even ones like "God will punish all non-matchbook collectors in the afterlife"). There would be no way to make intelligent decisions without this principle. We all ordinarily follow it implicitly, except in matters where sophistry tends to reign, like religion and character assassination.

Note that the non-obviousness of the BurdenOfProof principle makes the ''ad ignorantiam'' devilishly effective sophistry. You can really whip up an uneducated crowd with it, or use it against competitors as part of a sales strategy, where it's known as FUD ("fear, uncertainty, and doubt").

BTW, I wrote both the opening text and the next section (though it may have been edited a bit since then). I intended it all to make the same point. -- BenKovitz

----

Nicely clarified, thank you. I seem to have arrived at a rather abstracted understanding of ''ad ignorantiam'' (AI, for brevity ;-). If I understand you correctly, the lack of proof (in an AI argument) is only weighted in one direction, that being towards the ''negative''...I ''cannot'' prove that God ''doesn't'' exist, therefore he does...as a classic example. Maybe there is an alternative expression for my (mis)understanding.
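The incoherence of the "50% for every unknown" rule can be sketched with a few lines of arithmetic. This is only an illustration; the claims below are invented stand-ins for the matchbook-collector example above:

```python
# Why "give every unknown existence claim a 50% probability" is incoherent.
# The claims below are invented; the point is that mutually exclusive
# claims cannot each get 50%.
hobbies = ("matchbook", "stamp", "coin", "bottle-cap")
claims = [f"exactly one hobby earns salvation, and it is {h} collecting"
          for h in hobbies]  # at most one of these can be true

# The ad ignorantiam move: no evidence either way, so assign each 50%.
probabilities = {claim: 0.5 for claim in claims}

total = sum(probabilities.values())
print(total)         # 2.0
print(total <= 1.0)  # False: mutually exclusive claims must sum to <= 1
```

And nothing stops you from inventing more such claims, so the "total probability" can be driven as high as you like: the rule breaks as soon as ignorance spans more than two alternatives.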
Regardless of the ongoing implications (you cite Pascal's Wager), I meant to highlight an underlying flaw in logical mechanics (which, you say, I have incorrectly named AI). You are perfectly correct to state that this can be used to great effect against the ignorant, but I was more concerned with the actual intellectual misstep than its ''application''. Thus I interpreted your second section in a more abstract light. ''...lack of evidence as if it were evidence...'' could equally be applied to an innocent man or a guilty one; however, the ''ignorance'' still lies in the lack of evidence one way or another.

You appear to define AI with a certain ''polarity'' of proof, which is interesting...that the lack of evidence to the negative drives the conclusion to the positive...falsehood to truth, inaction to action, absence to presence...but what if the argument is not so binary or linear? Consider "Which fruit is healthier, apples or oranges?". Both sides could resort to an AI argument to discredit the other fruit. This logical flaw creates an impasse that is rife throughout the scientific world, where competing theories are indeed at a standoff until proof is presented that swings the argument. There we find a logical framework built around the "assumed zero probability" premise. --DanKane

Indeed, the phrase "a lack of evidence as if it were evidence" now does seem misleading. I put that in mostly because it's a pretty commonly used phrase from lots of places that explain the AI. Maybe it's not such a good idea. (But now I want to keep it because it's led to such an interesting conversation.)

About "Which fruit is healthier, apples or oranges?": the AI is not possible to apply to either alternative, since neither alternative is a claim or denial of existence.
"Apples cure warts" would be a positive claim, for which one would reasonably require evidence; "apples do not cure warts" would be a negative claim, which is the default belief (the "presumption"), the one you've got for probability estimation until you get some positive evidence. "Apples are healthier than oranges" would require you to stack up the health-related effects of apples and oranges, define some measure of those effects (it could be a ''very'' rough measure), and just compare the stacks. The AI would be an intellectual misstep in getting an effect into one stack or the other, like "Well, no one's proven that apples ''don't'' cause lung cancer, so we have to put it on the apple stack weighted with 50% probability."

The essential mistake of the AI is that it jumps out of the realm within which reasoning is possible: the realm of existence. We come to know of the existence of things via their effects, so we logically need to be affected in some way by something before we can add it to our catalog of things we think exist. Non-existent things, though, leave no effects. They cannot be discovered because they, well, don't exist. For example, you cannot discover a non-existent burglar by checking the mud outside, because non-existent burglars leave no footprints. The AI boils down to applying the standard for positive claims to negative claims: demanding to see non-existent footprints before allowing the non-existent burglar into your catalog of things you think don't exist. It's only within the realm of existence that we reason. We don't keep a catalog of non-existent things; it would be a vast yawning void.

I think there are all sorts of subtleties, though. For example, it would be nonsense to say, "I will regard this program as having no bugs until I see positive evidence of the existence of a bug." But neither is it AI to say, "We can't be confident that this program really works until we've put it through some tests."
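The "stack up the effects and compare" procedure above can be sketched in a few lines. Every effect name and weight here is made up purely for illustration; only the shape of the reasoning matters:

```python
# A minimal sketch of "stack up the effects and compare the stacks".
# All effects and weights below are invented for illustration.
apple_effects = {"fiber": 2.0, "vitamin C": 1.0}
orange_effects = {"vitamin C": 3.0, "folate": 1.0}

def stack(effects):
    """Sum the (rough) measures of effects for which there is positive evidence."""
    return sum(effects.values())

# The legitimate comparison: only evidenced effects go on a stack.
print(stack(apple_effects) > stack(orange_effects))

# The ad ignorantiam misstep: adding an effect to a stack because its
# absence hasn't been proven.
apple_effects["causes lung cancer"] = -10.0 * 0.5  # "no one's proven apples don't!"
```

The last line is the mistake the paragraph describes: an unevidenced effect has been smuggled onto the stack at 50% weight, and by that rule either side could bury the other under invented liabilities.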
The trickiness starts to dissolve when we understand that claiming "no bugs" is actually a positive claim: that the program ''does'' behave according to certain rules. A program is actually just a bunch of switch-settings in a machine, so really the claim is "when the switches are set like this, the machine will behave like that." That is not a claim of existence. It's a deductively provable claim, distinguishing between non-material alternatives, just like claims of existence in mathematics.

BTW, as I currently understand the situation in science, how the AI applies is even more tricky. Without going into detail, a ''theory'' like Newtonian mechanics or Einsteinian relativity is "justified" only on the basis that it seems like a good guess and it hasn't broken yet. A theory is the very medium within which we reason, so the AI applies to scientific theories very differently. There is no "probability" that relativity is true, just as there is no probability that the accused man is guilty, that there is or is not a God, etc. Assigning probabilities to these things would involve comparing them to alternatives outside the spaces within which we make inferences and comparisons. Science is tricky, then, because it ''is'' the attempt to create and improve those very spaces. -- BenKovitz

----

Righto...I think I'm finally on your page now ;-) Indeed it ''would'' be nonsensical (mathematically impossible?) to assign probability to such scientific theories, as the alternative "theory-space" (of both the reasonable and the utterly gibberish) is seemingly infinite...thus we work within constraints bounded by our own rational understandings of the real world...which you correctly state are what science continually strives to refine.
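The "no bugs" point above can be made concrete: "no bugs" asserts that the program matches its rule for ''every'' input, so a few passing tests don't establish it, while checking the whole (finite) domain of interest is a deduction, not an ''ad ignorantiam''. The function and its bug below are invented for illustration:

```python
def is_leap_year(y):
    return y % 4 == 0  # deliberately buggy: ignores the century rules

# A few spot checks pass, which proves nothing about the absence of bugs:
assert is_leap_year(2024)
assert not is_leap_year(2023)

# The rule the program is claimed to follow (the Gregorian leap-year rule):
def spec(y):
    return (y % 4 == 0 and y % 100 != 0) or y % 400 == 0

# Exhaustively checking every year in the domain of interest is a
# deduction about switch-settings, not an appeal to ignorance:
counterexamples = [y for y in range(1, 3001) if is_leap_year(y) != spec(y)]
print(counterexamples[:3])  # [100, 200, 300] -- the bug made visible
```

So "we haven't seen a bug yet" is a lack of evidence, while "we checked every case against the rule" is positive (indeed deductive) evidence; the two are as different as the paragraph says.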
Would you now consider me correct in understanding that AI takes this abstract logical misstep (lack of evidence as evidence) and applies it to situations where you have substantive, rather than comparative, competing claim(s) that attempt to move the solution into this infinite alternative theory-space? What a wonderful learning experience this Wiki stuff is! ''I'm a newbie, if that isn't glaringly obvious'' ;-) Highly intellectually stimulating! --DanKane

----

The name comes from the fact that such arguments treat a lack of evidence as if it were evidence--ignorance as if it were knowledge. New, positive facts advance your knowledge. The fallacy treats an absence of facts as if it were itself a relevant, positive fact. Properly, a lack of evidence leaves you in the null state of knowledge, which is equivalent to disbelieving in the existence of the thing in question.

----

''Properly, a lack of evidence leaves you in the null state of knowledge, which is equivalent to disbelieving in the existence of the thing in question.''

This seems like an entirely Western point of view to me, which doesn't handle the null state of knowledge very well (it defaults to negative, evidently). Properly, it is equivalent to neither believing nor disbelieving in said existence. In other words: MuAnswer. :>

----

Not to be confused with appealing to the limits of someone's imagination, as in, "I can't imagine how the eye could have evolved naturally, therefore it couldn't have."

----

See FallaciousArgument

----

CategoryCommunication