The concept of a "random number" is a difficult one. Related: Chaitin, chaos theory, PseudoRandomNumberGenerator''''''s. Exact definitions that fit all purposes are difficult to come by, and intuition is misleading. Perhaps someone would care to fill this page out properly? There are already many references to random numbers here and elsewhere.

''There's not really such a thing as a random number in the real world. It's an idealization that can only truly exist in pure mathematics in the context of infinite sequences, despite the existence of random quantum processes in the real world.''

''The Chaitin/Kolmogorov complexity required to reproduce a sequence of numbers is more to the point.''

* But still far from encompassing the topic.

''The subjects mentioned so far on this page are vast and deep; a single page couldn't really cover them adequately. So the real question is: what's the motivation for this page, since no narrowly focused, answerable question is asked? Keeping in mind that WikiIsNotaDictionary.''

----

Moved from RandomNumbers:

''"Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin."'' -- JohnVonNeumann

Don't live in sin...

'''RandomNumber''''''s'''

Here are some sources for generating real random numbers (as opposed to using a pseudorandom algorithm).

* Flip a coin - 0 for tails, 1 for heads. Repeat as needed (unless you are Rosencrantz and Guildenstern, or you're using a BelgianEuroCoin).
* Buy some RolePlayingGame dice - you can get them in 4-sided, 6-sided, 8-sided, 10-sided, 12-sided, 20-sided... ''ad nauseam''. Roll and repeat as needed.
* Check out HotBits: http://www.fourmilab.ch/hotbits/
* Check out RandomDotOrg: http://www.random.org/

But note that even real-world random numbers like these '''are always biased''' and must have corrections applied to them for any serious (e.g. strong cryptographic) application. For example, you can correct a biased coin by tracking its state transitions rather than its states. To give a simple but powerful example, given a stream of 0s and 1s that are independent but biased, you can generate a sequence of 0s and 1s that are independent and unbiased as follows:

* pair them up
* discard pairs that are 00 or 11
* discard the second bit in each remaining pair

What remains is unbiased.
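''The three-step recipe above is the classic von Neumann debiasing trick (fitting, given the quote earlier on this page). A minimal Python sketch, with an illustrative function name:''

 import random

 def debias(bits):
     """Von Neumann debiasing: pair the bits up, drop the 00 and 11 pairs,
     and keep only the first bit of each remaining (01 or 10) pair."""
     return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

 # A coin that comes up heads (1) about 70% of the time...
 biased = [1 if random.random() < 0.7 else 0 for _ in range(100000)]
 unbiased = debias(biased)
 print(sum(biased) / len(biased))       # roughly 0.7
 print(sum(unbiased) / len(unbiased))   # roughly 0.5

''The price is throughput: a fraction p*p + (1-p)*(1-p) of the pairs is discarded, so a heavily biased source yields few output bits, but the bits that survive are unbiased as long as the inputs really are independent.''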
----

The underlying source may be truly random, if you're lucky (it is with HotBits, for example), but (1) the distribution isn't what you want, because there are no true white-noise generators in nature, and (2) bias is introduced in the sampling process (e.g. by the sampling aperture, amongst many other possibilities). Always.

----

'''PseudoRandom Numbers'''

There are also algorithms which are expected to generate PseudoRandom numbers indistinguishable by any efficient algorithm from real RandomNumbers.

This may be correct for the individual numbers, but is wrong (I think) for distributions of the numbers. That is, the distribution of an infinite number of truly random numbers within a range {0, 1} or {0, N} is flat, but the distribution of an infinite number of pseudorandom numbers is not flat (some numbers are ''never'' generated [''How come?'' - Because PRNGs have some periodicity to them; even if that periodicity is very large, eventually the same sequence of numbers is repeated, with no new numbers appearing]). -- AndyPiercel

** This mixes up several issues in an unnecessarily confusing way. Take some finite-state PRNG modulo a limit N. It can have no more than N-1 distinct values, regardless of any details, by simple number theory, indeed by simple logic, and in general will reach N-1 only for prime N.
** For non-prime N (actually, for prime N as well), the number of possible distinct values will be Euler's totient function phi(N), or more specifically, phi(PRNG(N)).
** The number of distinct values is, by Lagrange's Theorem, the product (actually the LCM) of the sizes of the subgroups modulo N; see the Carmichael lambda function.
** Generally the number of values of any f(N) modulo N will (again by Lagrange's Theorem) equal the product of the size of each unique subgroup times the number of subgroups.
** Each distinct value appears exactly once in each subgroup modulo N.
** That distribution is as flat as it gets (the first sketch below works through a small example).
** The distribution of a random source or PRNG is not even close to being "(pseudo)random" enough for many applications, but is exactly what is needed for other applications. In other words, the definition of "random" depends on the application; different statistical tests are necessary for different goals, and in general there is no perfect test of randomness - it depends on the current goals.

* ''The distribution of several algorithms is flat, but only within the limits of a finite computer. The "pseudo" is because any real computer has finite limits, and the sequence will by necessity repeat when those limits are reached. Wolfram showed, for example, that building a very wide 1-d CA, programming it with one of the class-4 rules, and then forming a bit sequence from the center bit of the CA for each generation results in a stream of bits that satisfies randomness tests -- at least until a repeat happens, based on the limits of the machine. He also showed that the period of that repetition is a billion billion times the age of the universe, even on a fast machine. The sequence is repeatable, by seeding the generator with a pattern at the first generation. (The second sketch below is a toy version of this construction.) I think the point here is that randomness does NOT require a "random source". It emerges from looking at the results of an iterated deterministic process in quite specific ways.''

''I added '''efficient algorithm''' above. Does this answer your question?''

See also http://www.serve.net/buz/Notes.1st.year/HTML/C6/rand.002.html ''(BrokenLink - 20040415; archived at http://web.archive.org/web/20030128213033/http://www.serve.net/buz/Notes.1st.year/HTML/C6/rand.002.html)''
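''One simple case that matches the arithmetic above (the "first sketch" referred to there): a purely multiplicative generator x -> g*x mod N, with g and the seed coprime to N, visits exactly ord(g) distinct values - the multiplicative order of g, which divides the Carmichael lambda of N, which in turn divides Euler's phi(N). Only a primitive root modulo a prime N reaches all N-1 nonzero values. A small Python sketch, with illustrative names:''

 def orbit_size(g, n, seed=1):
     """Count the distinct values of the multiplicative generator x -> g*x mod n.
     Assumes gcd(g, n) == gcd(seed, n) == 1, so the sequence is purely cyclic."""
     seen, x = set(), seed % n
     while x not in seen:
         seen.add(x)
         x = (x * g) % n
     return len(seen)

 # Prime modulus, g a primitive root: all N-1 nonzero values are visited.
 print(orbit_size(3, 7))    # 6 == 7 - 1
 # Composite modulus: only ord(g) values, a divisor of lambda(16) == 4,
 # even though phi(16) == 8 -- most residues are never produced.
 print(orbit_size(3, 16))   # 4

''Within one cycle each value appears exactly once, so over the values that do occur the distribution is perfectly flat, which is the "as flat as it gets" point above.''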
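''And a toy version of the cellular-automaton construction described above (the "second sketch"): the rule usually cited for this trick is rule 30, and a real generator would use a far wider row so that wrap-around never reaches the centre column. Sizes and names here are illustrative only:''

 def ca_centre_bits(width=257, steps=64, rule=30):
     """Run a 1-d binary cellular automaton on a ring of cells, starting from
     a single 1 in the middle, and return the centre cell of each generation."""
     cells = [0] * width
     cells[width // 2] = 1
     table = [(rule >> i) & 1 for i in range(8)]   # Wolfram rule number -> lookup table
     bits = []
     for _ in range(steps):
         bits.append(cells[width // 2])
         cells = [table[(cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % width]]
                  for i in range(width)]
     return bits

 print("".join(str(b) for b in ca_centre_bits()))

''The stream is entirely determined by the starting row, so re-running with the same seed pattern reproduces it exactly, which is the repeatability point made above.''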
----

One time, I designed that old classic game of Guess the Number on my TI-82 in high school. I had the range be 0-100000 or so. I had a friend guess the correct number on the first try, every time. To this day, I don't know how he did it. We speculated that he was just inputting the variable back into the guess, but we tried it and couldn't get it to work. Any ideas?

''Was each "guess" made '''after''' you had seen the number? If so, your friend could probably see the calculator display reflected in your spectacle lens or in the window behind you, or was having the number signed to him by someone else who could see the calculator. A schoolfriend and I once both guessed 2.506 as a "random number", but "every time" suggests a simple method.''

Nope, I hadn't seen the number first. I wish I had written down everything back then... I recall trying to figure out how he was doing it... but I don't recall sitting over his shoulder while he typed in the numbers. I may or may not have. I was gullible back then.

''From your account, all that's relevant (unless the program was seriously flawed) is whether anyone could see the number produced by the calculator before the prediction of that same number occurred. One trick is simply to prevent you from verifying any predictions until a whole group has been made. By that time, it may be possible to make you think the numbers were announced in advance, although they were actually written down afterwards. To accomplish this, the guesser merely pretends to write his first prediction, then announces it as correct! He is then one prediction short, but slips in the final number when you aren't looking! It helps if the guesser has an accomplice who pretends to confirm all the predictions.''

Perhaps your friend knew how to reset the seed of the random number generator. Then he memorized the first few "random" numbers that were generated. Imagine how easy that "Simon" 4-color memorization game would be if you could reset the seed. (The sketch at the bottom of the page shows how little that takes.)

----

See also UnitTestingRandomness
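''A footnote to the Guess-the-Number story above: if the guesser could reset the generator's seed, predicting every number is trivial. A minimal Python sketch of the idea, with Python's random module standing in for the calculator's generator:''

 import random

 # The game seeds its generator and picks a number.
 random.seed(1234)
 secret = random.randint(0, 100000)

 # The guesser resets the generator to the same seed and replays the sequence.
 random.seed(1234)
 guess = random.randint(0, 100000)

 print(secret == guess)   # True, every time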