Coincidentally, or perhaps not, the scientific process and natural selection share a similar algorithmic pattern, which resembles:

  form new candidates randomly (or by some other means)
  while not satisfied with existing candidates and the universe hasn't ended
    form some new candidates by mutating existing candidates
    form some new candidates by cross-mixing existing candidates (sex)
    form some new candidates randomly (or by some other means)
    evaluate each candidate per fitness metric to produce a fitness score
    remove bottom N% of candidates from the pool, per fitness score
  end while

(This is oversimplified in that, in practice, which existing candidates participate in mutation and sex is often probabilistically determined by their fitness score. For simplicity, we use a "blunt" cut-off point above. It still more or less leads to the same outcome, since the better of the surviving candidates are more likely to have offspring that survive the next generation; near-borderline candidates will typically have fewer passing offspring, being based on a marginally passing design. In science, "removing a candidate" means ignoring an unlikely hypothesis, which means resources are given to the more promising set.)

There are still some key differences. Science is guided more by "intelligent guessing" than natural selection is when producing new candidates; natural selection relies more on raw randomness for fresh stock. Further, in science, new observations (fitness tests) are often chosen based on the nature of the top candidate hypotheses: specific tests are devised or performed explicitly to distinguish between the top candidates. This is because scientists prefer to find a single top candidate, while evolution is "satisfied" with multiple elites. --top
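
Here is a minimal Python sketch of the generational loop above, using the "blunt" cut-off rather than fitness-proportional selection. The fitness metric, the mutate and cross_mix operators, and all parameters (pool_size, generations, cull_fraction, batch sizes) are illustrative assumptions, not part of the description above.

  import random

  def fitness(candidate):
      # Hypothetical fitness metric: higher is better; the "ideal" design is 42.
      return -abs(candidate - 42.0)

  def mutate(candidate):
      # Small random perturbation of an existing candidate.
      return candidate + random.gauss(0, 1)

  def cross_mix(a, b):
      # Cross-mixing ("sex"): blend two existing candidates.
      return (a + b) / 2.0

  def random_candidate():
      # Fresh stock drawn at random.
      return random.uniform(-100.0, 100.0)

  def evolve(pool_size=50, generations=200, cull_fraction=0.3):
      # Form new candidates randomly (or by some other means).
      pool = [random_candidate() for _ in range(pool_size)]
      # A fixed generation count stands in for "not satisfied / universe hasn't ended".
      for _ in range(generations):
          # Form some new candidates by mutating existing candidates.
          pool += [mutate(random.choice(pool)) for _ in range(10)]
          # Form some new candidates by cross-mixing existing candidates.
          pool += [cross_mix(*random.sample(pool, 2)) for _ in range(10)]
          # Form some new candidates randomly.
          pool += [random_candidate() for _ in range(5)]
          # Evaluate each candidate per the fitness metric, then apply the
          # "blunt" cut-off: remove the bottom N% of the pool by fitness score.
          pool.sort(key=fitness, reverse=True)
          keep = max(1, int(len(pool) * (1.0 - cull_fraction)))
          pool = pool[:keep]
      return pool[0]  # best surviving candidate

  print(evolve())  # should print a value near 42

The fitness-proportional variant mentioned in the parenthetical note would replace random.choice and random.sample with draws weighted by fitness score, rather than culling a fixed bottom fraction.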