The title says it all, really. No one has ever produced any real, hard, independently verified, universally accepted, objective evidence that Python, or indeed any other wussy, so-called "high-level" language, is better than good old, concrete, easy-to-understand-every-instruction MachineCode. HowCanSomethingBeSuperGreatWithoutProducingExternalEvidence? Clearly we should all program by setting toggle switches and hitting the "Load" button. Watch the lights blink.

----

The above seems very much tongue-in-cheek, but it is worth noting that while there is also no universally accepted definition of "better", there are some well-reasoned ones. Portability and code migration across machines and (more importantly) operating systems would be first on my list of objective reasons that Python is better than machine code. I could probably write a machine-code interpreter (a machine code is essentially a fairly standardized byte-code, even if it varies somewhat across CPUs in the same line), but figuring out how to port all the interrupt instructions and such from one OS to another is a task with far less standardization.
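As a small aside: here is a minimal sketch of the sort of interpreter the above paragraph imagines - a toy fetch-decode-execute loop over an invented three-instruction byte-code. The opcodes and encoding are made up purely for illustration (they belong to no real CPU); the point is that the CPU-emulation half of the job is mechanical, while nothing OS-specific (interrupts, system calls) appears at all, and that is the half with far less standardization.

 # A toy byte-code interpreter: fetch-decode-execute over an invented
 # three-instruction stack machine (PUSH, ADD, PRINT). The encoding is
 # made up for illustration. Nothing OS-specific (interrupts, syscalls)
 # appears here - that is the part the paragraph above says is hard to port.
 PUSH, ADD, PRINT = 0x01, 0x02, 0x03

 def run(program):
     stack, pc = [], 0
     while pc < len(program):
         op = program[pc]
         pc += 1
         if op == PUSH:                 # next byte is an immediate operand
             stack.append(program[pc])
             pc += 1
         elif op == ADD:                # pop two values, push their sum
             b, a = stack.pop(), stack.pop()
             stack.append(a + b)
         elif op == PRINT:              # emulate "output" without any real OS call
             print(stack.pop())
         else:
             raise ValueError("unknown opcode %#x" % op)

 run([PUSH, 2, PUSH, 3, ADD, PRINT])    # prints 5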
''You say it. There are some well-reasoned ones. Or rather: there are infinitely many. One for every individual. "Better" for programming languages is a weighted (and that's the problem) combination of lots of aspects, among them the portability cited above, but also readability (split that for different audiences), speed, compactness, abstraction facilities, documentation, theory, proofs, whatnot. There is NoUniqueBetter. And thus one can only AgreeToDisagree on these LanguagePissingMatches.''

Not really... there are quite a few poorly-reasoned and sometimes even inconsistent sets of 'better' for a variety of individuals, and there are even more individuals who have never given it much thought. Besides, even if every individual did have a well-reasoned understanding of 'better' in the context of programming languages, that would only mean a little under seven billion of them, which is somewhat less than 'infinitely many'. It is quite possible and realistic to come up with a set of universals that can reasonably be considered 'better' in the context of programming. I.e. if X is good, then the ability to demonstrate X is good and the ability to guarantee X is also good. That's a principle. Alongside it, one might say that performance, correctness, security, etc. are good. Relating 'good' to 'better', one can adopt a reasonable axiom: having a 'good' property is 'better' than not having that same 'good' property - which won't help much when trading one 'good' for another, but is clearly useful for other purposes. Another reasonable axiom is MurphysLaw - an assumption that any way people can create an error, people will create an error. Accepting MurphysLaw actually leads to conclusions in support of YagNi and RefactorMercilessly, since one way errors can be introduced is in the creation of unnecessary features and the maintenance of duplicated ones, and so allows for theorems that refactoring is good and unnecessary features are bad. In addition, one can reasonably accept as an axiom that typographical, logic, and design errors are going to occur for every programmer, because nobody has ever been observed to completely avoid making them. This would be a major mark against WriteOnlyLanguage''''''s (i.e. languages that aren't easy to read, comprehend, and thereby maintain), because mistakes in WriteOnlyLanguage''''''s are more difficult to locate; it would also favor CodeChangeImpactAnalysis and CouplingAndCohesion as measures of 'better' (it is 'better' if code and behavior are easy to change without breaking other components). Another reasonable axiom is: costs are 'bad', and the absence of a 'bad' property is better than its presence. That one can be followed by: CPU time, memory, and disk space are all costs. So are programmer time, tester time, and user time. A reasonable assumption might be: assume an order-of-magnitude increase in the cost of fixing software between testing and deployment, as this corresponds to a number of real-world measurements presented by the CMMI group. There are a great many more reasonable assumptions one can and should just accept as true for the sake of arguing what can reasonably, and universally, be considered better vs. worse.

''On the other hand, refactoring mercilessly can introduce errors into an application or break an existing codebase, because due to Murphy's law someone will refactor the code the wrong way instead of the right way. Even simple spelling mistakes, especially in dynamically and weakly typed languages, can cause an application to go very wrong. And due to Murphy's law, if someone is going to refactor code, they are going to make many mistakes while doing so. Sometimes subtle but very bad mistakes.''

True, to a degree. MurphysLaw is statistical, about many hands in the pot making every possible error. Individuals don't make every possible error, but individuals do make typographical, logic, and design errors as a fact of life (the above amended to reflect this). That would be among the reasons that programming disciplines which promote refactoring also promote automated regression tests such as unit tests and programmer tests (though demonstrated correctness is good in and of itself). It would also be among the reasons that static typing is liked by many (since it can locate various classes of logic errors).
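To make both points above concrete, here is a small invented Python fragment (the Account class and its test are hypothetical, chosen only for illustration): a one-letter spelling mistake in a dynamically typed language fails silently at runtime, and an automated regression test of the kind mentioned above is what exposes it.

 import unittest

 class Account:
     def __init__(self):
         self.balance = 0

     def deposit(self, amount):
         # Typo: assigning to a misspelled attribute creates a *new*
         # attribute instead of raising an error, so the deposit is
         # silently lost - the kind of mistake MurphysLaw predicts.
         self.balence = self.balance + amount

 class AccountTest(unittest.TestCase):
     def test_deposit(self):
         acct = Account()
         acct.deposit(10)
         self.assertEqual(acct.balance, 10)   # fails, exposing the typo

 if __name__ == "__main__":
     unittest.main()

In a statically typed language the misspelled field would typically be rejected at compile time, which is the class of error the reply above credits static typing with catching.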
----

Wow, a topic about "objective" and "better", yet I have nothing to do with it. Feels odd. --top

You failed to infer that this was an indirect and tongue-in-cheek response to your usual claims. Saying you had "nothing to do with it" would be in error.

I didn't realize it was new. Anyhow, it makes an interesting question, similar to the Goto question: do we tend to pick techniques/languages because they fit our minds or because they are universally/objectively better? I suspect that CodeChangeImpactAnalysis would favor Python, but I cannot present hard evidence right now. --top

''Can you prove that a keyboard or mouse is useful? There is no really good evidence or statistics available. And even if there are statistics available, I can still pretend there are not - because people on this wiki won't look it up, and I can pretend to have won my argument by ShiftingTheBurdenOfProof. Debating such issues as "are keyboards useful?" is futile and pointless. Certain issues are so obvious to wo/man that we do not waste time researching them. One should focus one's efforts on studying issues that are more important. One could spend 48 years trying to cook up evidence that '''keyboards are useful'''. By that time someone would probably have registered him in a mental institution. Focus your efforts on studies that are '''important''', not ones that are '''silly''', such as "Are gotos proven to be worse? Who's got evidence?". Asking such ludicrous questions and dragging old, dead goto debates into the future is a waste of time. And does cheese really come from cows? Where is the actual evidence? The photos from the farm could be faked. Until proven otherwise, I don't fall for cheese! (ShiftingTheBurdenOfProof)''

Huh?

''Well, one could argue that ThereIsNoObjectiveEvidenceThatKeyboardsAreUseful and ThereIsNoObjectiveEvidenceThatCheeseComesFromCows. For example, we only have a few pictures on the internet, but no proof on this wiki, no hard evidence on these topics. We could travel to farms and travel to people's homes and do some statistical analysis and get some empirical evidence - but this exercise would waste time, because we already know that keyboards are useful and that machine language is not as productive as Python. In other words: some people throw the objective "evidence" topics around primarily for the purpose of ShiftingTheBurdenOfProof.''