Code that you can ReadLikeMath will be very terse, packing a great deal of logical content into a small space, but requiring intense study. As an anecdote, when I was in mathematics graduate school, one of our first-year textbooks was very clear, and someone gave it the high accolade that it ReadLikeComicBook. Sometimes this kind of code will qualify as BlackMagic. There's nothing wrong with that, but there ought to at least be a citation to the literature, or to an engineering notebook with the relevant derivation or proof.
----
On the other hand, an ignorant programmer may try to be too terse in an attempt to present a one-line algorithm. Consider the following naive code:

 double[] solveQuadratic(double a, double b, double c) throws IllegalArgumentException {
     double discriminant = b * b - 4.0 * a * c;
     if (discriminant < 0.0)
         throw new IllegalArgumentException();
     double rootDis = Math.sqrt(discriminant);
     return new double[] {
         (-b + rootDis) / (2.0 * a),
         (-b - rootDis) / (2.0 * a)
     };
 }

This code is mathematically correct; it's even clear. Unfortunately, it is wrong in practice. Computer arithmetic is not mathematical arithmetic. The floating-point number system is only an approximation of the real number system, and its arithmetic operations are rarely exact. If you use this routine to solve the equation x^2 - 100x + 1 = 0, the second root will be computed as (100.0 - Math.sqrt(9996.0)) / 2.0; the subtraction is catastrophic, and loses about 4 decimal digits of accuracy. If you use this method many times in a calculation, the errors will propagate.

The key to calculating this well in all cases is to use the identities

 (-b + rootDis) / (2.0 * a) = (2.0 * c) / (-b - rootDis)
 (-b - rootDis) / (2.0 * a) = (2.0 * c) / (-b + rootDis)

and to use an if statement to choose, for each root, the form that avoids the catastrophic cancellation. In the case given above, the result is 2.0 / (100.0 + Math.sqrt(9996.0)). Unfortunately, people who learn the quadratic formula in high school and then think no more about computer arithmetic regularly fall into this trap. Losing 4 digits of accuracy may not seem important, but for the equation x^2 - 10000x + 1 = 0, the loss would be 8 digits of accuracy. -- EricJablow
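To make the fix above concrete, here is one way a stable version might look in Java. It is only a sketch of the branch selection described by EricJablow: the name solveQuadraticStable is made up for this page, the returned array and the IllegalArgumentException for complex roots are carried over from the example code rather than from any library, and, like the original, it assumes a != 0.

 // Sketch of a cancellation-free version; assumes a != 0, like the code above.
 double[] solveQuadraticStable(double a, double b, double c) throws IllegalArgumentException {
     double discriminant = b * b - 4.0 * a * c;
     if (discriminant < 0.0)
         throw new IllegalArgumentException("complex roots");
     double rootDis = Math.sqrt(discriminant);
     // Form the numerator that does NOT cancel: -b and the chosen sign of
     // rootDis have the same sign, so the addition cannot lose digits.
     double q = (b >= 0.0) ? (-b - rootDis) : (-b + rootDis);
     if (q == 0.0)    // only when b == 0 and c == 0 (given a != 0): both roots are zero
         return new double[] { 0.0, 0.0 };
     // One root comes from the safe numerator; the other comes from the identity
     // (-b -/+ rootDis) / (2a)  =  (2c) / (-b +/- rootDis).
     return new double[] { q / (2.0 * a), (2.0 * c) / q };
 }

For x^2 - 100x + 1 = 0, this computes the large root as (100.0 + Math.sqrt(9996.0)) / 2.0 and the small root as 2.0 / (100.0 + Math.sqrt(9996.0)), which is exactly the cancellation-free form given above.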
----
Another twist on ReadLikeMath is not so negative. In languages with operator overloading, like C++, it is possible - sometimes even attractive - to spend a bit of time hiding implementation details so that client code will ReadLikeMath. For example, the details of matrix multiplication may be hidden to yield something like:

 Matrix A;
 Vector x, b, C;
 ...
 C = A * x + b;
 ...

If done appropriately, this can greatly aid understanding of a piece of code. It can be especially useful if you want someone (a scientific programmer, say) to implement an algorithm for you that involves mathematics they do not understand deeply. Unfortunately, that is often necessary. The more the code resembles what you write down on paper, the better - assuming that the underlying implementation is solid.
----
APL (AplLanguage) and its later incarnation, the JayLanguage, were famous for this. They led to famous OneLinePrograms that, for example, sorted an array. A common game was to take your APL code to a colleague and challenge them to figure out what it was for. -- DickBotting

''Sorting an array in APL doesn't even take one of those OneLinePrograms - the grade-up operator (a delta overstruck by a vertical bar, which looks like an up arrow) yields a sorted index vector for the array. In J, an array X is sorted by /:~X''

PerlLanguage's culture also held the OneLiner in high regard for a time. -- DavidSaff

''See PerlGolf''

See the ObfuscatedPerlContest at http://www.tpj.com/ for examples. Also, see http://catb.org/~esr/jargon/html/O/one-liner-wars.html for the JargonFile entry. -- EricJablow
----
See also ReadsLikeProse, CriteriaForGoodMathOrCompactNotation