One of the old chestnuts of mathematical algorithms.

Problem: Find x^n, where x is real and n is a positive integer, using a minimal number of multiplications.

Algorithm in Scheme:

 (define (square x) (* x x))

 ;; Raise x to the n-th power by repeated squaring.
 (define (power x n)
   (cond ((= n 0) 1)
         ((= n 1) x)
         ((even? n) (power (square x) (/ n 2)))          ; x^n = (x^2)^(n/2)
         (else (* x (power (square x) (/ (- n 1) 2)))))) ; x^n = x * (x^2)^((n-1)/2)

For example, x^10 = y^5, where y = x^2; this is y*z^2, where z = y^2; which is y*w, where w = z^2. That requires 4 multiplications: to find y, z, w, and finally y*w. In general, if the binary expansion of n is a_k a_{k-1} ... a_0, this takes k + l - 1 multiplications, where l is the number of one-bits in the expansion. The expansion of 10 is 1010; in that case k = 3, l = 2, and k + l - 1 = 4. -- EricJablow

A few remarks about this algorithm.

'''1''' It doesn't always do the minimum possible number of multiplications. For instance, when n = 15 you could do (x^3)*(x^5) = (x*x^2)*(x*x^2*x^2), requiring 4 multiplies, but the algorithm above will do x*(x^2)^7 = x * x^2 * (x^2)^2 * ((x^2)^2)^2, for 6 multiplies.

'''2''' Something equivalent can be implemented iteratively rather than recursively. This isn't likely to matter unless you have either a bad language implementation or a bias against recursive functions.

'''3''' Although it looks like this offers an ''n''/log(''n'') speedup over the naive one-at-a-time algorithm, life's a bit more complicated in the real world, where multiplying larger numbers takes longer. If you implement multiplication in such a way that multiplying an ''M''-bit number by an ''N''-bit number takes time proportional to ''MN'', then the elegant algorithm above may be no faster than just multiplying by ''x'' ''n'' times; it might even be slower.

'''4''' On the other hand, if you use a more sophisticated multiplication algorithm, or you're doing modular arithmetic, an algorithm like the one above can be a very big win. (You need modular exponentiation for RSA cryptography and primality testing, for instance.)

----

Yes, I could have also added an auxiliary variable. In C, iterative style:

 float power(float x, unsigned int n) {
     float aux = 1.0;
     while (n > 0) {
         if (n & 1) {       /* n odd? fold the current power of x into the result */
             aux *= x;
             if (n == 1)
                 return aux;
         }
         x *= x;            /* x now holds x^2, then x^4, x^8, ... */
         n /= 2;
     }
     return aux;            /* n was 0: x^0 = 1 */
 }

or in Scheme, functional style (TailRecursive):

 (define (multiply-with-power aux x n)   ; computes aux * x^n
   (cond ((= n 0) aux)
         ((= n 1) (* aux x))
         ((even? n) (multiply-with-power aux (square x) (quotient n 2)))
         (else (multiply-with-power (* aux x) (square x) (quotient n 2)))))

 (define (power x n)
   (multiply-with-power 1 x n))

Your x^15 algorithm was wrong; you had (x^3)*(x^5) when you needed (x^3)^5. Computing x^3 from x requires 2 multiplications, and computing y^5 from y (= x^3) needs 3 more, so the total is 5.

You are right, however: the squaring method is not necessarily the best sequence. More striking is x^27, which needs 7 multiplications by squaring (27 is 11011 in binary, so k = 4, l = 4, and k + l - 1 = 7) but only 6 by repeated cubing, as ((x^3)^3)^3.

It's actually an interesting, and as far as I know unsolved, problem: given a positive integer n, let f(n) be the smallest k such that there is a sequence 1 = a_0, a_1, ..., a_k = n in which each element other than a_0 is the sum of two (not necessarily distinct) earlier elements. Such a sequence is called an addition chain for n. Is there a good algorithm for f(n)? What are its arithmetic properties? Neil Sloane's OnLineEncyclopediaOfIntegerSequences, at http://www.research.att.com/~njas/sequences/Seis.html, lists this as entry A003313 and cites TheArtOfComputerProgramming, vol. 2, p. 446.
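To make remark '''4''' concrete, here is a minimal sketch (an addition to this page, not part of the discussion above; the name expt-mod and its argument order are invented here) of the same halving scheme doing modular exponentiation. Reducing mod m after every multiplication keeps the intermediate numbers no wider than m:

 ;; Sketch: x^n mod m by the squaring scheme above.
 (define (expt-mod x n m)
   (define (loop aux x n)   ; invariant: answer = aux * x^n mod m
     (cond ((= n 0) aux)
           ((= n 1) (modulo (* aux x) m))
           ((even? n) (loop aux (modulo (square x) m) (quotient n 2)))
           (else (loop (modulo (* aux x) m)
                       (modulo (square x) m)
                       (quotient n 2)))))
   (loop 1 (modulo x m) n))

For example, (expt-mod 2 10 1000) returns 24, because 2^10 = 1024. This is the shape of computation at the heart of RSA and of probabilistic primality tests, where n and m run to hundreds of digits.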
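And f(n) itself can be computed by brute force for small n. The sketch below (again an addition to the page, and exponential in the worst case, so only suitable for small n) finds f(n) by iterative deepening, using the standard fact that a shortest chain can be assumed strictly increasing:

 ;; Sketch: length of a shortest addition chain for n, trying
 ;; chain lengths k = 0, 1, 2, ... until one reaches n.
 (define (addition-chain-length n)
   (define (reachable? chain steps-left)
     ;; chain holds the elements built so far, largest first
     (let ((top (car chain)))
       (cond ((= top n) #t)
             ((= steps-left 0) #f)
             ;; prune: doubling at every remaining step still falls short of n
             ((< (* top (expt 2 steps-left)) n) #f)
             (else
              (let loop-i ((is chain))          ; pick the first summand...
                (and (pair? is)
                     (or (let loop-j ((js is))  ; ...and the second (may equal the first)
                           (and (pair? js)
                                (let ((s (+ (car is) (car js))))
                                  (or (and (> s top) (<= s n)
                                           (reachable? (cons s chain)
                                                       (- steps-left 1)))
                                      (loop-j (cdr js))))))
                         (loop-i (cdr is)))))))))
   (let deepen ((k 0))
     (if (reachable? '(1) k) k (deepen (+ k 1)))))

It returns 5 for n = 15 and 6 for n = 27, matching the counts above.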
The sequence f(1), f(2), f(3), ... is given as:

  1--10:  0, 1, 2, 2, 3, 3, 4, 3, 4, 4
 11--20:  5, 4, 5, 5, 5, 4, 5, 5, 6, 5
 21--30:  6, 6, 6, 5, 6, 6, 6, 6, 7, 6
 31--40:  7, 5, 6, 6, 7, 6, 7, 7, 7, 6
 41--50:  7, 7, 7, 7, 7, 7, 8, 6, 7, 7
 51--60:  7, 7, 8, 7, 8, 7, 8, 8, 8, 7
 61--70:  8, 8, 8, 6, 7, 7, 8, 7, 8, 8, ...

----

Though the above may make it seem that using such a system improves the performance of raising to powers, quite the contrary is true. I have a theory, and some lemmas, but I don't have a mathematical proof yet; perhaps someone can help me with it. I've given the basis at AdditionSequence. The point is that finding how to multiply is of the order O(sqrt(N)), while using the power system at the beginning of this page is of order O(log(N)). Clearly, for the general case, the system used at the beginning of this page is better. -- ChristophePoucet

----

Yes, multiplication is a curious task in programming. We assume that it's all done in hardware, and we ignore its complexities. You are right: a naive multiplication algorithm that multiplies two N-bit numbers in O(N^2) time would be slow. However, modern algorithms use FFT-like methods; Schoenhage-Strassen multiplication takes O(N log N log log N) time. It's tricky; the constants the O-notation hides are crucial. -- EricJablow

----

See:
 * AlgorithmsRoadMap

----

CategoryAlgorithm