All problems that are solvable with recursion are solvable with a loop, and vice versa.

----

Is this statement right, or proven at all?

''If you maintain your own stack it would be correct. Otherwise recursion can do things loops can't, like walk a tree.''

Loops without recursion can definitely simulate recursion if given a stack, but what about recursion: can it solve whatever problems loops can solve?

''Yes. Recursion to implement loops is a common idiom in a FunctionalProgrammingLanguage.''

Then are pure FunctionalProgrammingLanguage''''''s (languages without assignment) as algorithmically expressive as ImperativeProgrammingLanguage''''''s? (ImperativeProgrammingLanguage''''''s could simulate FunctionalProgrammingLanguage''''''s.)

''Yes. They are both TuringComplete. For example, many interpreters for FunctionalProgrammingLanguage''''''s are written in CeeLanguage.''

''This is closely connected to the ChurchTuringThesis. Church used a "functional" programming model (the LambdaCalculus) and Turing used an imperative model (the TuringMachine). The ChurchTuringThesis says they are of equivalent power.''

----

How about blowing the stack? This is a problem with recursion that does not affect loops. My point is that different solutions are different, and subtle differences can be catastrophic. If you don't know how something is to be used, it is dangerous to imply equivalence between different approaches.

''In languages where recursion is used instead of looping, TailRecursion is commonly optimized out during compilation -- no additional stack frames are used, so it won't blow the stack.''

E.g. the following SchemeLanguage code for, say, an HTTP server will not exhaust the stack. This is guaranteed by the language standard (see ProperTailCall).

  (define (server-loop)
    (let ((request (get-request)))
      (handle-request request))
    (server-loop))

The cases where this is not possible are exactly the cases where the stack is used for real storage -- things like tree walks, or QuickSort. A loop version of quicksort must simulate a stack manually.

''But using an explicit stack or data structure (see ExternalizeTheStack) in general (not necessarily QuickSort) allows interesting inspections and alterations to take place. One has more control over it. For example, a database table can perhaps use disk if it gets too large. Further, it makes it easier to stop the process in the middle and then come back later to finish it. Generally, markers are used to mark paths or nodes that are completed so that one can continue to work on the non-completed records. (Sometimes there are two marks: one to indicate that we processed something and another indicating its links have been logged.) This more closely mimics how most humans would navigate a giant maze: mark where you have already been with ribbons, spray-cans, or bread crumbs. Efficiencies are tacked on by finding various ways to avoid retesting, or to reduce the cost of retesting, finished items. -- top''

You can use continuations to do that. [True, but credit where credit is due, Top is correct that it "allows interesting inspections and alterations to take place".]
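Returning to the QuickSort point above: here is a minimal C# sketch (illustrative only, not from any of the posters) of a quicksort whose recursion is simulated with an explicit stack of index ranges. Because the pending ranges live in an ordinary data structure, they can be inspected, reordered, or even persisted -- the kind of control described above:

  using System.Collections.Generic;

  static void QuickSort(int[] a)
  {
      var ranges = new Stack<(int lo, int hi)>(); // pending subranges replace call frames
      ranges.Push((0, a.Length - 1));
      while (ranges.Count > 0)
      {
          var (lo, hi) = ranges.Pop();
          if (lo >= hi) continue;
          int pivot = a[hi], i = lo;              // Lomuto partition around the last element
          for (int j = lo; j < hi; j++)
              if (a[j] < pivot) { (a[i], a[j]) = (a[j], a[i]); i++; }
          (a[i], a[hi]) = (a[hi], a[i]);
          ranges.Push((lo, i - 1));               // "recurse" by pushing both halves
          ranges.Push((i + 1, hi));
      }
  }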
----

If anyone is up to a challenge, I've spent many fruitless hours noodling for a way to do AckermannFunction without recursion (or an indefinitely expandable stack). I'm now working on a proof that it's impossible, but that seems just as difficult. -- MarcThibault

* Your wish is granted, see IteratedAckermannExample.
** ''I don't think this counts. The AckermannFunction simply isn't PrimitiveRecursive (which amounts to 'recursion or [expandable] stack'), and there are lots of proofs out there (that being 'computability 101'). See the refs on the Ackermann page.''

''Study the SicpIterationExercise and then use the same technique (called memoization, http://en.wikipedia.org/wiki/Memoization).''

[First off, memoization does not eliminate recursion by itself; it just eliminates '''redundant''' recursion.]

* Memoization in its simplest and most elegant form '''is''' a loop:

  data_points_to_be_calculated = { initial_point(s) }
  known_set = { }
  while {there are data points to calculate} do
    if (can calculate more data points) then
      { add the newly calculated point to the known_set }
    else
      { add more missing points to the set to be calculated }
    end if
  done

* Of course one can always transform any loop into a recursion and any recursion into a loop, so the argument is trite.
* [I see what you're driving at, but note that memoization can be used directly to eliminate redundant recursion without changing any calls into loops. Memoization is merely remembering the results of previous computations. It does not directly have anything to do with flow of control; it's independent/orthogonal. In your example above, you basically started with a looping solution and then added memoization to it. One could do the same to an initial recursive solution.]
* [Indeed, one could use memoization simply to speed up straight-line calculation that uses neither recursion nor loops, only conditionals, although I've never seen that in the wild.]
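To make that orthogonality concrete, here is a minimal C# sketch (names illustrative, and ''int'' only survives tiny arguments) of memoization layered onto the still-recursive AckermannFunction; the call structure is untouched, and only repeated subcomputations are skipped:

  using System.Collections.Generic;

  static readonly Dictionary<(int m, int n), int> cache =
      new Dictionary<(int m, int n), int>();

  static int Ack(int m, int n)
  {
      if (cache.TryGetValue((m, n), out int known))
          return known;                      // reuse a previously computed value
      int result = m == 0 ? n + 1
                 : n == 0 ? Ack(m - 1, 1)    // still ordinary recursion throughout
                 : Ack(m - 1, Ack(m, n - 1));
      cache[(m, n)] = result;
      return result;
  }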
[But also, memoization doesn't help in any real sense here; try it. Wherever the fully recursive version "fails" (you get tired of waiting for it), so too will the memoized version.]

[Mathematical induction (by a human) can be used to reduce any single level of Ackermann recursion into a closed form, which vastly speeds up that level; however, level+1 needs a brand new proof and a new closed form. So basically, it's still slow.]

[Just as an exercise, however, Ackermann can be implemented nonrecursively using PowerLoops. -- DougMerritt]

** PowerLoops are defined as "recursively nested ''for'' loops". The author understands that recursion goes beyond recursive call syntax. Memoization still eats up memory at a fast and unpredictable rate. Transforming recursion into a run-time equivalent loop isn't changing the problem, just the syntax. You still run out of memory before you run out of bits to represent the result. I guess I'll have to keep noodling. -- mt
* [True, which is why I said "just as an exercise". Time is more the issue than space, btw, although I don't disagree with what you said. Also, I have every reason to think that the complexity of Ackermann's is essentially irreducible, although I don't recall what is known theoretically. If that's true, then noodling won't help. I '''might''' be wrong, of course; I'm primarily going off of my own noodling with it, years ago.]
* [Refreshing my memory from http://en.wikipedia.org/wiki/Ackermann_function, it seems to me that, since it grows superexponentially in the first parameter, the bulk of its complexity must indeed be irreducible, considering merely the number of digits it must emit on each increment of the first parameter. That page does mention (near the bottom, see "benchmarks") how memoization can help -- it speeds things up on the second parameter, whereas above I was focused on the first parameter, where memoization doesn't help. If you are mostly interested in the second parameter, on the other hand, maybe there's much noodling to be done. :-) ]
** If I understand Wolfram correctly -- that Ackermann is not primitive recursive (it can't be computed with ''for'' loops) -- that pretty much closes the subject, if the proof is sound. -- mt
** It cannot be computed using a composition of fixed-length ''for'' loops, as in PascalLanguage. It can be computed with a ''while'' loop: just follow the pseudo-code above for memoization; it de-recursifies any recursive function.
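As a concrete illustration of the ''while''-loop point (though not an answer to the original challenge, since it still relies on an indefinitely expandable stack), here is a minimal C# sketch that computes Ackermann with a single loop and an explicit stack of pending first arguments -- a different route from the memoization worklist above. ''int'' overflows almost immediately, so it is illustrative only:

  using System.Collections.Generic;

  static int Ackermann(int m, int n)
  {
      var pending = new Stack<int>();   // explicit stack stands in for call frames
      pending.Push(m);
      while (pending.Count > 0)
      {
          m = pending.Pop();
          if (m == 0)
              n = n + 1;                // A(0, n) = n + 1
          else if (n == 0)
          {
              pending.Push(m - 1);      // A(m, 0) = A(m - 1, 1)
              n = 1;
          }
          else
          {
              pending.Push(m - 1);      // A(m, n) = A(m - 1, A(m, n - 1))
              pending.Push(m);
              n = n - 1;
          }
      }
      return n;
  }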
----

On Memoization: I consider it a CodeSmell, since it forces you to carry state between iterations of the loop. In a loop that by nature has to carry state, that's no big deal. But in a loop that otherwise wouldn't carry state, it just seems ugly. -- MichaelSparks

''Any looping construct carries state -- at the very least the value of a counter, or any state referred to in the exit condition. That's the nature of imperative loops as opposed to functional recursion. If we follow this train of thought, we declare imperative programming a CodeSmell as opposed to functional programming. Do I sense a HaskellWeenie, waiting to come out? -- CostinCozianu''

Yes, you sense correctly. I think there are lessons to be learned from all kinds of programming, including functional. You are right that all imperative loops carry state (aside from the infinite ones, naturally), but I think a reasonable distinction can be made between the two cases I wrote about. There are some loops that just have counters so that they can dereference different sets of data on each iteration. I try to think of those as big SIMD statements. The instructions and outside state stay the same for each iteration; only the input changes. Loops in general can take on lots of nasty forms if the programmer wants them to. Just because the nature of imperative loops allows it doesn't mean we should all do it. A well-accepted principle of structured programming is to localize state as much as possible (i.e. no globals). I think that should apply not only across function calls, but across iterations of loops. That seems reasonable enough to me. -- MichaelSparks

''OK, so I wrote the IteratedAckermannExample. As I stated on the page, I'm very willing to listen to criticisms and improvements. So how can it be restructured more elegantly? By the way, I also have a fully tail-recursive Scheme version, but that is not an improvement: only the form looks different; the essence is the same.''

''Also, I'd like you to ponder an idea I found in one of EwDijkstra's writings (I don't have a quick reference that I can give you, so I paraphrase from memory). He says that iteration and transitive closure are more fundamental than recursion, and he goes on to say that in many examples published by recursion advocates, recursion looks like trying to crack a nut with a sledgehammer. -- CostinCozianu''

I'll take a look at IteratedAckermannExample. I doubt I'll have any breakthrough solutions to present. I readily admit that some loops work out better if you carry state across iterations. -- MichaelSparks

''It was a 30 min exercise at the request of somebody who raised this problem. I always used this type of loop for memoization strategies before I even knew it was called memoization. I'm curious to see if there's a "better" way, but I find it curious that you find a CodeSmell in what I found the natural way to solve this type of problem. -- Costin''

I see only one tiny thing in this discussion that even argues vaguely in the direction of CodeSmell: that unnecessary state is bad. But that's vacuous; it either means, as already pointed out, "I'm a HaskellWeenie / SideEffectsFreeWeenie / ReferentialTransparencyWeenie", or it's inherently redundant/circular, since the word "unnecessary" assumes the conclusion. -- DougMerritt

I took a crack at seeing if I could simplify Costin's IteratedAckermannExample, btw, and did not in fact find a simpler approach in the time I allotted to it (somewhat more than 30 minutes). I would find it interesting if it turns out there's reason to think that there is/isn't a smaller/simpler approach, since after all he said it was just a quick "finger exercise". I kept getting distracted by wanting to add HelperFunction''''''s that always turned out to need to be recursive. :-) I've never played with implementing the InverseAckermannFunction; anyone know if it's interesting to fiddle with in a different/same way? -- DougMerritt

----

For a simple example, here is a C# implementation of the DOS 'dir /s' command:

  using System;
  using System.IO;

  // Recursive version: the runtime's call stack tracks the pending directories.
  private void recursiveDir(string currentDir)
  {
      foreach (string subDir in Directory.GetDirectories(currentDir))
          recursiveDir(subDir);
      foreach (string file in Directory.GetFiles(currentDir))
          Console.WriteLine(file);
  }

and for iterative:

  // Iterative version: an explicit Stack (System.Collections) replaces the call stack.
  private void iterativeDir(string startingDir)
  {
      Stack stackFrame = new Stack();
      stackFrame.Push(startingDir);
      while (stackFrame.Count > 0)
      {
          string currentDir = (string) stackFrame.Pop();
          foreach (string subDir in Directory.GetDirectories(currentDir))
              stackFrame.Push(subDir);
          foreach (string file in Directory.GetFiles(currentDir))
              Console.WriteLine(file);
      }
  }

You can clearly see the skeleton of the recursive method inside the iterative method. It simply wraps a stack around it instead of relying on the language to maintain the stack frames. You can also see how much more 'elegant' the recursive version is, and how it requires fewer lines of code.

----

Far from being elegant, where I work we consider recursion to be the 'goto' of the 21st century. The first use of recursion by a developer gets a verbal warning, the second use a written reprimand, and we fire on the third use.

''You should encourage switching to Erlang. ^_^ All I can say is that I'd be using the '''stack''' class quite often in such an environment. That said, we are encouraged to avoid recursion in our current programming environment, where the language-provided stack frames are somewhat limited and recursion can be very deep (C++ has no TailCallOptimization and Win32 threads only get a few megabytes of stack space), and so I do end up using the 'stack' class quite often for performing visitors on our object database.''

[To the "[f]ar from being elegant" poster: I'd be curious to know ''why'' your workplace has banned recursion. I can guess, but that would just be speculation.]

----

Somewhere I placed some pseudocode for table-based traversal as an alternative to recursion, but I cannot re-find it. Basically, you have a table of "nodes" with a "visited" flag. You loop through the table (or select up to N non-visited nodes) until all the visit flags are set via processing the nodes. If you encounter a branch (folder), you read the branch and load the first level of its nodes and/or any branch references into the table. Later loops will take care of the deeper folders on that branch.
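Here is a minimal C# sketch of that table-driven traversal. The row shape and the readOneLevel/process helpers are hypothetical stand-ins; in the database variant the table and the visited flag would live in SQL rather than in memory:

  using System;
  using System.Collections.Generic;
  using System.Linq;

  class NodeRow
  {
      public string Path;     // hypothetical identifier for the node
      public bool IsFolder;   // branch vs. leaf
      public bool Visited;    // the "visited"/"processed" breadcrumb
  }

  static void TableTraverse(List<NodeRow> table,
                            Func<string, List<NodeRow>> readOneLevel, // loads one level of a folder's children
                            Action<NodeRow> process)
  {
      // Keep looping until every row in the table carries the visited mark.
      while (table.Any(r => !r.Visited))
      {
          // Snapshot the unvisited rows; rows added below are handled on a later pass.
          foreach (var row in table.Where(r => !r.Visited).ToList())
          {
              if (row.IsFolder)
                  table.AddRange(readOneLevel(row.Path)); // first level only; deeper levels wait
              process(row);
              row.Visited = true;
          }
      }
  }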
Changing the sorting criteria can affect the nature of the traversal pattern, but for large sets you probably want to consider sticking to the "native" order for efficiency. Don't forget to index the visit-flag column. Arguably it can be called a "processed" flag instead of a visit flag, but that depends on what you are doing. Sometimes processing is done after the traversal (cataloging), and sometimes during. -- top

----

See also LoopingConstructs