This is a way of building an application that is most suited to programs whose primary goal is to perform a computational task.

''Ummm, don't all programs "perform computational tasks?"''

----

Example: I have a situation where in order to compute A, I have to compute B first. Now I ''could'' write code for A with calls to the (not-yet-existing) code for B. But if I want to be able to test during the whole development, then it is better to write B, complete with UnitTest''''''s, first. Then I can test B until I feel confident that it works. After that, I can code A, complete with ''its'' UnitTest''''''s. If I do it the other way, I would not be able to test A during its development. '''If B doesn't produce the right answer, A won't either (and thus won't pass its UnitTest''''''s), even though A might be correct.''' So in this situation, I write the code bottom-up.

----

'''Counter argument:''' But you could write A first if you wrote a well-behaved stub for B, perhaps using NullObject or some such idiom. This will work if the stub-B produces, without doing any computation, the answers that the B UnitTest''''''s would demand for whatever part of A's functionality you want to build next. (A sketch of such a stub appears below.)

''True: but the UnitTest''''''s test with several inputs, and therefore expect several outputs. Moreover, the correct output is often not exactly ''known'' by the UnitTest; rather the UnitTest validates its correctness. (Example: compute the inverse of a matrix. The UnitTest feeds some matrix M to the code, and then checks that the result N satisfies N*M=I.)''

Err, so, you would write your matrix inversion test cases to generate a random matrix each time? How does this ensure that your inverter works in any reasonable amount of time? Surely you'd have a (in some sense) representative set of matrices. And, more generally, a set of cases that examine the boundaries of the input domains etc. etc. for whatever A does.

''I have a fixed set of test matrices. I don't generate them at random.''

OK, and presumably they are too big to bother working out the inverse to put into the test case results? In this sort of case, where working out what the stub would have to return is very time-consuming, this bottom-up approach starts to have value.

Of course, you are only testing that your ''particular'' matrix inversion method is correct with respect to your ''particular'' matrix multiplication method, as determined by your ''particular'' matrix equality method. It may be (unlikely, though) that they are all wrong, but wrong in such a way that m.times(m.inverse()).equals(I) is still true for all your test matrices. But I'm sure you knew that. --KeithBraithwaite
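To make the validation idea concrete, here is a minimal sketch in Python with NumPy. It assumes a hypothetical inverter called invert; the fixed test matrices are made up for illustration. Note that the test never compares against a precomputed inverse; it only checks the property N*M=I:

 import numpy as np

 def check_inverter(invert, matrices, tol=1e-9):
     # Property test: for each fixed test matrix M, N = invert(M)
     # must satisfy N*M = I (within floating-point tolerance).
     for m in matrices:
         n = invert(m)
         identity = np.eye(m.shape[0])
         assert np.allclose(n @ m, identity, atol=tol), m

 # A fixed set of test matrices -- not generated at random.
 test_matrices = [
     np.array([[2.0, 0.0], [0.0, 4.0]]),
     np.array([[1.0, 2.0], [3.0, 5.0]]),
     np.array([[4.0, 7.0, 2.0], [3.0, 6.0, 1.0], [2.0, 5.0, 3.0]]),
 ]
 check_inverter(np.linalg.inv, test_matrices)  # stand-in for the inverter under test

This sketch has exactly the blind spot described above: it trusts NumPy's own multiplication and comparison, so correlated bugs in multiply, invert, and equality could in principle cancel out.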
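And here is a sketch of the stub-B idea from the counter-argument above. Suppose (hypothetically) that A solves linear systems by calling the inverter B. A NullObject-style stub-B returns canned answers for the few simple inputs that A's first UnitTest''''''s use, without computing anything, and fails loudly for anything else. All names here (stub_invert, solve) are invented for this example:

 import numpy as np

 def stub_invert(m):
     # Stand-in for the not-yet-written inverter B: canned answers only.
     canned = {
         (1.0, 0.0, 0.0, 1.0): np.eye(2),        # inverse of I is I
         (2.0, 0.0, 0.0, 2.0): 0.5 * np.eye(2),  # inverse of 2I is I/2
     }
     key = tuple(float(x) for x in np.asarray(m, dtype=float).ravel())
     if key not in canned:
         raise NotImplementedError("stub-B has no canned answer for this input")
     return canned[key]

 def solve(m, b, invert=stub_invert):
     # A: solve m x = b via inversion; testable before the real B exists.
     return invert(m) @ b

 print(solve(2.0 * np.eye(2), np.array([4.0, 6.0])))  # [2. 3.]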
Anyway: wouldn't you start A's test cases with only the "simplest" cases, for which we could expect simple behaviour from B, thus bootstrapping ourselves before moving on to more complex cases?

''It still could be done, but it is ''often'' (not always) easier to write B first.''

Hmm, I might go as far as "possible, but very unusual". What domain are you working in? Maybe it has some peculiarities.

''Numerical analysis. -- StephanHouben''

----

'''Real example, please, where this is useful or necessary?'''

Well, in the ''other'' window on my desktop, I now have code for solving a boundary value problem. I first needed a function for solving the initial value problem, because my code needs to call it. How can I test my code if it doesn't even compile?
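Here is a generic sketch of that dependency (not the actual code in the other window): a shooting-method boundary value solver A that calls an initial value solver B on every probe. Until B exists, A cannot even run, let alone be tested. The names, step count, and bracketing interval are all illustrative assumptions:

 def solve_ivp(f, t0, y0, t1, steps=400):
     # B: integrate the system y' = f(t, y) from t0 to t1 with classical RK4.
     h = (t1 - t0) / steps
     t, y = t0, list(y0)
     for _ in range(steps):
         k1 = f(t, y)
         k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
         k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
         k4 = f(t + h,   [yi + h * ki for yi, ki in zip(y, k3)])
         y = [yi + h/6 * (a + 2*b + 2*c + d)
              for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
         t += h
     return y

 def solve_bvp(f, t0, a, t1, b, lo=-100.0, hi=100.0):
     # A: find the solution with y(t0) = a and y(t1) = b by "shooting":
     # bisect on the unknown initial slope until y(t1) hits b.
     # A calls B on every probe -- it cannot even run until B exists.
     def miss(s):
         return solve_ivp(f, t0, [a, s], t1)[0] - b
     for _ in range(60):  # assumes miss() changes sign between lo and hi
         mid = (lo + hi) / 2.0
         if miss(lo) * miss(mid) <= 0.0:
             hi = mid
         else:
             lo = mid
     return (lo + hi) / 2.0

 # y'' = -y with y(0) = 0, y(pi/2) = 1 has solution sin(t), so y'(0) = 1.
 import math
 print(solve_bvp(lambda t, y: [y[1], -y[0]], 0.0, 0.0, math.pi / 2, 1.0))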
----

This is useful when there are large gobs of uncertainty hanging around the project.

''Isn't that normal? I thought XP was all about dealing with risk? -- DanBarlow''

Top down is great for getting an overview of what's going on, but in the end all the low-level bits have to be in place. Building upwards from the bottom can be slow, but it's great for generating confidence that it's working. A variation of "if it ain't broke, don't fix it" is "if it's working, and your changes don't break it, you won't have to fix it". ''[Does this refer to top-down or bottom-up?]'' -- RichardCollins

----

What happens when you're writing code to compute some unknown result? You have no way of knowing if either your code or the result is correct. What I try to do in this circumstance is to compute the result again using a different technique or algorithm. If you get the same result in two different ways, you can have more confidence in its correctness.

''"If you are computing an unknown result, any program will do."''

Certainly not. The answer also needs to be correct. On the other hand, computing a ''known'' result is a trivial problem, which can be solved using only printf.

----

''"Everything should be built top-down, except the first time."'' -- AlanPerlis in EpigramsInProgramming

----

BottomUpProgramming is a way to control risk. Sure, you know you have to twiddle some bits on that hardware device, and the data sheet says it's easy. But until you try it, you won't know that the bits don't twiddle exactly as the sheet says -- there's a bug that you have to work around experimentally, causing you to have to code for days instead of hours, or even to change the architecture of the program so you don't have to twiddle them in the way that exposes the bug. That's the type of thing you are trying to get a handle on with bottom-up. An XP SpikeSolution is another way to control that risk: create a throw-away "end to end" slice (from GUI to bit-twiddling, for example) of the application to find out what works, to discover architecture, and to gain experience to base estimates upon.

''After reading the SpikeSolution and SpikeDescribed pages, I don't see that there's a lot of difference between a spike and a dose of bottom-up experimentation. Unless "end-to-end" is necessary for something to qualify as a spike, of course. I'm not sure that it always is.''

----

With Bottom Up it can be hard to be sure your code is necessary or what it should look like. If you haven't implemented A yet, you might think A needs more of B than it actually does. Not a fatal problem, but you need to be vigilant about YouArentGonnaNeedIt. Top Down has the opposite problem: it tends to produce a B which is too exact a fit to A's requirements and so is not reusable elsewhere. However, UnitTest''''''s and refactoring mitigate this. ''Why is being an exact fit a problem?'' Section 1.4.1 in 'The Art of Computer Programming' has an interesting take on this.

Except at the very micro level, I don't really do either. I tend to use "sideways in": in general, I would do a little bit of A, then do the support needed in B, then add to A, then add to B. I would have editor screens open to both files and essentially work them in parallel. I don't really see the usefulness in working either A or B in isolation. ''Sounds like Spike and Iterate.''

----

I would consider this a personal preference. When implementing a single feature, do what works best for you. If multiple people are involved, you will need to compromise and reach agreement within your team on how to proceed. In the end, I don't think it really matters a lot which approach you take, or even if you are consistent.

----

See also: DomainSpecificProgramming, TopDownProgramming