Scenario 1 - You design ahead. You spend $10 today and make a future change for $5.

Scenario 2 - You design just in time. You spend $5 today and make a future change for $20.

1 is better than 2, right? Not so fast. There are a number of factors that may make scenario 2 better:

* As the probability of the change happening approaches zero, scenario 2 becomes better
* If the change happens far enough in the future, the cost of money makes scenario 2 better
* If the extra design slows development down enough (more tests, more costly tests, bigger teams, more overhead, and all the other compounding effects), then scenario 2 is better
* If the extra design adds enough to the risk, then scenario 2 is better

This doesn't mean you have to be stupid. In fact, you have a fair amount of control over the cost of rework. By carefully ordering the stories to be delivered in a release, you can influence big changes so they happen sooner, while they are still small changes.

When I see a team taking a path that I know will be expensive, my goal is to boil my insight down into a new test case that they can only implement cleanly if they change the design in the direction I imagine it should go. Or not. More often, I have something to learn. -- KentBeck

----

I don't think I'm convinced that "If the change happens far enough in the future, the cost of money makes scenario 2 better." The fact that this ''might'' be the case for $5 vs $20 doesn't persuade me that the numbers are realistic or even typical. Also, "far enough in the future" doesn't make things concrete for me, because you can always project it out 50 years or however many years you want and claim you're right, when in fact the projection is far enough away from reality as to not be pragmatic at all.

IMHO you make the trade-off sound too simple and straightforward. You certainly don't give enough credit to the cases where scenario #1 is better; it looks too one-sided the way you state it above. There are so many other relevant and commonly occurring factors, both good and bad. I'd like to see a more rounded-out discussion that makes it look multi-dimensional rather than 1D or even just 2D. In fact, sometimes the decision of "design ahead" versus "just in time" isn't strictly an either/or. Sometimes you can do just a little bit of both at the same time, and in the right way, so you get the best of both worlds without all the drawbacks. -- BradAppleton

''OK, Brad, let's do some math. What has to be true in terms of the probability of actually needing the thing foreseen, the cost of extra design now vs the cost of restructuring later, overall system modularity, and other dimensions you feel are important, to make extra design now worth more than the cost of a change not designed for, tomorrow? Here's a start: CostOfFutureChange. -- RonJeffries''
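As a concrete starting point for that math, here is a minimal sketch in Python. The numbers are purely hypothetical, and only the first two bullets above (probability and the cost of money) are modelled:

 # Hypothetical numbers only; a sketch of the comparison, not a conclusion.
 def expected_cost(cost_now, cost_later, probability, years, discount_rate):
     """Cost paid today plus the discounted, probability-weighted cost of the future change."""
     present_value_of_change = cost_later / ((1 + discount_rate) ** years)
     return cost_now + probability * present_value_of_change

 # Scenario 1: design ahead.  Scenario 2: design just in time.
 p, years, rate = 0.5, 2, 0.10                    # assumed probability, horizon, cost of money
 ahead = expected_cost(10, 5, p, years, rate)
 just_in_time = expected_cost(5, 20, p, years, rate)
 print(round(ahead, 2), round(just_in_time, 2))   # 12.07 vs 13.26: designing ahead wins at p = 0.5
 # At p = 0.2 the same formula gives 10.83 vs 8.31: just in time wins,
 # illustrating the first bullet above: as the probability falls, scenario 2 improves.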
But that won't address what I said up above. I'm not saying the math is faulty, or that one way is more often correct than the other. I'm saying that, IMHO, the discussion above doesn't give a rich enough "slice" of scenarios, showing not only the two alternatives but also the range of alternatives in between.

What if I simply spend some time to think about what is needed in the design for the change, but don't implement it? What if I implement some fraction of it, but leave the rest for later? What if I simply make sure I leave a "hole" where the feature might conceivably be added, and then make sure I don't pave the hole over? What if I go ahead and implement the change, and I was only partially wrong about its inevitability (a similar change turns out to have been necessary, but it's not quite the same as what I implemented)? What if doing the change now costs more when compared solely against that one feature, but doing it now lets me do it in a way that makes the entire design easier to change and extend in the future than slapping it on as a band-aid later and then refactoring?

So there are lots of other scenarios besides "do it" and "don't do it", each with circumstances that could make either one the "right" or "wrong" choice depending on the particulars. And just because the "future value of money" may make it less expensive to do later rather than earlier, even though it was drastically less effort to do earlier, should that ''always'' imply that I am better off saving $500 by waiting five years and spending 5 weeks of effort rather than doing it now in 5 days? What if the product I am building is not a custom-order application solely for the client but is something that I will retain ownership of and try to sell to numerous other clients?

Are we assuming that whether it is "implemented now" or "implemented later", the resulting code will be the same at time "later"? (Probably not.) Can we even assume that the essence of the design or architecture will be "approximately equivalent"? Can we assume that when we add it later we can make the result just as easy to change/adapt/extend as if we had done it earlier? How different might "the simplest thing that could possibly work" be now versus later, and how will that affect our ability to refactor it, the way we refactor it, and the resilience of the resulting architecture? Should I be comparing only the cost of the change now versus later, or should I try to measure the overall difference in the cost of the entire project?

There are just way too many other considerations that need to be mentioned, as well as implicit assumptions that need to be made explicit. Showing only four possible "factors" in the bulleted list that favor "don't do it" over "do it" simply doesn't do justice to the full richness and complexity of the range of alternative scenarios and outcomes (much less the possible factors). I was a math major as an undergrad, but IMHO doing the math here isn't going to help much, except perhaps as a vehicle for realizing the full scope and magnitude of the problem space and all its factors, assumptions, and constraints. -- BradAppleton

----

''Should I be comparing only the cost of the change now versus later or should I try to measure the overall difference in the cost of the entire project? -- BradAppleton''

The overall cost, of course. (At last, a point of agreement, I hope!) In what ways might the overall cost of the project be different from the integral, over all project time, of the cost of all the changes? Perhaps the meat lies in the answers to that question. -- RonJeffries

Good question! I think another factor is problem+solution domain knowledge. Some shops keep evolving the same product for more and more customers; others keep rebuilding essentially the same kind of product from scratch for each customer (and the customer is given the ownership rights). In these cases, and others, it is not uncommon for the developers to be far more knowledgeable and experienced in the problem domain and its solution than the users are. This can dramatically increase the likelihood that a given developer is correct in thinking a certain change will be needed eventually. If one can actually come up with a percentage of how often this "hunch" turns out to be correct, it could be an important factor in deciding whether to make the change now or wait until later (if you're doing the math, this involves the conditional probabilities of dependent events, which is what Bayes' theorem addresses in probability theory; see the sketch below).

Also, the depth and breadth of the customer base could be a factor. The more different customers you have, the more likely they are to use the software in "mysterious" ways and demand features that weren't given much weight in the beginning. It can also make a difference whether putting the change off means waiting until a handful of customers start screaming for it, or hordes of customers demanding it all at once. This can impact schedule pressures and competitive market-timing. Most would say that "just in time" means having it ready at the point it is needed, ''not'' waiting to start work on it until then. Minimizing the window between when it's needed and when it's ready doesn't appear to be addressed by what's above. But I would bet you could make a convincing argument that when you RefactorMercilessly to code things OnceAndOnlyOnce so that OaooBalancesYagni, not only is the code ContinuouslyReadyForChange, but it actually lends itself to anticipating such changes by leaving things "open" in the right places and InsulatedYetMalleable most everywhere else. -- BradAppleton

: ''[BTW - I don't know if this was intended to be an XP page when it was first started, so I couldn't tell if it was safe to assume the thoughts expressed were in the context of XP. It might be helpful to do something similar to what the folks authoring the ComponentDesignPatterns are doing and put an XP reference or two inside square brackets at the very top of each page that describes things to be considered within XP's context.]''
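Picking up the Bayes' theorem aside above: a small sketch, again with invented numbers, of how a developer's track record of correct hunches could sharpen the probability that feeds into a comparison like the one earlier on this page.

 # Hypothetical track record; the point is the shape of the calculation, not the numbers.
 def updated_probability(base_rate, hit_rate, false_alarm_rate):
     """P(change is eventually needed, given the developer predicts it), by Bayes' theorem."""
     p_predicted = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
     return hit_rate * base_rate / p_predicted

 # A domain expert who flags 80% of the changes that really do arrive,
 # but also flags 30% of the ones that never materialize:
 p = updated_probability(base_rate=0.3, hit_rate=0.8, false_alarm_rate=0.3)
 print(round(p, 2))   # 0.53: the hunch raises the 0.3 base rate to roughly 0.53,
                      # a number that could feed straight into expected_cost above.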
----

In following up on this thread, I reread SoftwareEngineeringEconomics by BarryBoehm. Page 40, Figure 4-2 has the infamous graph, ''increase in CostToFix or change software through the LifeCycle''. The main line shows Requirements through Operation with a difference of around 200 to 1, with error bars out to 1000 to 1, so as consultants we always talk about the 1000 to 1 worst case. But there is a second set of data on the graph. Here is the relevant quote: ''for two smaller, less formal software projects analysed [snip] the 4 to 1 escalation in CostToFix between the requirements phase and the integration and test phase still supports (the) premise SequentialApproachToProcessSubgoals''.

But this data is from way back in 1976. In the intervening 20 years I'm willing to bet that the newer tools allow us to do a lot better than the 4:1 reported by BarryBoehm. We may not be at 1:1, but the experience reports to date from XP suggest that we are getting close. But say we accept 2:1 as representative of best XP practices. Then we need to add a third scenario.

Scenario 3 - You design just in time. You spend $5 today and make a future change for $10.

Surprise surprise, you are now equivalent to Scenario 1, and you got your first increment out faster. -- PeteMcBreen
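Pete's point can be restated as: the whole comparison hinges on that cost-escalation ratio. A rough sketch, reusing the made-up dollar figures from the top of the page (change assumed certain, no discounting):

 # Hypothetical figures: the change costs $5 of effort if designed for up front,
 # and ratio times that if retrofitted later; designing ahead costs an extra $5 now.
 DESIGN_AHEAD_TOTAL = 10 + 5            # $10 now (includes the extra design), change costs $5 later

 def just_in_time_total(ratio):
     return 5 + 5 * ratio               # $5 now, retrofitting the change costs 5 * ratio later

 for ratio in (4, 2, 1):                # Boehm's 4:1, Pete's suggested 2:1, the ideal 1:1
     print(ratio, DESIGN_AHEAD_TOTAL, just_in_time_total(ratio))
 # 4:1 gives 15 vs 25, 2:1 gives 15 vs 15, 1:1 gives 15 vs 10.
 # At 2:1 the totals match, and just in time has spent only $5 before the first increment ships.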
----

I think there's more to the scenarios:

Scenario 1 - You design one feature ahead. You spend $10 today and make a future change for $5. All other features will cost $1 more, regardless of whether you need this one or not.

Scenario 2 - You design one feature just in time. You spend $5 today and make a future change for $20. All other features will cost $1 more only if you implement this feature.

The big drawback of "might come in handy" features is that they make the application unnecessarily complicated and large. This (I think, it's just a feeling) is the major problem with the "Big up-front design": it's Big. That results in more code, and therefore more construction time and more maintenance time. (By the way: all resemblances to real amounts of money are purely coincidental.) -- WillemBogaerts
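A rough sketch of those two scenarios, with the same invented figures and the simplifying assumption that, in scenario 2, the $1 tax applies to all other features once the speculative feature is actually built:

 # Willem's framing, hypothetically: up-front design taxes every other feature by $1
 # regardless, while just in time pays the tax only if the change actually lands.
 def total_ahead(other_features, probability):
     return 10 + probability * 5 + 1 * other_features

 def total_just_in_time(other_features, probability):
     return 5 + probability * (20 + 1 * other_features)

 for n in (5, 20, 50):
     print(n, total_ahead(n, 0.5), total_just_in_time(n, 0.5))
 # 5 features: 17.5 vs 17.5.  20 features: 32.5 vs 25.0.  50 features: 62.5 vs 40.0.
 # The bigger the rest of the system, the more the complexity tax of "might come in handy"
 # code dominates the raw cost of the change itself.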