'''"OnceAndOnlyOnce (OAOO) balances YouArentGonnaNeedIt (YAGNI)."''' ----- The XP rule, YouArentGonnaNeedIt (YAGNI) means that we always implement only what we need right now, not what we foresee that we will need. This strategy, if it works, eliminates some costs because they are never borne (things we didn't need after all), defers the costs of the things we do need to some future date, brings in the date on which we gain benefit from the code, and reduces the risk that the benefit won't ever happen. A counter argument often offered is that it will somehow be less costly to implement generality now than it will be in the future. Possibly it is an argument from momentum. ''(Polya, I think, argues that it is sometimes easier to solve a more general problem than it is to solve a specific one -- BT)'' SpecificAndGeneralProblemSolving. Since we implement everything incrementally by invocation of YAGNI, the system might become increasingly difficult to evolve. This would be especially true if every time one implemented a feature, one just slammed it in the simplest possible way. We DoTheSimplestThingThatCouldPossiblyWork, that is, we implement each new feature as simply as we can. However, we then reapply the Simplest rule to require that the system be left in as simple a final state as possible. This means that we refactor the code until everything is done OnceAndOnlyOnce (OAOO). This refactoring, by centralizing any given function to a single class or method, leaves the system well positioned for the next change. Whatever you need to do next, or six months from now, you are almost certain to be able to do it by enhancing or replacing a single isolated object. OnceAndOnlyOnce balances YouArentGonnaNeedIt. Without OAOO, YAGNI would slowly strangle you. With OAOO, you always have a system you can enhance, endlessly. --RonJeffries ----- Very nice and tidy description. Reading the title, I somehow thought of Mowgli meets Old MacDonald - Aoooo! balances EIEIO! (I am told James Martin really did invent an OO extension of Information Engineering, which he called Extended Information Engineering Including Objects: EIEIO) -- AlistairCockburn ---- I saw a comment here recently that we want systems that are well positioned for future change. XP wants systems that are well positioned for current, actual change. --KentBeck Thanks, tried to improve it, feel free to do so as well. --RonJeffries ---- I think there is something missing from the above description that I suspect XP is in fact doing, but isn't indicated above. While on the one hand, doing the simplest thing that could possibly work is good (and not just in a minimalist way), there is of course valid motivation behind making things general and flexible. It's easy to overdo however, or to do it in the wrong places (incorrectly guessing where the soft-spots or "hot-spots" will be). I sometimes refer to the overflexibility approach as "Too many toppings on my ice cream syndrome": Ice cream is yummy. Ice cream with some hot fudge is yummier. Ice cream with hot fudge and some nuts is even better. But ice cream with hot fudge, nuts, caramel, pineapple, marshmallow, strawberries, and coconut goes too far. Even if your palate can handle the clash of some of those conflicting flavors, the fact is that you really cant taste the ice cream anymore (which was kind of the whole point in the first place). When I was a newbie programmer, I was one of the worst offenders when it came to overflexibility. 
-----

Very nice and tidy description. Reading the title, I somehow thought of Mowgli meets Old MacDonald - Aoooo! balances EIEIO! (I am told James Martin really did invent an OO extension of Information Engineering, which he called Extended Information Engineering Including Objects: EIEIO) -- AlistairCockburn

----

I saw a comment here recently that we want systems that are well positioned for future change. XP wants systems that are well positioned for current, actual change. --KentBeck

Thanks, tried to improve it, feel free to do so as well. --RonJeffries

----

I think there is something missing from the above description that I suspect XP is in fact doing, but isn't indicated above. While on the one hand, doing the simplest thing that could possibly work is good (and not just in a minimalist way), there is of course valid motivation behind making things general and flexible. It's easy to overdo, however, or to do it in the wrong places (incorrectly guessing where the soft-spots or "hot-spots" will be). I sometimes refer to the overflexibility approach as "too many toppings on my ice cream" syndrome: Ice cream is yummy. Ice cream with some hot fudge is yummier. Ice cream with hot fudge and some nuts is even better. But ice cream with hot fudge, nuts, caramel, pineapple, marshmallow, strawberries, and coconut goes too far. Even if your palate can handle the clash of some of those conflicting flavors, the fact is that you really can't taste the ice cream anymore (which was kind of the whole point in the first place). When I was a newbie programmer, I was one of the worst offenders when it came to overflexibility.

So I learned to take a dose of "do the simplest thing that could possibly work." It's okay to anticipate where flexibility might be needed. It doesn't mean you have to put it in yet. Do something simple instead; just make sure that you aren't doing it in a way that will unreasonably constrain you if you do turn out to add the flexibility later on. There is a difference between leaving your options open and exercising them all at once. To some, the "flexercising" may seem more general, but it actually isn't, because it leaves fewer options open than a more minimal approach that still leaves ample room for the anticipated changes (what could be more flexible than something that is still possible but not yet confined by an implementation?).

So I think there is more going on than just doing what is simplest; I think there is something to doing it in a way that doesn't preclude future change and adaptation (typically hidden or encapsulated behind some kind of DesignShield). In some sense, it's a form of advance laziness (see LazinessImpatienceHubris - I thought they had their own page somewhere but they don't, though they do appear in WizardsAreLazy and on the QuotePage).

Am I wrong in thinking that you XP guys are doing this, even when making things simple? Does "simple" include keeping adaptability simple by not precluding such possibilities? Or does it truly mean code for the moment (carpe diem ;-) with no regard for what might or might not change next month (at least not until next month rolls around ;-)?

I've read the other XP pages, and I guess the notion of "simple" still seems fairly vague to me. Does it mean the most mindless thing that could possibly work? Does it mean the simplest in terms of minimal dependencies or impact? Does it mean the simplest functionality? Does it mean least amount of effort? Least amount of effort *right* *now*? Least amount of effort now *and* least amount of maintenance effort later? --BradAppleton

I think I can finally answer this one. The simplest design must (in priority order):

	1. Run the current test cases
	2. Say everything OnceAndOnlyOnce
	3. Contain no classes or methods that can be removed or inlined (methods whose purpose is to explain should remain in the system)

Does this help? --KentBeck

Kent, yes it does help me better understand what "simple" means in the context of XP. But it still doesn't give me any indication as to whether or not it encompasses a deliberate effort to ''not'' preclude future possibilities (or at least minimize it). I like the way Kiel described it as ''not building in limitations.'' I think this might be the difference between not thinking about the future, versus thinking about it, but with a lazy/minimalist approach. In other words, it tries not to burn bridges by not building them in the first place, but leaves open the possibility of adding them later should the need arise ;-) --BradAppleton

----

I used to think that systems could be made more flexible by adding things. Now I realize systems become more flexible when you take things away. The area around the code has much more potential than the code will ever have. --MichaelFeathers

I like that! The single biggest problem with most "reuse programs" is that they are oriented towards ''building in'' all that flexibility that will be needed. I've always felt that step one was ''not building in limitations''. As for step two, exposure to XP has eliminated any desire in me to go that far!

Kiel's First Law of Programming: Any program can be improved by ''removing'' code. (I suspect that there are some here who could provide counter-examples, but it applies in most parts of the world. Besides, stated as is, it jars people enough to occasionally lead to real insight.) --KielHodges

Antoine de Saint-Exupery wrote: "In anything at all, perfection is finally attained, not when there is no longer anything to add, but when there is no longer anything to take away." -- Gery Derbier

Sometimes I throw in a quick stub that prints a message (or something) if what I've done encroaches upon the "hole" I'm trying to leave in place. In some ways it's sort of reminiscent of working with positive & negative space: I'm not adding any features or functionality I don't need; I'm putting in an assertion which makes sure that not only don't I add it, but that I don't add anything which might preclude that possibility in the future (if it seems a likely addition based on my experience). It's like a test case which ensures that certain behavior is ''absent'' rather than present. --BradAppleton
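''(A sketch of one way to code the "negative space" test just described - the class, the guarded capability, and the JUnit-style test are all invented for illustration.)''

 import junit.framework.TestCase;

 // Hypothetical class that deliberately supports only plain text for now,
 // leaving the "hole" for other formats open but unfilled.
 class ReportWriter {
     public void write(String report, String format) {
         if (!format.equals("text"))
             throw new UnsupportedOperationException("only text so far");
         // ... simplest thing: emit plain text ...
     }
 }

 // The guard: this test documents that the behavior is absent on purpose,
 // and fails loudly if someone quietly fills the hole.
 public class ReportWriterTest extends TestCase {
     public void testPdfOutputIsStillDeliberatelyAbsent() {
         try {
             new ReportWriter().write("report", "pdf");
             fail("pdf output appeared; revisit this guard");
         } catch (UnsupportedOperationException expected) {
             // still absent, as intended
         }
     }
 }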
----

But seriously, folks. YAGNI and OAOO do not work against future flexibility. Nor do we do any investment to provide for future flexibility. Here's what I think is happening: code which is properly factored (i.e. OAOO) is inherently "ready for anything". That is, my theory is that properly factored code is inherently capable of being pushed in any direction that may come up. I could be wrong. --RonJeffries

Is it the case that the frequent refactoring actually causes the code to "converge" onto a proper modeling of the problem domain? It is my experience that a good domain model leads to extreme flexibility to deal with new cases, provided those cases largely overlap with the subsection of the domain you have already modeled - which should often be the case. --PieterNagel

Just because code is properly factored doesn't mean it can't necessitate significant changes if a coded interface or behavior or usage metaphor made a limiting assumption that turned out to be unnecessary. It will still be easier to change than badly factored code, but will be substantially more work to unconfine than code that didn't make the assumption to begin with. And it can be the case that some deliberation about not confining future options could have avoided this effort (meaning you would only have had to enhance rather than replace the well factored offender - and maybe clients of it, if the interface was impacted). For someone reading about YAGNI and OAOO and perhaps trying to apply them, there is no indication that this is going on, or that it involves deliberate forethought to ensure that it does. It would be easy for someone to think they are doing the "simplest" thing when they are in fact precluding lots of future possibilities. I think that somewhere in their descriptions, the notion of "simple" conveyed by DTSTTCPW and OAOO and YAGNI needs to expressly communicate the difference between "stupid simple" (being "lazy" only for the moment) and "advance/elegant simple" (being "lazy" in a way that might involve more thought, but perhaps less work, especially down the road), and thinking about how I can keep tomorrow unconstrained while I attempt to DoTheSimplestThingThatCouldPossiblyWork today. --BradAppleton

''To recapitulate: first, we really truly do the simplest implementation that we can think of that can do what our present requirement says. Then, second, we refactor the system so that the resulting code is the simplest code, in the sense of OnceAndOnlyOnce, supporting what we have implemented. We emphatically do not build in hooks or any other form of generality. --RonJeffries''

I didn't mean to suggest you "build in hooks or any other form of generality" (I don't believe that anything in LazinessImpatienceHubris implies it either, though I can see how one might mistakenly infer it). What I've been suggesting (and what some others have echoed) is that when adding required functionality, one deliberately considers doing so in a way that won't pigeonhole you into subsequently needing MajorSurgeryForFutureChanges whose nature could have been anticipated. It's not about building anything "in"; it's about making sure you ''don't'' build in limitations that could easily come back to haunt you. It's not building generality in, it's making sure you don't build it "out". It's trying to increase the potential, not by adding, but by subtracting a decision you were about to add. (Does that make sense?) Sometimes the "simplest implementation" builds in such limitations. Sometimes making fewer limiting assumptions requires a bit more code. What do you do when the "simplest design" is at odds with the "simplest implementation"? Is that simply not possible given the way you choose to define "simplest"? If so, why? --BradAppleton

''Can you give an example where simplest implementation and simplest design are in conflict?''

It's difficult without getting a more precise notion of what ''you'' mean by "simple" (which is why I've been trying to elicit more information on that point). If I end up putting together a well factored set of objects/modules that assume, say, a "push" model of use, and the interface does the same, it may make things easier to implement. If that "push" assumption was wrong, the interface and its underlying metaphor may have to change, which affects all the clients. If those clients had to pass parameters that they got from their clients, then if new/different information now needs to be supplied, the clients' clients may be affected as well. A "simpler" design might be considered one that makes fewer assumptions. If that is the case, then a design which considered how it might be either (a) independent of "push" vs "pull" and/or (b) able to accommodate both while doing as much OAOO as possible, might be the simpler design. However, it may take more time/effort, and implementing the result might turn out to be more code (or it might not -- cf. Abe Lincoln apologizing for the "long speech" because he "didn't have time to make it a short one" ;-).

If something requires less code, but more thought/effort, do you consider it to be simpler? What if it requires more thought and more code but the design makes fewer assumptions/decisions or contains fewer relationships between design elements? After you've passed all the test-cases, achieved OAOO and a good "factoring", are there any other criteria for helping decide which methods or objects could be removed? Do you look for things that you would be better off saying ''less than once''? (How?) Are you making things "more simple" by spending that extra effort? (In what way?)

The way I read YAGNI, it talks about declining to add some extra feature that was thought of (because YouArentGonnaNeedIt). The stuff above is talking about something similar, but from a different angle. Or maybe it's just applying YAGNI to something different. Instead of saying don't add this feature, perhaps it's saying don't code in this assumption. The assumption might be a simplifying one that makes the implementation more straightforward, but also more limiting and brittle. So we might consider leaving out the assumption/constraint because we aren't gonna need/want it. (That would make it YAYAGNI - Yet Another YAGNI ;-) --BradAppleton
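''(A sketch of the "push" assumption just described, with invented interfaces: in the first version, every client must gather and forward the pushed data, so changing what is supplied ripples out through the clients' clients; in the second, clients depend only on a small query interface, so the push-vs-pull decision is confined to one place.)''

 // Push model baked into the interface: every notification carries the
 // data, and every client signature repeats it. Supply an extra piece of
 // information later and all clients (and their clients) change.
 interface StockDisplay {
     void priceChanged(String symbol, double price, long volume);
 }

 // Less committed version: notification and data access are separated.
 // Whether the display pulls eagerly or lazily is now a private decision,
 // and new information means growing StockQuotes, not every signature.
 interface StockQuotes {
     double priceOf(String symbol);
 }
 interface StockDisplayListener {
     void quotesChanged(StockQuotes quotes);
 }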
----

I'd appreciate it if an XP guru could give an interpretation of the following scenario: A client asks for a system to hold ten objects. Ok, so I go away and make code featuring an array of size ten. A week later the client says that they may want to add more objects on the fly. So I rebuild using a Vector (sorry about the Java - I don't know how it works in SmallTalk). I could have used a Vector first time round, but didn't because of yagni, and an array is by most definitions simpler - instead I've got to start from scratch. Only a very crude scenario, but surely one that could scale up? -- DannyAyers

Not a guru, but I'll give it a shot. Why is changing from an array to a Vector forcing you to 'start from scratch'? What is it about the code that will cause this change to be a pain in the butt? Would wrapping the array/Vector in accessor functions help to make such a change transparent? You ''are'' going to make the change (since the client asked for it), so by definition 'you ''are'' gonna need it'. ReFactor the code so that making the change is easy. Do it so that if you had to change it back to an array it would be effortless. In other words, make the (difficult) change OnceAndOnlyOnce. -- RobHarwood

Wouldn't it be nice if Java allowed you to call iterator() on an array of Objects, returning a (proposed new java.lang.ObjectArrayIterator)? --JamesDennett

It sort of does: Arrays.asList( myArray ).iterator() -- BrianDuff
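''(One way to take RobHarwood's suggestion, sketched with an invented class name: the storage decision is stated OnceAndOnlyOnce behind a tiny wrapper, so swapping the Vector back to an array would touch only this class, never its clients.)''

 import java.util.Iterator;
 import java.util.Vector;

 public class ObjectHolder {
     // Was: private Object[] objects = new Object[10];
     private Vector objects = new Vector();

     public void add(Object o)  { objects.add(o); }
     public Object get(int i)   { return objects.get(i); }
     public int size()          { return objects.size(); }
     public Iterator iterator() { return objects.iterator(); }
 }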
----

It seems to me that all of the above ignores the ability - derived from experience (and intelligence) - to balance the current cost of building unrequested functionality against the expected value of future refactoring costs. If there is probability '''p''' that a customer will expand his requirement at a refactoring cost of '''C''', then as long as the current cost '''c < p * C''', the time for building the extra functionality is now. (With illustrative numbers: if p = 0.25 and C = 40 hours, then p * C = 10 hours, so by this rule any extra functionality costing less than 10 hours to build now would pay off in expectation.)

It ignores it by virtue of the idea that RidiculousSimplicityGivesRidiculousResources. In other words, it's the experience of the XP folk that '''p''' is lower than we'd like to think, '''C''' is very low (with good code), and '''c''' is higher than we'd like to think. This idea is very well laid out on the "YouArentGonnaNeedIt" page.

----

''Moved from YouArentGonnaNeedIt''

Now to some specific questions:

* This is clearly (to me) a gray area. How do I deal with the difference between needing something and thinking I need something? To talk of guesswork and "one day" is to over-state the opposing view. -- DaveHarris

''If the functionality you are supposed to be implementing (your EngineeringTask supporting your UserStory) will run without it, you just think you need it.''

For me, designing a class that ''can be'' extended and modified facilitates reuse. Some call it PiecemealGrowth (see PatternsOfSoftware). If you are adding methods and features because you think it will ''one day'' be useful, it is at best a good guess. But, you are still using guesswork in your design process. And, just about anything you add to a class that isn't going to be used (right now) contributes to ObjectBloat. Keep it lean, simple and ''growable''. You are gonna need to evolve it and you never get it right the first time. -- ToddCoram
-----

''Moved from YouArentGonnaNeedIt''

On OaooBalancesYagni, RonJeffries wrote: ''"Possibly it is an argument from momentum."'' It is more an argument from dependencies. When you put in a new module, few other modules depend on it, so changing it is easy. If you wait 6 months and then change it, by then many other modules will have been built on top of it and your changes will upset them. Your changes won't be confined to one module any more; they'll ripple through the system. You will actually have more work than if you'd got it right in the first place.

* ''Don't think this can happen. Whatever the "module" is (I presume it is an object), its users are sending it messages that express clearly what they think it does, i.e. its interface. Any implementation that supports the interface will do just fine. The number of users an object has doesn't affect the ease or difficulty of replacing it as far as I can see. Elucidate? -- RonJeffries''

* Yes; the problem is when the interface changes. You have to work harder to preserve the old interface, and sometimes that's not feasible and you have to change clients or subclasses as well. -- DaveHarris

* ''Sorry, I still don't get it. We change interfaces all the time, whenever they are unclear or insufficiently powerful. And if the module was what needed replacing, why would the interface change at all? Keep trying, I'll get it ...''

* For example, I have a system where users can do stuff. Later, software agents will be able to do the same stuff. Lots of routines which currently take an argument of type User will have to change to take type Agent. If I'd started using Agent from the beginning, before those routines were written, I'd have less refactoring to do. (But see comments below about how the cost of refactoring is low. It's low, but it's there.) -- DaveHarris

* I think what RonJ is saying is that a well designed interface should never have to be modified. What constitutes a well designed interface? Any interface that does what it is supposed to do (works as advertised by its public methods) qualifies completely as a well designed interface. The implementation of that interface can certainly be modified in whole or in part at any time without affecting any other objects. If you are modifying the interface methods themselves, and the interface has been in 'production' level code, what you are actually doing is removing the original interface and its entire solution domain and replacing it with an entirely new set of functionality (that just happens to resemble the old, now non-existent interface). Specifically, the original interface no longer exists, and it and the new interface do not share any commonalities. You might be better served leaving the original interface intact, and adding a new interface for new features. For the example given, though, no modifications should be necessary, because if, as you say, Agents can do what Users can do, then Agents are merely another flavor of User, and thus would be based on the User object (see the sketch below). -- RodneyBarbati
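''(A sketch of RodneyBarbati's suggestion, using the types from DaveHarris's example and an invented doStuff method: if Agent is just another flavor of User, routines written against User accept Agents without changing their signatures.)''

 class User {
     public void doStuff(String what) { /* interactive path */ }
 }

 class Agent extends User {
     public void doStuff(String what) { /* automated path */ }
 }

 class Routines {
     // Written before Agent existed; works unchanged for both.
     public static void runReport(User who) { who.doStuff("report"); }
 }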
I'm making this point because I don't think it's adequately presented elsewhere on the page. Without it, XP comes across as a no-lose proposition. It's not. You have to balance the increased cost due to later changes against the difficulty of predicting the future.

* We find that it is a ''winning'' situation. We find that the reduced cost of doing less building for the future more than makes up for any increased difficulty of making changes later. Our finding is that there is no increase in cost from waiting, while of course there is cost from doing what you don't need. In 2 1/2 years of C3 we haven't ever once wished we had built more for the future - and we have many times wished we had done less. -- RonJeffries

* I agree that it is ''winning'' in the XP context. That's what I meant by my last paragraph (below). It wins because you have a good environment for refactoring, which includes no code ownership, UnitTest''''''s and the rest. -- DaveHarris

I'm not trying to revive the debate. I already know the counter-arguments - roughly, that on the one side we have good techniques for minimizing the ripple effect (i.e. OO and XP), and on the other we have no good techniques for predicting the future. -- DaveHarris

-----

If the class structure has a huge ripple effect, maybe the code is trying to say something. The CycleAbstractionPattern may be relevant. -- CayteLindner

----

See HillClimbingDesign