ExtremeProgramming includes as much or more design as other processes. It differs in when it does design, and in what artifacts are produced.

Waterfall processes use BigDesignUpFront, and look kind of like this:
 * design design design code code code test test test

Incremental and spiral processes look more like this:
 * design code test design code test design code test

ExtremeProgramming looks like this:
 * dtc dtc dtc dtc dtc dtc dtc dtc dtc dtc dtc dtc

''[Minor edit: I changed dct to dtc in the previous line. My apologies if I am wrong. -- interlock.arinc.com]''

''[Or, if I understand RefactorMercilessly, it should be "tcd"??? -- JeffGrigg]''

''dtc is best: a little carding, or a moment of thought, then write the test, then the code. In refactoring, there is generally no new test-writing, though of course one runs the tests incessantly.''

Perhaps the order shouldn't be fixed? Perhaps you design when you need to design, and not when dictated by a heavy methodology.

----

In ExtremeProgramming, we admit that we aren't smart enough to invent a full-scale design that will actually be optimal. This means that doing a BigDesignUpFront will be wasteful. We also admit that we aren't smart enough to do a lot of coding and have it all work. This means that doing a lot of coding and only then testing will also be wasteful. Instead, we do a little bit of design, we code it up, we test it, again and again, many times a day.

In the middle of the project, we're faced with a new UserStory. We need to define the EngineeringTask''''''s that will implement the story. If it's not obvious (and often it is), we will do a quick CRC session to determine what needs to be done. A handful of developers will sit down, usually with a customer, and put down cards for the objects relating to the new story. They'll add cards representing the new inputs and outputs, and cards representing the objects that transform the inputs to the outputs. In a few minutes, the developers will know what is to be done.

The next step is to throw away the CRC cards and implement the tests and the code. Developers step to the machine, write a UnitTest for the method or object they're about to create, run the test (oops, it doesn't work), implement the method or object, run the test, fix the method, add a method, run the test, until done. This should take a few minutes. Then the cycle repeats.

Where's the design? Part of it is in the CRC session, if one was needed. Most of it comes during a "dct" cycle. ''[Write the test then...]'' Go to the object that needs a new method. Write the method, giving it a name that expresses your intention. Implement the method by refining it, also by intention:

''Add the test first, right? I think it would look something like this... -- TimMackinnon''

 testGetTaxDeductions
    aTaxObject setTaxDeductions: self taxDeductionsTestData.
    aTaxObject getTaxDeductions.
    self assert: aTaxObject getDeductionTotal = self taxDeductionsKnownTotal.
    "...other assertions..."

Then write the missing method...

 getTaxDeductions
    | deduction |
    [taxDeductions atEnd] whileFalse:
       [deduction := taxDeductions next.
       self addDeduction: deduction]

Now either you're done, because you used only methods that already exist, or you need to write another method:

 addDeduction: aDeduction
    taxDeductions add: aDeduction.
    taxDeductionTotal := taxDeductionTotal + aDeduction amount.

Pretty soon you've implemented the feature. Run your tests and make sure it works.

''Question: When did a test get written for this second method #addDeduction:? It may be that the initial test correctly covers its scope, but usually more testing is required for it. I've heard Kent say SitOnTheOtherCards; I take this to mean that I should also queue up another card to remind myself to write a test for #addDeduction:. I wonder if SitOnOneCardDiscussion covers this? -- TimMackinnon''
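''One way such a test might look, following the style of the test above. This is only a sketch: the #singleDeductionTestData helper is invented here, and it assumes aTaxObject is set up by the test case and answers a number for #getDeductionTotal even before the new deduction is added:''

 testAddDeduction
    | before deduction |
    before := aTaxObject getDeductionTotal.
    deduction := self singleDeductionTestData.
    aTaxObject addDeduction: deduction.
    self assert: aTaxObject getDeductionTotal = (before + deduction amount)

''Writing it costs a minute or two, and it pins down #addDeduction: independently of #getTaxDeductions, so a later change to either method can be checked on its own.''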
Now examine the code you have created. Look for violations of OnceAndOnlyOnce, names that don't communicate, methods that seem to be on the wrong object, and other infelicities. RefactorMercilessly, until the code is as neat as a pin. Run your tests often during this process and, of course, when you're done.
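''For instance, one smell in the code above is that the instance variable taxDeductionTotal caches a number that can always be derived from the taxDeductions collection, so the same fact lives in two places. A possible refactoring (a sketch only; it assumes #getDeductionTotal currently just answers the cached variable, and that the collection is small enough to sum on demand) is to drop the cache and compute the total when asked:''

 addDeduction: aDeduction
    taxDeductions add: aDeduction

 getDeductionTotal
    ^taxDeductions inject: 0 into: [:sum :each | sum + each amount]

''Run the tests after the change; if they still pass, the duplication is gone and #addDeduction: has one job instead of two.''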
* How often do you integrate your changes into the source base? In my ExtremeProgrammingForOne (using Java and RCS), I will commit my changes every time the unit tests come out OK and the functional tests show I haven't lost ground, so what I get is ''code commit refactor commit refactor commit code commit'', with test runs before every commit and often during the coding.

* Recall that all the UnitTest''''''s must run in order for us to release code. This takes about ten minutes for all 3,000 of them. And we release code from a specific machine, to ensure that we serialize releases, which prevents folks from accidentally stepping on each other's edits. So we release at the end of a chunk of functionality - typically once a day per developer.

There's a lot of design in this process. When you express your intention, you're designing how the thing should be done: the meaning of the objects and methods involved. Then when you refactor, you are ensuring that the overall system remains cohesive and that the objects relate to each other well. These are the main things you want to accomplish with design.

If design is done more up front, the discoveries made during coding either have to be fed way back into the design, or the design will be allowed to drift from reality, or (god save us) the design will be adhered to even though the code tells us the design is wrong. And the further up front design is pushed, the harder it becomes to do, so less and less of it gets done. But since "design and coding aren't the same thing", the slack will not be picked up during the code/test cycle, so the system diverges from good design.

XP does more design than most processes; it does it longer; and it does it closer to the point where it matters: closer to what goes into the code.

----

Here is the deal. You get to make the code as good as you can. You will always be able to make the code be as good as it can be. You can take as long as you need to do this. It is built into the schedule. As you learn more, you are allowed, even expected, to add that to the code. But there is a catch. You aren't allowed to write code to solve problems you don't yet have. If you start solving problems that you might have, then you lose the ability to estimate, because it becomes a contest to see who can think up the most hypothetical problems. This do you/don't you design business is a bunch of bunk. If we get caught up in semantic arguments about what is/isn't design, then we will quickly forget what we do know: it takes effort, but not unbounded effort, to make a clear program. -- WardCunningham

----

This might be a bit off topic... refactor if necessary, and thanks in advance. (Oh, and please correct me if I'm out to lunch; I'm obviously not an engineer. :)

What's wrong with the idea that the blueprints for a bridge match with TheAlmightyThud of paper, and the grunts with the shovels match with the coders with their IDEs? Where does the compiler/interpreter fit into this? Is a compiler not the grunt with the shovel that does the laborious work of turning a fully specced design into a finished product? Did this not have to be done by hand, years ago? Grunt work is so called because it doesn't require creativity, just a doggedness to finish the job, and the willingness to ask a question if unsure (think compiler errors).

The 'up-front design' therefore seems more like an engineer's notebook, where he scratches out thought experiments, works out the math behind a design, and records why a decision was made. Note that the blueprint he creates can still be understood by both a grunt and another engineer without the aid of that notebook; the notebook is a transient item. The math behind the design, although also stored in that notebook, is akin to the unit tests in XP. If the design is changed so that it no longer fits its constraints, the unit test fails. If the grunts build the bridge without due regard to the plans, the unit tests will again fail. -- WilliamUnderwood

''It seems to me that there are a few basic differences between building a bridge and writing a program. They can be pretty easily summed up:

* The life cycle of a bridge is a very long time. The Brooklyn Bridge has been standing for well over 100 years now, and has not gotten any truly new requirements since its inception. On the other hand, the life cycle of a program is a very short time. Programs get used for a while, then somebody realizes that the program isn't good enough because it doesn't do some particular thing (I remember using Visio (pre-MicroSoft) after using an old copy of MacDraw for 6 years. The number of improvements was incredible: constructive plane geometry, more advanced grouping stuff... and I still wanted more), and discard it for something better. The only way to keep ahead of the life cycle is to continuously design in features that the users demand. How often do you see construction on the Brooklyn Bridge other than maintenance?

* A bridge serves one basic purpose: allow ground transport to cross a given body of water. It doesn't actually change anything about the things moving across it; it just allows them to change themselves. It does things without changing, with essentially one mode. Anything more complicated than that and it's not just a bridge anymore: it's a bridge with a traffic light, or a bridge with a weigh station, or something like that. An application, like, oh, Emacs, to use an extreme example, has to edit text, provide syntax coloring, count brackets, indent intelligently, check spelling, maintain a clipboard, allow multiple views of multiple files, provide access to the filesystem, resize itself to the window it's in, pipe its buffer through various utilities, connect to sockets, etc. In a certain sense, a bridge is like a data structure, not like an application. A stack or a priority queue only has to do one thing, and does not manipulate data so much as allow other things to manipulate data using it.

* A bridge is incredibly expensive to design and build, and you can't reimplement the supports while you're building the rest of the bridge, or even when the bridge is complete. Not easily, anyway. If I'm writing an application, I can easily swap out one implementation of a stack or queue for another (see the sketch after this list). And I don't have to take down the stack that's already there until the one I am writing is ready. And, further, if my new stack implementation doesn't work the way I planned, I don't have to call in another wrecking ball... I can simply undo my change.

* I don't know all the requirements for the application I'm writing right now. It would be absurd to wait for all of them, too, because people don't know what they want until they have something close to it that doesn't cut the mustard. The moment I do something visible, people will say "Oh, why doesn't it do this?" or "Oh, I don't like that. Can it do this instead?" And then I have to go back to the drawing board, and see what I can do to make it do that, do it, listen to the next thing they want, and so forth. This applies to other users of my stack or queue, too. Using XP, I am always at the drawing board, and can put the new features in without going through a large-scale development cycle the way that BigDesignUpFront or incremental designs force me to. -- DanUznanski (Woohoo! My first edit here! :D)''
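''A small sketch of that point about swapping one stack implementation for another, in Smalltalk since that is what the examples above use. Every name here is invented for illustration; the point is only that as long as both classes answer the same messages, the rest of the application never notices which one it is talking to, and the old one stays in service until the new one is ready:''

 "In class CollectionStack (elements is an OrderedCollection):"
 push: anObject
    elements addLast: anObject
 pop
    ^elements removeLast

 "In class LinkedStack (top is an Association used as a cons cell, or nil):"
 push: anObject
    top := Association key: anObject value: top
 pop
    | popped |
    popped := top key.
    top := top value.
    ^popped

 "The only place that changes when one implementation is swapped for the other:"
 newUndoStack
    ^LinkedStack new   "was: ^CollectionStack new"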
----

See also ExtremeDesignArtifacts, also ExtremeHumility

[CategoryExtremeProgramming]