Moved from SitOnOneCard

----

This afternoon, while waiting for the third 29-minute compile-and-link of the afternoon to finish, I contemplated SitOnOneCard. As it happens, I was working on two related GUI tasks: one a bug fix, the other a new feature. In Smalltalk the two tasks would have taken about an hour total, including unit testing and source code control overhead, and the difference between doing them in parallel or serially would have been minor. However, in the environment in which I am now working (VC++ and ClearCase), the overhead imposed by the tools is substantial, and the tasks were going to take two days. Had I chosen to work the two tasks ''seriatim'', I would have spent an extra day, mostly in overhead.

So, I submit this question to the proponents of SitOnOneCard: can you predict or imagine a point at which increasing the overhead imposed by the development environment would have you relax the SitOnTheOtherCards rule and allow two related tasks to be worked in parallel? If so, please characterize the cross-over point. --DaveSmith

----

In a batch-compile environment we can still concentrate on only one of the tasks at a time. We're talking more about what WE think about than what the computer thinks about.

One of the extreme rules is the shoulder-moving thing. "Doctor, it hurts when I do this." "Don't do that." What's hurting is a 29-minute compile/link. So we'd like not to do that. Do we need to upgrade our 286? If we are already running a giant Pentium, are we sure that the smallest test harness in which those changes could have been made really needed a 29-minute cycle? Would smaller source modules help? Better factoring of objects into modules? A test harness separate from the entire system? More use of DLLs to speed linking?

Much of the power of Smalltalk is in the cycle. In C++/Java, we can try to get the cycle time down with smaller modules and separate test harnesses, so as to get faster links. We can readily get at least 90% of the defects out in unit testing, which can be done in the small. When all that's done, there may still be some cross-over point where compiling/linking/testing multiple things makes sense. But we have to remember: two bugs are four times harder to find than one. To our tiny little minds, the system's behavior becomes more and more bizarre and unintelligible the further away we get from the actual errors. --RonJeffries
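As one concrete reading of "a test harness separate from the entire system": a tiny stand-alone program that compiles against one small class only, so the edit-compile-run cycle is seconds rather than 29 minutes. This is only an illustrative sketch in Java (the Money class and the check helper are made up, not taken from the discussion above); the same idea translates to a per-module test target or a small DLL in a C++ build.

 // A stand-alone harness that compiles against one small class only.
 // Money and its behaviour are invented purely for illustration.
 public class MoneyHarness {

     static class Money {
         private final long cents;

         Money(long cents) { this.cents = cents; }

         Money plus(Money other) { return new Money(cents + other.cents); }

         long cents() { return cents; }
     }

     public static void main(String[] args) {
         Money a = new Money(150);
         Money b = new Money(275);
         check(a.plus(b).cents() == 425, "plus should add the cents");
         check(new Money(0).plus(a).cents() == 150, "adding zero should change nothing");
         System.out.println("MoneyHarness: all checks passed");
     }

     static void check(boolean ok, String message) {
         if (!ok) throw new AssertionError(message);
     }
 }

Running "javac MoneyHarness.java" and then "java MoneyHarness" exercises the class without linking anything else in the system, which is the whole point of testing in the small.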
----

In DesignInXp there is a useful example of a #getTaxDeductions method being written, which also introduces a new method, #addDeduction:. I find that this commonly happens to me: having written my test, I begin coding up the solution and discover that I need a new method or a new helper object. What do I do at this point?

1. Sit on a reminder. I know that I will need a new test for this object/method, but I am currently caught up in the details of the current test, so I scribble a note to write that other test later.
2. Pause what I am doing, write a test for the new method/object, and then go back and continue (but this context switch can be distracting).

Which is the correct answer: do I scribble a reminder and sit on it, or do I write the test now? A hybrid might be to fill in a reminder test that fails (there is a small sketch of that at the bottom of this page).

I asked RonJeffries about this and he said the following: ''Because you're not supposed to release code until it's well-factored, writing the little object is probably the right thing to do. If you write it, testing it is the right thing to do. The thing would be not to elaborate the object beyond the actual use made of it in your original task.''

''You may find that the best thing to do is to finish the original object without the little helper, get all the tests working, then factor in the helper. Try it both ways, feel what it's like, then go with what seems to work best ... but always releasing "enough" tests for whatever you do.''

I guess the answer is ItDepends. So far I have been doing the first option (but you have to remember to write that ReminderCard). --TimMackinnon
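Here is a minimal sketch of the "reminder test that fails" hybrid mentioned above, written in Java with JUnit 4 rather than Smalltalk. The Paycheck class and the getTaxDeductions/addDeduction names are assumptions that loosely follow the DesignInXp example, not the actual code from that page.

 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.fail;
 import org.junit.Test;

 public class DeductionsTest {

     // Minimal hypothetical class under test, loosely following the
     // #getTaxDeductions / #addDeduction: example.
     static class Paycheck {
         private double deductions = 0.0;

         void addDeduction(double amount) {   // the helper discovered mid-task
             deductions += amount;
         }

         double getTaxDeductions() {
             return deductions;
         }
     }

     @Test
     public void getTaxDeductionsSumsWhatWasAdded() {
         // The test I was already working on when the helper appeared.
         Paycheck paycheck = new Paycheck();
         paycheck.addDeduction(150.00);
         assertEquals(150.00, paycheck.getTaxDeductions(), 0.001);
     }

     @Test
     public void addDeductionStillNeedsItsOwnTests() {
         // The hybrid option: a deliberately failing placeholder that acts
         // as the ReminderCard until the helper method gets real tests.
         fail("TODO: write real tests for addDeduction()");
     }
 }

The placeholder test shows up red on every run, so the reminder cannot quietly be forgotten the way a scribbled card can.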