Part of the AtsDiary.

-----

''6 March 2000''

Today we finished estimating the UserStories and had our PlanningGame meeting. We weren't able to do a SpikeSolution for all of our cards, unfortunately, so some remained high risk and difficult to estimate. The planning meeting went very well, though: we got through about 40 cards in an hour. Even so, I think the process could have been streamlined some. More about it in AtsPlanningGame.

After planning, we came back to our office and defined AtsEngineeringTasks. We didn't estimate all of them, for two reasons. First, one of the developers who's supposed to be working with us hasn't arrived yet. Second, the remaining developer isn't familiar enough with the application to feel comfortable estimating tasks. So we estimated the tasks for just one story. We'll do those tasks and then compare our estimates to the actual number of IdealHours spent on each task. That will let us calibrate our estimates so we can be more accurate in the future.

The last thing I did today was to create the first AtsStatusReport. Although it took me about an hour and a half to complete, I think the time was well spent. These reports will keep the users up to date on the progress of the application, but their primary purpose is to satisfy the project's GoldOwner''''''s that things are progressing smoothly. Since the GoldOwner''''''s won't actually be using the app or participating in planning meetings, without this regular feedback they wouldn't have the confidence in our process they need to continue funding development.

-----

''7 March 2000''

We started development today. There's not much to report; we spiked a high-risk problem successfully, which has made me feel much better overall about the risk factors on the project. We also discussed the project's risks as part of AtsRiskManagement. Then we started in on our first major development task and set up the AtsUnitTests. (We're using JavaUnit.)

We're doing AtsPairProgramming, but I'm a little ambivalent about it so far. It does help me focus a bit better, and we ''did'' get a ton of stuff done today, but it's hard for me to see my PairPartner sitting there and just watching. Because I have much more experience with ATS than my partner does, as well as more Java language experience, there wasn't much in the way of collaboration between us. On the other hand, my partner is very enthusiastic about it and says he's learning a lot. I own this task and am sitting at the keyboard; maybe the next task, which my partner owns, will go more smoothly. (There are only two of us on the project at this point, so we're pretty much stuck with each other.)

-----

''8 March 2000''

Today we did nothing but write a Perl script to automatically deploy ATS. ATS is a distributed application consisting of an applet front-end and a "servlet" back-end that allows our in-house distribution protocol to run over HTTP. In the first phase of the project, deploying ATS was always a day+ ordeal involving dark rites and chicken blood. For this phase, I knew that we had to release more often, because when a distribution problem did arise, it was almost impossible to track down. ExtremeProgramming's ContinuousIntegration seemed like a ''very'' good idea, but I knew that if we had to continue to sacrifice goats to deploy ATS, ContinuousIntegration wouldn't work.
So today we wrote a script that compiles everything, UnitTest''''''s the build, packages it up, deploys it, and then unit and function tests the application in its distributed state. So far, we don't have very many AtsUnitTests (the idea was introduced in the first phase of the project, but it never caught on), and my partner is definitely ''not'' TestInfected, but he's humoring me for now. We have only one functional test so far -- it checks whether the application is truly deployed (i.e., is the 'distributed' flag turned on?) -- but that's a good start. I've used UnitTest''''''s extensively on other projects, and I'm looking forward to having that feeling of confidence available on this project.

("We," by the way, is just Al and myself. We were supposed to get another developer last week, but he keeps getting delayed.)

-----

''10 March 2000''

Our third and final developer arrived yesterday. We spent a good deal of time reviewing the situation, re-estimating and redistributing AtsEngineeringTasks (remember, developers must estimate their own tasks), and setting up a second machine. As a result, our AtsLoadFactor skyrocketed. I calculated it for the first time today at lunch and it came out to '''5.8'''! (That is, nearly six hours of calendar time for every IdealHour of engineering work actually getting done.) Our schedule commitments had been based on a load factor of 3 (and I had thought that was high), so I panicked a bit. I posted the figure prominently on the AtsTrackingWhiteboard, underlined the "Committed to 3.0" section a few times in red, and went to lunch.

This afternoon, we all buckled down and got some work done. I don't claim it was the whiteboard that did it, but my notes certainly drew attention and got people talking. After an uninterrupted afternoon's work, I recalculated our load factor -- it had dropped to 3.7. I guess we're early enough in the iteration that we can still have a significant effect on the load factor; I had been concerned that it would be hard to change.

Our initial development efforts were a little fragmented. Rather than estimate all of the AtsEngineeringTasks in the iteration, we just estimated the tasks for one AtsUserStory. The thinking was that we weren't really sure how long the tasks would take, so we'd do one story's worth and use that to calibrate our estimates. In retrospect, that wasn't necessarily a good idea. The tasks have a lot of dependencies on each other, and they were spread across two developers. (I have very few tasks, as I'll be spending most of my time mentoring.) When we ordered the tasks, we concentrated more on risks (WorstThingsFirst) than on dependencies, and we ended up thrashing around a bit. We set up our tests, then started coding, and discovered that we'd end up stubbing in more code than we'd actually write, and that we'd have to redo most of our work once the stubs were implemented. So we got together again and reprioritized our tasks based on dependencies.

Since we reprioritized, dependencies haven't been too big of an issue. There have been a few cases here and there where we've had to stub in a class that we knew the other developer was working on, but since we're using a revision control system that supports concurrency, it's not a big deal. The tests give us confidence that merges won't cause problems.

A lot of our initial development effort has been spent on the test framework. For the most part, though, we're still coming in under our estimates. Most of the stuff we're working on right now involves database work, so our tests have to tweak the data in the database.
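For illustration, a database-backed test in this style might look something like the sketch below. (It's only a sketch -- the JDBC URL, table, and column names are invented for the example, not the real ATS schema.)

 // Sketch of a JavaUnit test that tweaks database data directly.
 // The JDBC URL, table, and column names are invented for illustration.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
 import junit.framework.TestCase;

 public class ScheduleQueryTest extends TestCase {
     private Connection connection;

     protected void setUp() throws Exception {
         // Start each test from a known state: insert the one row it needs.
         connection = DriverManager.getConnection("jdbc:odbc:ats_test");
         Statement statement = connection.createStatement();
         statement.executeUpdate(
             "INSERT INTO schedule (id, owner) VALUES (9001, 'test-owner')");
         statement.close();
     }

     protected void tearDown() throws Exception {
         // Clean up so the next test sees the database as it found it.
         Statement statement = connection.createStatement();
         statement.executeUpdate("DELETE FROM schedule WHERE id = 9001");
         statement.close();
         connection.close();
     }

     public void testFindsScheduleByOwner() throws Exception {
         Statement statement = connection.createStatement();
         ResultSet results = statement.executeQuery(
             "SELECT id FROM schedule WHERE owner = 'test-owner'");
         assertTrue("expected a row for test-owner", results.next());
         assertEquals(9001, results.getInt("id"));
         statement.close();
     }
 }

The tearDown is the important part: each test cleans up after itself, so the tests can run in any order against the shared development data.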
(We could use a MockDatabase, but for us that's not DoTheSimplestThingThatCouldPossiblyWork.) We're gradually factoring out a nice set of methods that will set up our database tests for us. In the process, we've also added some nice simple methods to our Database class and identified additional refactorings we can do on it in the future.

The developers are starting to become TestInfected. One developer mentioned that the tests give him more confidence in the code. They're having more difficulty with CodeUnitTestFirst; both have mentioned that the test-first mindset doesn't come naturally yet. I'm fortunate, though, in that they're willing to humor me for now. Next week, we'll review the methodology and discuss what's working and what's not.

One problem we have had is that we can't seem to get our debugger to recognize breakpoints that are behind JavaUnit tests. (I suspect it's related to the reflection JUnit does.) Fortunately, this hasn't been too big a problem: we've rarely needed that kind of debugging, and println has sufficed. ''(Note: This turned out to be a case of PoorUserInterfaceDesign. JUnit wasn't at fault.)''

Overall, this is the most ''fun'' I've ever had on a software project, and I think the other developers agree. There's a lot of back-and-forth banter between the two pairs, typically "complaining" about how I asked for a test or changed a test to break somebody's code. The tone is light-hearted, but we're getting a lot of work done. The team's definitely starting to jell.

One thing that's been absolutely critical to this process is that our offices are across the hall from each other, so we can shout back and forth. (No doubt greatly annoying the other residents of this section.) It would have been even better if we could have gotten a single large room to ourselves, but I count myself lucky to get what we did.

So far, the process of DictatingByOsmosis has worked well. I'm being incredibly nitpicky about style ("May I type for a second? I've got a few tiny changes..." or "I'm just going to clean up a few things..."), but it's paying off. A developer refactored some code while I was away, and when I came back, the style was so close to the one I had been using that, to my vague surprise, I figured I must have done the refactoring myself and forgotten about it.

I've also been using polite questions as a form of DictatingByOsmosis. When I pair up with a developer who's working on some code I haven't seen before, I ask to see the test. ("So, what's the test for that look like?") I also emphasize the importance of the tests and accept them as the final authority on whether a task is done or not. ("Do the tests run? They do? Great! What's next?")

Again, though, I'm very fortunate in that both developers are flexible enough to try a new way of doing things. There's a little skepticism about some things, but almost no resistance.
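To make the CodeUnitTestFirst conversation a little more concrete, this is roughly the kind of thing the "So, what's the test for that look like?" question produces before any production code exists. (Again, just an illustrative sketch -- the DeploymentStatus class is invented for the example and isn't part of ATS.)

 // The test is written first and defines what "done" means for the task;
 // the small class after it is then written just to make the test pass.
 // DeploymentStatus is a hypothetical class invented for this sketch.
 import junit.framework.TestCase;

 public class DeploymentStatusTest extends TestCase {
     public void testReportsDistributedFlag() {
         assertTrue(new DeploymentStatus("distributed=true").isDistributed());
         assertTrue(!new DeploymentStatus("distributed=false").isDistributed());
     }
 }

 class DeploymentStatus {
     private boolean distributed;

     DeploymentStatus(String configLine) {
         distributed = configLine.trim().equals("distributed=true");
     }

     boolean isDistributed() {
         return distributed;
     }
 }

Watching the test fail first and then pass is what gives the "Do the tests run? Great! What's next?" exchange its teeth.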