Part of the AtsDiary.
----
''31 March 2000''

We're scheduled to deliver our iteration four release at the end of the day tomorrow (Monday, actually). I'm not at all concerned about making that date. We've finished 15 user stories, and the only thing remaining is to fix a little bug that we've estimated at two ideal hours.

All of our major risks have been resolved. The hardest/riskiest AtsUserStories were done first, of course. We've finally gotten the authentication system we have to connect to working, although it was giving us problems up until Tuesday, three days ago. I finally got the application working in its deployed state (that is, running as an applet, not as an application in our development environment) today. Just before that, I turned on the distribution switch and used the AtsUnitTests to exercise all of our distributed objects in their distributed state. Unsurprisingly, since we rarely turn on the distribution switch during normal development, that process found a couple of bugs, but they were easily fixed. It would have been a much bigger headache if we didn't have the unit tests.

There is one dark cloud on the horizon: something's happened to the ability to associate action items with projects. I'm not sure when it happened, since that functionality was developed in phase one of the AtsProject and doesn't have any unit tests. Hopefully we'll be able to debug that problem on Monday. If we can't, I've prepared a story card for the users to prioritize in our AtsPlanningGame meeting on Monday.

Even with that problem, though, I'm feeling very happy with our current state of development. We're going to be able to meet our deadline without any problem, and I have a lot of confidence in the quality of our code as well (action item bug excepted). I'm quite proud of our work; we've created a real security architecture that is enforced on the (theoretically uncrackable) back end, not just a whitewash job of disabling buttons on the front end. In our manual tests, we would occasionally turn off some permissions after the screen had been painted, and then click a button that would normally be disabled or missing. I always got a kick out of seeing the "You don't have permission to do that" security error dialog come back after the back end rejected our request, demonstrating that a cracker's subversion of the GUI wouldn't have any effect on his access rights.
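To illustrate what I mean by enforcement on the back end, here's a minimal sketch - not our actual code; the class, method, and permission names are all invented - of a server-side check that rejects a request no matter what the GUI shows:

 // A minimal sketch of back-end permission checking. Not the actual ATS
 // code; every name here is invented for illustration.
 public class ActionItemService {

     // In ATS the permissions live on the back end; here a simple
     // in-memory set stands in for them.
     private java.util.Set grantedPermissions = new java.util.HashSet();

     public void grant(String permission) {
         grantedPermissions.add(permission);
     }

     public void deleteActionItem(int actionItemId) {
         // The check happens on the server side, so re-enabling a disabled
         // button (or bypassing the GUI entirely) gets a client this far
         // and no farther.
         if (!grantedPermissions.contains("delete action item")) {
             throw new SecurityException("You don't have permission to do that");
         }
         // ... only now do we actually delete the action item ...
     }
 }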
So, why is everything going so well? I can't give a definitive answer, but I can provide a list of what's helped us. I'll let you decide which of these things are due to ExtremeProgramming, which are due to a small team working on a small project, and which are due to blind luck. :)

* '''Revised estimates.''' If we hadn't revised our estimates when we saw that they were wrong, the project would probably not be done by Monday. Morale shot up dramatically once we started beating our estimates. In my opinion, although I can't prove it, this led to increased productivity and higher-quality work.

* '''Over-estimating.''' Our original AtsLoadFactor of 3.0 was too high. As of the end of the day today, it was 2.53. Our revised estimates were also too high, coming in at about 115% of actual. (The first week's estimates, on the other hand, came in at about 63% of actual.) Since we came up with several overlooked tasks and had 44 last-minute developer-hours of absences, we needed the extra time. Right now it looks like we'll make it right on schedule, with only a little time available for AtsSpitAndPolish (unfortunately). Please note that I deliberately over-estimated the load factor as a way of giving us "wiggle room," but we weren't trying to fudge our task estimates.

* '''UnitTest''''''s.''' These made a lot of things easier, but 95% of the problems we encountered were in the unit tests, not in the code they were testing. Perhaps this is because most of our tests operate against our database, so we have to do a lot of non-fun stuff like creating and deleting cascading relationships. (There's a sketch of the general shape of these tests after this list.) Some tasks took hours to get the tests working, but less than an hour for the actual code. (You could argue, though, that the code wouldn't have been so easy if we hadn't first spent so much time on the tests.) On the other hand, those times that the tests did catch something, they saved us a ''lot'' of time. If the tests hadn't taken so long, they'd be an unqualified success; as it is, I'm happy with them, but cautious.

* '''PairProgramming.''' This appears to have been a success. I was the only one on the team of three with previous experience with ATS, so I did a lot of mentoring. Pair-programming seemed to be an effective way to do so. I wonder, though, if having everybody work alone but free to call on me for help would have been more effective.

* '''ExtremePlanning.''' Story cards went over great with the users. Engineering tasks also worked fairly well, although we had trouble coming up with non-interdependent tasks. Next time we'll probably create the tasks with a particular order in mind, perhaps using a variant of AlistairCockburn's ProjectPlanningJamSession technique. In addition, separating IdealTime from ActualTime with a LoadFactor was a great help - without it, our estimates almost certainly wouldn't have been high enough to account for overhead. Tracking time spent on tasks in terms of ideal time also let us see exactly how well we were estimating, which should be a big help as we plan our next iteration.
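Here's the promised sketch of the general shape of our database-backed tests. This isn't real ATS code - the table names, columns, and connection URL are all invented, and I'm assuming a JUnit-style TestCase - but it shows how the setUp and tearDown work dwarfs the assertion itself:

 // Hypothetical sketch of a database-backed unit test. All schema details
 // and the JDBC URL are invented; only the overall shape is the point.
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;
 import junit.framework.TestCase;

 public class ActionItemAssociationTest extends TestCase {
     private Connection connection;

     protected void setUp() throws Exception {
         // Most of the effort goes here: creating the cascading rows that
         // a single assertion depends on.
         connection = DriverManager.getConnection("jdbc:odbc:ats_test");
         Statement s = connection.createStatement();
         s.executeUpdate("INSERT INTO projects (id, name) VALUES (1, 'Test Project')");
         s.executeUpdate("INSERT INTO action_items (id, project_id, description) " +
                         "VALUES (1, 1, 'Test Item')");
         s.close();
     }

     protected void tearDown() throws Exception {
         // ... and here: deleting it all again, children before parents.
         Statement s = connection.createStatement();
         s.executeUpdate("DELETE FROM action_items WHERE id = 1");
         s.executeUpdate("DELETE FROM projects WHERE id = 1");
         s.close();
         connection.close();
     }

     public void testActionItemIsAssociatedWithProject() throws Exception {
         Statement s = connection.createStatement();
         ResultSet r = s.executeQuery("SELECT project_id FROM action_items WHERE id = 1");
         assertTrue("action item should exist", r.next());
         assertEquals(1, r.getInt("project_id"));
         r.close();
         s.close();
     }
 }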
Practices that were of questionable value:

* '''Merciless refactoring.''' We did a fair amount of method and variable renaming and method extraction. Although it definitely made the code cleaner, and satisfied my obsessive-compulsive leanings, I don't know if it had any large impact on the production code. This could be because the production code is fairly well established and reasonably well designed; the design didn't make any change hard enough to demand refactoring. On the other hand, the test framework, which was brand new, would have been worthless without merciless refactoring.

* '''DoTheSimplestThingThatCouldPossiblyWork.''' I'm a strong believer in this attitude at the macro (inter-iteration) level, but I'm not convinced that it's valuable within a single iteration. In our test code, we took this approach to the (ahem) extreme, mostly because we were already spending a lot of time on the test code and didn't want to spend more. As a result, our test code feels very organic and haphazard. I don't have the same understanding of, and comfort with, its structure that I have with the production code. On the other hand, it feels like it's just barely on the verge of crystallizing into a very clean and useful design, so maybe I just need to give it some more time. Part of the problem could be that, since it's "only test code," we aren't as careful with it as we are with the production code.

* '''CodeSmell''''''s, simplicity, and refactoring as a substitute for UpFrontDesign.''' This approach worked fine in the production code, but I think that's because the production code already has a strong, well-established structure. It's easy to see where things fit into place. In the test code, that wasn't the case, and I'm not happy with the result. It smells. Of course, maybe that's a signal that I should sit down and refactor it for a while. Everybody's reluctant to do so, though, because refactoring the test code is never relevant to an AtsEngineeringTask. Maybe it should be.

Practices that hurt us:

* None that I can think of.

I'll discuss this list further with the team next week, and post an updated version of the list then.
----
''3 April 2000''

I've updated the AtsStatusReport''''''s page with the last two weeks' status reports.
----
''12 April 2000''

We had our iteration five planning meeting last week on Monday. That went very well - we used the story cards again, but this time I used a combination of ExtremeProgramming's PlanningGame and AlistairCockburn's ProjectPlanningJamSession.

First, we went through all of the cards and categorized them as high, medium, or low priority. There were some arguments about relative priority, but I overrode them by stating that this first pass was just a rough cut and that the highest priority anyone mentioned would go on the card. As a result, we got through the initial batch very quickly.

Next, I picked up all of the high priority cards and put them down on the table one by one, asking the users to arrange each card by relative priority, with one end of the table being "highest" and the other end being "lowest." This, too, went fairly quickly. There were some disagreements about priority among different users, but overall they were able to compromise and accommodate each other.

When we got through all of the high priority cards, we added up the estimates and found that they equaled the amount of time available. So we stopped. The cards on the table were the cards for iteration five.

If I were to do it over again, I would have continued with the medium priority pile. I think some of the cards in that pile would have turned out to be higher priority than some of the "high" priority cards, since the initial rough cut left a few cards with artificially high priorities. But overall the users were happy, and I think this session went even better than the first one.

***

We also deployed iteration four of ATS last week, two days ahead of schedule. Those extra two days, unfortunately, were eaten up very quickly with user training and server issues. Right now we're about 7 ideal hours behind schedule for iteration five. I'm not worried about it, though. Not yet, anyway. Our estimates are based on a load factor of 3.0, which is probably going to be too high.

After we deployed ATS, we spent about five hours in an AtsEngineeringTask planning meeting for iteration five. This one went ''much'' better than the planning meeting for iteration four. In iteration four, we tried to create EngineeringTask''''''s that were completely stand-alone, so that they could be done in any order and would mesh well with DoTheSimplestThingThatCouldPossiblyWork. That approach didn't work out too well, though. It was hard to tell exactly what needed to be done for each task, since they were worded somewhat generically. I think this also decreased the accuracy of our estimates.

So in iteration five, we used a modified ProjectPlanningJamSession. Only the engineers were present. We took each story card in the order defined by the users and brainstormed all the tasks required to do each story.
Then we ordered the tasks and made sure there were no overlaps or gaps. Some stories also required data migration or specialized testing, so we created cards for those things and put them at the end of the story's tasks.

As we proceeded with the other stories, we assumed that things would be done in that particular order. So the first time we needed to do something related to user preferences, we created a card for "Create User Preferences window" and estimated it accordingly. But the next time, we created a card named "Add 'foo' to User Preferences window" and gave it a correspondingly smaller estimate. The overall result felt much more coherent and plausible. Part of the benefit may have been due to the other programmers' increased understanding of ATS, but I also feel that the ProjectPlanningJamSession approach gave us a better overall view of what needed to be done.

***

Our biggest problem at this point has been with the production server. We're actually deployed on a staging server for various reasons, and this is one of the first servers that new patches are tested on. Maybe we were just unlucky, but we've had very poor reliability from that server since Friday. Coincidentally, that's when the users started using ATS. I'm concerned that the users will try ATS, not be able to log on, and write ATS off entirely. :( We've located a new server that we can dedicate to ATS and will be moving to that server as soon as possible.

We've also lost one of our developers. He was needed for another project. Now we're down to two, from an initial plan of four. I'm not really concerned about this - it's increased the length of the project, but lowered the overall cost. I communicated this to the GoldOwner''''''s and they were perfectly happy with it. Our estimate for this iteration is about four weeks, with deployment scheduled for May 3rd.

I've also posted the latest AtsStatusReport.