Yagni tells us not to add a feature unless we need it. But what about deleting features when they're no longer needed? Either:

* we know that we don't need the feature now, so delete it and refactor, or
* we don't know that we won't need it again, so don't waste time deleting it now.

Many of the arguments in favour of Yagni center around not wasting time now, because you can't predict the future. When deleting, though, the short-term effort is reversed: deleting costs time now. But if you don't delete, you'll end up with unused features in the code, which may slow you down later. So, should I spend time now to potentially save time later? --DaveWhipp

Short answer: Yagni doesn't cover deletion, but OnceAndOnlyOnce and RefactorMercilessly do. See rule #4 of the XpSimplicityRules: "No superfluous parts". If you deleted unused code (and archived it just to soothe your fears), wouldn't you say that the resultant codebase is simpler? --AnonymousDonor

Yes, the resulting code is simpler. So this is a case where I should spend time now to save time later. Archiving the unused code is irrelevant: it's in CVS anyway, and it will probably not be a plug-in replacement later.

''Unused code is a maintenance cost, both now and in the future. Unused code makes refactoring more difficult now. If the code isn't used to provide business value now, delete it. If you need it later, you'll know exactly how to write it. And if desperate, you can always fetch an old version from your source control system. (...but it's amazing how often you'll find it not worth the bother, even when substantially the same feature is needed again.)''

''So, if you don't need it, delete it.''

I'm sure this is the correct "dogmatic" answer. But I'm not sure that it's really correct, nor is it consistent with some of the marketing hype of XP. Consider the oft-stated "fact" that well-factored code, with adequate unit tests, does not become harder to maintain as the code base grows. If this is true, then unused code will not hinder maintenance, provided that code remains well factored with a complete set of unit tests.

And now let's pluck some figures out of thin air. Let's say that the code in question would take an estimated 3 weeks of effort to re-implement. Let's also say that there's a 50:50 chance that it will be needed again in 6 months. Let's also suppose that, within those 6 months, the code base will have changed sufficiently that it would be difficult to re-introduce the deleted code from the version-control system. OK, there are a few assumptions there; but under those conditions you'd be crazy to delete the code now, if it is true that well-factored code (with good tests) does not hinder current development. So please, demolish the assumptions. --dave

''Nor is it consistent with some of the marketing hype of XP.''

Any examples of this? It seems 100% consistent with my reading of XP.

''... unused code will not hinder maintenance if that code remains well factored ...''

It certainly does hinder if it needs to be updated to keep up with the used code (as implied by "remains well factored"). Even if it doesn't need to be updated, it hinders because it's one extra thing to keep track of. If I only use 1% of the code, it doesn't matter that all of the code is perfectly factored when 99% of it is a RedHerring.

''... let's also say that there's a 50:50 chance that it will be needed again in 6 months.''

How, pray tell, do we know that? Even if we know that, what's the harm in storing it in a scrap file for the intervening 6 months?
There's no need to clutter the current source code.

''Let's also suppose that, within those 6 months, the code base will have changed sufficiently that it would be difficult to re-introduce the deleted code from the version-control system.''

Couldn't have been that well factored then, could it? What's the alternative? To update the code as you go, right? So the 3 weeks of "re-implementation" is spent during the active project rather than after it. I'm reading the above scenario like this:

* We currently have a library Foo as version A.
* In 6 months, there's a 50% chance we'll need Foo as version B.
* It will take 3 weeks to convert Foo from version A to version B.
* If we spend that time during the active project (100% certain), we'll spend 3 weeks on it.
* If we spend that time after the project (50% likely), we'll spend an expected 1.5 weeks on it.

The choice seems clear. All that being said, I can't imagine code that works, is well factored, and has unit tests being difficult to retrieve from source control. You must have some pretty messed-up source control if it totally mangles your files after 6 months.

-----

Thank you, I think I now understand my assumptions better. But I'm still not convinced about the relative cost of the two paths: alternative 1 is "delete & refactor; maintain the simpler system; [refactor/restore/refactor]"; alternative 2 is "... maintain the bigger system ...". Perhaps I've had bad experiences, or perhaps I've never played with really good code, but in my experience many small refactorings are simpler than one big refactoring: the whole costs more than the sum of its parts. The question is: in going from libA to libB, is it less effort to do it in one chunk, or to do it incrementally? Of course, if we never need to do the big chunk at all, then deleting should be even cheaper. As you pointed out, we only know that we need it after the fact (when the probability is 100%). There's one thing I do know: if the maintenance phase is zero time (i.e. "oops, I shouldn't have deleted that"), then the cost of the delete-restore cycle is greater than zero. So there's obviously a cross-over point. --dave

Well, maintaining unused code in the hope of future reuse should be cheaper than all-in-one upgrading by a factor inverse to the probability of reuse. In this case, it had better be twice as cheap. I'm not convinced that it is. I'm especially not convinced that you can ''know'' the probability in question. Also, I question the assumption that as-you-go refactoring is going to be cheaper than all-at-once development in this case.

1. After the project is finished, you'll have the bonus of the knowledge gained during development. Refactoring as you go will inherently explore more dead ends, '''''especially''''' since the code is not in real usage.
2. As the project goes on, you have to expend extra (non-zero) effort to mentally manage the unused code. If you leave it to the end, you save this cost. I think you're underestimating this impact.
3. The reason as-you-go refactoring is '''usually''' cheaper is that the customer is looking at it and giving you feedback. They are also getting value now rather than later, so from an investment perspective it's a good bet. This all breaks down when the code is not being used at all: there's no feedback about what is ''really'' needed, and no investment factor, because the code serves no purpose.

The delete-restore overhead is assuredly greater than zero, but I'd wager it's on the order of minutes. Even one or two hours is 2 orders of magnitude less than 1.5 weeks. The cross-over point will be very lopsided in favour of RefactorMercilessly and the SimplestThing.
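To make that cross-over point concrete, here is a back-of-the-envelope sketch using only the made-up figures from the discussion above (3 weeks to re-implement, a 50:50 chance of needing it again, an hour or two to fish the old version out of source control). The class name, the keep-alive figure, and the 40-hour week are illustrative assumptions, not measurements.

 // A back-of-the-envelope comparison of the two paths, using the
 // hypothetical figures from the discussion above; nothing here is measured.
 public class DeletionCrossOver {
     public static void main(String[] args) {
         double reimplementWeeks  = 3.0;        // cost to re-do the work later, in one chunk
         double probabilityNeeded = 0.5;        // the "50:50 chance in 6 months" guess
         double restoreWeeks      = 2.0 / 40.0; // ~2 hours to recover the old version
         double keepAliveWeeks    = 3.0;        // cost of keeping the unused code well factored
                                                // while the rest of the system changes
                                                // (the respondent's reading: paid with certainty)

         double expectedCostIfDeleted = probabilityNeeded * (reimplementWeeks + restoreWeeks);
         double costIfKept            = keepAliveWeeks;

         System.out.println("Expected cost if deleted now: " + expectedCostIfDeleted + " weeks");
         System.out.println("Cost if kept and maintained:  " + costIfKept + " weeks");

         // The cross-over: keeping the code only pays off when its upkeep is
         // cheaper than the expected re-implementation, i.e. cheaper by a
         // factor of 1/probabilityNeeded.
         System.out.println("Keep it only if upkeep costs less than "
                 + expectedCostIfDeleted + " weeks");
     }
 }

With those numbers, deletion wins by roughly a factor of two; the kept code would have to be nearly free to maintain before the balance tips the other way.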
-----

One thing to remember is that with a code control system, you never "lose" the original code, so you can easily restore it. Now, if it's scattered about in tens of files, then it's not well factored to begin with, so you can't assume that it would be easy to modify along the way; you might start to think that it would be easier to delete it and add it back later, when the surrounding code is better factored and the restored code can be added in a better-factored form. --PeteHardie

Besides, if you have all the tests for it, rewriting it will be a lot simpler.

If you delete the file containing the code, it is gone from your project and your source code control system. If it is not needed, so what? If you are not sure whether it is needed or not, you shouldn't be doing anything until that is determined. When you are sure, however, delete the code and don't look back. --WayneMack

-----

This decision becomes much easier when you discover that a feature was unneeded while trying to fix a bug in it that ''does'' impact the needed behavior of the application. I ran into a case of this recently, and coined my own term, YouNeverNeededIt (YNNI), to describe it.

Specifically, the application had a settings form that tried to keep track of whether a change had been made, and to behave differently on accepting or cancelling the form depending on whether changes had been made. It would ask the user if they were sure they wanted to discard changes if they clicked cancel, and it would only bother to actually save anything if changes had actually been made. The code for this, though, was spotty and missed changes to some controls, so if you made only one change, it was possible the change would not get saved. The simple fix would be to patch the holes, but that leaves in place the awful structure that allowed cases to be missed in the first place. A serious refactoring was going to be needed.

The breakthrough was in realizing that the settings form is used so infrequently that the user probably won't care if they're asked to confirm discarding changes when they didn't make any, and the overhead of saving settings when no changes were made is completely trivial. The right answer in this case was clearly YNNI.
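A minimal sketch of the two shapes described above, in plain Java with invented names (the original application's controls and handlers are not shown on this page): the first class has the dirty-flag structure that lets an individual change handler forget to mark a change; the second is the simplified form that always confirms and always saves.

 // The structure that caused the bug: every change handler has to remember
 // to set the dirty flag, and any handler that forgets silently loses changes.
 class TrackingSettingsForm {
     private boolean dirty = false;
     private String userName;
     private int fontSize;

     void onUserNameChanged(String value) { userName = value; dirty = true; }
     void onFontSizeChanged(int value)    { fontSize = value; } // oops: forgot dirty = true

     void onOk()     { if (dirty) saveSettings(); }          // drops a lone font-size change
     void onCancel() { if (dirty) confirmDiscardChanges(); }

     private void saveSettings()          { /* write the settings out */ }
     private void confirmDiscardChanges() { /* "Discard changes?" prompt */ }
 }

 // The YouNeverNeededIt answer: stop tracking changes altogether. Always
 // confirm on cancel and always save on OK; the form is used so rarely that
 // the occasional needless prompt and redundant save cost nothing.
 class SimpleSettingsForm {
     void onOk()     { saveSettings(); }
     void onCancel() { confirmDiscardChanges(); }

     private void saveSettings()          { /* write the settings out */ }
     private void confirmDiscardChanges() { /* "Discard changes?" prompt */ }
 }

Deleting the change tracking removes the whole class of "forgot to set the flag" bugs, rather than patching them one control at a time.

CategoryDiscussion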