It is well known that the cost of a software defect rises the later in the product development cycle it is found. Defects discovered in design/analysis are (in theory) cheaper than defects discovered in implementation; defects discovered by the compiler (and reported as compiler errors or warnings) are cheaper than defects discovered by executing the program; defects discovered in unit test are cheaper than defects discovered in final system test; and, most significantly, defects discovered during development are '''far cheaper''' than defects discovered after deployment. Thus, it is beneficial to DiscoverDefectsEarly. You can't discover (in the general case) ''all'' defects in the early stages, but there are things you can do to move discovery of defects earlier:

* First, consider the life-cycle of the product, the consequences of failure, the cost (and logistics) of a field repair, etc. up front. Not every piece of software has to be bullet-proof, but 'tis better to err on the side of caution.
* Choose appropriate algorithms (and consider the target environment, now and future) during design and analysis. One cannot solve all problems with BigDesignUpFront (not even all problems of specification, as opposed to implementation), but you can save yourself a great deal of trouble by paying attention here. One of the nastiest sorts of defects to fix--scalability problems (it works in development, but not in the 20,000-node production environment)--is usually a defect of design or analysis.
* Choose languages (and other architectural details) which permit easier discovery of defects. StaticTyping, for instance, can detect most type errors at compile time, whereas DynamicTyping cannot (that said, there are many valid reasons to use DynamicTyping; this is not advice to avoid it). A sketch of this follows the list below.
* Choose languages with programming models that reduce the possibility of error. GarbageCollection is far less error-prone than manual memory management. A strongly-typed language is better than one which allows (or even requires, to get things done) cheating.
* UnitTest. 'Tis preferable to find and repair defects at the developer's desk, before the code is committed/released, than to find them during acceptance testing. (A second sketch follows the list.)
* Perform acceptance testing, ''including in an environment which simulates that of the customer, and by testers who can simulate the behavior of the end user.'' This should be obvious, but still too many managers (and engineers) consider testing to be overhead. (See QualityIsFree/QualityIsNotFree for more.)
* Use a TrackingTool. Even if it's an Excel spreadsheet or a whiteboard in the lab (as opposed to a professional software tool like ClearQuest), do it.
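To make the StaticTyping point concrete, here is a minimal Java sketch (the class and method names are hypothetical, not from any particular codebase). The defect is discovered by the compiler, before the program is ever executed:

 // Hypothetical example: the type error below is caught at compile
 // time; no test run, let alone deployment, is needed to find it.
 public class Order {
     public static int totalCents(int unitCents, int quantity) {
         return unitCents * quantity;
     }

     public static void main(String[] args) {
         // int total = totalCents(150, "four");
         //   ^ uncommenting this line yields a compile error:
         //     "incompatible types: String cannot be converted to int".
         //   A dynamically-typed language would defer that discovery
         //   until the line actually executed, and then only on the
         //   path that reaches it.
     }
 }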
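And for the UnitTest bullet, a sketch of the same hypothetical class exercised by a test (assuming JUnit 4 on the classpath). The point is only that the failure shows up at the developer's desk:

 import static org.junit.Assert.assertEquals;
 import org.junit.Test;

 public class OrderTest {
     @Test
     public void totalIsUnitPriceTimesQuantity() {
         // If a later change makes totalCents add instead of multiply,
         // this test fails on the developer's machine immediately,
         // rather than weeks later in acceptance testing.
         assertEquals(600, Order.totalCents(150, 4));
     }
 }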
'''However''': It is certainly true that defects discovered very late in the process (after the product has been deployed, for instance) are more expensive to repair than defects discovered early. ''It is debatable'', however, whether defects discovered in "implementation" are cheaper to fix than defects discovered in "design". This belief is one of the primary motivating factors behind BigDesignUpFront. In other disciplines (like EE or manufacturing) it is certainly true, due to the significant outlay of $$$ needed to move from design to implementation (fabricating circuit boards or ICs is expensive, as is setting up production lines); in software it is far less true. ExtremeProgramming is based on the premise that detecting defects in "implementation" (i.e. coding; but see TheSourceCodeIsTheDesign) is ''less'' expensive than discovering defects in "design" (referring to paper-or-UML-driven diagramming exercises which then drive the implementation).

* Changing source code, especially in the presence of good software engineering tools (RefactoringBrowsers, good configuration management), is often inexpensive--the only expense worth mentioning being labor costs.
* The CASE tools used to manipulate ''design documents'', OTOH, are frequently more cumbersome.
* Furthermore, even if ''fixing'' defects in design is cheaper, ''discovering'' them is far more difficult--so much so that attempting to find and fix them all is a futile effort. (As Brooks said, plan to throw one away--you will anyhow.) In disciplines (such as EE) where BigDesignUpFront is appropriate, there are numerous CAD/CAE tools to allow engineers to verify designs ''before'' the design is committed to silicon (simulators, diagnostic/analysis tools, etc.)--and even then mistakes frequently occur and must be repaired with GreenWires. In the domain of software, such tools are of questionable quality and difficult to use--I'm aware of no tool that will take a UML diagram as input and tell you "you fool, this will never work!". Numerous tools (debuggers, compilers/"lint", etc.) exist, OTOH, to verify the correctness of code. In many cases, the only "diagnostic" tool available is the DesignReview, wherein teams of human programmers analyze static diagrams of software looking for flaws. This technique is known to produce high rates of false negatives (glaring design flaws are overlooked and not discovered until a later phase) and false positives (reasonable and valid designs are rejected for specious reasons, often reflecting the prejudices and/or limited knowledge of the reviewers).
* Design documents are not a saleable or shippable work-product; code is. (But don't fail to SharpenTheSaw.)

While it's good to DiscoverDefectsEarly, it's an AntiPattern to expend ridiculous amounts of effort to eliminate ''all'' defects before coding begins. But this observation applies ''only'' to the front of the curve--it is a far more egregious AntiPattern to practice CustomerQa.

----

'''What Defects?'''

The above is based on the assumption that one starts with defect-free software and coders insert defects as they proceed. The logical conclusion is that the software is in its most accurate state before one writes a single line of code.

The reality is that software development is a process of successive approximation. During development one is adding conditions under which the software does the correct operation. Most of the items identified as "bugs" are not cases where bad code was put in (i.e., a defect), but cases where code to handle a specific condition was not put in (i.e., an omission). The "correctness" of software depends not so much on the validity of the cases handled as on the elimination of cases not handled.

The most efficient way to detect unhandled cases is to execute the software in its intended environment. When one relies on paper and mental models of the software, and validates those models against paper and mental models of the environment, one ensures that operational cases will be overlooked, and the time spent on these models delays detection of the overlooked cases. If one wants to discover defects early, one needs to get operating software into the hands of users in the actual environment as soon as possible.
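As a concrete illustration of that argument (a hypothetical Java fragment, not from any real system): every line below is individually "correct"; the defect is the case that was never written, and it surfaces only when the software meets an input the mental model overlooked.

 // Sketch of an omission-style "bug": no statement here is wrong;
 // the defect is the statement that is missing.
 public static int averageLatencyMillis(int[] samples) {
     int sum = 0;
     for (int s : samples) {
         sum += s;
     }
     // Omitted case: an empty sample set. Reading the code that is
     // here will not reveal the code that isn't; running it in an
     // environment that actually produces empty sample sets will
     // (it throws ArithmeticException: / by zero).
     return sum / samples.length;
 }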
''An interesting take; OTOH, in my experience a significant number of defects are sins of commission (incorrect code) rather than omission (needed code not yet done). Certainly one starts with nothing and works until one has something (working code); one doesn't start with code that does everything and strip away the unneeded bits until one has just enough code left to meet requirements.''

''And it does bear repeating: paper design documents '''are not the WorkProduct'''. They can be useful in generating the WorkProduct, but they are not it. To the extent that they are '''not''' useful in generation of the WorkProduct (or, in some cases, '''impede''' generation of the WorkProduct), they probably should not be done. I've found that a LittleDesignUpFront is often quite useful--especially when done as a PaperModel.''

----

If we take ''defects'' in its ZeroDefects connotation, meaning ''non-conformance to requirements'', then the software starts out being completely defective -- it doesn't conform to any requirements. (Or does it? We don't know, since we haven't written any tests yet to find out whether it does or doesn't conform. ;) And when we say "requirements" we don't just mean the customer requirements, but also the process requirements (e.g. conformance to coding standards).

Now, in order to discover defects we need ''defect detectors''; there are many, and they exist to detect different types of defect/non-conformance. Since the goal is to DiscoverDefectsEarly, it is wise to deploy and employ the defect detectors early. For example, a powerful defect detector is the UnitTest; if it is deployed early (CodeUnitTestFirst) then it will DiscoverDefectsEarly. Ditto for AcceptanceTest. Another example is the CodeReview, with PairProgramming providing the earliest possible detection mechanism. Other examples include StandUpMeeting, OnsiteCustomer, and FrequentReleases. Even tools as simple as syntax highlighting help DiscoverDefectsEarly.

----

See PokaYoke

----

CategoryDiscovery