From ExistingPageOrBegging:

: "a known, simple, reproducible error ... that is not resolved in reasonable time, devalues anything that is written here about software quality."

But does the existence of "bugs" really indicate a kind of hypocrisy about software quality? Isn't it more reasonable to recognize that there are priorities, which may leave some less critical problems unsolved? Even if they are known, simple and reproducible?

I don't want to come across as anti-quality or pro-bugs, but I have seen the "extreme" of the above position. What can happen, unfortunately, without a real sense of priority and focus on the critical issues, is that anyone who raises an issue in a team can derail the team's efforts into things that don't have a lot of value to the end customer.

I believe it's actually ''easier'' to raise issues freely when there is no automatic obligation to fix the problem identified. When I know the issue will be properly sorted by its criticality, I don't have to worry that it will distract the team from its main goals. If the management of the team is easily distracted, I actually find myself reluctant to bring things up, even if they would be helpful in the long run, because I know it will probably shoot up to the top of the to-do list, where it probably doesn't belong.

See SoftwareIsReallyPointless.

----

: "Human beings are not accustomed to being perfect, and few areas of human activity demand it. '''Adjusting to the requirement for perfection''' is, I think, the most difficult part of '''learning to program'''." ''see ProgrammingRequiresPerfection''

That quotation is out of context, and, in this context, nonsense. Just about any non-trivial program has bugs. Will the user/customer think it's not a quality product if all the bugs are not removed? Depends. Will the customer think it's low quality ("fitness for purpose") if they can't ever use the product because the developers are taking the last bugs out? Assuredly.

: You win the "out of context" contest easily. I talked about a specific bug at the heart of wiki (automatic linking), in *the* wiki reference system (not just a clone), owned by a special person (not just a guy like you and me) who is important to XP (in addition to wiki). You made this a question of "may software have bugs at all".

: Surely in my context ExistingPageOrBegging has at least as much weight as a UnitTest, about which I read (as someone interested in but not doing XP) in UnitTest''''''s: "The tests must run at 100%". So leaving this page broken for about two months touches the combined credibility of Ward+wiki+XP. Please tell me again that this is just a question of priorities.

----

I usually define software quality as a measure of how well the software helps the user do his task. Missing capabilities can be just as much a problem as incorrectly implemented capabilities. Also, as a more subtle point, software quality is a relative scale, not an absolute. A better title for this page might be DoesBetterSoftwareQualityMeanFewerBugs. -- WayneMack

''Given a choice between missing capability and buggy implementation, I'll always take the former. While it's true that missing capability can be a problem, at least your losses are limited. Buggy implementations can do more harm than any of us can predict, and are therefore the more treacherous. "Do it well, or don't do it." -- WaldenMathews''

"How buggy" is a business decision. My only problem with the business deciding to allow bugs is if they have me on a fixed-price contract (otherwise known as "salaried").
Being on salary encourages the business to raise risk, knowing that they can get free resources to manage the damage later. This is one way that businesses take advantage of IT workers. So as a salaried employee, I won't lower quality below a certain level. But if I'm being paid hourly, I'm happy having the business decide when features are more important than bugs. -- WayneConrad

Oh, I'm confused now. Even if it were the case that it's up to the customer to prefer buggy features over fewer features, would we have to conclude that "software quality" '''doesn't''' mean "no bugs"? On the other hand, even if we '''do''' define quality to mean, primarily, helping the user do her task, doesn't it follow that a buggy feature, being one which hinders rather than helps with that task, represents a negative contribution to quality? In other words, isn't "buggy feature" an oxymoron? -- LaurentBossavit

''Yes, but it's a buggy oxymoron at best. I hate it when "how buggy" gets elevated to a business decision. Where I work, they drag the customer into meetings to discuss all the bugs found in the current release, and use the customer's schedule expectations to leverage bad quality into production. Yuck. It looks like the customer is in the driver's seat, but I know better. When quality management becomes the customer's job, somebody isn't doing theirs. -- WaldenMathews''

----

I would tend to answer the page title with a firm '''yes'''. The reasoning is as follows: when someone tells me there's a bug in my program, I feel guilty. I wouldn't feel that way unless I also felt this was an indication of not having met expectations of quality as best I could. While it is true that expectations of perfection are usually unrealistic, this doesn't mean that perfection cannot be defined or that we can't strive for it. Yes, software quality means no bugs. Yes, as long as I live I will continue to ship software with bugs - that doesn't mean I can allow myself to feel good about shipping bugs. I hope to ship as few bugs as my limited skills will allow. -- LaurentBossavit

----

I, too, answer the question ''yes, software quality means no bugs''. The alternative is to believe - and folks do assert this - that it costs less to develop buggy software. This is a slippery slope, since the cheapest software would surely have the most possible bugs. I believe that there is a sweet spot on the cost curve, where strong testing and other techniques keep the defect count incredibly low.

I recently visited a client who spent an entire '''year''' recovering from a buggy release. A year after that release, they released one with most of the important bugs fixed. The code is still so fragile that they literally fear to add features to it.

With Laurent, I'll continue to implement bugs. I do it often, some of them so stupid that I have to laugh, since killing myself seems extreme. But each time I do it, I try to think for at least a moment about how I could avoid doing it again. In a few more decades, I think my code will be pretty good. And so far, the harder I try to be defect-free, the faster I seem to go. I'll let you know if it ever starts slowing me down. -- RonJeffries

----

I believe that "quality software" is software that passes my customer's AcceptanceTest''''''s. The Acceptance Tests are my customer's measure that I have satisfied his requirements. If the tests he has designed pass, then the functionality he has requested is considered to be bug-free, and I believe I have delivered a quality software product. If in the future he begins to exercise the software differently because his requirements have changed and he uncovers a bug, I will (as an internal software resource) address that new requirement appropriately.
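
For illustration only, a minimal sketch of what such a customer-written AcceptanceTest might look like, assuming a hypothetical invoicing story with a compute_total function; the names, tax rule, and figures below are invented for this example and do not come from any real project:

 # Hypothetical customer acceptance test for an invoicing story.
 import unittest

 def compute_total(line_items, tax_rate):
     # Hypothetical production code: sum (quantity, unit_price) pairs and apply tax.
     subtotal = sum(quantity * unit_price for quantity, unit_price in line_items)
     return round(subtotal * (1 + tax_rate), 2)

 class InvoiceAcceptanceTest(unittest.TestCase):
     def test_total_includes_tax(self):
         # Customer's rule: two items at 10.00 with 5% tax must come to 21.00.
         self.assertEqual(compute_total([(2, 10.00)], 0.05), 21.00)

     def test_empty_invoice_totals_zero(self):
         # Customer's rule: an invoice with no line items totals 0.00.
         self.assertEqual(compute_total([], 0.05), 0.00)

 if __name__ == "__main__":
     unittest.main()

Note that a defect the customer never anticipated (say, a negative quantity) is invisible to these tests, which is exactly the concern raised in the reply below.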

''I found this attitude common among ExtremeProgramming aficionados. UncleBob and RonJeffries advocated it a lot in Usenet discussions. But judging it with a little bit of objectivity, it raises a serious question: is it not in fact a dereliction of responsibility, since it moves both complexity and unnecessary responsibility from the shoulders of the software engineer to his customer? The software engineer may now get a free pass for doing a less than acceptable job, as long as he or she can put the blame on the customer for not writing (or directing the writing of) conclusive acceptance tests.''

''It is well established that many requirements are much easier to specify in informal English prose, or in semi-formal or even completely formal requirements specifications. However, writing conclusive acceptance tests that would provide a good enough confidence level that the software meets those requirements is anywhere from very hard to impossible. In traditional software development models, if software exhibits abnormal behavior contrary to its written specification and, more importantly, negatively impacting its users, the software developer is '''in principle''' responsible for this defect. In the XP version of the story, however (at least according to UncleBob, who acknowledged as much), it would be the customer's responsibility for not writing an appropriate AcceptanceTest that should have anticipated the undesirable behavior. This is a sad state of affairs from a customer's point of view. And putting myself into my customer's shoes, for example when I'd sub-contract a component for development by a third party, I would consider it unacceptable to be pushed into this situation. -- CostinCozianu''

----

Making mistakes is natural. Ignoring mistakes is stupid. You will always write bugs. Caring about that is important; otherwise, you'll convince yourself it's ok to release crap. Don't release crap. Do your best to do your best.

This is one thing in XP I never understood, too. Quality is one of the variables that can be controlled by the stakeholders, but it's not as if you write fewer bugs if you get paid more. You can only test more, I guess. But I'd say 90% of bugs can be avoided by learning how to program correctly, grokking the language, etc. -- SunirShah

----

Okay, there's a good consensus that bugs are bad. But to get back to the theme of the original quote... (moved to CommitmentToQuality).

----

I would answer '''No.''' Software Quality is the degree to which the software makes the user's job easier. I have seen far too many debates regarding whether an item is a "bug" or a "new feature." Although the purist in me would like to say my software has no bugs, the realist in me says that not all bugs are equal. For example, a bug in a little-used routine ''may'' have little effect on software quality, while a missing or confusing bug-free feature may cause the quality of the software to be poor. The quality of the software must be evaluated from an external, user perspective, not an internal developer perspective. Probably the best measures of the quality of the software would be the number of user calls to the help line and the length of the software release notice.
-- WayneMack

----

One intangible that is very important is the user's expectation, and by extension, the user's trust. It's best to promise little and deliver much.

Take, for example, the experience of using small, modular command-line Unix tools like grep or ls ... the output is always dependable. You spend no mental energy wondering whether ls missed something. Now compare it with your experience of using an unfamiliar feature in a large application, like doing indexing in MicrosoftWord. How much confidence do you, as the user, have in the results? How useful is the index to you if it missed 1% of the words you'd like to index? At what point does it become not useful? 2%? 5%? 10%?

The point is that users' lives are made easier when the gap between expectation ("I think this feature will do this") and resolution (what the feature actually does) is so narrow as to be almost nonexistent. Mentally accounting for possible bugginess slows you down. I'd rather use software that does the things it does at 100% accuracy (keeping in mind that 100% is an ideal, not an actually attainable goal) than software that does 5 times as many things at 90%. Because in the second case I don't get to choose which features need to be 100%; in practice that sort of sloppiness is often pervasive throughout the whole code. -- francis

''"You spend no mental energy wondering whether ls missed something."''

''Apparently the author has never run into someone who could not find .profile''

Good point. Is a complex interface a bug? -- francis

----

The difficulty is that there is no clear dividing line between a "bug" and a "feature," hence any sort of argument trading one for the other is meaningless. A user simply does not care why the software is preventing him from doing his job; he is only concerned with how much time he must spend on workarounds. If the effort expended on workarounds exceeds some level, the user will simply stop using the software. The users are simply the only people capable of evaluating how the software affects their workflows. The users must be allowed to prioritize the changes desired in software, and it is immaterial whether those changes are "bug fixes" or "new features." Reasonable people can disagree on whether something is a bug or a missing feature, and unreasonable people can disagree even more so. In the end, though, one needs to decide how important the requested change is. -- WayneMack

''I don't think that reasonable people can disagree when the software crashes and loses data or corrupts data. Many a software system still behaves like that, and reasonable people cannot call that behavior other than a bug. So while in some places the distinction between bugs and missing features can be blurry, there are other situations where the dividing line is as sharp as can be. -- Costin''

I don't want to let this discussion drift too far from my original point, which was that neither a "bug fix" nor a "new feature" is by definition more important to improving the quality of the software. Either a missing feature or a misimplemented feature may be more detrimental to the current use of the software.

Returning to the "bug" issue, terms like "system crash" and "lost or corrupted data" are still very subjective. I've heard the report (on several different projects) that "the system did not crash, it was just running very slowly." What one person may consider as lost data, another may consider as data deleted as per design or an acceptable mode of data loss.
What one person may consider as data corruption, another may consider as data updated per design, an unspecified user operation, or an unspecified data rule. There is simply a lot of pressure to define things as "improvement requests" rather than "bug fixes," and the more people, departments, and organizations involved, the stronger it gets. My opinion is that the classification does not matter and that things that are classified as "bugs" are not necessarily more important than those classified as "improvements." I have had far too much of my time wasted in debates over whether something is a bug or not, and usually can resolve the problem by getting people to address the issue of whether we want to address the problem now or defer it. -- WayneMack

[I agree that it is sometimes arguable whether something is a bug or a feature, and in fact that subject has been a widespread joke for at least two decades, probably longer.]

[It's also true that it's common to have bug reports that claim something is a bug when it's actually a feature, or vice versa, either because the user is clueless, or because the reasoning behind the design decision wasn't understood.]

[However, there are other cases when something is inarguably a bug. It is not a feature when use of e.g. Internet Explorer freezes the system, forcing a reboot, and neither I nor anyone else on the planet would be amused by a suggestion from Microsoft that that was really a feature.]

[I'm not sure whether anyone has produced a hair-splittingly accurate definition of what constitutes a bug regardless of business policy, one that reliably and rigorously defines the difference, but the difference does exist even in the absence of such a definition, and despite the grey area where it may be hard to say. -- DougMerritt]

Proposed Summary Statement of the preceding discussion (please feel free to word-smith at will):

The difference between a "bug" and a "missing feature" can be quite fuzzy (in the sense of fuzzy logic). In some cases, a requested change may be obviously either a bug or a missing feature, while in others, the change may involve aspects of both a bug and a missing feature.

To return to the original comment in this section, software quality is not based on the quantity of bugs, as the title of this page implies. The quality of software is dependent upon how well it aids the user in performing a task and upon how little it hinders the user in performing that task. This implies that the importance of a change (i.e., the level of software quality improvement that will result from the change) depends upon how frequently the problem is encountered, the amount of time and effort needed to repair damage caused by the problem, and the amount of time and effort needed to perform the particular task by an alternate means.

If one accepts the preceding proposition, then the conclusion is that some common measures of software quality, such as defects/KLOC, are not of value. This measure focuses only on defects (however they may be defined), ignores the frequency of encounter, ignores the severity when encountered, and ignores the cost of performing the operation using an alternate method.
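
A purely hypothetical illustration of that point: two modules can show the identical defects/KLOC figure while imposing wildly different costs on their users once frequency of encounter and cost per encounter are considered (all numbers below are invented):

 # Same defect density, very different impact on users (invented numbers).
 modules = {
     # name: (defects, kloc, encounters_per_month, hours_lost_per_encounter)
     "report_printer": (5, 10, 2,   0.1),   # rarely hit, trivial workaround
     "data_entry":     (5, 10, 400, 0.5),   # hit constantly, costly workaround
 }

 for name, (defects, kloc, encounters, hours_lost) in modules.items():
     density = defects / kloc             # the classic metric: 0.5 for both modules
     user_cost = encounters * hours_lost  # user-hours lost per month: 0.2 vs. 200
     print(f"{name}: {density:.1f} defects/KLOC, {user_cost:.1f} user-hours lost per month")

Both modules score 0.5 defects/KLOC, yet one costs its users a fraction of an hour per month and the other two hundred hours; the density figure alone cannot say which change matters more.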

A more subtle conclusion concerns the "perfectability" of software. If the definition of software quality is based entirely upon the number of existing bugs or defects, then it implies that software can be perfected: all defects eliminated. If one accepts a definition that software quality is based on the degree to which it assists the user, then it is implied that software is not perfectable and can always be improved. -- WayneMack

----

CategoryQuality CategoryBug