'''Differentiate Priority and Severity'''

The effect of a bug on the software does not automatically correlate with the priority for fixing it. A severe bug that crashes the software only once in a blue moon for 1% of the users is lower priority than a mishandled error condition that forces every user to re-enter a portion of the input every time.

'''Therefore:''' Track priority and severity separately, then triage appropriately. It helps to have input from others on the team on priority. The importance of a bug is a project decision, distinct from how the Customer perceives the bug. In some cases it makes sense to track Urgency, the customer's point of view, separately.

----

Question: What makes a bug severe? This page gives no clue. Difficulty of fixing? But that can seldom be known in advance.

Microsoft uses a four-point scale to describe the severity of bugs. Severity 1 is a crash or anything that loses persistent data, i.e., messes up your files on disk. Sev 2 is a feature that doesn't work. Sev 3 is an ''aspect'' of a feature that doesn't work. Sev 4 is for purely cosmetic problems: misspellings in dialogs, redraw issues, etc. This system works very well. (Interestingly, sev 4 bugs end up getting set to priority 1 fairly often, because they are frequently VERY annoying to users, and fixing them is generally easy and doesn't destabilize things.) -- MichaelGates

----

The exact definition of severity is project-specific. Here is one that is reasonable for many projects, however:

* Enhancement: New features
* Low: Improvement to existing code, e.g. a performance enhancement, or problems with an easy workaround
* Normal: Broken or missing functionality
* High: Problems causing crashes, loss of data, severe performance problems, or excessive resource use
* Blocker: Problems that prevent testing or development work

We differentiate them so that the dispatcher, or whoever reviews bug reports, can set priority based not only on how severe the problem is, but also on the customer's importance, business needs, etc. Many bugs cause crashes (High severity in the example above) but aren't fixed because the crash is very infrequent or occurs on a version/platform/feature low on the vendor's support list.

There's a pretty good example in "Lessons Learned in Software Testing", a book by CemKaner et al. Lesson 73 there is "Keep clear the difference between severity and priority". The example given says "a start-up splash screen with your company logo backwards and the name misspelled is purely a cosmetic problem. However, most companies would treat it as a high-priority bug."

As of early 2007, I no longer advocate tracking severity in the general case. There are rare situations where it provides a benefit proportional to the time and effort required to define and track it. If you are in a situation where you are asked to track severity, find out why. An answer like "because that's what must be tracked (according to the book/methodology/tradition)" is not a good answer. However, if you are in one of those places where tracking severity has value, then do keep the distinction from priority clear. I suspect, however, that in that case you don't need this advice. If the organization is tracking something called either priority or severity, but not both, and the two concepts are muddled together, then there is value in moving towards differentiating the two concepts, and eventually eliminating severity or determining how it might be worth keeping.

-- StevenNewton
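To make the distinction concrete, here is a minimal sketch (Python; the field names and scale values are illustrative only, not taken from any particular BugTrackingSystem) of a bug record that keeps severity and priority as separate fields, roughly following the severity scale listed above. The reporter sets severity when filing; priority is left untouched until the dispatcher triages.

 from dataclasses import dataclass
 from enum import IntEnum

 class Severity(IntEnum):
     """Observable impact of the bug, set by the reporter or tester."""
     ENHANCEMENT = 1   # new feature request
     LOW = 2           # improvement, or a problem with an easy workaround
     NORMAL = 3        # broken or missing functionality
     HIGH = 4          # crash, data loss, severe performance problem
     BLOCKER = 5       # prevents testing or development work

 class Priority(IntEnum):
     """Business decision on fix order, set later at triage."""
     UNTRIAGED = 0
     LOW = 1
     NORMAL = 2
     HIGH = 3
     URGENT = 4

 @dataclass
 class BugReport:
     title: str
     severity: Severity                        # filled in when the bug is filed
     priority: Priority = Priority.UNTRIAGED   # filled in by the dispatcher, not the reporter

 bug = BugReport("Company logo reversed on splash screen", Severity.LOW)
 bug.priority = Priority.URGENT   # cosmetic severity, yet the business wants it fixed first

Keeping the two as independent fields means the triage step can raise or lower priority without rewriting the reporter's observation of the bug's effect.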
----

For a while at my last company, you had to specify both priority and severity when you reported a bug. Everyone was confused about what those terms meant.

''It's the tester's/bug reporter's job to set the severity; priority should be set by the project manager/office.''

It's much better to merge them into a single value. The bug filer can provide additional information, such as when a fix is needed and whether a workaround exists. The responsible team uses all of those factors to prioritize bug fixing, enhancements, and other work.

''I had the same experience at my last company. I think that differentiating priority and severity is a great idea '''in theory''', but in practice humans have too much difficulty with the concept to make it worthwhile. It gets mixed up too often. Although perhaps a change in terminology would help; how about "Importance" and "Destructiveness" instead of "Priority" and "Severity"?''

I had the same problems in my company. I managed the problem by requiring that every "bug" be authorized and its priority and severity set to the best-fitting values. Beyond that, I checked who used the highest/lowest priority or severity most often and had a talk with those users.

----

If using different words like "Importance" and "Destructiveness" instead of "Priority" and "Severity" improves communication, then by all means choose your SystemOfNames appropriately. The choice of this page's name comes from existing usage in places like BugZilla as used in TheMozillaProject.

----

I understand the difference between priority and severity. But time is linear, so I have to do things in a particular order. How do I use two rankings to decide what to do next?

''As described in AutomatedTodoList: Priority, which ideally is adjusted by someone acting in the role of dispatcher, is the deciding factor. The true severity of a bug as determined by a tester or other gatekeeper can be used as an input to the priority number, but it is secondary. That said, if a developer has been assigned multiple bugs of the same priority, then it ''probably'' makes sense to use severity as a guide to choosing what to do next.''

Right now I just maintain issues, with no rankings, on IndexCard''''''s. And then I keep asking my manager (outside of work, I'm my own manager, of course) which card is most important to do next. The ExtremeProgramming PlanningGame essentially does this when it's a team developing software for a client.

Also, any attributes your BugTrackingSystem requires had better not confuse users - otherwise they will accidentally or deliberately enter incorrect or meaningless values. KeepItSimple! And if you let users set a bug's priority/severity when they enter it, won't they just make everything Very Important and Highly Destructive? -- ApoorvaMuralidhara

----

Described elsewhere is the following idea (a small sketch of the calculation follows below):

1 Rank each item of work according to its value '''V''', where 1 is the most valuable.
1 Rank each item of work according to its perceived difficulty '''D''', where 1 is the most difficult.
1 For each item of work, compute '''P''' as '''V/D'''.
1 Rank the items, starting with the smallest value of '''P'''.
1 Rearrange where there are obvious dependency problems.
1 Work first on the item with the smallest value of '''P'''.

I strongly suggest you try a few experiments for yourself to see the interplay between the values - reading examples of this just doesn't do it justice. We've now used it for 12 months, and it's proved very good at increasing our value-delivered-per-time-unit ratio.
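Here is a minimal sketch of that calculation (Python; the work items and rank numbers are invented for illustration). Because 1 means "most valuable" and 1 also means "most difficult", the most valuable and easiest items get the smallest '''P''' and float to the top of the list.

 # Value rank: 1 = most valuable.  Difficulty rank: 1 = most difficult.
 items = [
     {"name": "fix login crash",     "value": 1, "difficulty": 4},
     {"name": "new report screen",   "value": 2, "difficulty": 1},
     {"name": "tidy error messages", "value": 3, "difficulty": 3},
 ]

 # P = V / D; work the list in order of increasing P (after checking dependencies).
 for item in items:
     item["p"] = item["value"] / item["difficulty"]

 for item in sorted(items, key=lambda i: i["p"]):
     print(f'{item["p"]:.2f}  {item["name"]}')

For these made-up numbers the order comes out: fix login crash (0.25), tidy error messages (1.00), new report screen (2.00).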
See TaskSchedulingUsingZipfsLaw

----

Consider also the bugs which affect only developers, but do so on a very regular basis. If the bug adds another 30 seconds to each test cycle...

----

Rather than trying to set priorities in a vacuum, I suggest the following: first address the issue that improves your users' quality of life the most. This may be a new feature, a screen change, data validation routines, performance enhancements, or crash fixes. Check with your users; their prioritizations may surprise you.

----

As mentioned above, Priority and Severity are two very distinct properties of a Bug/Task. The example of a backwards logo on a splash screen is a good one - Minimal Severity (zero effect on software functionality), but High Priority (extremely obvious and affects every user).

The problem, of course, is to determine which tasks are the most important in order to decide what should be worked on first. And, unfortunately, neither property (Pri/Sev) alone can be used to "sort the list" in order to make this decision. You can't work on all the HiPri stuff while ignoring HiSev, or vice versa.

Instead, rather than relying on Severity or Priority alone, you must sort using a combination of both values to determine which tasks are more "important" than others. Implementing this is fairly simple: if both the Priority and Severity parameters are assigned values from 1 (low) to 5 (high), the Importance value is simply the sum of the two. For you database types:

 SELECT severity + priority AS importance FROM tasks ORDER BY importance DESC

Using this method, a '''Hi''''''Pri/Hi''''''Sev''' task would have an Importance rating of 10 and would receive attention before a '''Lo''''''Pri/Hi''''''Sev''' task (Importance=1+5=6) or a '''Hi''''''Pri/Lo''''''Sev''' task (5+1=6) or a '''Lo''''''Pri/Lo''''''Sev''' task (1+1=2). -- KevinTraas

----

The real thing we want to know is "What should we work on next?" This is something that needs to be determined by users (or user surrogates). There is no magic equation based on priority, severity, level of effort, risk, or dozens of other possible valuations that will answer that basic question. In my experience, I have usually found it best to schedule a mix of important-but-risky changes with low-risk, unimportant changes. If the important changes take longer than expected, the unimportant changes can be dropped. It also seems to help with programmer morale to get some easy, check-the-box corrections now and then. Don't get caught up in trying to determine precise numeric evaluations for change requests. Creating an appropriate mix of changes within a release cycle is going to be a subjective evaluation.

''The answer to "what to work on next" is (or should be) determined by priority alone; that seems to be the definition of "priority". (Of course, given the small number of priority levels in most tracking systems--usually 4 or 5--there's the issue of breaking ties.) The problem is how to assign priority levels. Severity is a bit more clear-cut--the effects of a bug can be observed. But when you factor in things like effort, risk, customer impact, etc., there is no clear-cut answer. The best strategy seems to be having humans make such decisions.''

I think we might have a slight semantic disagreement that masks underlying agreement. I believe we agree that the decision on "what to work on next" is subjective and best made by humans.
The "priority" of what to work on next, however, is not necessarily the same as the "priority" of what a specific user may want addressed next. Priority is a matter of perspective, and a project manager is often hit with many competing "perspectives." The bottom line is that no algebraic equations exists that gives an answer to what to work on next, and any tried equation will not even come close to being fair and appropriate. * Concur with the disagreement. At my employer, though "priority" is unambiguously used as an indicator of what the ChangeControlBoard thinks ought to be worked on next; not a metric of how pissed-off the customer is. In other words, it's the output of the decision making process. The defect tracking software we use (an old one called DDTS) only explicitly tracks one of the input parameters--"severity"--issues like risk, difficulty are handled in the notes. But all are considered in determining what gets fixed and what does not. ---- Upon further consideration after much experience, it occurs to me that perhaps this page should be "Don't confuse priority and severity". Not every bug that causes some big visible scary-looking effect needs to be fixed RIGHT NOW THIS MINUTE. Sometimes (it's a business decision) the value of fixing the problem is lower than the cost of fixing a problem that results from rarely-occurring configuration and sequence of events. --StevenNewton ---- My company (a business unit of HP) differentiates 3 ways: Priority, Severity and Engineering Priority. An important refactor could have a high engineering priority but low priority/severity. An example of high priority/severity but only moderate engineering priority would be something that needs to be fixed but requires a massive overhaul of the system and would require more effort/testing/debugging. Only an engineer can determine the time/effort/impact of a bug or enhnacement, and project managers must know this to set schedules. ----- For commercial organizations, '''economics''' generally determines the rank of flaws. The bug that risks the most estimated financial loss is the one that gets the most attention. (This also includes the cost to fix it.) Related: SimulationOfTheFuture. -t If it's not a private organization or part of an internal project layers removed from customers, then the usual Dilbertian office politics determines which bugs get fixed first. ---- See: AutomatedTodoList DefectTrackingPatterns CategoryBug