The exhaustive list of alternatives should live in exactly one place. Object-oriented systems handle this with a class hierarchy. For example, old-style procedural code to print different "shape" objects might read:

 if (type == CIRCLE)
   print "Circle. r=" + radius;
 else if (type == SQUARE)
   print "Square. sides=" + sideLength;
 else if (type == RECTANGLE)
   print "Rect. h=" + height + " w=" + width;
 endif

But this same list of alternatives would also be used in computing the area, in rendering to a GUI, in checking for overlap with another shape, etc. So rather than have lists of cases which must be modified in every location whenever you add a new shape, use subclasses. Notice this also supports the OpenClosedPrinciple, because it allows you to add a new case (keeping the system "open") without modifying the superclass (which is "closed").

[Actually, the procedural code is often quite a bit messier than shown, with large chunks of code between each case.]
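For concreteness, here is a minimal Java sketch of the subclass approach just described (the method names such as describe() and area() are illustrative choices, not taken from the text above). Each shape supplies its own behavior, so adding a new shape means adding one class instead of editing every case list:

 // Each alternative becomes a subclass; the "list" of shapes is no longer
 // repeated inside every operation that needs it.
 abstract class Shape {
     abstract String describe();  // replaces the print case list
     abstract double area();      // replaces the area case list
 }

 class Circle extends Shape {
     private final double radius;
     Circle(double radius) { this.radius = radius; }
     String describe() { return "Circle. r=" + radius; }
     double area()      { return Math.PI * radius * radius; }
 }

 class Square extends Shape {
     private final double sideLength;
     Square(double sideLength) { this.sideLength = sideLength; }
     String describe() { return "Square. sides=" + sideLength; }
     double area()      { return sideLength * sideLength; }
 }

 class Rectangle extends Shape {
     private final double height, width;
     Rectangle(double height, double width) { this.height = height; this.width = width; }
     String describe() { return "Rect. h=" + height + " w=" + width; }
     double area()      { return height * width; }
 }

 // Callers never enumerate the alternatives:
 //   for (Shape s : shapes) System.out.println(s.describe() + " area=" + s.area());

The corresponding trade-off is that adding a new operation (say, perimeter) now touches every subclass, which is the inversion discussed below.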
----

But I consider method names a "list" also. Is duplicating the "list" of subtypes somehow more sinful than duplicating a list of operations (method names)? Besides, I find that subtypes often degenerate over time in practice as multiple intersecting factors come (and go) into play. (This may not happen with shapes, since the laws of geometry don't change very often, but other domains do change often.) IOW, global taxonomies are a risky structure to bet future maintenance on (AllAbstractionsLie).

Structurally, it is a matter of inverting a nested relationship:

 Variation 1
 --operation A
 --operation B
 Variation 2
 --operation A
 --operation B
 etc...

Versus:

 Operation A
 --variation 1
 --variation 2
 Operation B
 --variation 1
 --variation 2
 etc...

One nesting arrangement is not necessarily inherently better than the other, although I personally find operations more stable over the long run than mutually-exclusive taxonomy "slots". Others have disagreed, but that is my observation of change.

''The solution is *not* to put the whole operation into the subclasses, but only those parts which differ between variations. Those common to all variations (for example, the print format or output target) should still reside in exactly one place.''

You mean if the implementation is the same. Corresponding equivalents exist for non-sub-typing approaches, I would note. See http://www.geocities.com/tablizer/inher.htm

''Yes, of course. And you could have written something equivalent to Listing 1 using OO techniques.''

* Perhaps, but what benefit does it give you over the procedural equivalent? KISS would dictate procedural unless some additional advantage is identified. Besides, Meyer strongly implies that the purpose is to reduce duplication so that you have only one place that needs changing per change request. However, he fails to apply it from a "verb" viewpoint and only shows the noun viewpoint.

''I don't understand why you differentiate between those "viewpoints". You could have applied the same techniques to the "verbs". You could even have done both.''

''The goal is to put things which change for the same reason into the same place.''

Things change for a variety of reasons that cannot be anticipated.

''And still, having to apply a change to only one place in your code is a very desirable property of a design.''

{Agreed, but often it is difficult because of the intersection of two or more "dimensions". His solution seems to favor changes in one dimension over the other. It is too narrow a treatment of change analysis in my opinion.}

Besides, grouping by "reason for change" is kind of arbitrary if you ask me. I can think of a lot of other "sorts" that would be more practical in a general sense.

''There are certainly other forces competing against the SingleChoicePrinciple. What are you thinking of?''

Most of the grouping decisions are domain-specific. But I tend to group code by task.

''Yes, I did understand that. My question still is: What forces drive you to do so?''

I suppose because behavior is the weakest grouping ability of a database-centric approach. It is easier to farm off some or most of the grouping of other factors to the database. That makes the grouping virtual, not physical like it would be in code. Virtual means you can customize your view of it. See CodeAvoidance. Describing and classifying nouns is best done in the database, such that the code's concern is mostly processes. If you want to better manage noun classification, then the database is a better tool for it, largely because it can provide multiple virtual classifications of the same given noun. Textual code is limiting in this regard. A given car part can be classified from an engineering perspective AND an accounting perspective, for example; I don't have to choose one over the other. OOP cannot do this without either making a convoluted mess or reinventing a database in code. -- top
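To make the multi-classification point concrete, here is a minimal sketch in Java, using in-memory maps to stand in for database tables; the part ids and category names are invented for illustration only. The same part appears in an engineering classification and an accounting classification, and neither view is privileged in the code:

 import java.util.*;

 // Classification as data: each "view" is just another table keyed by part id.
 // (Part ids and category names below are hypothetical examples.)
 class PartCatalog {
     // partId -> engineering category (one possible view)
     static final Map<String, String> ENGINEERING = Map.of(
         "P-1001", "Fasteners/Bolts",
         "P-1002", "Electrical/Sensors");

     // partId -> accounting category (an independent view of the same parts)
     static final Map<String, String> ACCOUNTING = Map.of(
         "P-1001", "Consumables",
         "P-1002", "Capital Equipment");

     // A "query" picks whichever classification the caller needs.
     static String classify(String partId, Map<String, String> view) {
         return view.getOrDefault(partId, "Unclassified");
     }

     public static void main(String[] args) {
         System.out.println(classify("P-1001", ENGINEERING)); // Fasteners/Bolts
         System.out.println(classify("P-1001", ACCOUNTING));  // Consumables
     }
 }

Whether such classification belongs in application code or in an actual database is exactly the dispute above; the sketch only shows one item participating in two independent classifications without a single taxonomy being hard-wired in.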
----

Critical comment about Meyer's chapter on the topic: http://www.geocities.com/tablizer/meyer1.htm#singlechoice

----

True, it often comes down to inverting the nested relationship and choosing one to be in one place while the other is required for all subclasses of a class. However, that does not invalidate the principle -- it just points out that you sometimes hit a case (nested relationship) where you can't apply SingleChoicePrinciple everywhere and you have to choose one place.

* The principle is sound, but case statements are perhaps a poor example, since we have two interweaving factors in such examples, and which factor to make the inner and which the outer is not universally agreed upon. But if we remove that, then SingleChoicePrinciple is nothing more than OnceAndOnlyOnce. If it is different from OnceAndOnlyOnce, it would be nice to know how.

Certainly we all agree that scattering case statements all over is a pain to deal with -- the cases in the various statements diverge and you can't tell what *exactly* happens where after a while.

''Good programmers would agree with that statement.''

* It is argued in SwitchStatementsSmell that case statements "degenerate better". ''By topmind, a controversial programmer who prefers procedural techniques, so of course he'd say that. It certainly isn't a viewpoint many share, and shouldn't be presented as such.'' [see below]

* For example, they allegedly handle change better when multiple orthogonal factors come into play and/or the choices become non-mutually-exclusive (is-a becomes has-a). One only has to change the conditionals rather than move code out of named units and into others. In this view, moving code out of a named unit is considered more costly and more risky. However, others either calculate change penalties differently or consider such changes unlikely. People's past experience seems to determine which path is perceived as the most change-friendly, and their stance on ThereAreNoTypes.

{If you feel this paragraph unnecessarily duplicates existing material, feel free to factor it.}

* To truly answer this question, one has to become a student of ChangePattern''''''s. Without examples and analysis, it's difficult to say which kinds of change are more likely than others, such that one grouping is preferable to another. I cannot answer that in a general sense, beyond anecdotal statements, without examining both a specific switch/case statement instance and the domain involved. (It also depends on the language I am coding in. C's switch/case statement is an anachronistic abomination that should be banished and burned. There may be a performance reason for it on some chips, but not a human reason for it.) -t

Also note that VisitorPattern is a technique for inverting the nested relationship if you feel that you have the wrong one embodying this principle. (In Visitor, each operation becomes its own class with an entry for every subtype, so you can add a new operation by adding a single encapsulated class.)
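Since the Visitor mention is brief, here is a minimal, self-contained Java variant of the shape example showing that inversion (two shapes only, for brevity; the visitor names are illustrative). Each operation becomes its own visitor class, so adding an operation touches one place, while adding a shape now touches every visitor -- the trade-off discussed throughout this page:

 // The exhaustive list of subtypes now lives in the visitor interface.
 interface ShapeVisitor<R> {
     R visitCircle(Circle c);
     R visitSquare(Square s);
 }

 abstract class Shape {
     abstract <R> R accept(ShapeVisitor<R> v);
 }

 class Circle extends Shape {
     final double radius;
     Circle(double radius) { this.radius = radius; }
     <R> R accept(ShapeVisitor<R> v) { return v.visitCircle(this); }
 }

 class Square extends Shape {
     final double side;
     Square(double side) { this.side = side; }
     <R> R accept(ShapeVisitor<R> v) { return v.visitSquare(this); }
 }

 // Adding a new operation means adding one class; the shape classes stay closed.
 class AreaVisitor implements ShapeVisitor<Double> {
     public Double visitCircle(Circle c) { return Math.PI * c.radius * c.radius; }
     public Double visitSquare(Square s) { return s.side * s.side; }
 }

 class DescribeVisitor implements ShapeVisitor<String> {
     public String visitCircle(Circle c) { return "Circle. r=" + c.radius; }
     public String visitSquare(Square s) { return "Square. sides=" + s.side; }
 }

 // Usage: double a = someShape.accept(new AreaVisitor());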
----

Ranty HolyWar stuff:

Re: ''By topmind, a controversial programmer who prefers procedural techniques, so of course he'd say that. It certainly isn't a viewpoint many share, and shouldn't be presented as such.''

* "[Even though...] it has been known since 1847 that classifications are dependent on the purpose of the classification, people continue to believe that it is possible to create a classification system that is context-independent." (Haim Kilov on comp.object, 6/01.) Subtyping assumes a "global taxonomy" in order to be change-friendly down the road.

* ''Hm... this medicine doesn't cure all possible diseases, so we should never use medicine. It's a silly argument; subtype taxonomies solve the bulk of problems associated with procedural code. It doesn't matter that they aren't perfect for all contexts; we can always fall back to procedural code for those special cases.'' IfItCantBePerfectDontBother. {We have to choose one or the other, so "don't bother" is not an option being discussed here.}

* This comment about classification is quite true, but the implication of a global taxonomy isn't. The better OO programmers realize that AllModelsAreWrongSomeModelsAreUseful, as long as one keeps in mind that the taxonomy is partially (not completely!) arbitrary. These issues of classification actually arise in pure procedural programming, too, since one must pick some organizational principle for modules/files/grouping of functions.

* ''I personally find task-based grouping more stable most of the time.''

* That's an absurd statement and suggests you don't even understand the page topic. There can be no argument that updating multiple switch statements is bad; it's a fact. Any method that reduces switch statements is an improvement because it eases maintenance; that's a fact. Polymorphism does that; that's a fact. It doesn't matter how polymorphism is implemented, whether OO or functional, objects/multimethods/templates; all are vast improvements over not having polymorphism, and that's a fact. If you don't understand why polymorphism is useful, then you're not equipped to argue against it in this conversation!

** Watch your blood pressure there; please try to keep a more neutral tone.

** ''Perhaps I am not communicating my objections to subtype-based polymorphism very well. I will let somebody else try to explain it this time. Maybe try PolymorphismLimits and SwitchStatementsSmell. Further, switch statements being bad ''in C'' is not the same as them being bad in general. -- top''

* ''Maybe not all of the time, but one cannot know for sure until decades after conception, so I go with the historically most winning horse: task grouping. It is the more UsefulLie. Querying the database is used to create virtual noun taxonomies as needed instead of hard-wiring them into a large-scale code structure, such as classes pointing to other classes. Yes, sometimes it runs slower, but that is the price of higher abstraction. For example, suppose we had an e-store that initially grouped products with a hierarchical classification. Over time we find that a hierarchy is insufficient because some products belong in multiple categories. Thus, we abandon the hierarchy for most internal processing, replacing it with sets, but still keep a hierarchy view (with duplicate leaves) for customer convenience. The customer view differs from the internal view.'' -- top

* [Great, set up a bad object model then knock it down; lose the strawman. If you knew the topic, you wouldn't say things like this.]

** Like I said below, not every OO developer uses direct subtypes heavily. I am talking about a given technique to replace case-lists, not necessarily all of OO.

* So that I know exactly what you're talking about, point me to where you've previously defined/discussed what you mean by "task grouping".

** See ProceduralMethodologies.

* In regard to having a highly dynamic classification: this depends on the domain; it isn't '''usually''' the case in most problem domains that classifications need to be highly dynamic, even though it's true that they may need to be tweaked from time to time. Similarly for complete abandonment of a hierarchy: it depends on the problem domain; some problem domains may call for that, but others are well served by an approximate taxonomy.

* ''There are very few agreed-upon techniques for determining whether something will stay tree-ish over the longer run. My personal experience is that tree-ness and subtypes can be stable if a standards body dictates a tree and the standard does not change often. In that sense, it is kind of a self-fulfilling prophecy. But in general I have been burned by betting on stable trees, or trees in general, many times. SetTheory is better future-proofing. Users prefer trees, so I still lean that way, but when the domain gains non-tree requirements, it screws things up royally. -t''

RE: ''It certainly isn't a viewpoint many share, and shouldn't be presented as such.''

Even many OO proponents agree that polymorphism and sub-typing are overused. They simply point to other attributes of OO as the key selling point. See: OoBestFeaturePoll.

''Mmm... not sure "overused" is quite the right word. Sometimes misused, and sometimes oversold, perhaps, is closer.''

----

Should this all somehow be merged with SwitchStatementsSmell? It is true that Single Choice is more general than switch/case statement issues, but the examples usually end up pointing to switch/case statements (and their if/else cousins).

----

Isn't this covered by the Class FactoryPattern?

----

See SwitchStatementsSmell, LimitsOfHierarchies, DeltaIsolation

----

CategoryModellingLawsAndPrinciples