''Moved from ObjectOrientedDesignIsDifficult''

I said on ObjectOrientationIsDead that I like OO, but do not believe OO (by itself) improved my coding quality.

Why didn't OO by itself improve my coding quality? It is hard to answer this question in the negative. With apologies to Tolstoy, reasons for quality code are few and similar; reasons for non-quality code are many and unique. But I can try to answer the two questions "What did improve my code quality?" and "How is this different from OO?".

First, "What did improve my code quality?". For about six years, I have been keeping a journal of what makes my coding better. Luckily, I have a lot of room for improvement, so I have a lot to write about :) The biggest thing I have realized is that my code quality did not improve by following a better recipe-like process. My code quality improved by following a better bunch of little rules. The bunch of little rules I use now is better than the bunch of little rules I used six years ago. (The XP rules, and several other rules from Wiki, helped my collection of rules a lot.) My conclusion is that "Stan's code quality improved by collecting better little rules that work".

Next, "How is this different from OO?". This is tough to answer, because collecting little rules that work is independent of OO. So instead of a single answer, I will put down some thoughts...

* OO plus a good set of little rules = quality code
* OO plus bad or non-existent little rules = bad code
* Procedural code plus a good set of little rules = good code
* Procedural code plus bad or non-existent little rules = bad code
* Many people teach OO with a recipe-like process
* Therefore, good sets of little OO rules don't pass easily from expert to novice
* Good sets of little OO rules are hard to collect on your own
* Therefore, good sets of little OO rules are hard to accumulate for a novice
* ...

-- StanSilver

''Stan - I want to see your notebook - Alistair''

My pet viewpoint is "little rules". So my interpretation of what Alistair is saying is that ''there are more little rules to learn to "become competent at" OO than to "become competent at" a procedural language, so OO (can be?) (often is?) (is?) harder.''

And my interpretation of what Costin is saying is that ''OO is not a cure-all; it is not the automatic choice; rather it is one tool amongst many, and using it is a trade-off. OO has plusses and minusses, and its extra difficulty, among other minusses, does not always justify its benefits. A procedural programmer can use OO-like little rules to gain many of the benefits of OO, while keeping the benefits of procedural code.''

Addressing Alistair's point: As an addicted, unrepentant collector of little rules, I even collect little rules about collecting little rules. One of my meta little rules :) is to occasionally shrink the number of rules you have collected to 10 (See CollectingSeashells). So in order to make progress, you are forced to improve the quality of your rules, not the quantity. In other words, one way to make OO easier is to reduce the number of little rules needed to master it, by improving the quality of the rules used.

As one attempt at this, I have started trying to get by with only three refactoring rules: OnceAndOnlyOnce, SeparateTheWhatFromTheHow, and DoYourOwnWork (roughly equivalent to LawOfDemeter). It seems to work OK so far.

I don't really know if LittleRulesAndPatterns are alike or different or both. But the rules I end up with seem to be quite concrete - about the same level of concreteness as pattern names. -- StanSilver
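[To make the third rule concrete, here is a minimal Python sketch - hypothetical names, not from Stan's notebook - of DoYourOwnWork / LawOfDemeter: ask the object to do the work rather than reaching through it to its parts.]

 class Wallet:
     def __init__(self, balance):
         self.balance = balance

 class Customer:
     def __init__(self, wallet):
         self._wallet = wallet

     def pay(self, amount):
         # DoYourOwnWork: the customer manages its own wallet.
         if self._wallet.balance < amount:
             raise ValueError("insufficient funds")
         self._wallet.balance -= amount

 # Reaching through the object violates LawOfDemeter:
 #     customer._wallet.balance -= amount
 # Asking the object to do its own work follows it:
 #     Customer(Wallet(100)).pay(30)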
-----

I find OO forces one to think further ahead. For good or ill. It also makes people think in terms of structure, which isn't a bad thing until it becomes excessive. The downside is that OO tends to confuse a lot of concerns.

''One of the alleged benefits of relational-centric programming is that you don't have to think ahead about "structure" as much. The ad-hoc power of relational queries grants this ability. PutClassificationsIntoMetaData. You issue the query to get the view you want, and then process the view. If the schema changes, then you just alter the query code instead of overhauling the shape of the application code. (The code shape does not reflect a NounModel for the most part.) The query acts as a liaison between the data and the task. The counter-argument is that you may have to modify more queries as the trade-off for not having to change the shape of your code. IOW, shotgun changes instead of rifle changes. Whether this is true or not results in HolyWar''''''s.''
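[A minimal sketch of the "query as liaison" idea from the preceding paragraph, with an illustrative schema and data invented for this page:]

 import sqlite3

 conn = sqlite3.connect(":memory:")
 conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary REAL)")
 conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                  [("Ann", "QA", 52000.0), ("Bob", "QA", 48000.0),
                   ("Cy", "Dev", 61000.0)])

 # The query is the liaison: the task below is written against the
 # result view, not against any hard-wired structure in the code.
 def average_salary(conn, dept):
     row = conn.execute(
         "SELECT AVG(salary) FROM emp WHERE dept = ?", (dept,)).fetchone()
     return row[0]

 print(average_salary(conn, "QA"))  # 50000.0
 # If the schema changes, only the query text changes; the shape of
 # the code around it stays the same.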
On the one hand it is a convenient form of templating and code organizing. That seems to be the C++ bias (at least originally). Then it also has runtime type behaviour implications, very much emphasized in Java. Then it has object behavioural implications, as emphasized in the message-passing objects idiom of SmallTalk. All the OO languages are borrowing from each other, causing a coupling of concerns that makes things harder rather than easier due to issues of combinatorial complexity. It also forces decisions about code partitioning to be made earlier and earlier, which is good at first, but makes maintenance/refactoring harder. I'm not sure what the solution is, but I suspect that OO structuring needs to be de-emphasized somewhat, from being a presumption to being the consequence of a particular design/problem. -- RichardHenderson.

------

''In my career (25 years) I've seen a LOT of procedural code, and some OO code (and of course I'm speaking of other than my own, which is always perfect... {grin}). A depressing percentage of BOTH kinds were bad, but the OO code tends to be chopped up more, so it isn't monolithic slabs of bad code like the procedural stuff is. Almost as if there's a lag in lessons; supposedly we've known to break code into manageable '''modules''' for many years, but OO is forcing people to do it now.''

I tend to divide my procedural code up into "tasks", along with "utility" or "library" routines, of course. These are generally module-sized. It works pretty well. Events in event-driven programming tend to be just such "tasks". I tend to use a database to manage state, so there are not many issues about managing or wrapping state in the application code. (Whether SQL should be wrapped is another long topic.) If you have overly large chunks of procedural code, then they probably need some rework.

[You seem to imply that OO "forces" one to better divide stuff up. What exactly is the penalty with OOP languages for not doing so, and why is such a penalty allegedly lacking in non-OO languages?]

''On the other hand, I often see TOO MUCH chopping; OO novices go "class crazy", making lots of classes with overlapping/competing responsibilities, which have complex relationships and lots of collaboration, and are therefore closely coupled. I'd estimate that only one in five OO programmers knows how to choose an elegant, minimum set of classes that capture the problem well but keep the overall design as simple as possible, yet flexible. As the page says, ObjectOrientedDesignIsDifficult - but I think maybe it is just that DesignIsDifficult. Each paradigm brings strengths and weaknesses to the problem; novices and hackers will make hash out of any paradigm...''

------

''Moved from ObjectOrientedDesignIsDifficult''

OO design has more dials on it, more levers and balances to master: when to use inheritance, when to use polymorphism, how much data to put into this object, how much to put over into another one, where to put breaks in a framework. In simple procedural programming, there were only structs and functions. Therefore, it is harder to learn and harder to get right.

** [Yes, but you can't do big systems with "simple procedural programming". You need to start inventing "modules" and "libraries" and "ADTs". And pretty soon you've got a conceptual complexity comparable to OO. NickKeighley]
** ''True, and even if you've got modules, libraries, and abstract data types, before long you're starting to say "this code looks like that code but for these changes" and you're inventing polymorphism and inheritance. Then you try to control changes to state by inventing interfaces and encapsulation.''
** DeltaIsolation and inheritance tend to be tree-shaped, and trees are not powerful enough in this context of "scaling". SetTheory or something similar is much more useful for non-trivial variations-on-a-theme management, in my opinion, but perhaps a tree-oriented mind may see it differently. When variations-on-a-theme scale, they tend to drift away from tree-ness but still have a fair degree of tree-ness. Sets tend to hide the 70% or so that is still a tree, and that is a practical downside of sets, because trees are easier for most humans to grok and visually navigate than sets. Perhaps if tree fans find a way to handle the non-tree portions satisfactorily, then this mostly-tree-thing-without-a-name may be competitive with sets and scale well. (MultipleInheritance, as usually found, is a bear.) --top
** ''In good OO, trees are relatively rare and only used where a computational hierarchy is natural, like Object -> Widget -> Panel -> Listbox -> Combobox, in which each item to the right provides further specificity of the category to the left. Dealing with "non-trivial variations-on-a-theme" is handled through composition ("has-a") rather than inheritance ("is-a").'' [A sketch contrasting the two follows this list.]
** Then you get a graph. Sets are easier to navigate and/or query, in my opinion. There are better ''known'' ways to apply aggregate transformations on sets than on graphs, at least in terms of viewing/studying them. "I want to see all X's that have a Y with at least 3 Z's". (This harkens back to the Bachman-vs-Codd debates.) I don't know about you, but my brain is not powerful enough to memorize larger code bases, so a query-like tool becomes a necessity.
** ''A composition of elements may have internal relationships that can be identified as a graph, but that doesn't mean the programmer -- or anything, for that matter -- has to consider them as a graph. The memory-to-memory references of every running program in an operating system are a graph, but nobody (and nothing) cares. Aggregating compositions is only an issue if you need to aggregate them. I don't normally need to run aggregate operations on the widgets in a form, for example. Bachman-vs-Codd doesn't appear to be relevant here -- we're not talking about OO (or CODASYL!) databases vs relational databases. I assumed this was about OO programming. If it's about OO databases (or CODASYL) vs relational databases, the RelationalModel quite rightly won that debate long ago. As for large code bases, I don't have to memorise them. They are invariably organised into an appropriate hierarchy of discrete modules precisely to make them manageable. Compositional systems make this easier, not harder, because complex systems can be built from a relatively small number of universal and easy-to-remember components.''
** Re: "but that doesn't mean the programmer...has to consider them as a graph." - Yes, with machine processing, the view of something can be re-projected into a new and different view. It's usually called "querying".
** ''No, not usually. Querying is a different thing. It's sometimes called ModelViewController if explicit views of some model are provided, but usually it's invisible. The fact that -- for example -- the references between identifiers in a program form a graph of dependencies is irrelevant to the developer. It matters to the compiler -- maybe -- but the programmer doesn't care. Sometimes tools are built -- profilers of various sorts, for example -- so that if we do care we can see that view.''
*** What do you mean, the programmer doesn't care? When stuff goes wrong, they have to care, especially as the volume of parts goes up. Profilers may help you find what went wrong, but they don't help you organize stuff better to reduce the need for profilers. ModelViewController is poorly defined. Some versions create a MirrorModel, which is something to avoid.
*** ''Composition of programming components -- such as the relationships between event handlers in a GUI form, for example -- may structurally be a graph, but that doesn't mean the developer views it as a graph. There's no need to view it as a graph. It's true that since ModelViewController is a pattern, it's general and abstract rather than concrete and specific, but you clearly understood it when you created a definition on ModelViewController under "Top's Working Definition". I'm not aware of any versions of ModelViewController that deliberately create a MirrorModel. They shouldn't. If they do, that sounds more like a bad implementation or a good reason to avoid those that do. However, let's not lose sight of what spawned this discussion: You suggested SetTheory as an alternative to inheritance hierarchies. I pointed out that "is-a" hierarchies are relatively rare; composition is typically used to handle "non-trivial variations-on-a-theme". Using composition '''is''' (perhaps in a somewhat analogical way) using SetTheory, because you are composing a subset of options from a set of available options.''
*** What's an example of another way to PRACTICALLY view or navigate it, if "there's no need to view it as a graph"? Composition is still a hierarchy, or at least a directed graph; it just has different types/kinds of nodes at each layer. IMS used composition, but it was found limiting.
*** ''A practical view of event handlers -- to continue the example above -- is as source code in a programming language. Yes, composition could form a hierarchy or a graph or some other structure, but that doesn't mean you will consider it as a hierarchy or graph. It might be source code, a diagram of boxes and lines, or some other representation. I'm not sure what IMS (I presume you mean IBM's hierarchical DBMS?) has to do with this. Are we talking about database architectures and data models, or source code for programs?''
*** What's a "practical view of event handlers"? Event handlers are only one aspect of a large program anyhow. And database-versus-application-code is often largely interchangeable, especially for larger projects. For example, if you have an app with 500 CRUD screens, you could have all that GUI definition in code, or it could be in tables, with only non-parameterizable GUI event handlers left for the app code.
*** ''A practical view of event handlers is source code, but I'm not sure we're talking about the same thing any more, assuming we ever were. My response was originally to your claim that "SetTheory or something similar is much more useful for non-trivial variations-on-a-theme management". Let me try a different approach: Given that we currently use a mix of inheritance+polymorphism and composition to handle variations-on-a-theme in OOP, how would you use SetTheory to accomplish the same thing?''
*** I am not claiming it for all cases in all domains. ItDepends. Can you suggest a realistic variation-on-a-theme scenario to explore? Do you want to stick with GUI's, since they were mentioned above?
*** ''GUIs are a good area to explore. Yes, please illustrate how SetTheory can be used to construct GUIs, compared to inheritance+polymorphism and composition.''
*** Well, RDBMS are currently the most common set-oriented (or at least set-influenced) tool. We typically track and manage the "objects" of employees and work units (organization structure) in RDBMS and not app code, so why should non-trivial GUI's be different? It's true GUI's probably don't change as often as employee info, but is that the only reason? Other than perhaps change frequency, I don't see that the needs for managing GUI objects are different enough from employees and org structure to warrant not using an RDBMS or RDBMS-like tool. Who would want to hard-wire employee info in code (other than for JobSecurity)? Managing GUI's in a DB could give us the added benefit of not tying GUI's and GUI engines so tightly to a specific app language, which is often the case now, at least for desktop apps.
*** ''Seems reasonable. How would you build a GUI using an RDBMS-like tool?''
*** There are a couple of existing topic pages that explore that further, if I remember correctly, but they are far from a complete or fully documented spec. Rather than wait around for them to be finished, do you have a specific concern you'd like to explore verbally now?
*** ''No, I'll await your presentation.''
*** How many more years does your insurance chart estimate you'll live? Or are you gonna try CheatingDeathCheaply? See, it may be my retirement before I get to that, and I'm a ways from that. I've got a fat mortgage to pay off and kids in college.
*** ''My insurance chart says I died three years ago, so I'm in no hurry.''
*** That would result in negative insurance rates. They send YOU a check. Nifty.
** Re: "I don't normally need to run aggregate operations on the widgets in a [GUI] form" - I don't necessarily mean just summing and averaging; perhaps that was poor phrasing on my part. There have been disagreements on this wiki about the preferred grouping of GUI parts in code, and it became fairly clear that different members of a team, or different maintenance tasks, may prefer different groupings or views. Remember, we are talking about "scaling", such that we are dealing with non-trivial GUI's.
** ''Again, you appear to be talking about a ModelViewController approach to source code management. I have no objection to that; I think it would be interesting. The old VisualAge for Smalltalk (and later Java) from IBM seemed to be heading in that direction, but it never got there and I think it's gone now.''
** Re: "[Code bases] are invariably organized into an appropriate hierarchy of discrete modules precisely to make them manageable." See LimitsOfHierarchies.
** ''I'm sure there are plenty of cases where LimitsOfHierarchies applies. Code bases aren't usually one of them, though there is a certain amount of craftsmanship to organising code bases. It would be interesting to see ModelViewController applied to a codebase as the model, with multiple views.''
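[The sketch referred to in the list above - a minimal, hypothetical Python illustration (names invented for this page) of variations-on-a-theme handled by an inheritance tree ("is-a") versus composition ("has-a"):]

 # Inheritance tree: one subclass per variation; combinations multiply.
 class Window: ...
 class ScrollableWindow(Window): ...
 class BorderedWindow(Window): ...
 class ScrollableBorderedWindow(ScrollableWindow, BorderedWindow): ...

 # Composition: each variation is a feature object; any subset composes,
 # which is close to picking a subset from a set of available options.
 class Scrollbar:
     def draw(self): print("scrollbar")

 class Border:
     def draw(self): print("border")

 class ComposedWindow:
     def __init__(self, features=()):
         self.features = list(features)
     def draw(self):
         print("window")
         for f in self.features:
             f.draw()

 ComposedWindow([Scrollbar(), Border()]).draw()  # any combination, no new class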
For example: "We're putting this code in the superclass because - and I promise - in the future we'll have to change it the same way for all of the subclasses. This way we only have to type it in once, and change it once." The quality of the design choice relates to the accuracy of that promise. Get it right -> good choice. Get it wrong -> bad choice. Without inheritance, the programmer is not faced with choices like this. That's part of what makes OO design harder (imho). -- AlistairCockburn
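[A tiny sketch of the promise being made, with hypothetical classes: the shared code is pulled up on the bet that all subclasses will keep changing together.]

 class Report:  # superclass holds the "I promise it changes together" code
     def header(self):
         # One place to type it, one place to change it - if the promise holds.
         return "ACME Corp - Confidential"

 class SalesReport(Report): ...
 class InventoryReport(Report): ...

 # The bet goes bad the day SalesReport alone needs a different header:
 # the superclass code must then be pushed back down or made conditional.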
On the issue of OO being broken, there are some problems with semantics that make it difficult. The CircleAndEllipseProblem, for example. There is a real difference between concepts like "is a" versus "behaves as" versus "knows". I have seen these things get people in trouble. I have also seen ridiculously over-designed OO designs. But I have seen the same over-engineering on non-OO software as well. -- MikeCorum

Usually my comment would be made in response to a change in those six CopyAndPasteProgramming methods. Refactoring, not BDUF. As I'm sure you know, code that changes is likely to change again, especially in the high-change environment I specified. Reducing duplication is almost always a good thing. At least, it's a good rule of thumb. Hey, in XP, it's required. I don't think my example was weak. It's actually based on a session with a co-worker I had last week where this situation came up. OO novice, lots of duplication, headache project, used OO not just to make the code 'pure', but to solve a real live headache. When the headache went away after a few hours of refactoring and coding, my co-worker's relief was almost tangible, and he said something to the effect of, "OOP used to be a big mystery. Now I can see how it is really useful." -- RobHarwood

''Without inheritance, the programmer is not faced with choices like this.''

Without inheritance, the programmer doesn't even have a choice, so sure, OO design might be more difficult, but the alternative procedural design sounds like it would be a nightmare to program. I don't think it's really fair to target just design without taking the whole picture into account. Paraphrasing JerryWeinberg, "I can make a design in no time if it doesn't have to be implementable."

----

Consider this fictional but not unrealistic thought: I always wondered why good programmers like ErichGamma, RichardHelm and JohnVlissides would use such a difficult language as C++. I mean, there is so much more to manipulate in one's head to keep the code straight and clean, and still have a good domain model with good maintenance and compilation characteristics - it just shouldn't be worth their while. And then I realized that maybe it is simply because ''they can''. So many people can't, and they can, so it's a subtle sort of macho badge being able to produce good OO design/code in C++ ... kind of like stags having huge antlers to prove to the lady deer that they can survive ''even though'' they carry around those huge weighty antlers. So they are proving they are good enough to produce good code despite carrying around the mental burden of C++. Ergo, suppose that people who like OO do so because they ''can'' manage all that complexity while designing. Scary, huh? (This possibility can't be too correct, because I am not a very good procedural programmer - in some sense that is too amorphous and plastic. OO designs have a fairly concrete 2D structure in my head (and not the class diagram), so I can manipulate them more easily.) -- AlistairCockburn

''If that were really the case, why don't they do OO in assembler or machine code?''

* Because they haven't made a UnifiedDataModel
* ''which is sorely lacking examples and clarity.''

Why don't bucks have 3-tonne antlers? http://home17.inet.tele.dk/elling/x86attic/#TaOOP

----
----

Alistair wrote: ''The question of whether OO design is more difficult (to me) has to do with the "this mechanism or that?" questions running through the programmer's head while programming, what you [i.e. RobHarwood] call the "more in quantity and more in kind of decisions to make at each step". If, in procedural programming, I had to decide at each step whether to involve a table lookup or a switch or an FSM or a quick neural net or rule-system reference, then I'd complain about the cognitive complexity of procedural programming.''

My experiences are different. When I write OO code, I just don't find myself boggling at the wide array of options facing me. My cognitive task seems pretty simple and straightforward, and while it's been a long time since I've written any substantial procedural code, I think programming OO is easier than programming procedurally was (and it's certainly no harder). Maybe it's that I'm content to be a satisficer instead of an optimizer - human beings are rumored to be much better at the former than the latter, so why fight evolution?

For example, if I need a little bit of code to deal with variant formatting of a currency, I just toss it in a new method in the class where I actually use it. I don't worry about putting it in a superclass, or making a new currency class for it to live in, or (in the other direction) even consider inlining it in the method that needs it. The time to put it in a superclass is when I need it in another subclass; the time to make a new currency class is when I write some other, related method; the time to inline is never, or at least not yet.

Similarly (and possibly heretically, in some people's books), I'm happy to put a case or if/elseif structure in my code. I'm equally happy to take it out again later when I break the class up into separate classes with delegation or inheritance. At no time do I sit there in the throes of menu paralysis wondering which OO mechanism to use; all the entrees taste pretty good, though I do have some favorites. (Yes, there are XP terms for what I do, but that distracts from the point; see MarthaStewartAndCookieMonster.)

So if you're saying "OO is cognitively hard because there's so many ways you can do things at any given point", I'd say that's a good point. It reminds me of DonaldNorman's "affordances", and leads me to think about the ''usability'' of OO. I'd reply, "So pick some favorite ways to do things and quit worrying about getting things perfect the first time". In other words, maybe OO programming demands a more exploratory attitude in order to take advantage of the flexibility and larger choice of tools that it offers. -- GeorgePaci
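[A minimal sketch of the "start with a case, split it out later" progression George describes, with hypothetical currency code:]

 # Step 1: just toss the variant formatting into a method, case and all.
 def format_money(amount, currency):
     if currency == "USD":
         return f"${amount:,.2f}"
     elif currency == "EUR":
         return f"{amount:,.2f} EUR"
     else:
         return f"{amount:,.2f} {currency}"

 # Step 2 (later, if and when the cases multiply): break it up into
 # classes with delegation - the same knowledge, redistributed.
 class UsDollars:
     def format(self, amount): return f"${amount:,.2f}"

 class Euros:
     def format(self, amount): return f"{amount:,.2f} EUR"

 print(format_money(1234.5, "USD"))   # $1,234.50
 print(Euros().format(1234.5))        # 1,234.50 EUR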
''But this means that different people building the same system could have vastly different designs. It supplies no MentalIndexability. Exploratory design probably results in exploratory maintenance, which is almost like "creative accounting". In other words, a swamp. Without rules, conventions, and normalization procedures, you have chaos. It might be fine for cutting-edge R&D, but not for those who need to get work done on time and on budget, not taking days to fix Print("hello world") because Bob made it very differently than Joe. I find this attitude a bit disturbing. We should strive to turn our profession into somewhat of a science, not a black art. If we don't, somebody else will - somebody who does not want practitioner opinion to matter, such as (insert name of your favorite evil monopoly).''

----

I would offer that there is one difference: in OO programming, there is no way to avoid making that decision on every "if" statement (should it be polymorphic instead?) and every function creation (where in the hierarchy should it go?) and every class creation (ditto) and every instance variable creation. Those are a lot of dials to manipulate every few keystrokes. A procedural programmer doesn't *have to* think of those. An experienced procedural programmer may suddenly look at a problem and decide that table-driven could be useful, but isn't forced by the compiler to make all those choices at each step. So a procedural programmer can just go ahead and make lots of functions. It may not be fantastic, but it's at least straightforward. To me, it's that difference. -- AlistairCockburn

''But the OO programmer doesn't ''have'' to make those choices either; they too can just go ahead and make lots of functions and put everything into one ginormous class. I think the comparison you're making is non-ignorant OO programmer vs. ignorant procedural programmer. The root differences are not in the method, but in the level of awareness of various techniques.''

''Fundamentally, I believe that object orientation is just syntactic sugar to make some of those various useful techniques automatable. If you don't really understand those techniques, then it's probably easier to misuse the automation bits (is that what you're trying to get at?). As I said, I'd prefer to focus on the overall effect on the project, not just the design part of it. If you don't use advanced techniques, your design might be 'easier' to make, but the code will be harder to write and riddled with duplication. If you do use advanced techniques, then the design will be equally complex regardless of method, but the fact that OO automates some of those means that the coding will be easier.''

* It has not been established that procedural code inherently produces more duplication. It often depends on how one counts duplication. Past debates on this only produced different ways to count and different duplication scoring systems on each side. Sometimes one man's "duplication" was another man's "indirection", and most agree that indirection is often a good thing (with an "it depends" warning). There are no documented FundamentalFlawsInProceduralDesigns, only subtle things that people will bicker over. There seem to be more ways to make messes with OO. In procedural designs you don't have 30 design patterns to misuse or overuse. Most procedural problems are related to poor duplication factoring, or to bad general design (wrong features, bad user interface), which is external to software organization issues anyhow. '''A consistent mess is better than an inconsistent mess.'''
(Ah, good catch - to me, designing and programming are synonymous; both go together. This is a not-rare-but-somewhat-unusual stance... I just write "design" when I mean "design and program"; alternatively, I sometimes just write "program" when I mean "design and program". So it couldn't be the case, to me, that the design is easier but the programming more difficult, because they are part of the same activity. I am certainly not referring to diagram-and-click style program generation. -- AlistairCockburn)

-----

Re: "If you don't use advanced techniques [meaning OO?], your design might be 'easier' to make, but the code will be harder to write and riddled with duplication."

Does this imply that OO reduces duplication? Unlike the more psychology-related claims, this one is more publicly testable. Do you have an example of it reducing duplication?
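[For concreteness, here is the sort of small example usually offered on the "reduces duplication" side; whether it settles the question is exactly what is in dispute above. Hypothetical code:]

 # Procedural version: the dispatch logic is repeated at every call site.
 def area(shape):
     if shape["kind"] == "circle":
         return 3.14159 * shape["r"] ** 2
     elif shape["kind"] == "box":
         return shape["w"] * shape["h"]

 # OO version: each shape carries its own formula; adding a new shape
 # touches one class instead of every if/elif chain in the program.
 class Circle:
     def __init__(self, r): self.r = r
     def area(self): return 3.14159 * self.r ** 2

 class Box:
     def __init__(self, w, h): self.w, self.h = w, h
     def area(self): return self.w * self.h

 print(sum(s.area() for s in [Circle(1), Box(2, 3)]))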
----

After much soul searching and experimentation (being a former OO bigot myself), I kind of settled on the root cause that makes OO unnecessarily complex: ObjectsDontCompose.

----

ObjectOrientedProgramming is simpler for me than ProceduralProgramming because ObjectOrientedProgramming draws on my natural ability to imagine interactions between agents. Over millions of years my ancestors evolved to survive in small groups. This optimized my brain for keeping track of who knows what about whom and predicting how they will interact. This is why CRC sessions work so well.

''Your ancestors must be different from mine. OO hardly models the way I think at all.''

You know your co-worker is sleeping with the boss's wife. The boss does not know. Your brain is well suited to tracking that sort of information and predicting what would happen if someone told your boss. Most human speech is gossip (who knows what about whom, who did what to whom, etc.). I think one of the reasons OO has caught on is that our brains are well suited to working with models of interactive, secretive agents.

''I guess I am a nerd, because I tend to stay away from gossip. Besides, modeling physical or social interactions may not be the best way to manage information. Like the saying goes, if engineers always tried to stay faithful to the real world, they would have built planes with flapping wings instead of propellers. However, I agree that it may be subjective. OO may indeed map to your mind better. However, that does not mean that it maps to everybody's. (PersonalChoiceElevatedToMoralImperative). Now, if you claim that OO better maps to user interfaces because the people using the machines usually think in a social way and want social-like interfaces, and OO models such social systems better, that perhaps is another worthy argument to consider, but probably not the one you had in mind.''

Have you ever participated in a CRC session?

''The failure of humans to properly keep tabs on such relationships leads to murder/suicides. One person (the boss) is given information they weren't supposed to know (the wife is sleeping with a co-worker), and undesirable coupling occurs (confrontation, which may lead to murder/suicide). Is this really a desired outcome were it to happen to your code? -- DevilsAdvocate''

No, and that's exactly why modeling software this way fits our brains so well. Keeping tabs on this information is vital to individual survival in social groups. For hundreds of thousands or millions of years (depending on your model of human social history) we have been more likely to be killed by a member of our tribe than by a non-human predator. That environmental pressure has amplified our ability to build and manipulate mental models of secretive agents and their interactions. ModelLikeYourLifeDependedOnIt.

''Please read EwDijkstra on StopUsingMetaphors''

I have, but it isn't convincing. Software development should work with our strengths, not against them. That page is like asking a carpenter to hold a hammer with his elbows. :-)

''If you say that it isn't convincing, that's not enough; you have to read Dijkstra's writings and understand the reasons behind his convictions. You may still find those unconvincing, but then I'd like to see what faults you find in his arguments.''

The document that quote is from (http://www.cs.utexas.edu/users/EWD/ewd04xx/EWD498.PDF) gives no indication why he believes it. Perhaps you can point me to something more convincing. Or perhaps you can argue against using our natural modeling abilities.

''Perhaps you should start with A Discipline Of Programming? On an easier note, you should consider that Dijkstra's insights revolutionized the discipline of multiprocessing, and that the mathematical approach to programming is responsible for all the basic computing infrastructure that you build upon (see ProgrammingIsMath). The proof is in the pudding, as the saying goes. When it comes to the pudding, Dijkstra and the people like him (Floyd, Hoare, Knuth, Wirth and many, many others) do have a lot to show for it; they made absolutely essential contributions that we all benefit from today. Most of his detractors (and there are many) have next to nothing, and there's little doubt in my mind that many years from now A Discipline Of Programming will still be valuable reading, and some of Dijkstra's algorithms will still be essential to society's information infrastructure (just like Galois theory is still essential after 200 years), while most of the hypes of today (related to objects, design patterns, XML, "agile", RUP and what not) will at best be the next generation of COBOL.''

I'm not discrediting EwDijkstra. I agree that ProgrammingIsMath, but I also believe MathIsModeling. Can you give a good reason we shouldn't use our natural modeling abilities, regardless of who invented what?

''There is no "natural modeling ability" beyond a trivial level; all abilities are learnt and cultivated to get you beyond that trivial "natural talent" level. Take music, for example: you might have a natural ability to play piano, good enough to get you some money if you play on the streets, in bars or at parties, but to really make music with a piano you have to go way beyond that and learn to approach the complexities of the music; you can't go beyond that basic level until you put in enough effort, and enough quality effort. There are really hard problems in programming whose essence is mathematical, and there are also yet-unsolved problems that you need to know about in order to avoid them. As Dijkstra pointed out in http://www.cs.utexas.edu/users/EWD/ewd04xx/EWD898.PDF, all these unmastered complexities naturally give rise to '''the search for the Stone and the Elixir''', and they make room for the charlatans who try to sell you the elixir (or the stone).''
''Of course when you get the elixir you might feel very good for a while from the placebo effect. Please note that EWD898 was written almost 20 years ago, but it surely reads like it was written today.''

What may seem like a trivial modeling ability to you and me is far beyond the abilities of most other forms of life on this planet. We are better at some kinds of thinking than others. Imagining the interactions of secretive agents is one of our specialties. This has nothing to do with stones or elixirs.

* I don't like it when people tell me how I think. You don't know how I think. You may know how you think, because it's your head, but you cannot automatically extrapolate that to others. Psychology is a "soft" science at this point. I've learned to think more in terms of "sets" over the years. It is not (always) natural, but it is far more powerful than the interacting agents of OO, because sets have a more compact and better-understood "math" than interacting agents. Tables are also powerful because they can show patterns that navigational structures (graphs) cannot show as easily. Interacting independent agents generate messy graphs on a larger scale, and nobody has found a way to teach the grokking of these graphs, invoking "if you were smart like me, they wouldn't be messy to you" instead. That's a copout. '''Naturalness should not have to be taught.''' That is a contradiction in terms. I am not disputing that OO may better fit your head. It's the extrapolation to other heads that miffs me. --top

''Now, all these arguments are not specifically about objects; they apply to VB and SQL as well, even more so. But there's certainly an argument to be made that ObjectOrientation has yet to build a consistent theory going beyond the superficial aspects of program organization, and to show more precisely how that theory simplifies the resolution of the problems we face. The en-masse "feel good" fuzzy warm feeling that we have today, and the wishy-washy arguments you read in your average OO literature - those are very suspect of being only the current generation of the elixir.''

This isn't something I read in OO literature. It is my own theory, based on my experience as a programmer and research into the evolution of human brains. This isn't even a pro-OOP argument. OOP is a poor fit to human brains in many areas, and we can already see refinements in those areas in new paradigms that build on OOP. It's a "use the hammer that best fits your hand and use the modeling tool that best fits your brain" argument.

-----

Possible empirical evidence to back "OO is harder to get right": http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel3/4921/13575/00624239.pdf?arnumber=624239

Quote:

* The results of the experiment strongly suggest that such design principles have a beneficial effect on the maintainability of object-oriented design documents. However, there is no strong evidence regarding the alleged higher maintainability of object-oriented design documents over structured design documents. Furthermore, the results suggest that '''object-oriented design documents are more sensitive to poor design practices, in part because their cognitive complexity becomes increasingly unmanageable'''. However, as our ability to generalise these results is limited, they should be considered as preliminary... [emph. added]

(I don't favor structured design documents anyhow as the primary non-OO engineering tool. Different documents are needed for different perspectives.)
----

Much of the difficulty comes because ComputerObjectsAreNotPhysicalObjects. When you find yourself modeling People, Cars, etc., you're almost certainly on the wrong foot.

''OopNotForDomainModeling? Would you consider GUI objects in the camp of "physical object" modeling, or "computer objects"? I'd place them somewhere in the middle, because GUI's typically model physical control panels and/or physical forms.''

--------

See Also: OoLacksMathArgument, SoftwarePlatonism, WellDesignedFooCanBeBetterThanBar

----

CategoryObjectOrientation