From ObjectOrientationIsDead came the question of whether OO design is "difficult". ---- Well, this is a fascinating topic, and there is so much to answer both on this page and the previous page, ObjectOrientationIsDead. One thing I've noticed is that a lot of failures are blamed on OO or Java or C++ when, in fact, they are just due to plain old bad project management practices. -- MikeCorum DoesOoRequirePristineEnvironment? (or TechniqueWithManyPrerequisites) If so, isn't this a weakness? My answer to that would be that OO does not require a pristine environment, but it won't save a project from bad practices that would have made a non-OO project fail. I don't see that as a weakness, since I don't know of a programming paradigm that would do better in the presence of a bad environment. However, one possible weakness might be that OO takes some time to "get", and some personality types will have a more difficult time than others. This is something I've observed. However, I suppose I am converted. I can't really imagine programming without OO now. Part of my job is to help others get to that place. It takes time. -- MikeCorum ---- OO doesn't make bad procedural programmers into good OO programmers. Good procedural programmers usually become good OO programmers, but for some it takes longer than others. As to why, who knows? ''Perhaps your judgement of "good procedural programmer" means programmers who make their code OO-like. Thus, this is perhaps a kind of SelfFulfillingProphecy.'' ** Depends what you mean by OO-like. Good procedural code should be well modularised and should encapsulate well. On the other hand, getting anything like inheritance in a procedural language is awkward. NickKeighley ''Some say that OO proponents don't "get" relational, either: See ObjectRelationalPsychologicalMismatch.
Most complaints about OO come from functional and relational enthusiasts.'' I have to strongly disagree with this thesis, particularly that there is some ObjectRelationalPsychologicalMismatch; I get OO, I can visualize and translate between static and dynamic models mentally. I also get Relational Models (I can normalize and denormalize models mentally). I know there is an excellent match between Object and Relational models (I can perform an Object to Relational mapping mentally and vice versa). The "mismatch" that many perceive is merely an 'inversion' of the navigability and multiplicity of the links (relations/associations) between things (tables & schemas/objects & classes). AIH I also have a good understanding of Hierarchical models such as Jackson, but think these offer the least useful model. I would struggle to explain any of this to anybody that doesn't already get it. ''Discussion moved to TablesAndObjectsAreTooDifferent'' I think mental processes inherent in personality types are the crux of this issue, but that the Psychological Mismatch is actually between good architects/designers/programmers/coders and good communicators. -- MartinSpamer Perhaps. Each camp is doing a lousy job of communicating their preference. HolyWarWall ''Are you saying that good architects/designers/programmers/coders hardly ever make good communicators?'' {I do see what appears to be an inverse correlation between communication ability and OO promotion intensity. Those who are better communicators are more likely to either agree that OO may not be the best tool for all problems, or agree that their preference depends on certain assumptions which others may reject. The more "aggressive" OO'ers seem to accept ArgumentFromAuthority as fully legitimate evidence. But this may be because zealots in anything usually rely heavily on ArgumentFromAuthority. However, a bigger issue here is whether the '''benefits of OO are psychological or universal'''.
Until we answer that, communication probably will not improve. -- top} MindOverhaulEconomics ---- OO languages have more built-in facilities that help manage complexity than procedural languages. So a good programmer in a good OO environment will spend less time fiddling with the mechanics of the programming environment and language, and more time wrestling with the problem domain. This is where the difficulty lies. ''I would like to see more specifics on this claim. Past attempts seem to result in PersonalChoiceElevatedToMoralImperative'' OO design is difficult in part because design is difficult, and in greater part because, used correctly, OO techniques put you much closer to the human problem being addressed than procedural languages do. And that's as it should be, since software builds upon itself. But human problems are much more complex than programming problems. ** I'd always thought OO design was *easier* and more natural than its alternatives. NickKeighley Contrast the fact that revenue management systems used to do their calculations overnight with the fact that today, airline systems can work out the optimum offer price for a given seat to fly on a given aircraft on a given route, etc., etc. - and do it in soft real-time. Contemporary problems are harder, and bigger, and have tighter constraints. More is being asked of us. One criticism of OO is that it doesn't have a sound mathematical basis. But the kinds of problem that it gets applied to these days don't have a sound mathematical basis either. Take revenue management. RM systems do involve a great deal of ''arithmetic'', and the schemes for doing so were derived using ''mathematical techniques'', but the problem that RM addresses is: given what we know about ''people's historical behaviour'' in purchasing, let's say airline tickets, what can we predict will be an offer price acceptable to ''this person'' on the phone right now? ** this "mathematical basis" stuff is just nonsense. 
A good class should have and maintain an invariant. The mathematical basis of RDBs is of little every-day use NickKeighley *** ''The mathematical basis of SQL -- assuming there is one -- is of little use. The mathematical basis of the RelationalModel is fundamental to the entirety of ComputerScience and SoftwareEngineering: It's FirstOrderLogic and SetTheory.'' ** "Invariants" can be hard to come by in some domains. OOP was supposed to make changes easier, but if it relies on or assumes invariants, then that's a contradiction. -t ** ''No it isn't. See http://en.wikipedia.org/wiki/Invariant_(computer_science)'' ** That only defines them, and says little if anything about their frequency or up-front discovery/identification. In other words, even if they do exist, we as designers won't necessarily be able to recognize them early in the project. I've seen 100-year "invariants" change on me after I bet the design farm on them. I actually dug into dusty old binders to check 100 years of history. (Maybe the Gods of Change disfavor me personally.) In practice, '''true invariants can only be identified in hindsight''' (except maybe for physics or math domains, where God is unlikely to change the rules of the universe. But law and biz rules are psycho.) ** {You didn't actually bother to read that definition did you? I don't see how you could have read it and still think your complaint has anything to do with it.} ** Sorry, I don't know what your point is. ** {The point is your response doesn't have anything to do with the definition of invariant as stated on that page.} ** The software engineering definition (which may be diff from CS) is domain requirements that don't change. At least that's how I perceive its usage. *** {Which just goes to show that you didn't read the definition provided to you. Since you are using a different definition of "invariant", your counter-argument is fallacious.} *** That link lists multiple definitions. 
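For concreteness, here is the programming/CS sense of "class invariant" that the linked definition describes, as a minimal C++ sketch. (The Interval class and its rule are invented purely for illustration; nothing here settles the separate question of whether *domain* rules stay fixed.)

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical example: a closed interval [lo, hi].
// The invariant (lo <= hi) is established by the constructor and
// preserved by every public method, so callers can always rely on it.
class Interval {
public:
    Interval(int lo, int hi) : lo_(lo), hi_(hi) {
        if (lo_ > hi_) throw std::invalid_argument("lo must not exceed hi");
    }
    // Preserves lo_ <= hi_ for any non-negative amount.
    void widen(int amount) {
        if (amount < 0) throw std::invalid_argument("amount must be >= 0");
        lo_ -= amount;
        hi_ += amount;
    }
    int lo() const { return lo_; }
    int hi() const { return hi_; }
    bool invariantHolds() const { return lo_ <= hi_; }
private:
    int lo_, hi_;
};
```

In this sense an invariant is a property of the *code's* state between method calls, not a claim that business requirements never change.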
Plus, it doesn't directly cover the SE version, per below. I don't want to bicker about bickering. Just give your assumed working definition rather than bicker over who reads better. *** '''TIME-OUT:''' I discovered the communication problem: Wiki mangled the hyperlink parsing, giving one the wrong link when clicked. I mistook the parentheses for a comment, not the URL. Apparently so did the wiki parser. ** ''The SoftwareEngineering '''programming''' definition is the same as the ComputerScience definition. Maybe there's a RequirementsEngineering or other "soft systems" definition of "invariant", but that doesn't mean a class (or a type) shouldn't be defined around an invariant. Classes are for computational modelling, not domain modelling.'' The way that procedural problem solving tends to proceed is to take a given mechanism that will be used to build the solution, amenable to description in a procedural language running on a von Neumann machine, describe the problem in suitable terms, and join them up. OO problem solving, done well, tends to describe the problem in terms that are closer to the client's understanding, then invent computational entities to fit. But most of the people who do programming for a job aren't very good at, or interested in, dealing with people's problems in their terms. Since they'd have to be doing that, a little at least, to do OO well, they tend not to. ''Measuring how or if a tool or technique "better fits the problem space" or "better fits the customer's mind" has proven difficult. This gets into the fuzzy depths and opinions on psychology and philosophy. It is more art than science.'' [Not true. Hammer users can easily tell if a hammer fits their hand by using it. Hammer makers survey and measure hammer users to refine their hammers. The same principles can be applied to programming techniques. Try them all and use the ones that work best for you.] That seems to agree that it may all be subjective.
Different carpenters may select different hammers. Jimi Hendrix held his guitar "wrong", but he could make it do things that nobody knew were possible at that time. ["Subjective" just means a human is involved. Yes, different carpenters select different hammers. Different programmers select different paradigms, techniques, patterns, etc. But look at how a company like Stanley develops hammers. There is science to it.] They probably use "focus groups" and user sampling surveys. In other words, it is mostly the "science" of marketing. However, marketing tends to have a faddish element to it because humans like status symbols and marketers play into this. Hammers with vertical rubber strips might be "in" for a while, and then horizontal strips for a while, then completely covered with rubber, then back to mostly chrome, etc. Otherwise, they would have gotten it "right" in the early 60's and never had to change much. [Nonsense. A focus group didn't develop a hammer with a tuning fork in the handle.] I don't understand how your tuning fork inclusion relates to the above. [It's one example of hammer makers using science to develop hammers. Vibration is a major cause of long-term injury from tool use. Stanley found that placing a tuning fork inside the handle reduced vibration transmitted to the user's arm. This is physics and medicine, not focus groups.] Okay, some of it is science, and some is marketing. But I don't see anything close to such rigor with OOP. See also DisciplineEnvy. [It isn't more of an art than a science as you claim above. It can be observed, models can be built and tested.] Perhaps, but until it is tested, I hope the industry stops shoving OO down our throats. Prove first, then shove. ''Perhaps you could try the same with RDBMSs? NickKeighley'' I agree 100%. But it's too late: RDBMSs have already been accepted as "standard tools" since around the early 1990's, and thus are heavily road-tested.
(They are not perfect, but better than existing alternatives such as IMS and file-systems-as-DBs.) Similarly, the NoSql movement needs road-testing also. If pioneers want to test them for us and risk arrows in their backs, that's their prerogative (as long as they don't claim they are ready for mainstream). I'd argue OOP GUIs are similarly entrenched as much as RDBMS, at least for desktop apps, and so far nobody has shown a production approach that works better for desktops. (I'd like to see table-oriented GUI systems tried by pioneers. The HtmlStack is a possible alternative, but most agree it sucks from the developer's perspective. Delphi, VB, etc. are nicer than the HtmlStack.) ** You mean we've been chopping our data into unusable little chunks because the RDBMS gurus have said so since the 1990s. We then have to glue it back together using procedural code. Hopefully using a Real Programming Language. ACID is nice, but we pay a heavy price for it. NK ** ''Screwed-up, contradictory, or lost data can also have a heavy price. Databases can remove ACID, but I've yet to see a good demonstration of it providing more benefits than an ACID version. (Except maybe in cases where speed or costs are more important than consistent data, but that doesn't significantly change the overall nature of the application.)'' {"Similarly, the NoSql movement needs road-testing also."} {Google have been "road-testing" it for a decade and a half. The Berkeley DB crew have been "road-testing" it for almost thirty years. Looks like it works.} For large companies with a similar usage profile to Google, okay, I'll agree. But some of the hype seems to imply they are ready for "mainstream". I still consider it a niche tool at this point. There are not a lot of Googles around, quantity-wise. {What about the Berkeley DB, then?
It's been a foundation of devices like network routers and PBXs for decades.} I googled around a bit, and couldn't find descriptions of practical uses for it, other than LDAP or LDAP-like projects. I will agree that LDAP is "mainstream", but it's mainstream for a niche need. (I am not fond of LDAP, but that's another topic.) {Have you heard of ActiveDirectory? (If you haven't, you've been living in a box.) LDAP underpins it. Anyway, who cares whether it's "mainstream for a niche need" or not, and who cares whether you're fond of it or not? All that matters is that NoSql has been well road-tested. By the way, '''every''' SQL DBMS is a language parser built atop a NoSql data store, by definition. That's pretty "road-tested", too.} Active Directory is covered under "LDAP-like projects". Berkeley DB is a kit for building databases, usually roll-your-own special purpose database, and thus is not directly comparable to an RDBMS. If you want to argue it's road-proven for special purpose databases, I won't disagree. Such databases are typically hard-wired for a predetermined purpose. Anyhow, it seems there may be a difference between what we each call "No Sql". {I mentioned ActiveDirectory because it's nearly ubiquitous. NoSql in general isn't comparable to an RDBMS, and the Berkeley DB is precisely the sort of thing typically called "NoSql". Tools like MongoDb, Cassandra, Hadoop and so on, may provide varying functionality but are conceptually the same: a data store -- very often a key/value or "column" store -- accessed via an API rather than a database language.} Some have optional query languages, such as "Pig". Anyhow, it's still comparing apples and oranges because the boundaries have yet to settle. They are kits, not really products, although that may be gradually changing. But we are wandering off topic. {Yes, some have query languages. How is that relevant? How does a kit differ from a product? 
How is discussing NoSql wandering off the topic of NoSql?} Why is it off-topic? Well, because this is an OOP topic, in case you forgot. {You started it, so don't blame me.} I didn't blame anybody, only pointed out that "we" are wandering off topic. Perhaps it should be split out, but I am not in the mood for a LaynesLaw dance over "mainstream" etc. any time soon. We don't have literature mentions, studies, etc., so it would merely become yet another AnecdoteImpasse. ---- One of OO's advantages is that most OO programming environments have known good practice built in to them, much as mathematical notations do. ''Example?'' The other is that they enable, even encourage, building upon earlier results (i.e., "reuse"). But more powerful tools require more skill to use. Which is another reason that OO design is hard. ''Saying "OO is hard because it is good, and good things are harder" is only inviting HolyWar''''''s. If the hardness of OO were really because it (allegedly) has better reuse, would OO be easier if one did not take advantage of reuse? (Related: OoIsNotAboutReuse.)'' [That isn't what he said, though. He said more powerful tools require more skill to use.] ''Bad tools also require more skill. I am just pointing out that requiring higher skill is not evidence of betterment.'' ---- One criticism of OO is that it doesn't have a sound mathematical basis. Well, for one thing, it's rare for those bits of software that do have a sound mathematical basis to be used in accordance with it (e.g. database schemas that get "denormalized for speed efficiency" or "normalized for space efficiency" long before they've ever been deployed on a machine). ''Database efficiency'' is an example of the classic twofold trade-off between size and speed. Instinct and experience, though, can provide a good measure without the requirement for a hard metric. Soft or non-functional requirements should give a good indication of which of these should be targeted.
See also: IsProgrammingMath, OoLacksMathArgument ---- When you move from procedural to OO code you make your focus smaller. In procedural code, you have much more coupling, so you have to keep more of the code in your head. In OO, you can focus on a few small objects at a time, and trust that the rest of it will be taken care of by somebody else, even if that somebody else is you tomorrow. For example, say you're writing code that parses data from a text file and then feeds it into a report. A procedural process for writing this code might be all-or-nothing, so you don't feel finished until the whole thing's done. Object orientation, however, lets you write just one part, say the Parser object, and you can write a stub Report object that has the interfaces but only enough implementation to compile. You can try to make the Parser robust, if you like, before you even make the Report useful. ''What stops one from doing the same thing in procedural? It sounds like regular old SeparateIoFromCalculation to me. One may want to use an intermediate structure, perhaps a database, to store the parsed info. That can be analyzed and tested as both input (2nd stage) and output (1st stage). One could argue it is easier to query and view an intermediate table than a bunch of allocated objects.'' ** For once I agree with you. He's comparing bad procedural design with OO design. See "Structured Design" by Constantine and Yourdon. NickKeighley But this thinking - that we can solve problems as we encounter them, instead of planning for all of that at once - makes some people uncomfortable. Is this part of the psychological mismatch we're groping for? Do programmers like complete control too much? ''I find the opposite. Procedural tends to let me focus on just one task at a time. I don't have to worry about dividing the algorithm up by some noun taxonomy or anything else.
It queries the database to get the info it needs and then optionally writes the results back to the database, and then (usually) returns control to the caller. A variation of the ol' input-process-output. OO gets your foot tangled in a bunch of different class protocols and rather arbitrary taxonomies that you must grok before effective use. Talking to the database is more consistent. OO seems to invite too much "creativity". The approach to querying and talking to the DB is more established, whereas in OO every class seems to reinvent its own version of add, change, delete, find, sort, join, lock, save, etc. in different ways.'' ---- Perhaps it's just as easy to write SpaghettiCode with procedures as it is with objects. -- RobHarwood I would regard procedures as a technique of StructuredProgramming, which is the antithesis of SpaghettiCode. I also think both StructuredProgramming and OO are easier because they allow [aim to allow] the complexity to be reduced or localized. -- MartinSpamer The big difference is that procedures are familiar, objects are novel, so LessAbleProgrammer''''''s who've grown up with procedures have a learning curve to figure out where the dials are. It's just as easy to crash a fighter jet as it is to crash a helicopter. Just as you wouldn't send a novice jet pilot to fly a helicopter without proper training... -- RobHarwood ''Some percentage of people who learnt either method first can successfully make the transition; others cannot.'' ''I often learn best by being shown how something is superior to alternative approaches. However, I have not seen such external OO evidence. It seems to be more of an emotion or mind-fit thing than something that improves some externally observable quantity or metric. Time (raw exposure) is not always the best teacher.
Often it just solidifies bad practices due to the comfort of familiarity.'' What I've found in about ten years of OO programming is that OO code is much easier to create, but is much harder to reuse, and that it needs more continuous refactoring to be as maintainable as procedural code. I started coding just about 17 years ago, so I don't think that's because I am too old to learn. ;) -- NikitaBelenki It seems that "good OO" is just too hard to demonstrate, document, and articulate. There are no (relatively) simple design principles that most OO celebrities will agree on. It seems like you need to hire a full-time personal OO Zen Master in order to convey proper OO to OO novices or people doing it wrong. Conveying the knowledge of how to do good OO is just too tough right now. The training for "reasonable" procedural-relational design seems a more obtainable goal. ** Read "Agile Software Development" by Robert Martin NickKeighley ** ''Robert Martin sees different ChangePattern''''''s than I do (per other books). His view of how and where applications change differs greatly from mine. There is no solid data on what the change patterns really are such that such disagreements are stuck at an AnecdoteImpasse.'' ''What, then, are we to make of the many programmers who believe deeply that they are more effective with OO code and came to this state without the benefit of any Zen Master consultants?'' Perhaps some just find OO difficult and some don't. Either those who don't are just smarter, or there is something about OO that fits their particular mind better. ---- I see a majority of programmers say they do OO because they program in a language with OO features. Almost none of those programmers really understand OO. In my experience, the ratio of those who truly understand OO to those who only say they do is 5%. And all of that is assuming that I have any real understanding of OO. -- MikeCorum That is because there is no agreement on what OO and OO modeling is. 
NobodyAgreesOnWhatOoIs. It depends far too much on ArgumentFromAuthority. ---- ''Again, you make an ObjectOrientedCulturalAssumption. The fact that you can't do without inheritance doesn't mean that others can't also. I wouldn't be so judgemental about procedural code considering that all the major OSes are written in C, they are very maintainable, well designed, stable and everything, and we have yet to see an OS either designed or implemented as OO. -- CostinCozianu'' Actually, BeOS was written entirely using C++ (as in object-oriented, not procedural with class-like structs). The only exception was at the device-driver level. Be, of course, is dead. That shouldn't diminish the fact that BeOS was a fantastic operating system, way ahead of its time, and implemented using object-oriented techniques. In fact, ask anyone: BeOS is, perhaps, one of the fastest operating systems ever implemented. Oh yeah, and I also forgot: Around 1991, IBM started reimplementing OS/400 using object-oriented techniques with C++. So that's two that I'm aware of. -- JeffPanici ''Here's a third: Symbian OS (formerly Epoc) is written entirely in C++ too. -- MattBriggs'' ---- They are also written with 'dials' equivalent to inheritance, such as table-driven programming with function pointers. Six of one, half a dozen of the other. I can do without inheritance, if I can use table-driven programming. But that doesn't make the design any easier, in fact it makes it harder since I have to deal with all these messy tables. If you want to claim that ObjectOrientedDesignIsDifficult, be fair and also claim that ProceduralDesignIsDifficult. Refactor that to DesignIsDifficult. ''Rob, interface implementation (as in "table-driven programming with function pointers") is not equivalent to inheritance. Inheritance is by definition a partial order. Interface implementation is not an order at all, because there is no '''transitive''' relation. -- nb'' From a pragmatic point of view, they are equivalent. 
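A minimal sketch of the claimed equivalence, with both versions hypothetical: the same dispatch written first as an explicit table of function pointers (the "messy tables" style), then with inheritance, where the compiler builds and searches the (v)table for you.

```cpp
#include <cassert>

// 1. Table-driven: a struct of function pointers, one table per "type".
//    Dispatch is explicit -- you pick the table yourself.
struct ShapeOps {
    double (*area)(double size);
};
double squareArea(double s) { return s * s; }
double circleAreaApprox(double r) { return 3.14159 * r * r; }
const ShapeOps squareOps = { squareArea };
const ShapeOps circleOps = { circleAreaApprox };

// 2. Inheritance: the same dispatch, but the table lives in the vtable
//    and the lookup happens implicitly at the virtual call site.
struct Shape {
    virtual double area(double size) const = 0;
    virtual ~Shape() {}
};
struct Square : Shape {
    double area(double s) const override { return s * s; }
};
```

The mechanics are interchangeable for one level of dispatch; the reply below is about what differs when hierarchies deepen, since virtual dispatch composes transitively through base classes while hand-built tables do not.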
They are two ways of solving the same problems. ''No. Followed further, they produce different results. With inheritance, the dependencies you introduce are '''explicitly''' transitive. So to get rid of some coupling in one class, you will probably need to refactor all its relatives.'' ---- I've seen developers have trouble with OO because they have trouble thinking abstractly: Without a good grasp of abstractions, OO is just a huge number of tiny subroutines all tied together in a confusing matrix. Without a good grasp of where functionality "belongs," one has little guide as to where to find things or where to make changes (and once having changed, what the change will break). I recently had an argument with a project manager: The "communication" component, which conveyed data between the different tiers of the system, was doubling the apostrophe (') characters in strings, so that things would "come out right" in the database. ''"No," I said, "that doubling should be done in the data layer."'' "But it works fine the way it is," he said, and therefore "I can't let you change it." ''"Aside from being a bad idea, from the OO perspective," says I, "it complicates the business logic, which would have to undo and then redo the doubling, and it makes it impossible to transform this from a 2-tier to a 3-tier system."'' Finally, he let me change it. ''(Or maybe I just went ahead and changed it while he wasn't watching. ;-)'' I think this is a good example where lack of understanding of the abstractions being implemented causes one to put code in the wrong places, making the system brittle, difficult to maintain, and unreliable. -- JeffGrigg ''I find that OO proponents define abstraction as *being* OOP. A tautology. I consider myself a highly abstract thinker, but I don't find OO an abstraction nirvana. If anything, I find it fails to abstract commonalities I see in dealing with DatabaseVerbs. I see interface repetition in OO.
Repetition is often a sign of bad abstraction. To me OO is a pointer nightmare without consistency and reason. It is the Goto of modeling. -- top'' ---- I think the problem with OOP is not the method, but the people using the method. Several possibilities exist. The ones I think likely are: Lack of training, lack of familiarity, psycho-technological mismatch, wrong paradigm, misleading hype. Out of all of those, I reject the 'psycho-technological mismatch' hypothesis because I myself, and others I know, including novices and LessAbleProgrammer''''''s, have grown to strongly prefer OOP over procedural programming. I'd also say that 'misleading hype' can be subsumed by 'wrong paradigm'. Likewise, since I was formally trained in university for OOP, but still didn't really get it until I read ReFactoring, I'd say that the 'familiarity' and 'training' hypotheses are possibly valid but dwarfed by the 'wrong paradigm' effect. Overall, my interpretation is that the prevailing paradigm is that OOP is something you do because 'that's the way it should be done'. I used to think that way myself. A much more useful paradigm is that OOP is something you do to help you work faster, given that changes will happen frequently. If it's not helping, don't use it. This goes for the whole method, as well as each individual practice. That's what I mean when I say OoIsPragmatic. Use inheritance to reduce duplication, use polymorphism to simplify logic, use objects to make thinking about your problem more natural, etc. So OO isn't difficult because it's inherently tough, but because we've been learning it in an overly complicated and generally irrelevant fashion. -- RobHarwood ''"A much more useful paradigm is that OOP is something you do to help you work faster, given that changes will happen frequently. If it's not helping, don't use it."'' I'm on your side. I chafe at OO languages when programming a finite state machine... FSM programmers work from tables, not methods. 
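As an aside, the table-driven FSM style mentioned above can be sketched in a few lines. (The turnstile machine below is a stock textbook example, not from this discussion; the point is that the whole machine is data, not methods.)

```cpp
#include <cassert>

// Hypothetical turnstile FSM, written table-style rather than as methods.
enum State { LOCKED, UNLOCKED, NUM_STATES };
enum Event { COIN, PUSH, NUM_EVENTS };

// transition[state][event] -> next state; the entire machine is this table.
const State transition[NUM_STATES][NUM_EVENTS] = {
    /* LOCKED:   COIN, PUSH */ { UNLOCKED, LOCKED },
    /* UNLOCKED: COIN, PUSH */ { UNLOCKED, LOCKED },
};

// The "program" is one line: look up the next state.
State step(State s, Event e) { return transition[s][e]; }
```

Adding a state or event means adding a row or column, not hunting through method bodies, which is why FSM work tends to resist the methods-on-objects framing.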
I skip objects when writing quick scripts or file manipulation programs. I question objects when doing screen-to-database programming. I like objects when the program is going to stick around and I have to live with it, or it's bigger and harder. OoIsPragmatic is a fine phrase. (You're also right that people do OO 'cuz it's being done, but that's to the side of the question.) -- AlistairCockburn -- IMO: What is hardest about OO is that too many trainers have claimed for too long that OO is a silver bullet, when its only claim to fame is that it is not a poison-coated lead bullet. OO has shiny bits and can be made to work trivially on some simple examples in a laboratory. This presents learners of OO with the illusion that they are capable of designing a program (the answer) in at least one paradigm. The problem, really, is that in the real world, where problems start out pear-shaped, they will fail to understand the question. If, in contrast (a control group), the same sample of randomly selected humans attempted to become procedural programmers, my hypothesis is that self-culling would occur. Thus in the wild there are more incompetent programmers and designers who think they can do OO than functional decomposition. ---- ''Failures of OO projects shouldn't be blamed on language, since TuringEquivalent languages are basically the same. -- RobHarwood'' Turing equivalence of the languages isn't the issue - Turing equivalence is at the level of mathematical power. Difficulty in programming is at the cognitive level, and at the code maintenance level. The latter means, "How many lines of code do I have to touch to make this change?" Any kind of badly designed code is more difficult at the code maintenance level, but still equivalent at the Turing level. The question of whether OO design is more difficult has (to me) to do with the "this mechanism or that?"
questions running through the programmer's head while programming, what you call the "more in quantity and more in kind of decisions to make at each step". If, in procedural programming, I had to decide at each step whether to involve a table-lookup or a switch or a FSM or a quick neural net or rule-system reference, then I'd complain about the cognitive complexity of procedural programming. -- AlistairCockburn ---- I'm not good enough at procedural programming to detect whether a program is full of random "stuff" (in lieu of design) that will make maintenance harder. I am good enough at OO programming to detect that, and I see it a lot these days. I attribute it to OO being harder. A third possibility is that there's the same amount of equally bad rubbish being produced in procedural code, and I've just not had a chance to notice it. -- AlistairCockburn ---- ''Or perhaps that's the factor that makes some people not realize that OO is one good method amongst many? It's not just "OO or Pascal" you know.'' Quite. It's OO and Pascal: DelphiLanguage ''On this page do we mean ObjectOriented as Java and C++? And "procedural" as C and Pascal? If so then I think Smalltalk programmers (and others) would thank us for being more specific. -- LukeGorrie'' ---- I remember when I got into OO. I was programming in C and trying to do a direct-manipulation graphics program, something like a UML tool but it wasn't UML. I couldn't handle the complexity of the problem until I started to structure it like objects-in-C. Then it became instantly much more manageable. So I think the kinds of problems people were trying to solve (e.g. GUI programming) influenced the adoption of OO. And at any time, the problem to solve will influence the methods of solution. I suspect there are some other problems emerging now influencing people I know to consider functional languages. 
-- BobHaugen GTK (the GuiToolkit) is an example of this: written in C, thoroughly object-oriented, and (it seems to me) widely considered to be very well designed. -- LukeGorrie Many years ago I had occasion to work with XWindow and Motif, and they appeared the same (object oriented, written in C). It was kind of cool, because every C structure had an 'extension' void pointer, the need of which is now obvious, and you could see the search along the "inheritance" chain for the function that would handle your call. The art of writing a Widget consisted partly of following the conventions so that the OO wouldn't fall apart on you. Grass roots stuff. -- WaldenMathews Yes, indeed, writing a UI library many years ago was when I first started seeing the use of objects. OO is great for windowed graphics systems, or indeed any setup where you have lots of independent entities hanging around with their own state, and with events zipping between them. Problems start when people try applying OO to inappropriate domains. I have seen drivel from a supposedly prestigious "strategic consultancy" stating that OO is the ''only'' programming paradigm worthy of consideration! What tosh. Numerics is a good example of a domain that is often ''not'' a great fit to OO. Functional programming is definitely gaining ground there - even in non-dedicated languages (C++). -- anon ---- My head seems to be screwed on differently. I wrote procedural code for a while, in C, Assembler, Pascal and more C. The maximum complexity height I could reach was X. I always failed to achieve enough decoupling, and eventually a task that was X+1 complex went belly up and code that was X-1 devolved under maintenance fairly quickly. Once I started OO, I was one of the easy converts; I wrote in C++ and a few others, and I ''suddenly'' found I could build projects at least 2X in complexity. I tried things that were simply not on the map before. Some of them failed. 
-- White Hat I would like to inspect a specific example. As a reader, I cannot tell if OO is simply superior, if you were not good at procedural design, or if you had trouble grokking procedural. Any one of these 3 could be the case. ---- Now whether that sudden increase was simply coincidental with me learning something else, like 'design by contract', or 'works like a', or 'Group Theory' is unknowable, as my experience and its account here are ''ANECDOTAL''. Does anyone know of ''ANY'' science that backs any claims on this page, or are we, like the psychologists who used to practice shock therapy, simply charlatans? (Apologies to psychologists for comparing you with our level of shamanism.) IMO: All rational statements on topics like this start with "What works for me ..". When you find that N people agree, the statement is "What works for us ..." When N is large, the statement is "What works for us, apparently with dissent in this forum, ..." -- White Hat As far as I can tell, there is very little science *at all* on effective development. Instead, what you get are religious wars based on shaky ground. Fundamentally, any design or programming language or system is there simply because human beings are limited creatures and cannot take user requirements and instantly spit out binary code that machines can understand. And because we are all different we each work better with some tools than others. What is needed (besides some investment into basic research in this area!) is a mechanism that allows each of us to develop our "units" in whatever system fits with the way our brains work. Unix pipes gluing IO streams together was one of the first, and I have no doubt that XML and .Net are the latest attempts. I'd like to get away from the usual "OO is good", "procedural is great", "I hate Java", etc. and start focusing on *why* OO/procedural/functional/table-oriented is difficult for some and not for others. 
Then perhaps we can make some headway on the productivity issue that still dogs development. -- NeilWilson Perhaps we should distinguish between the types of "failures" that are being discussed: * Cannot make OO programs work * Cannot grok others' OO programs * Cannot see what the benefits are compared to alternatives ''In my opinion, people tend to find the paradigm or tools that best match the way that they think. Perhaps productivity can be increased by letting people use tools that they like instead of those dictated by trade magazines. Of course, it makes sense to perhaps settle on a limited set as "corporate standards" that cover fairly wide styles rather than every variation. For example, Ruby fans could probably live with Python, and vice versa. Maybe choose the best (oh oh, fight) dynamic and static language from each paradigm.'' ---- '''TentativeSummary''' (add more to this) Reasons ObjectOrientedDesignIsDifficult * OoHasMoreDials * BigSoupOfClasses * OoLacksMathArgument ---- '''Object Oriented Design is a Two Step Process''' It is having to take the second step that makes Object Oriented Design more difficult (Clarification: This is only saying it is more difficult, it does not say anything about whether or not it is beneficial). When discussing a new or existing software program, the natural description is of the things it will do or it currently does, rather than what will be or what is operated on. People describe the functions of the software. In design or implementation, the first step is to address the first function, then the second function, etc. If one sets out to design objects first, he must step back and look at multiple functions at one time and determine where they intersect to create objects. ''But the relationship between nouns and verbs is often many-to-many in the long run. See PrimaryNoun. 
Resolving this reality in OO gets messy because OO has no built-in many-to-many helpers.'' * Only if you're stuck in a procedural/relational paradigm or you have a problem drawing the distinction between a class and a collection. OO's "many-to-many" helpers are associated collections. Owners have collections of cars and cars have collections of owners. Simple, scalable, adaptable, and a whole lot easier to explain and maintain than a crosstab. And you can reuse the "collection of owners" for dogs. --MarcThibault * OopNotForDomainModeling. If you have cars, owners, and dogs, then you have an OOP domain simulator. If cars have collections of owners and vice versa, then you have no modularity: each class would know about all the other classes (CantEncapsulateLinks, LifeIsaBigMessyGraph). You will be fighting the paradigm (especially encapsulation) to do anything at all: add new relationships, maintain relationships, support ad-hoc queries, manage concurrency. If you want a domain simulator, you'd be ''far'' better off favoring a paradigm that supports ad-hoc queries, backtracking, backwards chaining, and modular relationship management - such as temporal LogicProgramming. If you ''don't'' want a domain simulator, then you need to rethink your classes, and perhaps instead be looking at FunctorObject''''''s, CommandPattern, StrategyPattern, and so on. -- NonTopAnonymousDonor An alternative approach is to begin with the first function, implement it, add the second function, and derive objects when appropriate, i.e., refactor. This still requires a shift from implementing each desired function to creating objects after each function is implemented. The desired result, for many reasons, is to have object-oriented code. To get there, however, requires a two-step process. Without the second step, one is left with procedural code. The users of software will describe it in terms of functions. 
It is the responsibility of the developer to translate the users' descriptions into an object level description. A procedural description is a more straightforward translation of the users' descriptions, but there are other reasons that make an object-based design desirable. Dr Joseph Juran describes this as "language translation": translating the language of users to the language of technology. ''The language of the users usually revolves around inputs and outputs in my observation. This is a natural "consumer" point of view: What do I have to feed it, how often, and what does it give me in return?'' ---- ''The above sounds like the classic battle over tasks-first versus nouns-first (or something else first). Plus every methodologist will probably have their own OO techniques. Lack of consistency between OO celebrities is one reason why some find ObjectOrientedDesignIsDifficult.'' Perhaps because consistency is the hobgoblin of small minds? ''Regardless, without consistency we have an art instead of a science, resulting in endless debates and fads.'' You want a field where all experts say the same thing? That's not science, that's a cult. ''Or a mature technology way behind the front lines of science.'' Can there be such a thing? Anyhow, consistency, or at least the documenting of the differences, is sorely lacking IMO. Is OO nothing more than a BigSoupOfClasses with no larger-scale structure or discipline? ''There are plenty of mature technologies. I didn't mean to imply that either OO or software development is one of them. I certainly agree with the thrust of Francis' "cult" statement above. -- TomRossen'' ''Unless there are clear side benefits in the other, I will take the consistency route.'' ---- The concepts of OO are relatively simple (polymorphism, inheritance, wrapping data with its operators, etc.). It is just the application of these concepts to the real world that is the messy part. 
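For what it's worth, the "relatively simple" base concepts fit in a few lines. The sketch below is a hypothetical toy (the Account classes are invented for illustration), not a claim about how real designs scale:

```python
# Hypothetical toy illustrating the base concepts named above:
# encapsulation (data wrapped with its operators), inheritance,
# and polymorphism. Class names are invented for illustration.

class Account:
    def __init__(self, balance):
        self._balance = balance          # data wrapped with its operators

    def deposit(self, amount):
        self._balance += amount

    def monthly_fee(self):
        return 5                         # default behavior


class SavingsAccount(Account):           # inheritance
    def monthly_fee(self):
        return 0                         # overridden: no fee


def charge_fees(accounts):
    # polymorphism: one call site, behavior varies by concrete class
    for a in accounts:
        a.deposit(-a.monthly_fee())


accounts = [Account(100), SavingsAccount(100)]
charge_fees(accounts)
print([a._balance for a in accounts])    # → [95, 100]
```

Each concept is one line here; the argument is over whether that simplicity survives contact with real-world requirements.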
In my opinion, the concepts of OO are too simplistic to match the multiplicity of the real world except in narrow circumstances. OO designs just keep adding '''more indirection layers''' of OO concepts on top of each other until it satisfies somebody; but the end result does not match the simplicity of the base concepts. The end result is a shanty town. The beauty of the base concepts is nowhere to be found in the result except in the building blocks. There is no FractalNature of the base concepts bubbling up to the big plan. One ends up feeling betrayed by the simplicity of the building blocks. The parts are simple, but the result is a BigSoupOfClasses, a graph. We need something that improves on graphs, not something that ''is'' a graph. -- top ''This is why many programmers prefer to stick to writing public APIs and work with base algorithms that all the others can then wrap in more private OOP. Consider, though, how hard it is to work with the procedural Windows API compared to an object wrapper like Delphi or Visual Basic. Would you use the plain procedural API each time? Some OOP makes development terser and faster, since it wraps verbose public APIs into a terser form. Believe it or not, one of the advantages of some OOP code, if written well, is that it is terser than procedural code and neater.'' * My idea of the "ideal GUI engine/API" would not need OOP. I don't want to get into that issue right now other than point out that it is difficult to make a language-neutral OOP GUI API because the OO rules and limits are different per language. And reinventing GUIs for each language is poor OnceAndOnlyOnce. Windows APIs indeed do stink, but it is not because they are procedural, but because they were designed poorly. -- top * ''I'd like an example from you then, for a replacement for the Windows API. A lot of people criticize the Windows API and they say it stinks. I think so too. But what is the alternative? .NET? Obviously not.. so what then? 
Relational API? Are you sure OO doesn't have some use for widgets and gadgets? Is your work/industry not gadget oriented and does this bias your view?'' * The implication here is that only OO can do "widgets". I am skeptical of that. Anyhow, like I said above, making a ProgrammingLanguageNeutralGui is the biggest issue with OO GUIs. Nobody has figured out how to do it well. If you wish to explore this further, please continue it under ProgrammingLanguageNeutralGui. * ''There is no implication that only OO can do widgets. The Windows API and Gnome Tool Kit are procedural. There are serious questions asked about how one could create widgets. Widgets are more like wood tables that are carved by hand - do wood tables have to utilize set theory? Should they? If you can demonstrate that they can, or that there are more procedural ways or functional ways (or whatever ways) that a wood table can be designed, then please do. This isn't an attack - it is a serious question - how does one design a button, or a wood table, using set theory? How does one use his saw blade and his dremel via the set theory? If the set theory is not perfect for the task - what other models can we use? What models are you proposing?'' * I would rather we answer the ProgrammingLanguageNeutralGui question first. If you wish to discuss this further, may I request you create an example under NonOopGuiMethodologies? ''When it comes down to gadgets, widgets, even procedural coders like Linus Torvalds agree that some OOP is better (even files in unix, or files in old pascal are kind of OOP in a way - this also may be a flaw, though, if for example a relational file system could be used in its place. How would people send stuff to /dev/null? With an INSERT INTO DevNull command?).'' ''When filtering inputs and sending data across a pipe, sending some quick text to a web browser, sending SQL text to a database: this is where objects don't necessarily make code neater, quicker, or easier immediately. 
Especially in prototype applications where one needs input/output sent and received right away. Some people spend more time writing programs that send data through the internet, send data to a console. These people don't see as many of the OOP benefits because they don't work in the gadget arena (buttons, windows, edit boxes), or an arena where OOP would be more useful (business objects do bother me, and I'm not sure OOP is really needed so much there). CSS and HTML are sort of OOP since they allow you to align your gadgets and inherit from them. Do you ever find CSS/HTML useful, with their inheritance abilities? What would be an alternative? A procedural or relational markup language?'' ''Consider what industry you work in. Maybe it is affecting your view too, Top. My hatred toward OOP is partly because I don't work in an industry that needs to reuse as much code. I have a hard enough time creating more and more NEW code since there are so many projects to complete.. rarely do I tactically reuse an old structure '''over and over and over''' again. I reuse algorithms and lists/databases/arrays/buttons/html often, sure. But when writing batch/console/database programs, or web programs that spit out different text for each application - I often find there is no need for all of OOP's concepts. Since I write a lot of prototypes and applications that do batch tasks, my view will be biased.. compared to someone who writes GUI gadgets and needs to reuse GUI gadgets over and over again. I suspect that you write a lot of DB apps that may not require OOP, top, and you don't see the need for it due to the industry you work in.. but that is just a guess.'' This assumes that OOP is about reuse. Even many OO proponents do not accept this as a key reason to use OOP. See ReuseHasFailed. ---- '''Hard to Teach?''' Maybe the problem is not that OO is difficult per se, but teaching OO is difficult. 
If somebody could take my hand (figuratively, please) and show me clear benefits, I might finally "get it". OO's benefits and beauty just seem too difficult to turn into words and practice-oriented examples. It is relatively easy to get OOP programs to run based on requirements, but much more difficult to convey why OO is better or how to do it "right". It all comes off as Kungfu-TV-series-like vague mysticism. Something about OO is textbook-elusive. ''Yes, that could be it. But then, learning any programming language (unless it is *very* similar to one you already know) is difficult. -- DavidCary'' Learning languages is about "how", but often not "why". Slapping classes together until the program runs is not the issue here. It is relatively easy to get OOP to run, but seeing the benefits and perhaps not making a mess with it is tough to teach. Some just seem to "get it" (agree with the benefits) and some don't. Those who don't "get it" find existing training material and techniques vague, elusive, and non-committal. OOP just seems to alienate certain kinds of minds, and this can create a backlash. ---- What makes OOP hard is the need to generalize everything regardless of the pressing need. In Math and Science this is second nature and no Mathematician or Scientist would consider it sufficient to do otherwise. However, Computer Science, and even more so IT, have a strong taint of excessive pragmatism. This is compounded by the way many of us make a living by 'getting it done'. Today the ubiquity of computers in the home and work environment has bred a generation or two of programmers who learned only by doing. They would no more likely see computer programming as the end product of a scientific discipline than they would see driving a car as an act of Mechanical Engineering. 
To do OOD/OOP right you need to have at least an intuitive feel for what a type is, what it means to define operations on a type, and the extent to which marking something as a type is allowing the language system to make inferences about how the type will behave in all cases. Now it is true that all good programming requires this, which is why it is taught to Computer Science students; however, the power of inheritance allows poorly defined and inconsistently defined types to proliferate very easily. This means that much more damage can be done. It also means that persons with little or no formal training in Computer Science or Math are defining what amounts to language systems (or at least extending them). These very pragmatic IT people, who at best may have taken a few practical, lab-oriented courses or at worst are self-taught (miseducated), are in my opinion the cause of 'OOD/OOP failure'. This is not to say that OOP is a silver bullet or that it should be used everywhere, but at least it should be used well before it is condemned. The current state of the practice is as if we let Medical Doctors practice without teaching them the basics of Anatomy and O-Chem. At one point such charlatans did practice 'Medicine' all over this country and the world, until licensing helped stem the problem. I hope we can police our own, but sometimes I am not so sure. -- MarcGrundfest ''I disagree with some of this. First, the need to "get it done" is often forced upon us by those who pay us. Finance theory even suggests that paying too much attention to the long-term is often a poor investment choice, and I have not seen a well-formed argument against it yet. A better model has yet to dethrone it. We are paid to make a profit for the owners, not produce works of art. Needless to say, it's a more complex issue than you seem to be making it.'' ''Second, OOP is often not the best "abstraction technique" (what you call "generalize"). 
Its abstractions are simply not powerful and flexible enough in my opinion. Set theory and CollectionOrientedProgramming offer more potential in that area. Outside of abstractions that fit nicely into hierarchies (see LimitsOfHierarchies), OOP starts to get ugly, or at least no better than its competitors. Please be clear that I'm not saying OOP "can't do" non-tree abstractions, but rather that it cannot do them better. (And others have their own pet abstraction techniques that they promote.) --top '' My claim is a bit weaker than 'OOP is the best abstraction technique'; it is merely that good abstractions are hard. I do not even claim that we should only use OOD/OOP. Most programmers approach OOP from the procedural world and are often required to interface with a large installed base with very poor abstractions. The difficulty of creating good and useful abstractions, coupled with the need to interface with such systems and the need to 'get it done', makes the conversion to OOP hard. I also do not claim that 'getting it done' is inherently bad (though I can see how you may think that I disdain it - I do - even as I practice it on a regular basis). I only claim that it has implications for the long term success or failure of a project. Now it may well be that any paradigm that requires a long term view is a bad idea for that reason alone -- an interesting perspective, but one which I fear condemns Software Engineering to the realm of myth. No Scientific or Engineering discipline can develop if there is no systematic attempt to learn from the past and encode those lessons for the future. Perhaps you only mean to suggest that it is not the role of practitioners to fill this role, but if so, who should? And how will it benefit the practice if practitioners are compelled to routinely ignore best practice in the name of expedience? I suspect that you would not make that claim, and I think we may agree more than we disagree. 
My apologies for typos and spelling issues; I am not always able to take the needed time to clean up my comments until sometime later. -- MarcGrundfest * ''I have "learn[ed] from the past". I've found that HelpersInsteadOfWrappers are the most flexible abstraction on average because you can change and toss them without huge up-front investments in a grandiose framework(s)/abstraction(s). I've tried grandiose abstractions many times only to see unanticipated changes go against their design grain and kick them in the ass. Flexibility requires dumping stuff away. The world is not logical, clean, and predictable on a large scale, and domain abstractions must reflect that. Smaller-scale abstractions do indeed have more "glue overhead" than big, integrated abstractions because lots of smaller parts must be matched and adapted to each other, but it's a worthwhile tradeoff to gain flexibility. Perhaps OOP can do such also, but I haven't seen it attempt to target such. Instead, I see interfaces that are larger than the code. -t'' * I would not disagree. I do not claim that one must never loosen the abstraction. I am not a Zealot, and I have used code of that sort in my programs as well. What I claim is that all such relaxations of OOP principles must be done with conscious intent, and thus one must know how to abstract properly in order to know when it is no longer worth the cost/benefits. I have noted that many junior programmers routinely violate the art of encapsulation without considering the consequences and do so in the name of flexibility -- which in many cases is unneeded, and counterproductive due to the coupling involved. It may be that I would set the trade-off point differently than you do, or perhaps not, but I can see from your response that you are thinking along the same lines as I do. 
However, consider this - a question I have been ruminating over for years: is it only boundary classes that exhibit this issue -- interfaces with other systems which are not OO -- or is this a general problem which cannot be contained, and thus may introduce so many ripple effects as to negate many of the benefits of OO? My instincts say that the problem can be contained to the boundaries, but I cannot prove it. --MarcGrundfest * ''Is the purpose of OOP to "protect" code and coder from mistakes, or to make code easier to change and read? I do agree that the key is knowing what trade-offs to make and when to make them. This is something missing in most OO training materials and books. They tend to ''dictate'' an approach rather than produce a careful justification with CodeChangeImpactAnalysis, DecisionMathAndYagni, etc.'' * I am at a loss here. There are no good books on OOD/OOP that dictate anything. I am sure there is a list of recommended OOD/OOP books that speak in principles, not application; GradyBooch springs to mind. If you read poor books, or do not read the good ones well (that is an editorial use of 'you'; I make no such claim in your case), then how is that the fault of OOD/OOP or any other tool? Moreover, as has been mentioned below, OOP does not = 'programming in hierarchies' - never has - never will. As early as Booch's first edition the received wisdom was to use inheritance sparingly (and very good reasons were given in terms of CouplingAndCohesion as per DavidParnas). -- MarcGrundfest * ''I haven't mentioned hierarchies recently. As far as CouplingAndCohesion, it's still an ill-defined concept(s), or at least highly dependent on personal classification systems. GradyBooch appears to focus on systems software and industrial control software instead of custom business apps, which is where I spend most of my time. The patterns of change don't seem to match well across domains. Others have agreed that OO has difficulty with custom biz apps. 
Further, they seem to use C as their procedural comparison reference, which is a poor choice because it's not optimized for software engineering, but rather for machine resources.'' * CouplingAndCohesion is ill-defined?!? OK, I give up: are there any design concepts in the world of Computer Science that are not ill-defined? Is Occam's razor ill-defined? Are there any practice guidelines at all that are not ill-defined as you see them? If 'requires human judgment' = ill-defined, then there are no meta-principles at all and this exercise is pointless. If not, then 'or at least highly dependent on personal classification systems', even if true, is not an issue. However, it is possible to measure coupling syntactically with flow analysis tools (rarely used in practice because, in my experience, most practitioners do not have a problem defining it, at least operationally), and, while cohesion is more difficult, as it is a semantic concept, a very large installed base of good code argues that we know it when we see it and, more importantly, we know it when we do not see it. I also know how to wield Occam's Razor. Someday we may have a good formal definition for you, but I can guarantee that none will use it. I can even guarantee that some will claim that it cannot be a good definition because it is contrary to existing bad practice, but that is another story. :) -- MarcGrundfest * ''Yes, they are ill-defined. Or at the least, I've never seen a definition free of words subject to enough subjective interpretation to make it useless as an objective metric. It is possible to agree on a specific metric for a specific language, but there's no guarantee all readers will agree with the application or value of the metrics. I remember one debate where somebody stated, "But that's not a scientific question, but an economic one." Whether that's true or not, that doesn't make it any less relevant to real-world problems and concerns. 
Occam's razor is an interesting example because I've seen it used by both sides in political and religious debates. It usually ends up being a battle over what "simpler" means. It usually leads to a fractal LaynesLaw situation where each definition leads to yet more disagreements. People just plain think differently. --top '' * So there are no principles in all of Computer Science that are well-defined? OK, I am done. I am new here, and had I known this was your claim I would have never wasted my time with the preceding kabuki play. Am I also to infer that you are the author TopMind? If so, it's all my fault -- you do have your own warning label after all. Well, I hope at least to have provided some entertainment for any lurkers who at least had the good sense to stay out of this cul-de-sac -- I guess this is a newbie rite of passage. I am, however, not so much a fan of performance art, so I hope you will not mind if I treat this as a lesson learned and suggest that we go our separate ways. -- MarcGrundfest * ''I didn't claim there were no principles that were well-defined. All I ask for is a coded example(s) generally representative of my domain comparing OO and non-OO with a '''clear''' description of why the OO version is overall better. Instead I get roundabout talk about everybody's pet theories. That's not science in my book. RaceTheDamnedCar. The closest we've come on this wiki is PayrollExampleTwoDiscussion, which was inconclusive. --top'' * When I say that I am done it means that I am done discussing this matter. I just did not want you to expect a substantive reply. - MarcGrundfest * ''I may continue to flesh out my statements in case there are other readers. Please don't take it personally. And I would also like to warn against SovietShoeFactoryPrinciple in applying any definitive principles of "computer science" (if and when done). 
It wouldn't make sense to judge the quality of a newspaper just by the quality of the font alone, for example.'' * OK, so if and when a perfect principle of Software Design is invented/discovered which works in all cases and is proven to your standards of 'Science', you reserve the right to claim that it does not matter anyway and ignore it as mere coincidence. My word, you really are working overtime on this, aren't you? --BlackHat * Where did I ask for perfection? At this point I just want reference coded example(s) fairly representative of custom biz apps along with clear descriptions on why the OO version is "better". I want to '''see''' the code "being good" with my own eyes, not merely brochure-talk and pet theories. If OO makes it better because it satisfies the Thonkameyerhinkledorf Principle, then show code that uses the Thonkameyerhinkledorf Principle to make a better app and show how and where it makes a better app. (Note that I will apply WaterbedTheory where appropriate, so be ready) -t {As usual, you seem to be conflating some antique notion of (perhaps) object oriented databases and domain modelling ("abstractions that fit nicely into hierarchies" et al) with how OO programming is actually used, and you've yet to demonstrate that "[s]et theory and CollectionOrientedProgramming offer more potential" or define what "offer more potential" actually means. Your explanations appear to follow your own amorphous and unrigorous pet definitions ("TableOrientedProgramming"), and you back away when challenged (PayrollExampleTwo) to show your techniques are as superior as you claim.} ''I did not find your original material very specific either. Thus, the complaint of vagueness comes with a dose of irony. But you did specifically mention inheritance and "types", and that is primarily what I was addressing.'' {I don't know which of my "original material" you're referring to. 
I hadn't written anything on this page until a single paragraph that starts with "As usual ...", above.} * ''I suggest you use a "normal" handle. I am curious why you singled me out for not being specific enough but not the originator? And generally, asking questions is the proper and more-fruitful way to obtain details, not insults. -t'' * {The originator is a FlyingVisitor (until proven otherwise), and despite his poor writing and spelling, I found his points clear and quite resonant. Yours, not. I do not intend my comments to be insults; they were intended to be terse but accurate.} ''As far as PayrollExampleTwo, I think it's clear that the "benefits" largely depend on the domain, environment, and its ChangePattern''''''s. There is insufficient information on both an historical level and language-problem-level to know what allegedly went wrong and why with case/switch. PayrollExampleTwo raises more questions than answers, which I won't repeat here. At best, it's the starting point for further analysis. And, I made no mention of TableOrientedProgramming above.'' {Isn't TableOrientedProgramming your term for a coding style that emphasises "[s]et theory and CollectionOrientedProgramming"?} ''I never claimed it's the only way to apply set theory and COP. As far as "back away when challenged", I never saw an OOP version where a power-user manages payroll formulas/calculations instead of programmers. My example was first. In other words, think like SAP instead of a one-off payroll programmer. -t'' {Having not clearly defined TableOrientedProgramming, it appears you can claim anything you like. As for an "OOP version where a power-user manages payroll formulas/calculations instead of programmers", an obvious way to implement it is to use a spreadsheet package. OO is ideal for creating spreadsheet packages. QED.} * RE: ''OO is ideal for creating spreadsheet packages.'' - this premise is entirely disputable. 
Implementing the ObserverPattern, scheduling updates to avoid glitches, parallelizing updates for better utilization of multi-core processors, allowing laziness to efficiently focus on updating just the visible cells, optimizing the proper granularities for cell-block recomputes and intermediate caching... none of these are features that OO is anywhere near ideal at expressing and achieving. To which alternative are you comparing OO?
* {I'm comparing it to Top's "TableOrientedProgramming", obviously. Have you actually read the discourse, or merely taken issue with a random edit because you're bored and angling for a quibble?}
* ''Thank you for being rude to somebody else besides me. I don't feel so alone anymore. -t''
* {I'm an equal-opportunity insulter.}
* I have followed the discourse. Nonetheless, I'm not at all convinced that, for the purpose of developing spreadsheet packages, OO would be more effective than a more global 30% procedural + 70% relational-algebra approach that roughly corresponds to TableOrientedProgramming. (Where by 'effective' I mean relative performance, validation, verification, and scalability of a simple implementation.) As far as equal-opportunity insults go: I have no expectation of a rational argument from the manchild who fancies himself 'TopMind', but I would hope you do not lower yourself to his standards.
** ''Ignoring the growing insultfest here, I once created a semi-general-purpose spreadsheet using FoxPro because nobody knew how to get a regular spreadsheet to pull it off in a timely manner. Basically, accounting created a one-sheet budgeting spreadsheet as a template. It was about 5 printed pages long. There were about 40 satellite offices and each office had to fill out the one sheet. However, we also needed about 2 levels of summing, such that there were regional sheets and a grand-total sheet. Eventually they hired a spreadsheet whiz who was able to do it in a regular spreadsheet.
Mine made it easier to propagate certain changes, but was otherwise too much code reinventing spreadsheet idioms. It may have made more sense if there were, say, 500 satellite offices instead of 40. Further, if the number of simultaneous editors (users) went up and logging and versioning were needed, a database-like approach would make more and more sense. Using OO for such would only lead to GreencoddsTenthRuleOfProgramming. -t''
* {Your hopes are of no interest to me, but having developed special-purpose spreadsheet packages, I would happily use OO to build them... ...Again. I would also happily use an intelligently engineered relational language with support for reasonably rich user-defined types. That would probably be equal in power -- for some accepted definition of "power" -- to an equally intelligently-engineered OO language. But then, I expect the intelligently-engineered OO language and the intelligently-engineered relational language would be quite similar. However, if we're comparing run-of-the-mill pseudo-OO, like Java or C#, to run-of-the-mill Top-flavoured TableOrientedProgramming (e.g., ExBase), then even pseudo-OO blasts it into the ditch.}
* ''More unproven "types will save the world" evangelism. -t''
* {More Topist computational ludditism in the face of obvious enlightenment and truth.}
* ''Obvious, yeah, right. Types are an outdated gizmo. They just create anti-WYSIWYG e-bureaucracies.''
* Anti-WYSIWYG? That objects to TypefulProgramming, TopMind, rather than to types in general. Perhaps you would favor the sort of pluggable/optional DuckTyping type-systems seen in PLT Scheme and Newspeak (i.e. where the program runs exactly the same without types, so types only exist to help maintain code). Here's a discussion you might enjoy: http://lambda-the-ultimate.org/node/1311 . I disfavor the typeful programming, nominative typing, and ManifestTyping found in many mainstream languages...
but Scala, Newspeak, and other rising languages are succeeding in eliminating the bureaucracy (and hierarchy) of types. (I suspect that if my entire working knowledge of types came from battling them in Ada, Pascal, Java 1.2, and C++, then I'd dislike them as much as you do.)
* ''I prefer type-tag-free programming, like Perl's and ColdFusion's type systems, over dynamic tags, such as PHP's. "Validation" is used to check values when needed, rather than tags.''
** You're using a PrivateLanguage for TypeSystem''''''s that I'm not especially interested in learning. Also, consider reviewing VerificationVsValidation.
** ''See TypesAndSideFlags and EmpiricalTypeBehaviorAnalysis for more on type "tags" (called "flags" in some topics). My working meaning of validation is testing that a variable satisfies some criteria when it's used for a particular purpose. For example, that it has no letters (except maybe "e" for scientific notation) because it's going to be used as a number. Some call it parse-based "type" checking.'' -t
* {Gag. You appear to prefer the behaviour of the two most retchingly-ghastly languages ever to drizzle from the infected bowels of misguided language design. MacromediaColdFusion is Satan's shorthand, and Perl his handwriting.}
* I'm just talking about their type systems here. Other aspects can await other topics.
* {I'm just talking about their type systems too. Other aspects are almost too mind-wiltingly horrid to contemplate. I'd rather have my liver slowly munched by badgers than use MacromediaColdFusion for anything, and I'd sooner gouge out my own gallbladder with teaspoons and use it for a coin purse than code in Perl.}
* ''Most AlgolFamily scripting/dynamic languages are not that much different in terms of productivity, in my experience. They all have their own bright spots, annoyances, and gotchas, and one learns to work around the low spots. Now if you are a heavy user of FP or OO, then the differences may stand out. -t''
''Spreadsheets?
You are kidding, right? How many companies with more than, say, 25 employees run their payroll on spreadsheets? As far as the definition of TableOrientedProgramming goes, I don't think I can offer a rigorous definition, any more than OOP can (other than squeezing it through a made-up model). -t''
{Excel on its own might not be the best choice for a large-scale payroll package, but the spreadsheet approach has certainly been successfully used as the basis for scalable accounting packages. See, for example, N''''''ewViews: http://www.qwpage.com/ }
''Based on the FAQ, it uses databases under the hood. Spreadsheets by themselves are not very good at doing things like storing and querying employee info, payroll histories, etc. You could add that, but you'd end up reinventing a database. I would note that my example sort of reinvents a spreadsheet, so there is a partial truth to your statement. Further, spreadsheets are pretty close to TableOrientedProgramming. Perhaps there's a larger concept here: grid-oriented programming? Tables can be viewed as a constrained grid. -t''
{The underlying storage mechanisms are wholly irrelevant to the debate at hand, and you know it. By the way, the "larger concept" you seek is called "spreadsheet programming", and is an established area of academic research.}
* ''DatabasesAreMoreThanJustStorage. I didn't say they were ''only'' being used for "storage". Database==storage tends to be an OO-centric viewpoint. Anyhow, what does this have to do with OOP? Even if OOP is better at making Excel (which I may even agree with), that does not mean it's necessarily better at programming an app in Excel.''
* {Fine, the underlying '''data model''' is wholly irrelevant to the debate at hand, and '''you know it'''. Stop being evasive. If OOP is better at making Excel, and an Excel-like model is a "better" way to make a user-editable payroll program, then it follows that OOP is better at making user-editable payroll programs.
Etc.}
* ''Please clarify the last "it follows". The best tools for making tools are not necessarily the best tools for making apps. Also, please clarify "underlying". If the "data model" can be made close to the abstractions needed for the app domain, then wrapping them yet again is often unnecessary indirection. Related: AbstractionsTooNear. -t''
* {I'll leave it to the reader to decide; I've got code to write. Furthermore, I just made the tastiest cocktail ever and couldn't resist consuming it in a single gulp. It was highly infused with vodka. Therefore, I estimate I've got about seven minutes of sober cognition remaining, and I've no intention of wasting it here.}
* ''I hope it mellows you out.''
''Re: "Having not clearly defined TableOrientedProgramming, it appears you can claim anything you like." - This has been a problem in past OOP discussions, where it appeared one party was giving OO credit for just about every software invention. But when they are "stretching it", I point it out and then LetTheReaderDecide whether it's really OO or not, rather than accuse the writer of conscious manipulation of terms. We live in a fuzzy world; deal with it. -t''
--------
'''Key Piece of Puzzle Missing?'''
Perhaps the problem is that the description of what OOP "should be" is not sufficient. Most seem to agree that using polymorphism, inheritance, and encapsulation does NOT by itself make software "good". But what does make an OO app "good" has been difficult to describe. OO may describe the brush and the paint can, but it's not describing how to paint a room well yet.
''What can make OOP software "good" is the maintainability of the code base (not just how the application looks and feels). OO can be emulated using procedural code (e.g., the WinAPI), sure. However, sometimes it's better just to use OO with an OO-capable language, so that you don't end up emulating OO. On the other hand, '''API's often need to be public''' and less private.
An API author can't always anticipate what should be private and what should not, due to developers using that API for more interesting tasks than originally planned by the IvoryTower.''
''Moved some discussion to BenefitsOfOo and OopGoesHalfWay.''
----------------
PickTheRightToolForTheJob
If your OO design is getting messy, perhaps you are using it where it doesn't belong. I find OOP useful for modularizing certain "function" groupings, in terms of putting related functions together and managing the variable/attribute scope related to those functions, but bad things happen if I try to force it on every aspect of the application. Sometimes it's just the wrong tool for the job. See OopNotForDomainModeling.
In addition to domain-modelling difficulties, there are also '''scaling problems'''. When working with GUI code, sometimes one needs to group code by widget type, sometimes by widget position, sometimes by event type, etc. There is no One Right Grouping for non-trivial GUI's. This suggests that TableOrientedProgramming may be better for managing the parts of involved GUI's, just as a relational database is better than, say, a strict hierarchy for managing car parts for a manufacturer. Sometimes you want to group by cost, sometimes by location in the car, sometimes by vendor, sometimes by weight, sometimes by material (metal, plastic, and so on), etc. GUI's are not much different; you have event type, widget location (nesting and/or physical location), widget type, attribute properties (such as viewing all titles together), and so forth. However, TableOrientedProgramming is still experimental in terms of integrating behavior (event code) with GUI attributes. It may take a different language-design approach to pull it off. A ripe area for research projects. See also PowerfulCodeEvalDiscussion.
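As a minimal sketch of the "no One Right Grouping" argument, consider keeping widget definitions in a relational table and regrouping them with ordinary queries. All column names, widget names, and handler names below are invented for illustration; this is not a working GUI, just a demonstration that one table can serve the by-type, by-event, and by-position views that a fixed class hierarchy would have to privilege one of.

```python
import sqlite3

# Hypothetical widget table: one row per widget/event binding.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE widget
    (name TEXT, wtype TEXT, panel TEXT, x INT, y INT,
     event TEXT, handler TEXT)""")
con.executemany("INSERT INTO widget VALUES (?,?,?,?,?,?,?)", [
    ("okBtn",   "button",  "footer", 10, 200, "click",  "save_form"),
    ("cancel",  "button",  "footer", 90, 200, "click",  "close_form"),
    ("nameBox", "textbox", "header", 10,  10, "change", "validate_name"),
    ("ageBox",  "textbox", "header", 10,  40, "change", "validate_age"),
])

# Grouping 1: by widget type (the view a class-per-widget design favors).
buttons = con.execute(
    "SELECT name FROM widget WHERE wtype='button' ORDER BY x").fetchall()

# Grouping 2: by event type (an event-dispatch view).
changers = con.execute(
    "SELECT name, handler FROM widget WHERE event='change'").fetchall()

# Grouping 3: by position/panel (a layout view).
footer = con.execute(
    "SELECT name FROM widget WHERE panel='footer'").fetchall()

print(buttons)   # [('okBtn',), ('cancel',)]
print(changers)
print(footer)
```

Each query is just another projection of the same rows; adding a new grouping (by vendor, by tab order) is a new query, not a restructuring of a hierarchy.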
(Note that a "live" table may not necessarily be needed during run-time; it could perhaps be used for "static" GUI code generation, although a run-time table may offer more flexibility.) -t
----
See also: IsObjectOrientationMoreComplex, OoEmpiricalEvidence, ArgumentsAgainstOop, PeopleWhoDontGetOo, OoIsPragmatic, LearningProgrammingLanguages
----
CategoryOopDiscomfort, CategoryObjectOrientation
----
AugustTen