(Based on material from ProblemsWithExistingOopEvidence)

Evidence in OOP literature often fails to address realistic "granularity of variation". In practice, when new features appear, the difference between things is usually smaller than the method's granularity, so one ends up needing to override, say, 1/3 of a method. The XP crowd simply shouts "just refactor the granularity to be smaller", but that is not change-friendly code. Thus, either withdraw the "easier to change" claim, or justify having ever smaller and smaller methods, plus the effort to shrink them. An IF statement around the area of difference appears far simpler than re-shuffling all the related methods. Something smells here. -top

I have rarely seen a feature that can be added by overriding 1/3 of one method, except when that method is GodCode (like the evaluation function for a language). Are your pet variation-patterns somehow more realistic? What sort of features are you imagining? -BlackHat

Examples of 1/3 overrides can be found in DeltaIsolation. -top

What feature is being added in the DeltaIsolation example? -BlackHat

I suspect also that you assume some sort of context (for a definition, see ExplicitManagementOfImplicitContext) to support something like a FeatureBuffetModel in procedural. That same pattern can be used easily enough in OOP (e.g. via ContextObject). Language support for the class of CrossCuttingConcern''''''s addressed by context is pretty much orthogonal to both OOP and procedural. -BlueHat

But ContextObject becomes a '''big wad of pointers''' for non-trivial apps. I'd rather manage all that as table-based sets instead of pointers. I'd rather use a database to manage them so that I can search, sort, group, report on, etc. in different ways. -t

Eh? ContextObject does not imply NavigationalDatabase, nor does it need to introduce coupling in the language interface. A traditional use of ContextObject will have it carry only encapsulated interfaces... e.g. 'console' will access an object with the console interface, and so on. This is far more similar to a flat table of pointers. In any case, you aren't rid of the need for context by using a table for context. -BlueHat

I don't have a photographic memory, and I want machines to help me track the relationships between stuff because my brain has weak areas. Is OOP only for those who have photographic memories? -t

StaticTyping will allow your machine to help your poor, weak brain track the relationships between stuff. ;) -YellowHat

Non-trivial OOP tends to degenerate into a hand-rolled NavigationalDatabase. Pointery techniques are one reason why the NavigationalDatabase fell out of favor, so why is it trying to come back as object pointers? With enough indirection (usually "pointers"), OOP is indeed very flexible. However, I find it difficult to surf, sift, and manage all that indirection compared to the same or similar info in an RDBMS-like tool. Please describe how you manage it as OOP code. -top
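To make the opening "granularity of variation" complaint concrete, here is a minimal sketch (a hypothetical report-footer feature, invented for illustration and not taken from DeltaIsolation; Java is used only as an example language). The first version puts an IF around the area of difference; the second is the refactor-then-override style being disputed, which only works once the varying third has been extracted into its own method.

 // Hypothetical example: a report whose footer differs for one new "audit" feature.

 // Approach 1: an IF around the area of difference, inside one method.
 class Report {
     private final boolean auditMode;              // the feature toggle
     Report(boolean auditMode) { this.auditMode = auditMode; }
     String render(String body) {
         String header = "=== Report ===\n";
         String footer = auditMode                 // the delta: roughly a third of the method
                 ? "\n[audited copy - do not distribute]"
                 : "\n[end of report]";
         return header + body + footer;
     }
 }

 // Approach 2: refactor the varying third into its own method, then override it.
 class BaseReport {
     String render(String body) { return "=== Report ===\n" + body + footer(); }
     String footer() { return "\n[end of report]"; }
 }
 class AuditReport extends BaseReport {
     @Override String footer() { return "\n[audited copy - do not distribute]"; }
 }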
''OO databases need to be '''strongly''' distinguished from OO programming. OOP != OODB. They may have the same conceptual basis, but are otherwise as different as chalk and cheese. OO as a means for representing a data model may well turn into an ad-hoc NavigationalDatabase, but the voices who advocate this approach (i.e., creating a DomainModel) are growing steadily weaker in favour of those using OOP to represent invariant computational abstractions.''

''Think of OOP like this: a single procedural program can be defined in terms of inputs, outputs, and internal state. Now imagine that you create a bunch of little procedural programs to serve various purposes. Each one will have its own inputs, outputs, and internal state. To create a bigger program, you simply (!) hook the little programs together. Each little program stands "on its own", so to speak, but all participate in a greater whole. That greater whole, of course, is itself a program that can be defined in terms of inputs, outputs, and internal state. It too can be connected to other "programs" to form an even bigger program.''

''That, in a nutshell, is object oriented programming. Each instance is a little procedural program that serves a well-defined purpose. To create a bigger program, we hook them together. Having to surf, sift, and manage the connections or relationships between them '''is''' an issue -- I won't deprecate that -- however, it's nowhere near the issue that it appears, because a good OO programmer endeavours to connect the little programs (i.e., instances) to each other in a logical, intuitive, and manageable fashion. And, as noted above, StaticTyping certainly helps identify and control what can be connected to what. There is never a need to worry about "navigating" the object graph, because (a) each instance communicates only with its neighbours; and (b) we are effectively creating a data-processing machine, '''not''' a network of data that needs to be queried.''

''As an aside, note that FunctionalProgramming is exactly the same idea. We merely eliminate "internal state" from the above, and concern ourselves purely with little programs (i.e., functions) that have inputs and outputs.''

Almost all paradigms/techniques fit that description, such that it's hard to compare based on that alone. Even the human body is like that. The devil's in the specifics of how similar things (variations) are related and how they are monitored and managed. -t

''I'm now wondering how it is you believe you can compose two human bodies into a larger one... but I'm not sure I wish to know.''

Siamese twins! I generally meant from the cell and organ perspective, where the human body is compared to an app.

----

''"Almost all paradigms/techniques fit that description such that it's hard to compare based on that alone."''

Notably, pure procedural programming, by and large, does not fit that description.

''If you mean TopDownDesign (StepwiseRefinement), then I'd somewhat agree (but with caveats that I won't bother with right now). However, with a fair amount of DB usage and/or a GUI framework, it usually becomes more like EventDrivenProgramming. -t''

No, I don't mean TopDownDesign. I mean that pure procedural programming, especially without a user-defined type system of reasonable capability, is barely scalable. It tends to form a foaming sea of procedures, all accepting and returning arbitrary primitive types and differentiated only by name. This is alleviated somewhat by StaticTyping and proper use of user-defined types, but then it tends to border on OOP sans inheritance and polymorphism, both of which tend to get implemented (out of necessity) using a variety of brittle ad-hoc mechanisms.
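As a minimal illustration of that last point (the Money and TaxRate types below are invented for this sketch; Java is used only for illustration), small user-defined types let the compiler track relationships that a sea of primitive-typed procedures leaves to the programmer's memory:

 // With only primitives, nothing distinguishes a tax rate from a discount
 // percentage, or a price from a quantity:
 //   double priceAfterTax(double price, double rate) { ... }
 // Callers can swap or misuse the arguments and the compiler stays silent.

 // With small user-defined types, the machine tracks the relationships:
 final class Money {
     final long cents;
     Money(long cents) { this.cents = cents; }
     Money plus(Money other) { return new Money(cents + other.cents); }
 }

 final class TaxRate {
     final double rate;
     TaxRate(double rate) { this.rate = rate; }
     Money applyTo(Money amount) {
         return new Money(Math.round(amount.cents * (1 + rate)));
     }
 }

 // new TaxRate(0.08).applyTo(new Money(1000)) cannot have its arguments
 // confused; passing a TaxRate where Money is expected is a compile error.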
''Uuug. This is turning into yet another static/strong-typing lecture, it seems. I almost decided to dissect your original description above to see how you found a way to turn it into a typefest, but then realized it would just be deja vu.''

That's because your understanding of TypefulProgramming is weak, saggy, and slightly damp in one corner.

''No, it's because you mistake personal preferences for universal truths, and falsely assume the whole world can be "properly" classified up-front so that it can be compiled and pre-checked such that everything runs all nice and smooth. (Well, maybe it could, given a million years of man-power.) I believe you to suffer from PurityAddiction. By the way, HowToSellGoldenHammers.''

The '''computational''' world can certainly be "properly" classified up-front so that it can (optionally) be compiled, and pre- and post-checked, so that everything runs all nice and smooth. It is the "real" world where this is a problem, hence the appropriate deprecation of OO for DomainModel''''''s. Don't try to use OO to represent data; use OOP to create programs to process data.

* ''So SmallTalk isn't "real" OOP in your book?''
* Where did I write that?
* ''It is dynamically-typed.''
* That doesn't mean it isn't "real" OOP.

[Purity is good stuff, duuuude. Put some in your pipe and smoke it.]

[Anyhow, it might help to make a cleaner distinction between two different KeyLanguageFeature''''''s: '''structural aggregates''' and '''typing'''. These features often get confused. Even without user-defined types and typing, support for structural aggregates (like tables or dynamic tree-structured values, as TopMind has advocated in the past) is sufficient to avoid the "foaming sea of procedures". ForthLanguage does well enough even without TypeSafety: it implicitly passes a whole stack from procedure to procedure (or word to word), plus it allows references to external blocks (effectively a heap) which may contain arbitrarily structured aggregates.]

[For reasons unrelated to typing, "pure procedural programming" is incapable of efficient scaling to multiple processors and long-lived programs, at least without support for "process objects" with both waits and signals/events from an OperatingSystem's kernel, at which point it is difficult to argue one has "pure" procedural programming. Thus, it is correct to say that pure procedural programming scales very poorly, albeit by a metric different from the implied "scaling in terms of adding more and more procedures".]

* ''I agree that "pure" procedural is limited. But fortunately it has a Yang to its Yin, such as relational. They complement each other well.''
* [I don't believe 'relational' helps procedural programming scale to multiple processors and long-lived programs. Is there a reason you brought it up at this point?]
* ''Of course it does. Look at a typical web app in, say, PHP: thousands of potential processes talking to the same DB (data-set). It came to fame with client/server designs. It is sort of like EventDrivenProgramming, where each event is like a little sub-program. ACID is a very nice multi-process coordinating technique. As far as "long lived", please be more specific, such as showing a scenario that P/R cannot handle. I ain't gonna let you make shit up without a demonstration/example.'' -t
** [(1) ACID is not relational (i.e. one can have ACID without relational and relational without ACID). (2) A typical web application involves multiple process-objects to manage the inputs and connections, and so is not 'pure' procedural. The 'programs' are serving as message-handlers in what is overall an OO system. If you were even moderately competent, you'd have realized that '''"at least without support for 'process objects' with both waits and signals/events from an OperatingSystem's kernel"''' would, in fact, include the web server and multiple process-objects running different programs. But, clearly, you're incompetent.]

* What makes "pure" procedural more complementary to using the RelationalModel than (say) OO or functional programming?
* [I've been wondering that as well. MercuryLanguage makes a good combination of functional+relational. Most OO languages use procedural message-handlers and use objects to abstract connections, so they're at the very least ''not any less'' complementary to relational.]

[Admittedly, where one places the bar for achieving the description "scales well" is an open question. My work towards WikiIde has me fretting over how to get '''ten-THOUSAND programmers''' using+sharing+updating+refactoring a common base of code and distributed services. I've pretty much convinced myself that this level of scalability isn't going to happen without '''both''' StaticTyping and ZeroButtonTesting. On the other hand, if achieved, I think I'd give it a rank above "scales well". Perhaps "scales very well".]

[That said, there are actually quite a few paradigms that do not allow PrimitivesAndMeansOfComposition. "Pure imperative" - which I'll immediately clarify as meaning ''"imperative without procedures, recursion, and local variables"'' - is one of them. I'd further say that ''"pure OOP"'' - meaning ''"OOP without object-graph or DependencyInjection configuration support"'' - is another. One really needs to add an object-graph configuration language (similar to a DataflowProgramming language or a DependencyInjection framework) before OOP becomes composable.]

[I've '''never''' been an advocate of "EverythingIsa". You can't bake a cake with just pure flour, just pure sugar, or just pure baking soda. But I am a fan of purity in the sense that I like the flour, sugar, and baking soda that go into my cake to be free of contaminants. I also like multi-layer cakes with some structure to them (frosting or crumbs on top, fruity stuff in the middle, layers of cake between) rather than cakes formed of a big homogeneous fluffy substance. Err... well, I'll get off that analogy before I take it too far. But, for language designs, I believe that layered solutions with pure ingredients are where it's at. OOP, at least in my favorite design thus far, gets layered '''beneath''' object configurations and '''above''' procedures (which are themselves above data-types, pure functions, and DomainValue''''''s). Since the objects are a 'pure' ingredient, they only allow MessagePassing (thus preventing coupling to attributes and structure).]

[When TopMind gets on his soap-box and rants about the failures of OOP in DomainModelling as though one must accept "EverythingIsa object" prior to accepting OOP, I feel 100% justified in accusing him of hypocrisy, and in pointing at his common references to CeeIsNotThePinnacleOfProcedural and his tendency to compare the one-punch 'EverythingIsa object' to his favored two-punch combination of 'procedural '''plus''' relational'.]
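As a concrete sketch of the "object-graph configuration" layer mentioned a few paragraphs above (hypothetical interface and class names, invented for illustration; Java stands in for whatever configuration language or DependencyInjection framework one prefers): the classes know only the roles of their collaborators, while a separate composition step decides how the graph is wired.

 // The role a collaborator plays, independent of any concrete class.
 interface MessageSink { void accept(String message); }

 class ConsoleSink implements MessageSink {
     public void accept(String message) { System.out.println(message); }
 }

 class Uppercaser implements MessageSink {
     private final MessageSink next;           // collaborator injected, not constructed here
     Uppercaser(MessageSink next) { this.next = next; }
     public void accept(String message) { next.accept(message.toUpperCase()); }
 }

 class Wiring {                                // the composition root / object-graph configuration
     public static void main(String[] args) {
         MessageSink pipeline = new Uppercaser(new ConsoleSink());
         pipeline.accept("hello, objects as little programs");
     }
 }

The wiring in Wiring.main could equally be expressed in a DependencyInjection framework or a DataflowProgramming-style configuration; the point is that the composition lives above the classes rather than inside them.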
* ''A majority of proponents I encounter generally do approach it from an everything perspective. But I suppose it's also true that the "everything" crowd is more likely to be vocal and defensive. But in general, the OopNotForDomainModeling view has failed to gain a mainstream foothold so far.''
* Based on what evidence?
* ''Didn't we have this discussion already in OopNotForDomainModeling? This is a personal observation, which is perfectly allowed on this wiki. If you choose to reject that evidence, feel free.''
* [Based on everything I've ever read on UML, OOA+D, etc., the education is generally to model "software systems" - model the sources+sinks (the actors), model the inputs+outputs, and model the process or 'flow' within the program and between program-objects. DomainModelling as part of RequirementsAnalysis is performed, but can (and, many people recommend, should) be done independently of the software design so as to avoid premature commitment to any particular design. While I recognize you have observed people vocal in their support of OOP for domain modeling (as I have observed the same), I believe this to be more the result of confusion in the educational process, especially the use of analogies to 'real world' objects to teach what proper classes are, as opposed to starting by teaching software patterns (FunctorObject''''''s, Factories, DependencyInjection, and the like).]
* Speaking from experience, I know that teaching software patterns first results in a room full of baffled expressions. A handful of students will "get it", but the rest will generally find it easier to grasp OO by '''starting''' with some concrete, "real world", but not particularly realistic (in terms of actual software development) examples. Unfortunately, many pedagogical processes (textbooks, classes, etc.) stop there. That, plus abominations like ObjectRelationalMapping, has tainted the perception of OO to a considerable extent, and somewhat tainted its actual application.
* [Then perhaps we should start with DataflowProgramming and ServiceOrientedArchitecture, thus making the purpose of objects obvious by their use in such contexts. Maybe a classless language would be a better start (e.g. Scala or Erlang), so as to focus on MessagePassing (AlanKayOnMessaging) rather than on object structure. Rather than focusing on what an 'object' is, focus on the 'role' of an object. It might be worth a try, since it seems you're in a place to do so.]

[EverythingIsAnObjectIsNotThePinnacleOfOop, nor is OOP that defaults to stack-based MessagePassing. OOP greatly benefits from concurrency and asynchronous MessagePassing (necessary to support EventDrivenProgramming), greatly benefits from a separate type for messages (either immutable values or mutable data-structures with uniqueness ownership - these support concurrency and distribution), greatly benefits from an object-graph configuration language (as per DataflowProgramming and DependencyInjection), and greatly benefits from support for a standard library of 'data service'-objects including databases and DataDistributionService.
The degree to which EventDrivenProgramming is a common design pattern in OOP (and, really, is a whole basis for ActorsModel and OOP in the first place) is clear evidence of this latter feature: DesignPatternsAreMissingLanguageFeatures, and EventDrivenProgramming requires ''initial'' sources for events, of which DataDistributionService or DataDeltaIsolation+PublishSubscribeModel are excellent high-level sources, providing a very useful indirection compared to abstracting hardware directly (DDS supports redundant sources of information at different levels of quality, allows for level-of-detail, supports sharing of information by multiple consumers unlike direct access to hardware, allows one to further abstract data-fusion as an initial data source, etc. - a lot of benefits from just one layer of indirection).]

----

See also: DeltaIsolation

----

AprilZeroNine