Short WikiName for a seminal paper by DavidParnas in 1972, titled ''"On the Criteria To Be Used in Decomposing Systems into Modules"''. See:

* http://www.acm.org/classics/may96/
* http://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf

If anybody knows where the actual '''source-code''' is, please let us know. Thanks.

''Huh?''

The computer programming listing/text source-code that was used in this study. You know, IF-statements, functions, variable declarations, WHILE-loops, etc. I'm sure the two approaches compared are not the only ways to program it.

----
Various comments:
----

As a RelationalWeenie, I see the paper as justifying the need for a DataBase of some kind (probably a NimbleDatabase in this case) rather than hand-building your own arrays and linked lists of objects/nodes to house and manage the data. Parnas' favored solution looks like a mini-database (non-relational) of sorts to me. However, the paper lacks specifics about the application requirements, so it is hard to get into a point-by-point comparison. I suppose this is kind of an ink-blot test where OO fans see ADT-like encapsulation and database fans see it as promoting databases to hide and reduce reinvention of the innards of indexing, querying, and sorting. This paper probably won't settle any HolyWar. -- top

----

This is a great paper. This doesn't mean it settles any HolyWar. Papers can only start holy wars, not settle them. -- RalphJohnson

''It is not a great paper because:''

* The sample application is not well-defined.
* Its conclusions have no clear-cut message. It does not answer questions such as what to do when arrays become too complex: OO, databases, or both? Databases hide CollectionOrientedProgramming idioms behind standard rules and interfaces, and OO hides them behind custom-built class interfaces. This paper does not contrast these solutions to hiding collection-handling details; it only suggests that a solution is needed.

The only clear-cut message I find in this paper is that '''arrays suck''' for non-trivial apps. No news there. The real battle is what to do about it. -- top

----

While 'top' is certainly entitled to his 'opinion', I urge all who have not read this paper to do so for yourselves. For one thing, the paper dates to 1972. It is therefore not surprising that it does not provide an empirical comparison of OO vs. anything. OO was still a research project at the time, and indeed so was most of modular programming. It is a paper about principles and the exercise of professional judgment, it is a seminal paper, and it is quite possible that it will be far less opaque to others than it was to 'top'. In any case, until some magic methods appear which can be applied without professional judgment - whereupon we can all look forward to permanent unemployment as programming becomes fully automated - it may be useful to peek inside the minds of some of the greats who have come before us. We could even come to realize that the exercise of professional judgment is part of any non-trivial discipline, and embrace it. -- MarcGrundfest

''Perhaps there's a more fitting topic already, but heavy "fully automated programming" is a pipe-dream in my opinion (short of human-like AI) because of the inherent fuzziness of requirements and the impact of not understanding the longer-term implications of certain info organization decisions.
Software developers are not just coders, but also a (hopefully) skilled '''liaison''' between machines (literal logic engines) and humans, in order to make the link between the two as effective and practical as possible. I doubt a non-human liaison will understand the human side of things sufficiently.''

First, are we to fail to make a distinction between creating requirements and generating code? If we make the distinction, then we have automatic programming already. If we do not, then we may yet become 'programmers' who do only requirements analysis. The context above is that 'top' appears to claim that the principles of design upon which we rely are not 'scientific' because they are not objective. In practice, this means that we need to apply professional human judgment. I deny the claim that the application of judgment is a disqualification for 'Science' (strong claim) and then bypass the issue to suggest that even if true we need to learn, understand and apply these principles (weak claim), and that the paper cited is a great paper because it allows us (most of us) to do that. I am not claiming that human judgment can be removed from the practice of software development in either the strong or weak form, and did not intend to create that impression. In fact, my point is the opposite. -- MarcGrundfest

''You are mischaracterizing my viewpoint. I don't claim that software engineering is inherently beyond science; only that the necessary science has yet to be done and that it won't be easy to "do it right". The existing material tends to focus on the low-hanging fruit; to get a better sample of the tree will require higher samples and more samples. -- top''

OK, but what I am saying here is that it does not matter at all whether it is Science or not. If it's not Science and never can be (strong form), or if it could be but is not yet (weak form), then in either case we must learn and apply the principles in this paper. The more you think they are not Scientific (yet), the more judgment is needed, and the more you need to understand the paper rather than dismiss it. If we are to become a Science (weak form), we will do so by studying the greats who have come before. If we can never become a science (strong form), then we must still learn and apply the principles in this paper to have any hope of doing our jobs. I am sorry if you think otherwise, but I am not all that concerned - I am concerned that others have the opportunity to read the paper and judge for themselves. I do not agree with your characterization of the paper, but I do not dispute your right to hold your views. Neither do I think it's a fruitful use of our time to debate the issue. I am content to allow the paper to stand on its own merits (or lack thereof) and I trust others to form their own opinions. -- MarcGrundfest

''The devil is usually in the details, and without source-code we don't have sufficient details. How can one test competing theories if the skull is locked away in somebody's drawer?''

I guess I will just have to trust the other readers to have better luck. Most of them seem to manage. Of course, you could be right - but once again I am not all that concerned. What I really do not understand is why you seem to care. If, as you say, the paper is not a great paper and offers little of value, how can it possibly bother you that others disagree? I mean, if I thought the rest of the world was wrong, I would look for a way to leverage that to my advantage, not give away the store. You are truly a mystery, top.
Now I care only that others judge for themselves. Do they have that right - the same right you treasure so for yourself? Can we at least agree that they have the right to believe it's a great paper even if you do not? -- MarcGrundfest

''Without seeing the actual source-code, I find it hard to verify or comment on the write-up alone. If you believe the write-up alone is sufficient, we'll just have to AgreeToDisagree and move on. I'd really like to see if modern tools/techniques can give us more choices and better designs.''

Now are we agreeing that others have the right to have a different opinion of the paper? Or are we agreeing about the fact that the paper is in dispute? We already know that the paper is in dispute. What I want to know is why you think that others can't disagree on the paper unless they argue with you first. I am OK if you want to think it's a bad paper. I am OK if you want to think that others are fools if they think otherwise. I am OK if you think you have the right to tell others it's a bad paper (which you have in your original comment). I do have a problem when you can't say the words "I am OK if you think it's a good paper, and I think it is a good thing for other people to read it and decide for themselves." Can you write back, as a show of good faith, as follows? I will do the same (in fact, I will do it first): I agree to let others decide for themselves whether this is a good paper or not. I trust their judgment and recognize their right to do so - the same right that I have. -- MarcGrundfest

''I made my comments based on the limited information available. It's like commenting on Jupiter using an Earth-bound telescope. I'd like to launch a probe to Jupiter called "Actual-Source-Code I" to verify Earth-based observations and/or challenge them with new theories based on closer observations. We already know that Earth-based observations are not giving us a very clear view. But just as astronomers still commented on Jupiter before the days of probes, one can still comment on the paper without a source-code probe. Nobody was fired over that. If probes don't exist, they don't exist. We do the best with what's available. But it '''cannot qualify as a "great paper" without source code''' in my book. Unlike astronomers and probes, the lack of code is author/publisher sloth, not a technical limitation. You want the "great" badge, you need the code. -- top''

I do not know what you are talking about. If I am very lucky, I never will. -- MarcGrundfest

''The feeling is mutual. I have no ability or desire to stop others from commenting on the paper. I am merely cautioning against over-interpreting incomplete information. (I've since tried to improve the above paragraph to hopefully make it clearer.)''

[It shouldn't come as a shock that there's a good reason for not including the source. It's irrelevant, and in 1972 you didn't include irrelevant things that would increase the size of the paper by an order of magnitude.]

Why would it be irrelevant? There are probably infinite ways to code to the spec. Why should incarnation X be representative of all other possible implementation incarnations of a given approach "type"? For example, some algorithms are greatly simplified by use of associative arrays or tree arrays over positional arrays. Back in 1972, associative arrays were uncommon.

{Good heavens.
The paper no more needs to show source code than a blueprint of a house needs to show individual nails.}

[It's irrelevant because it doesn't matter which of those infinite ways it's coded, as long as it's one of those infinite ways that matches the decomposition described.]

Please tell me that the following quote is NOT tied to actual code/language:

: The formats of control blocks used in queues in operating systems and similar programs must be hidden within a "control block module." It is conventional to make such formats the interfaces between various modules. Because design evolution forces frequent changes on control block formats such a decision often proves extremely costly. [page 1056]

Back in 1972, it was common to include the record "shape" and/or schema in each using module or routine. We generally don't need to do this in modern systems. We can address only the cells relevant to the routine if need be. And database "views" can hide some of the more extreme schema changes.

[Sure thing. That quote is NOT tied to actual code/language. It's tied to a property that a particular piece of code may or may not have. With regard to the rest of what you said: you write code that doesn't have that property. Good for you. That doesn't change the consequences of writing code that does have that property. Therefore, that too is irrelevant.]

[[Persons attempting to understand the above should see TopMind.]]

How about this: if you have a question about the above, please ask. Jeez!

[What makes you think double square brackets had a question?]

AdHominem attacks are unprofessional.

{It is telling that you consider a simple reference to your chosen nom de guerre to be an AdHominem attack.}

----

The paper fails on the grounds of GoodMetricsProduceNumbers. For example, it commonly uses the pattern of "if you do X, then Y is easier, but harder than if you did Z", and so forth. However, it says nothing about the probability of X. It '''matters little to the real world''' whether X makes Y easier or harder if Y rarely happens. If it's free to avoid something, then please do avoid it; otherwise, use the higher-probability scenarios to decide. Even if they used "guessed" probabilities based on surveys, it would have more ties to the real world. And readers could plug in numbers that better fit their domain, if need be, if formulas or probability trees with variables were given. '''Probability is paramount to deciding between design options''' when maintenance is a key focus, and many keep ignoring this. SoftwareDevelopmentIsInvesting.

OOP fans often use this paper to suggest that one should wrap all access to data-structures in OOP or at least ADT accessors, on the grounds that it "hides implementation". But what if the implementation is not likely to change that often (it depends on the domain)? Accessors can add code bloat, and code bloat slows down reading and changing of code in many cases. It creates eBureaucracy. Thus, if the probability of changing the underlying implementation is low, then the "bloat tax" overrides the benefits of reducing the cost of an implementation overhaul. Plus, what seems like an "implementation" is often really an interface. If one uses SQL to talk to the "data structures", that's not an implementation, it's an interface. And it's one that's more flexible than OOP accessors, which often end up reinventing CollectionOrientedVerbs and expressions for each of the domain nouns involved, which is a violation of OnceAndOnlyOnce. Accessors are just not powerful enough to be flexible.
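
For readers who want something concrete to chew on, here is a minimal sketch of the "control block module" idea from the quote above. It is illustrative Python only; the ControlBlock class, the field layout, and every name in it are invented here, not taken from the paper. The point is simply that callers never see the block's format, so a format change stays inside one module; a database view, as noted above, can play a similar hiding role for record shapes.

 # Illustrative sketch only: the layout and names below are invented.
 import struct

 class ControlBlock:
     """Hides the storage format of a queue control block behind accessors."""

     _FORMAT = ">HHI"   # version-1 layout: priority, flags, owner id (big-endian)

     def __init__(self, raw: bytes):
         self._priority, self._flags, self._owner = struct.unpack(self._FORMAT, raw)

     def priority(self) -> int:       # callers depend on these methods,
         return self._priority        # never on _FORMAT itself

     def owner(self) -> int:
         return self._owner

     def pack(self) -> bytes:
         return struct.pack(self._FORMAT, self._priority, self._flags, self._owner)

 # If the block later gains fields or reorders them, only this module changes;
 # modules that merely *use* control blocks are untouched.

Whether such hiding is worth its keep is exactly the probability question raised in the surrounding discussion; the sketch shows what the advice looks like, not whether it pays off.
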
A bigger concept is missing from ADTs and OOP encapsulation. It '''may be cheaper to reinvent the module wheel than to reinvent collection idioms'''. In 1972 they didn't really know how to package common collection-oriented idioms. In that scenario, wrappers may have made sense. But again, the real answer to that and related claims lies in the actual future change requests. And since we cannot know the future when designing, it's all back to probability (above), and this paper does diddly-squat to address that.

: '''Conclusion'''
: We have tried to demonstrate by these examples that it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others. Since, in most cases, design decisions transcend time of execution, modules will not correspond to steps in the processing. To achieve an efficient implementation we must abandon the assumption that a module is one or more subroutines, and instead allow subroutines and programs to be assembled collections of code from various modules. [pg. 1058]

Yes, "likely to change" is important, but they didn't prove what's likely to change, and/or they tied their estimate to a specific implementation and/or language which the reader can't see. Show the actual change in the actual code, and the customer request scenarios used to estimate probability, and I might perhaps start to agree with their faulty generalization.

[It's not surprising they didn't prove what's likely to change. What's likely to change, itself, undergoes changes. By not tying their conclusion to a particular set of changes, or a particular implementation, they can produce a conclusion that is actually meaningful. If they had tied it to a particular implementation, then it would only apply to that particular implementation. Since I'm unlikely to ever need exactly that implementation, it would be unlikely that the result would ever apply.]

I only half agree. If your "implementation" allows you to add new features or behaviors with minimal effort, then there may be net benefits to it. For example, if your "implementation" is a relational database (assuming for the moment that it is even an "implementation"), then you can fairly easily get new collection-oriented behaviors/idioms (COBI) with very little new coding. If you use pure encapsulation and/or ADTs, then you could possibly have to reinvent many COBIs from scratch if you swap implementations. One may have to do, say, 10 times the coding to switch from SQL as the primary underlying processing mechanism to C positional arrays. Most businesses with custom software are NOT going to go down that route. Obsessing over swap-ability is meteor insurance. If you are using a high-level language and it's working well for you, it's stupid (poor resource balancing) to dump it for a lower-level language and/or wrap everything behind methods/ADTs just "in case" you swap in the future. See DecisionMathAndYagni for ways to apply estimates to such decisions. In other words, the power of sticking to a powerful and flexible implementation may '''exceed the benefits of implementation swap-ability'''. The encapsulation/ADT crowd often overhypes the benefits of implementation swap-ability as a general goal. I agree it is a characteristic to strive for, but not necessarily a top priority for many domains.
The only way to determine that is through informed domain analysis and trade-off weighing. And 1972 didn't have enough high-level tools available for this to be an issue to consider. - t

[[Do you know why we call this paper a seminal paper? Do you think that the lack of tools in 1972 may be explained by the practices that Parnas is critiquing? If not, then how frustrating it must be for you. I take great satisfaction in that observation.]]

Who knows? An alternative universe doesn't exist to test. Given linked lists and positional arrays as one's only real choices for collection and attribute management, it's probably a step up. I applaud the paper for presenting alternatives to consider, but the actual selection requires (at least) economics and investment-like decision-tree analysis. (Such trees are also found in GameTheory.)

[[So trees are not collections, hash tables are not collections, bit-vector sets are not collections... Wow! Just how do you think RDBMSs are implemented? On second thought, let's just call that a rhetorical question...]]

The change patterns for SystemsSoftware are generally different from those of custom application software. As far as this paper goes, we don't have enough info to know who the target audience is. Is it a kit to be sold/used by hundreds of different organizations? Is it a single in-house project? Will it be boxed and sold to thousands? - t

[[We know that you are not the target audience - it's a research paper. We know that your failure to generalize is debilitating beyond redemption, so why do you insist on misinforming others? Can't you let the rest of the world go on its merry way without your interference? Does it bother you so much that others are quite capable of reading, understanding, and applying the lessons of Parnas?]]

No, no, no - I meant the target audience of the word-processing system described in the paper, not the target audience of the paper itself. I don't understand how you confused the two, because the context was a software product. And I'll pick probability trees over Parnas any day. They are closer to the ideal of GoodMetricsProduceNumbers. Parnas was too chicken to get into number-land. And I am not entirely dismissing the paper. It's good as a design suggestion candidate, but '''NOT as a universal rule'''. Absolute claims require stronger evidence. - t

[[For the love of god, NO SUCH CLAIM IS MADE. This is the purpose of the paper in his own words: "Usually nothing is said about the criteria to be used in dividing the system into modules. This paper will discuss that issue and, by means of examples, suggest some criteria which can be used in decomposing a system into modules." Page 1, col. 2, par. 3, under the Introduction. That's it!!!]]

* You guys are the ones implying it is scripture, not necessarily Parnas.

{Your accusation that "Parnas was too chicken to get into number-land" might have some weight if you were known for your rigorous statistical analyses, especially if they regularly punched holes in both intuitive and logical conclusions. As you're not, your claim is naught but hot air. Parnas is demonstrably worth reading. You are not.}

It's not my burden to put numbers on Parnas' claims. As for my claim that excessive reliance on wrappers can make a system less flexible due to the anti-collection-orientation of APIs, I'm merely putting it out there as something to '''ponder, consider, and discuss''', similar to what Parnas did in the paper. If you instead treat Parnas' paper like a Bible or Koran, then we won't get anywhere, will we?
And again, I couldn't put numbers on the probabilities without knowing the environment of the software. ItDepends. As far as systems I've built and/or maintained that used an RDBMS as the primary "data structure" go, the probability of tossing the RDBMS and going to something like files or linked lists is roughly around 1% per year. Switching to another vendor's RDBMS is roughly 4% per year (and heavy wrappers would have resulted in an insignificant reduction of code change anyhow, because of the high number of single-use or low-use SQL sections, assuming reasonable factoring of repeating concepts was already done). And I do apply FutureDiscounting to my decision calculations at a rate of around 10% to 20% a year, depending on the biz at hand. - t

{No one is asking you to put numbers on Parnas's claims. We don't need numbers to grasp the simple way he points out a strategy that is intuitively obvious once it has been clearly articulated. However, it would give '''your''' claims '''much''' more credibility if you were to practice what you preach. How many times did you find "excessive reliance on wrappers" or "anti-collection-orientation of APIs" ''anywhere'' in his paper? How do you quantify "excessive"? How did you obtain "3%", "10%", "20%" and "1%"? No, don't answer that -- I know where you got them: you pulled them out of your ass in order to produce a thin (and meaningless) veneer of quantification.}

* You are still not getting the "number thing". As explained in GoodMetricsUseNumbers, numbers are often a useful '''communications device''' even if they are ''not'' backed by an official double-blind certified published study. If I estimate the probability of an event at 30% and your estimate is 35%, then generally there would be little reason to stop at that point: it's reasonably close agreement. There's no point in arguing about something we agree on, even if it's not backed by an official double-blind certified published study. However, if my estimate is 3% and yours is 30%, then we know there is a big perception gap at that spot which can '''be explored further'''. As for the 10% to 20% FutureDiscounting, that's generally what financial calculations use (and I'll pull a BookStop on finance theory here). But again, the actual value is not the primary point. A good numbers-based description would also allow one to plug in one's own estimates if one doesn't agree with the authors' values. - t

{And why would use or non-use of DBMSes have anything remotely to do with the conclusions of Parnas's paper? It's about modularization in general, and the concept is as obviously applicable whether a DBMS is used or not. DBMSes are irrelevant here.}

Well, it's not obvious to me. The APIs are too low-level, making one likely to have to reinvent collection-oriented idioms. Any savings from "hiding the implementation details" is possibly counteracted by having to reinvent those idioms. That is obvious to me. Collection-orientation is power, and 1972-style APIs are not going to yank that power from my cold, dead fingers without numerical justification. APIs and wrappers are often just '''too damn primitive'''. The paper is fighting over (comparing) 1960s versus 1970s technology when I'm familiar with 1980s technology that kicks both those decades' asses. It's comparable to a paper (tablet?) that compares sticks to rocks for starting fires. I gotta fucking cig lighter here, dude! Well okay, slight exaggeration; a book of matches. - t
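
For concreteness, here is a rough sketch of the DecisionMathAndYagni-style arithmetic being appealed to above. Every number and name in it is an illustrative assumption, not a measurement; the only point is that the trade-off can be written down and re-run with your own estimates, per GoodMetricsProduceNumbers.

 # Illustrative arithmetic only; all inputs are assumed, not measured.
 def discounted_expected_cost(yearly_cost, p_event, event_cost, discount, years):
     """Recurring cost plus probability-weighted one-off cost, discounted to present value."""
     total = 0.0
     for year in range(1, years + 1):
         factor = 1.0 / ((1.0 + discount) ** year)        # FutureDiscounting
         total += factor * (yearly_cost + p_event * event_cost)
     return total

 # Option A: wrap everything now - a steady "bloat tax", but a cheap swap if it ever comes.
 wrap_everything = discounted_expected_cost(
     yearly_cost=10, p_event=0.01, event_cost=20, discount=0.15, years=10)

 # Option B: code straight to the RDBMS - no tax, an expensive swap in the unlikely case.
 code_directly = discounted_expected_cost(
     yearly_cost=0, p_event=0.01, event_cost=200, discount=0.15, years=10)

 print(round(wrap_everything, 1), round(code_directly, 1))  # costs in arbitrary effort units

Plug in different probabilities, costs, and discount rates, and the conclusion flips; that, rather than any particular answer, is the argument for numbers.
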
[[The only reason you have your lighter, matches, whatever, is because of the principles described in the paper. This is a fact. Interface does not mean API. Declarative languages are not possible without Modularization. Orthogonality follows nearly directly from it. You are in fact not familiar with anything other than the first tool you learned to use, and you will stay that way, god bless. Meanwhile, why are you pissing all over this WIKI? I promise no one wants you to learn anything. We just want to be able to learn from people like Parnas. Your only impact is to obstruct that goal - you have no hope of debunking anything and will not repeal all of computer science, so it must be that your purpose is to obstruct others. Now how do you think we should deal with that? Let me guess -- you think we should let you piss where you please and nominate you for a Nobel Prize for your insight.]]

Re: "The only reason you have your lighter, matches, whatever, is because of the principles described in the paper." - Like I mentioned above, SystemsSoftware has a different change profile than custom software. Parnas' suggestions may indeed be applicable to building an RDBMS engine (such as Oracle's) from scratch. But that doesn't necessarily extrapolate to software that ''uses'' an existing RDBMS. PickTheRightToolForTheJob. You are implying a universal truth. And Parnas' paper has not graduated to real science yet. It's only "interesting speculation" at this stage. GoodMetricsProduceNumbers, and Parnas has none. Show specifically, using representative scenarios outside of SystemsSoftware, how it reduces finger steps, eye steps, and/or brain steps. I'm just the messenger. Insult the messenger all you want, but the fact remains that the science hasn't hatched yet. And database-related tools were not my first tools. You got that wrong too, insultboy.

[[No one cares. We have addressed at length what Parnas is trying to do and why. You know nothing about science. You know nothing about Art. You serve no purpose here. If that really insults you, you can change your behavior. Since you don't, you clearly are not insulted, and in fact are proud of your work product. Maybe you should self-publish a book so others who fully appreciate your genius will have a chance to worship you. Lord knows lack of content and accuracy will not impede your efforts, and you may even win a real book contract. You are here to badger an audience you could not gather on your own merits. This makes you a Troll. Period. There is one thing that you have taught me -- the wiki experiment is not viable in the face of full open access. This is a lesson that Usenet has long taught, and I fear the implications. The wiki form is even more vulnerable, as it is impossible to filter. In the future, all successful wikis will require access control in the same way that all doors require locks. I hope you are proud of your accomplishment. I am curious as to how long this wiki will last as the Troll of admission increases.]]

''Only when you pass along someone else's opinion are you "just a messenger". At the moment, your words make you a liar.''

[[This is only true if he does not believe what he is saying. I am of the opinion that he does.]]

[Another really sad aspect of top's rant here is that Parnas only mentioned one metric, and he presents all the information necessary to easily produce the numbers for that metric in the paper.]

* Then focus on that instead of personal attacks and self-puffery. Dissect, magnify, refine, measure, etc. Spend your effort fleshing out Parnas' work instead of spending effort belittling others.
* [You have a lot of nerve accusing me of self-puffery in the same edit in which you equate excluding you from this wiki with excluding practitioners in general, and claim that only you are providing "real scrutiny". But before I'm going to focus on fleshing out Parnas' work, I need to be convinced that it needs fleshing out. Your rants about metrics, high-level to low-level changes, and "realistic change scenarios" have all been insufficient to demonstrate that there is anything lacking in the paper.]
* Sufficient metrics that return '''numbers'''. That is what's lacking. I thought I made that extra double clear, but I guess I didn't. Stop making excuses for Parnas; he didn't finish the job. He created suggestions, not proof.
* [All metrics return numbers. It's a necessary property for being a metric. And how many metrics would be sufficient? What would they be sufficient for? You haven't even established that, let alone demonstrated that the paper is lacking in that regard.]
* ["Usually nothing is said about the criteria used in dividing the system into modules. This paper will discuss that issue and, by means of examples, suggest some criteria which can be used in decomposing a system into modules." Please note that he was only presenting a suggestion. So why did you expect a proof of it? (Mind you, he did prove that decomposition matters by at least one metric for at least one system for a particular set of changes.)]
* [Finally, I wasn't making excuses for Parnas. I was telling you why I wasn't doing what you asked me to do. I'm not your slave, or employee. I'm not required to do something just because you tell me to. (Just another example of your self-puffery, I guess.)]
* Bow down and give me proof, or else lightning will strike your dog!
* [Great. Now every dog I own has been struck by lightning. Thanks a lot.]
* If you don't have a western-grown distaste for dog meat, you should thank me for assisting with your steak preparation. Fidoburger, anyone?
* [And to get (kind of) back on topic, I'll put the leftovers on a '''decomposing system'''.] {Emph. added}
* The Parnas Harness(TM). And yeah, the topic name is rather odd. http://image.shutterstock.com/display_pic_with_logo/9534/9534,1195498599,1/stock-photo-a-rotting-and-decomposing-cow-carcass-in-the-desert-southwest-in-winter-snow-7120582.jpg

[[There are no sad aspects, only tragic consequences for the purpose and goal of the wiki concept without access control. All that is necessary for this to become TOP's wiki is for the community to move on, which is why real communities die as well. Eventually, as a neighborhood declines, those with the means to do so move on. It is ever thus. I still hope that the grand experiment can continue -- but I would not bet in its favor.]]

* If you want to make a WalledGarden of like-minded individuals and shut out practitioners, go ahead and make your own wiki. I stand by GoodMetricsProduceNumbers, and if you don't, then we'll part ways. It's my opinion that you are afraid of '''real scrutiny''' of your pet ideas and sacred cows, and thus want to silence critics who demand RaceTheDamnedCar. - t
* [[I am a practitioner. I do not need a wiki; I have Parnas. This page is about a 30-year-old article and the fact that it should be read. I have no idea what you think you are talking about. Santa Claus does not think the Easter Bunny is real -- alert the media.]]
* I believe it would be helpful to the world (and me) to demonstrate and illustrate some of the claims in the paper using numerical metrics and realistic change scenarios. (Although the entire paper is not about software change, change is a large portion of its arguments.) There are many factors involved in software maintenance, and they may overwhelm the specific factors or issues raised in the paper. Descriptive arguments are usually not sufficient. '''If it truly is an important paper, it deserves to be dissected and cross-examined in more detail.'''
** {In essence, the paper ''has'' been dissected and cross-examined via hundreds of subsequent research languages, production languages, software products and experiments, even when they weren't specifically about validating Parnas's paper. Parnas was among the first to articulate the notion of essentially programming to interfaces (though not using that term) rather than programming to flowchart-derived functionality. That, of course, we know to be a significant element of many implementations and approaches. What makes Parnas's original paper important is the fact that it was among the first, if not the first, to document such a fundamental notion in its most universal and abstract terms. As such, it is noteworthy because it represents a historical starting point. Whether that notion is universally applicable or not, or how such a notion can be quantified or analysed, is a matter for subsequent work.}
** I can generally '''agree''' with that. I'd like to see you and the other person who seems to feel the paper is "sufficient" or "finished research" together discover where you two split ways. - t
** {No one said it was "sufficient" or "finished research" in its entirety, as if the vastness of the subject started and ended with Parnas's paper. However, it was explicitly identified as a seminal paper, which it is. What was meant is that there is no need for "source code" of some sort in order for the paper to be considered "sufficient" or "finished". There is no split of ways here.}
** Someone stated: "We don't need numbers to grasp the simple way he points out a strategy that is intuitively obvious once it has been clearly articulated." It's not "intuitively obvious" to me. (I've seen others express similar skepticism with regard to "wrapping all SQL", so I am not the only skeptic.) If I were stuck with 1972 technology, perhaps it would be, but this is not 1972. One approach to making it "intuitively obvious" is to produce numerical metrics against something developers care about, to get things like, "Hmmm, it obviously reduced the '''number of''' code blocks that needed changing for 7 change scenarios." Maybe there are other approaches to making such things "intuitively obvious", but numbers are one of the better techniques, because one can potentially trace the objective value back to the source code or work effort. It's like dissecting an alarm clock instead of reading about the philosophy of clock design. One doesn't have to take notion-esque philosophical statements on faith, because the claim is tied to an objective metric that is fully traceable. What makes this ''concrete'' clock tick? It's '''no longer just talk'''; it's actual stuff whose "mechanics" one can see. That is, the "mechanics" of the improvement take place under one's eyes, where one can see every gear and lever in motion.
** {That it's not "intuitively obvious" to you is no reflection on the paper. It is generally considered "sufficient" and "finished" (and intuitive) regardless of your opinion.}
** No, it's because of band-wagon-ism, or because they don't "get" collection-oriented programming, or because they work in domains that need rigid bondage-and-discipline-style software. If it's truly obvious, then RaceTheDamnedCar with CodeChangeImpactAnalysis. Enough jabbering already. To expect the reader to just accept it due to its being allegedly "intuitively obvious" is not acceptable. This ain't the dark ages. Intuition is often wrong when faced with many factors and decision branches.
* [[Good. Then you can do a Google search and discover all the research on metrics that you claim does not exist. Then you will be contributing. http://www.aivosto.com/project/help/pm-oo-cohesion.html -- this took all of 5 seconds, and I am not ready to do all the work for you. But if you are in earnest, then you will start reading.]]
* If that specific paper becomes tied to some strong claim made on this wiki, I may take it up. Note that a somewhat thorough analysis would use ''several'' metrics, including CodeChangeImpactAnalysis. One is not enough, due to WaterbedTheory. And it's not true that I claimed no metrics existed. They do, but they are scattered.

''Resorting to insulting Top doesn't help.''

It's done out of wiki tradition. -- top

----
'''Domain-Specific Language'''

The second half of the conclusion, which talks about order independence, probably refers to parts of the paper that give hints of creating a DomainSpecificLanguage (such as an API), which I generally agree with '''to a point'''. The DSL in this paper resembles a text-processing API. However, if the DSL involves heavy use of collection-oriented idioms, then I tend to leave that to the database query language. This way, I don't have to create APIs around bunches of collection-oriented idioms to avoid the kinds of problems mentioned above. But unlike encapsulation purists, I treat DSLs/APIs as helpers, not wrappers. Note that I sometimes mix "pure" wrappers and direct database access. For example, if we only want to do something with a limited set of paragraphs in a text processor, then we could have:

 doFoo(glob, bar, "x > (a + b) and z <> 7");

The third parameter is a portion of a WHERE clause. This way, I don't have to create a formal accessor for the many variations of the "doFoo" process which vary only by the filter criteria. (Some have argued on this wiki for single-use accessors for every filter combo, but I couldn't re-find that debate.) -- top

----
'''On Implementation Hiding'''

I'd like to point out that making the implementation flexible (easy to change) is more important than "hiding the implementation", in my observation (in my domain, at least). The value of hiding the implementation is often greatly exaggerated. It is a characteristic to strive for, all else being equal, but not at the expense of implementation flexibility. - t

''That makes a lot of sense if you're writing, say, business reporting applications. They're typically small, relatively simple, usually maintained by one individual, and undergo almost constant change that needs to be implemented yesterday. In that case, the syntactic infrastructure used to maintain implementation hiding is mostly an obstacle with little value. At least, it is in your own code.
In reusable libraries and the like, it ensures that implementation internals can be improved from version to version without the risk of breaking your code.''

''It doesn't make sense if you're developing, say, large-scale enterprise applications that involve dozens of developers and complex functionality that is relatively stable. If you expose implementation details, there's nothing to prevent your fellow developers from placing dependencies on your internal machinery. That makes it difficult and error-prone for you to make changes.''

An RDBMS can often serve as the interface between modules for bigger systems. One way to avoid the pitfalls of big applications is to not make big applications. DivideAndConquer. Use the Nile model: small villages around the Nile that trade and communicate via the Nile. The DB is the "interface spec" for each app group. This avoids a lot of the collection-oriented idiom interface repetition common with ADT-style APIs. The interface then becomes more substantial, focusing on the domain instead of setA, getA, deleteA, sortA, findA; setB, getB, deleteB, sortB, findB; setC, getC, deleteC, sortC, findC; etc. That is an ugly, bloated, busy-work style. Kill it, don't promote it. (See InterfaceFactoring.) '''Parnas opened Pandora's Repetitious Box. He should be slapped, not praised.''' (True, it may be useful for ''building'' the base of higher-level abstractions/tools, such as an RDBMS, but extrapolating beyond that has been a mistake.) And it's a lot easier to debug a domain model in the RDBMS than in RAM, because you can use existing tools to study, sift, and report on it. Encapsulation generally gets in the way of that. As for your comment on '''scaling''': the documents, lines, words, and characters in Parnas' example are all data structures, or at least can be represented as data structures. If we are going to have millions of those, with hundreds of workers working on them at the same time, I'd much, much, much rather manage them all with an RDBMS than with RAM ADTs. (Me ''and'' power users.) - t

That being said, the OO/ADT style does have its place, but that place is not everywhere.

''FacePalm''

Personal insults? That's all you got? Slacker.

''Troll.''

Name-calling. Nice. All that education, and all you have is name-calling? You are just table- and C.O.P.-ignorant. Parnas is obsolete. Use numeric metrics to prove I'm a troll, or shut the [bleep] up, you arrogant bell-bottomed 70's-hugger. '''I'm confident a document processor built with an RDBMS would score equal or higher on most representative numeric maintenance metrics thrown at it''' compared to the ADT/API/OO version. This is largely because new requirements can use the existing collection-oriented features of the DB, avoiding having to reinvent them, or at least simplifying their creation for each activity. If it's a one-off need, for example, you can often just write a throw-away query in one sitting. The encapsulation-heavy approach would usually involve much more coding via explicit loops, explicit conditions, and explicit marshaling (see ViewingAlgorithmsAsCollectionProcessing). That's more code per new requirement, knocking the score down. Yes, encapsulation will help with certain things, but not nearly enough. It results in a meta-language or intermediate language that is just too low-level, resulting in too much nitty-gritty work. And the DB approach can be used with '''multiple languages''', something behavior-centric paradigms have problems with.
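
To make the preceding contrast concrete, here is a small illustrative sketch of the two interface styles being argued about: one accessor family per domain noun versus a single collection-oriented entry point in the spirit of the doFoo() example above. All names are invented, and sqlite3 merely stands in for whatever query-capable store is at hand.

 # Style 1: one accessor family per domain noun (the setX/getX/findX pattern).
 class ParagraphStore:
     def find_by_author(self, author): ...
     def find_longer_than(self, n): ...
     def find_by_author_longer_than(self, author, n): ...   # ...and so on, per combination

 # Style 2: one collection-oriented entry point; variations live in the criteria,
 # not in ever-multiplying method names.
 import sqlite3

 def find_paragraphs(db: sqlite3.Connection, where: str, params=()):
     """Return paragraphs matching a caller-supplied WHERE fragment, doFoo()-style.
     (The fragment must come from trusted code, not end-user input.)"""
     return db.execute("SELECT id, body FROM paragraphs WHERE " + where, params).fetchall()

 # e.g. find_paragraphs(db, "author = ? AND length(body) > ?", ("alice", 100))

The trade-off cuts both ways: style 2 leans on the schema as the shared interface (less hiding, in Parnas' sense), while style 1 hides the schema at the price of the interface bloat described above.
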
You, my friend, are afraid of science and praise ArgumentFromAuthority. That would make YOU the troll, not me. Kill me with science and numbers, not insults. If you think I am wrong, is it that you think my pet techniques would score lower, or that the metrics I propose are not sufficient to gauge practical utility? If the latter, would you toss the metrics out, or suggest others in addition to them? - t

----
See EncapsulationIsNotInformationHiding, PerceptionOfChange, OoConflictsWithCollectionOrientation, TopMind
----
CategoryEncapsulation, CategoryInfoPackaging
----
SeptemberTen