Continuation of SeparationAndGroupingAreArchaicConcepts

I believe your argument that you can ''meaningfully'' "bring together" snippets of code for editing, debugging, or inspection is predicated on the generally invalid assumption that the code chunks have a semantics (formal meaning and application) independent of their context. That this assumption is invalid also limits the degree to which you can even classify code in terms of entity/task/compSpace. In fact, your entire argument for 'SeparationAndGroupingAreArchaicConcepts' seems dependent on such invalid assumptions. '''If''' context is relevant to the interpretation or processing of messages, code, etc., '''then''' separation and grouping based on identifying distinct interpretation and processing contexts is ''fundamental'' and cannot (usefully) be dismissed as an 'archaic concept'. --AnonymousDonor

I'm not sure what you mean by "context". If you mean that classification requires programmer or analyst intervention, whereas including something in a given file or routine is "automatic", then I partially agree. That something must be in a given file is generally a constraint forced on one by the compiler. In fact, even that may not be true, because most compilers/interpreters allow one to put everything in one big file called "file" without any context other than the program code itself - which could be messy, with variables named A, B, C and no functions or poorly-named functions. (And even if size constraints require splitting, the multiple file groupings could be rather random and undocumented.) Thus, file names, file divisions, and function names and groupings are all meta-data voluntarily provided by the programmer. Almost none of it is necessary for the computer to actually "run" it. '''It all depends on "volunteerism" anyhow'''.
My suggestion merely allows one to take meta-categorization further by allowing any given code chunk to belong to a potentially infinite number of classification sets; something file-and-function approaches don't do. As far as technology that forces developers to classify stuff properly, I doubt such exists, at least not in a practical way. But I am exploring enabling tools, not spanking tools, at this point. --top

Your hypothesis at what I meant by "context" seems to be pretty far off the mark. I'll try for a little clarification. The subjects of the "context" I described are "messages" and "snippets of code". In general, "context" for X refers to everything related to X that is not X. For messages, the "context" would include: who sends the message (where it came from), who receives the message (where it is going), and when the message was sent (relative to other messages in the past and future). For snippets of code, the "context" includes: with which explicit parameters that snippet of code is called, in which environment (implicit parameters - globals, environment variables, special variables, thread local storage, etc.) it is called, who receives its return value, the relative order in which snippets of code are executed, and under which conditions that snippet of code will be reached for execution. My concern with your suggestion is that context seems to be '''essential''' for interpreting messages, understanding snippets of code, etc. Indeed, I believe that context is '''essential''' to the point that you cannot readily classify most code in terms of entity/task/compSpace, much less usefully "bring them together" for debugging or avoid the need for SeparationAndGrouping for snippets of code. I'm not certain how "files" got involved. Many languages do derive semantics from filenames, file sections, order of code in file, etc., but my understanding is that you're ignoring those for now.
I suppose we could consider 'context' in a more fractal sense by looking down at them from a higher viewpoint (e.g. whole 'script' files are effectively 'snippets of code' to be executed in a console environment, and 'filenames' in context are processed by makefiles). Function names serve great purpose in this larger context, being critical for executables (e.g. the console typically executes 'main', thus 'main' is important in application context), and for modular programming function names are exported to hook components together (function names provide hooks into a module). But, excepting where the language itself derives semantics from the filenames or file divisions or function names (a few exist), these issues are somewhat superfluous with regard to 'separation and grouping' vs. 'context dependence'. If your conclusion is that we can be rid of ''some'' SeparationAndGrouping on the basis that it isn't providing any semantics, I'll agree. We could be rid of files, and function names are just pointers except where they are 'exported' into the object code context for use by the extra-language environment. But that really isn't a strong enough conclusion to support the titular claim 'SeparationAndGroupingAreArchaicConcepts'. This page includes no provision at all for reducing or eliminating the practical requirement to group multiple snippets of code based on the parameters they receive, the conditions under which they are to be reached for execution, the environment in which they run, who gets the return value, the relative order in which they must run, interdependencies, etc. And these issues will, in general, defeat or make useless your suggestion to classify 'chunks of code' by entity, computation space, and task. -- AnonymousDonor

Keep in mind that I am focusing mostly on code maintenance and not on execution of code for now, but will revisit this later.
Let's assume for now that we are using a compiler and all the code is automatically assembled into compiler-friendly files for the purpose of generating the final executable when a "build" is run. In such a case, the existence of functions and files may still be tracked. We don't have to do away with the concept of functions and files to achieve better index-ability and tracking of code parts. '''It is not an either/or decision.''' How this works with code editors etc. may have to change from what people are used to, but let's save the topic of "new age" code editors for another day. Consider the following schema based on the prior examples:

 codeChunks
 ----------
 snippetID
 sourceText
 functionRef
 sequence   // ordering within function (double-prec.)

''How do you decide how much sourceText to '''group''' into each codeChunk?''

 functions
 ------------
 functionID
 moduleRef  // file-reference
 functionName
 parameterDeclaration
 etc...
 // p.k. = (moduleRef, functionName)

 modules    // maps modules to files
 -----------
 moduleID
 fileName
 filePath

 categ_code_assoc   // many-to-many
 --------------
 snippetRef
 categRef

''How do you categorize a snippetID when the meaning of a whole function/macro/statement/etc. can change based on the context in which it is applied?''

 Etc...

This allows the snippets to be put into functions and files as needed for compiling, but also allows the other meta-data to be tracked. It is a super-set of the traditional file-based layout and the prior schemas. We can track snippets as unique file-plus-function combinations without sacrificing the ability to give them other attributes for our code tracker. (One may want to stop using sub-folders if the meta-base can provide similar info, and instead put all the code files in one folder.) That being said, if/when such techniques become popular, we may want to change the way we organize code and move away from file-centric thinking.
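A minimal sketch of such a meta-base, using SQLite in-memory tables (table and column names follow the schema above; the 'categories' table, the sample rows, and identifiers like 'checkLogin' are invented for illustration):

```python
import sqlite3

# Build the snippet meta-base sketched above. Snippets keep their place in a
# function/file layout (so a "build" can still assemble compiler-friendly
# files), yet may carry any number of extra classifications.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE modules (moduleID INTEGER PRIMARY KEY, fileName TEXT, filePath TEXT);
CREATE TABLE functions (
    functionID INTEGER PRIMARY KEY,
    moduleRef  INTEGER REFERENCES modules(moduleID),
    functionName TEXT,
    parameterDeclaration TEXT,
    UNIQUE (moduleRef, functionName)        -- p.k. = (moduleRef, functionName)
);
CREATE TABLE codeChunks (
    snippetID   INTEGER PRIMARY KEY,
    sourceText  TEXT,
    functionRef INTEGER REFERENCES functions(functionID),
    sequence    REAL                        -- ordering within function
);
CREATE TABLE categories (categID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE categ_code_assoc (             -- many-to-many
    snippetRef INTEGER REFERENCES codeChunks(snippetID),
    categRef   INTEGER REFERENCES categories(categID)
);
""")

# Hypothetical sample data: one snippet, two classifications.
db.execute("INSERT INTO modules VALUES (1, 'users.app', '/src')")
db.execute("INSERT INTO functions VALUES (1, 1, 'checkLogin', 'user, pass')")
db.execute("INSERT INTO codeChunks VALUES (1, 'rs = runQuery(...)', 1, 1.0)")
db.execute("INSERT INTO categories VALUES (1, 'DB-related'), (2, 'security')")
db.execute("INSERT INTO categ_code_assoc VALUES (1, 1), (1, 2)")

# The same snippet can be located under either category, without moving it
# out of its function or file.
rows = db.execute("""
    SELECT c.name, f.functionName, k.sourceText
    FROM categ_code_assoc a
    JOIN categories c ON c.categID   = a.categRef
    JOIN codeChunks k ON k.snippetID = a.snippetRef
    JOIN functions  f ON f.functionID = k.functionRef
    ORDER BY c.name
""").fetchall()
for row in rows:
    print(row)
```

The many-to-many `categ_code_assoc` table is what the file-and-function layout alone cannot express: a snippet belongs to exactly one function, but to any number of categories.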
For example, EventDrivenProgramming tools and techniques tend to lean in this direction because the GUI engine, not explicitly coded sequences, is often the main manager of execution sequence. One thinks about code snippets in terms of event handlers associated with specific widgets, not functions in files. Although not all GUI kits use this approach, it was made popular by VisualBasic. But the problem with VB is that one could not search and analyze the code using query-friendly techniques. It kind of had its own private meta-base of code snippets with proprietary tendencies. (It could save as code files, but not very usable code.) --top

I still feel you are assuming that this approach will actually work. I've explained why I believe it will not work, without even beginning to touch on inverted dependencies and aspect-oriented programming. Moving away from filesystems is an agreeable possibility IMO, but is hardly new (e.g. OCaml, Java, etc. are respectively based on module objects and classpaths instead of files). However, I don't believe you are, in any real sense, succeeding at your goal... you're just making really small files called 'snippets', making some attempt to classify each snippet, and the classifications will rarely be accurate or meaningful, because how one classifies a snippet like 'x++' is highly context-dependent (potentially including the context of the function call, as opposed to just the context of the chunk within the function). -- AnonymousDonor (AD)

Every classification is "context dependent". That doesn't tell us anything new. A criminal may classify a kitchen knife as "a murder weapon", but that doesn't stop the store from classifying it as a "kitchen utensil", and customer software using this classification to help customers find the product while browsing. The classification of code chunks is primarily for humans, not computers. (Perhaps it can also be used for code validation.)
--top

You imagine that the problem can be "solved" by using a 'couple extra' classifications. I suspect you are treating 'context dependence' far too lightly. Go ahead and tell me what classifications 'x++' should bear. --AD

Why are you asking me? Without a specific application or shop-specific classification guide, I couldn't do such. The classifications are invented by developers/analysts. Suggestions are given above such as GUI-related, database-related, security-related, etc. Perhaps some of this can be automated, such as classifying any braced code block, "{....}", calling the "runQuery" function/method as "database-related". However, automation is not necessary to achieve the basic goal. --top

I'm going to contradict you on that. I believe that, without automation, your idea is nearly ''guaranteed'' to fail in practice. Relevantly, these classifications need to be source derivatives. But automating won't be too difficult. If annotating common existing languages, I'd use HotComments to do the job (similar to how HotComments are used to document code).

''You have not identified any general show-stoppers other than vague claims.''

-------

To help illustrate the problem, consider the extreme position: you decide that every code chunk can be broken apart into a database for later automatic grouping and separation. So, now your database has some 2000 'snippets of code', each consisting of one assembly statement (sometimes with slightly different parameters). All you need to do is include all the context for each of these 2000 snippets of code so that each of them will be executed under the correct conditions, in the right order, returning a value to the correct destination, etc., with all the various 'context' properties being correct.
Unless you can find a practical way to do this ''without'' resorting to 'separation and grouping' (effectively gluing together code chunks larger than one assembly statement) based on contexts and relative ordering, I believe you cannot claim SeparationAndGroupingAreArchaicConcepts. SeparationAndGrouping as concepts remain applicable, useful, practical, modern, and in most senses the very opposite of 'archaic'. And that's even before considering hardware barriers on communications.

''I am not clear on this. What is "correct destination", for example?''

When 'returning a value', that value must go to a particular register or memory address. If it does not, the code will run with errors. Thus, certain destinations can be called 'correct' while all others cannot.

''I don't know what kind of problem you are envisioning still. Does anybody else want to volunteer to restate from a different angle if they think they know what is being described?''

I'm taking the concept you're advocating (SeparationAndGroupingAreArchaicConcepts) to an extreme and asking if it is still valid. If not, the concept is not valid at all - it fails by an inductive principle, since you won't be able to 'classify' any 'chunks of code' without first 'grouping' them. The extreme described above is for assembler code. Another extreme, for SKI calculus, is to refuse to group SKI statements, so you only have '''exactly three''' code snippets: S, K, I. That's it. All possible unique and independently executable 'chunks of code'. How does this fit into your whole system? How will you go about 'classifying' these three statements?

''The code that an existing COTS compiler gets does not have to look any different. Unless I know what problem you envision, I cannot "fix" it. Yes, existing compilers/interpreters have certain requirements and the food we feed them needs to be in a certain format, but that does not outright stop our goal of managing *code* with more powerful classification and query tools.
It merely puts some preconditions on it.''

I imagine that we can usefully classify *some* chunks of code, so long as we first '''group''' the chunks of code (directly contradicting SeparationAndGroupingAreArchaicConcepts) into larger semantic units like functions or objects, such that the code is reasonably ''specialized'' in its application. But my suspicion is that your approach won't succeed when the application of a code chunk is very context-dependent. The idea breaks down when you're working with metaprogramming, macros, individual stages in event processing, queues, virtual dispatch, etc. E.g. given ''int f(int& x) { return ++x; }'', how does one go about classifying chunks of f? When people start writing code tools that are used to build code tools (aka systems programming), 'f' is what code tends to look like - simple operations, meaningless by themselves, wrapped in packages like blocks or functions for application in a larger context. I understand that you don't do much systems programming. That's alright, but like all people your ability to 'envision' problems that may arise in areas outside your experience is extremely limited.

''As described above, the actual classifications are domain or shop-dependent. I don't know how systems programmers will want to classify stuff. Maybe they don't. If you don't need code classification to better manage code, then don't use it! (I did give classification examples for typical biz software.) Code management tools/techniques are NOT absolutely required to produce software. For that matter, neither is a compiler: write binary code directly into RAM.''

I suggest we move your suggestions to a page other than 'SeparationAndGroupingAreArchaicConcepts'. It seems you are unable to defend that titular claim for more than just "files" under a very limited set of circumstances.
''You need to be more clear on what is missing.''

Your ideas have some merit for tracking sections of code in limited circumstances, but that merit has nothing to do with separation and grouping being or not being archaic concepts.

[The page's title is dire and obviously wrong, but I see some merit in an obvious derivation of Top's idea: It would be nice to -- for example -- be able to examine all the event handlers in one place, then examine all the database queries as a cohesive set, then view all the form code, or even look at all the "for" loops, or all invocations of function "x", and so on, in some clean and effective manner as part of an IDE's functionality.]

''Can you use something akin to formal logic to prove it's "obviously wrong"? I am growing angry and am tempted to say something.''

[Such a "proof" was provided at the top of this page.]

''It wasn't clear up there either.''

I clarified what was meant by 'context'. I'd bet money that you didn't even bother ''attempting'' to apply said clarification back into the original statement.

--------

Re: "...contradicted by how he attempts to apply it (e.g. he suggests grouping code into smaller "chunks", he refuses to acknowledge that functions themselves are described by groups of chunks of code, etc.)."

Any classification can be ''viewed as'' a group and vice versa. I don't wish to get caught up in a definition battle over "grouping" because it likely won't go anywhere. The main issue is a complaint about the old-style belief that one must "group related concepts" in code by making dedicated modules for various aspects such as SQL, GUI code, etc. This would be unnecessary if we had a more powerful system that didn't force mutually-exclusive choices. If I added all the conditions and disclaimers to the title, it would be mega-long. Titles are merely descriptions, labels, not logical proofs in themselves. If you are bothered by the title and want to rework it, let's kick around some suggestions.
--top

You've suggested nothing that gets away from grouping based on 'various aspects of code'. Nor have you addressed any of the ''valid'' reasons people feel there is value in making separate modules for SQL, GUI code, etc. (such as the ability to make maintenance of certain aspects ''someone else's problem'', or the ability to ''link and test'' such code independently).

* See "Downsized DBA Example" below.

And titles don't need to include all the conditions and disclaimers, but they also shouldn't make strong, bold statements if the author plans to make more than a couple of clarifications on scope. I'd suggest pithy, alliterative titles like 'CrossCuttingCodeClassification' for your ideas to 'fix' the problem.

''How about CrossCuttingCodeConcernManagement?''

Not too bad. Simple is also good, so something that is parallel to 'SeparationOfConcerns' would do well. 'ConnectionOfConcerns' is taken. A possibility is 'TrackingOfConcerns' (in which you'd be suggesting a variety of relational and annotation-based mechanisms to solve the problem of identifying/debugging/editing/etc. with concerns that are scattered throughout code).

''How about TrackingConcernsInCode?''

I like it - it is easy to inject into a sentence and as a topic title makes sense for the suggestions you've offered. It certainly is better than 'SeparationAndGroupingAreArchaicConcepts'.

----------

'''Downsized DBA Example'''

My approach does not preclude that.

''Just to clarify, the comment to which you are responding has little to do with 'your approach'. It's a complaint against your opening statements, your assertion that SeparationAndGroupingAreArchaicConcepts. You have yet to convince me that your approach has much ado about separation and grouping.''

Suppose we wanted SQL code to only be managed by the database team (for now). In the code editor a code section would be classified as "DB-related".
Any block with such a classification would then be editable only by the DB team, not the app developers, per policy settings. Perhaps an "inline" checkbox could be created if we want a mere in-line snippet; otherwise parameters are defined and an anonymous or auto-named function is generated for it (depending on the language). The advantage of this over manual separation is that one *could* see them together if they want. For example, if the department is downsized, the same person may do both app coding and DBA work. They'd no longer want to hop around so much. How you see it is controllable. '''What is together and what is apart is merely a view'''. One is not forced to go to a separate module to see all the SQL or GUI code. The "hard" separation is the "archaic" part I am talking about. It's an outdated mode of thinking because it is no longer necessary to do things that way.

''Methinks you generalize too much from experience mostly with limited procedural programming languages. I imagine a little thought-bubble above your head containing: "A function is just a block of code, you can 'merely a view' it as inlined!", but conspicuously missing is the accompanying thought-bubble: "Ah! But polymorphic dispatch, MultiMethods, TemplateMetaprogramming, etc. counter that idea - we can't just view function calls as inlined, not without knowing more context... potentially context only available at runtime."''

* It sounds like you are trying to trigger a paradigm pissing-match, for which we already have plenty of topics dedicated. Anyhow, every helpful tool/technique probably has its down-sides and can be "busted" or rendered useless under just the right conditions. I won't dispute that. Different shops and different styles probably have different levels of problems. The devil's in the details. Anyhow, what common problem are you envisioning with polymorphic dispatch? You mean like the "persistence driver" (cough) is decided at run-time?
That doesn't stop us from displaying DB-related code in-line. But it's true the viewer tool may have to recognize polymorphically coded options.

* ''Paradigm pissing match? Sigh. If you could be bothered to read thoroughly, you'd know I was '''actually''' calling you a fool who generalizes outside his experience. I did not insult any paradigms. There is no 'problem' with polymorphic dispatch; the 'problem' is in your argument that "what is together and what is apart is merely a view" - a statement that happens to be contradicted by polymorphic dispatch. And, in the general case, this sort of problem ''does'' stop you from displaying DB-related code in-line. Even the DB-related code to be executed can be determined polymorphically at runtime.''
* I never claimed everywhere-always. You made that up out of the blue. And you have not demonstrated an absolute polymorphism problem. Further, in-lining one's code view is not the only reason for the system I describe. Even if it did make that *particular* feature impractical, there are still other benefits. --top
* ''Saying "I didn't claim everywhere-always" within the implied scope is approximately equivalent to admitting: "I was telling an untruth". And I haven't argued that there aren't benefits to TrackingConcernsInCode; my argument is against your other conclusions: that 'hard separation and grouping are archaic' and that 'what is together and what is apart is merely a view'. You shouldn't appeal to "other benefits" to attempt to defend these claims.''
* Remember that pencil-and-paper are "archaic", yet still have legitimate uses. Same with file-centric techniques. Your bias against me makes you interpret things in the most extreme way you can to fit your negative view of my viewpoints.
* ''I don't know anyone (except you, apparently) who calls pencil-and-paper 'archaic', and I'm even less convinced of your claim regarding file-centric techniques.
Regarding your last assertion: despite my negative impression of you based on our past interactions and my observations of your behavior when you interact with others, I do make a valiant attempt to read and respond to each of your topics with a fresh pair of eyes - I'm even careful to read all of what you've written before opening the edit dialog to reply (an ActiveListening technique I'm convinced you don't apply). If you honestly believe I'm responding the way I am due primarily to 'bias', then I invite you to go find other forums where you can receive other opinions.''

* Well, it's not working. From my perspective, you still seem like an insulting grump with preset ideas.

Another advantage is that a block can be in multiple categories. A block that queries the user names and passwords from a user-info table may be under *both* the "DB" and "security" categories. Traditional hard separation does not allow that.

''Traditional 'hard separation' would put those password hashing functions in one module and the table management in another, and a third module would have the task of figuring out how to usefully interleave the two, but would effectively be under *both* the "DB" and the "security" category. I agree that the mechanism you're advocating would be useful for locating and debugging, say, all code related to security. But I don't think that has anything to do with (hard) separation and grouping being or not being 'archaic'.''

How is it "under"? In our minds? Mutually-exclusive categorization via file modules *is* in my book archaic. (It's still probably the best KISS for smaller apps, though, the same way a pencil and pad are better than a DB or spreadsheet for short lists.) If we have 5 aspects, then we have a potential of 10 "link" modules for the pairwise combos (if my quick calculations are correct).
''Counts of "potential" link-module combos aren't particularly meaningful (''potentially'' there are 2^5 combinations of five aspects ''per policy'', but you aren't going to implement all of them... you only need to implement one). And if you're going to fall back on an "in my book" defense regardless of the arguments presented to counter yours, then I'll let you 'win' - I lack the rhetorical tools to defeat FoolishConsistency.''

''Reasons I favor hard SeparationOfConcerns among modules:''

* SeparatePolicyAndMechanism
** Modules that support mechanism aspects (like hash functions) aren't enforcing a policy on how those mechanisms will be used, whereas if you combine code into one BigBallOfMud you'll generally be unable to fetch nice little policy-independent details like that hash function. This encourages the 'one module (or set of modules) per aspect' approach. If you want to work with databases, you import ODBC.
*** ''That page is non-specific and meandering.''
*** I assume you refer to 'PolicyAndMechanism'. A better page is here: http://en.wikipedia.org/wiki/Separation_of_mechanism_from_policy
*** ''I suspect that is somebody's WalledGarden pet term. The examples are poor there also, most of it hand-wavy BrochureTalk.''
*** Uh... in ''modern'' ComputerScience it is anything but a 'pet term', Mr. old-timer-who-doesn't-read-papers-or-books. And if you want HandWavy BrochureTalk, look no further than what you've been producing lately. Anyhow, complaints about pages should be moved to the appropriate pages.
** Other modules will support 'policies' that combine these mechanisms in some useful way.
* Independent development and testing: concerns in different modules can often be developed and tested independently. This is necessary if one is going to contract work to another group.
* Code reuse: modules that support mechanisms or useful policies can easily be reused from one project to another without dragging in a lot of cruft.
In general, reusability is inversely proportional to the number of responsibilities a module has.

* Independent language support for CrossCuttingConcerns: modern and future programming languages offer increasing support for injecting 'policy' and integration issues back up into generic modules, allowing client modules to 'specialize' modules that are shared by the rest of the project. Examples of support include 'open' functions (adding new pattern recognition), 'open' classes (where modules can add methods to a class defined in another module), 'open' propositions (e.g. in Prolog), 'open' types (add new tagged unions to a data type), etc. This class of language support eliminates almost all 'penalties' of modules by allowing one to create generic 'policy' modules that can be specialized to the needs of a particular project.
** ''Because the run-time engines are becoming a case of GreencoddsTenthRuleOfProgramming. Convergence is slowly happening.''
** I wouldn't put it on the run-time engines (none of these features imply reflection, and only advanced forms of reflection could force the run-times for compiled languages to have relational features), but I agree that you could call these relational-like language features - especially 'open' propositions.
** ''The compiler can also become DB-like. The complex classifications/aspects have to be tracked by something, and usually developers and (human) managers will want tools to help them search, sort, group, sum, and report on all those classifications/aspects. It's smarter reuse to use a DB instead of inventing each of these features from scratch in the compiler/interpreter. Plus, one can use all that meta-data with off-the-shelf tools. If the c/i has a proprietary data structure, one cannot readily do this.''
** I agree that an IDE will become more DB-like in order to help developers operate on code, but I don't see any reason for a compiler to do the same.
** ''To avoid a kind of internal MirrorModel.''

Perhaps I make more formal distinctions than you do on this subject - to a person who writes compilers, a compiler only performs code transformations (source code -> assembly code being a common one), so the compiler will never need to 'track' source code over time: there is no need for the compiler to keep a history or perform classifications beyond parsing unstructured data into some sort of structured representation.

''Hell, even if source code were primarily represented in a relational database, for reasons like those listed above I'd want support for 'modules' - e.g. big FLIRT files consisting of the data to handle just a particular set of concerns, plus the language-supported ability to 'import' and 'combine' data from multiple modules into a larger project.''

* Why should I care what the hell *you* want?
* ''I provided reasons. I even said: 'for reasons like those listed above'. I already answered this question.''
* Just because you are stuck in old ways of thinking does not mean everyone else should be.
* ''If it were "just because I was stuck in old ways of thinking", I'd not bother examining your arguments before dismissing them as incorrect and naive for various reasons... instead, I'd take a page from "in my book" and hold you to much greater levels of rigor to support your claims than the rigor supporting my own beliefs.''
* The future is in relativity engines. Mutually-exclusive (ME) "modules" may be fine for proprietary vendor library packaging and batch processes, but not for interactive and domain processes with interweaving domain logic. It's time for something more powerful that does not force an ME choice. File systems and OOP in general have an ME problem.
* ''You make a lot of claims here. Personally, I think the future is ''automated'' interweaving of domain logic, as per AspectOrientedProgramming, and that modules will hold strong.''
* AOP is an example of GreencoddsTenthRuleOfProgramming.
I'm not against such as a general idea, but merely suggest that DBs be used to manage all that crap instead of spaghetti object pointers in filey code.

* ''I don't understand your logic in calling AOP an example of GreencoddsTenthRuleOfProgramming, but I can't say I care one way or another. I also don't understand your implication that AOP is "spaghetti object pointers in filey code". That part I do care about. Why do you say AOP is SpaghettiCode involving 'object pointers'?''
* I should have used "navigational". However, maybe that's not accurate enough either. I'll re-work my characterization another time.
* ''I've never used an AOP or SeparationOfConcerns facility that has felt remotely "navigational". I wonder which ones you've been using.''

''I suspect that TopMind is concerning himself only with one 'issue': TooBigToEdit - most projects will be too big to edit in just one file. If 'TooBigToEdit' is the '''only''' reason to break a module into two modules, then I see a lot of benefit from the ability to just keep it all together for KISS principles.''

Where did I promote files?

''Where did I say you promote files?''

What am I allegedly promoting in your "too big" dig?

''Anything that 'keeps it all together'. For you, I suspect that would be a relational database.''

Together? Together can be relative. That's the point. You are still thinking physically.

''Neither files nor databases are "physical", Top - either of them can be distributed across networks, persistence resources, and access protocols. I think you too often forget that "relative" does not imply "subjective". I agree that "togetherness" is a matter of degree, a "relative" connectivity that can be described by such things as CouplingAndCohesion, and can be measured by such things as drawing points and dependency-edges in a graph then computing density. But togetherness is no less real for being relative, and its reality can be felt in terms of 'real' costs, tools, conflicts, contracts, and services.
So I'd appreciate it if you stop with your HandWaving BrochureTalk B''''''ullShit.'' They originally modeled physical things. Regardless, files are limited. Trees as large-scale organizational structures have problems, orthogonality being the primary one. Living with hierarchies because "we and our tools are used to it that way" is not good enough. Time to evolve. ''I find it amusing that you resort to the same sort of arguments you're usually deriding. How is: "Living with procedural programming because 'we and our tools are used to it that way' is not good enough. Time to evolve." IIRC, you'd usually point the speaker towards MindOverhaulEconomics and ProgrammingIsInTheMind.'' * You mean I'm guilty of acting like you? * ''Not at all. I don't consider those sorts of arguments 'bad' things - I'm a firm believer that we should advance our tools even when it advances them into territory with which I'm uncomfortable and unfamiliar, and I'm a firm believer that professional programmers, like doctors and engineers, have a responsibility to keep their minds sharp and skills up-to-date. So I'm not calling you guilty. I'm only noting an inconsistency - one that could easily become hypocrisy if you fall back on your normal argument in the future.'' * And, procedural is not necessarily out-dated just because it is older. It's limited dimensions of classification that are "archaic". * ''I didn't claim procedural is out-dated just because it is older. And repeating your inane assertions about what you believe to be 'archaic' does not make it true.'' ''That said, I'm with you on avoiding hierarchies as an organization and classification tool. As classification and organization structures in the macro scale, I agree: "trees have problems", well described in LimitsOfHierarchies. But for that macro scale there are FileSystemAlternatives, many of them non-hierarchical.
So the question is: can you provide any good argument why 'files', which might be better described as 'small-scale data structures', "have problems".'' The mutually-exclusive problem is the primary one. Trees have poor control over overlapping categories. There are already examples in LimitsOfHierarchies. Besides, if you agree, why are you asking me to justify it? Shouldn't we only debate things we disagree with? ''People should debate or query each other when they aren't at an agreement (which is not necessarily the same as being at a disagreement). And I don't feel we are at an agreement, specifically, regarding the small-scale 'files'. (Note that I only indicated agreement about the macro-scale, the hierarchical 'file systems', not for 'files'.) Mutual exclusion has not been demonstrated to be a "problem" for files. The whole issue of "overlapping categories" doesn't seem to apply to individual 'files'.'' The known alternatives to file systems are either navigational/network (pointer/graph-based), or relational-like (set and predicate-based) structures/databases. Most code-based solutions, such as aspects built into the language, are generally navigational. * ''I need some clarifications here: "code-based solutions" to which problem? How much experience do you have with "aspects built into the language" that you believe you can generalize upon it?'' So we are at least agreeing that our code units (existing or new) would benefit from a way to have potentially multiple classifications to make it easier to track and manage code elements; and that classifications should be easy to add, change, and delete without limits imposed by our chosen structures? ''I believe we agree we could benefit from TrackingConcernsInCode with such features as:''
* The ability to annotate 'regions' of code as belonging to one or more 'purposes'.
* The ability to 'view' (e.g. in a debugger) dataflows that are actively being processed through regions selected based on programmer interest.
* The ability to 'hide' regions of code in which we are not interested
* The ability to actively 'edit' in a view the code we control that is associated with certain regions or dataflows (with support from the IDE to warn of potential or likely breakage even in hidden regions)
''We also agree that these features can be accomplished mostly by an IDE, independently of the source form (e.g. even modules will work).'' ''I do not agree: that SeparationAndGroupingAreArchaicConcepts (especially wrt. code ownership, sharing, security), that the organization (e.g. 'modules') of the actual code is '''merely a view''', that 'files' are problematic, or that TrackingConcernsInCode should be a first choice for managing CrossCuttingConcern''''''s (first choice for me is genuine KeyLanguageFeature support for CrossCuttingConcern''''''s and inverted dependencies).'' Such IDE's tend to reinvent a database of sorts, in a half-ass way. And they are currently still file-centric.

------

'''re:''' code reuse, independent testing & development

''My suggestion does not stop these. (Also, the whole idea that things must be split into lots of little functions in order to test should perhaps be rethought. But even if you have lots of little functions/methods, classification of them can still be useful.) -- top'' I have a feeling that the "my suggestion does not stop these" is about as meaningful as "you can program functionally in SnuspLanguage"... it only takes extra work. Modules are ''designed'' for these purposes; they make them easy. How easy do you believe independent testing, development, code reuse will be with your system? Plenty of examples for modular systems exist. Can you run through a few UserStories of integrating independently developed code, reusing libraries, and independently testing code units in your system? (E.g. consider integrating 3rd party support for encryption, and 3rd party support for decoding and displaying video files.)
One question to ask when finished is: does your approach result in some equivalent to mutually exclusive modules? (because, if so, then your arguments against them fall apart.) ''Like I said above, it does NOT necessarily remove existing file-based, class-based, and function-based modularity. You have not shown where it removes anything you love dearly. Although in the future I expect the use of the above for modularity would diminish in such an environment, relying on this viewpoint is not necessary to my base argument.'' You know... I'm not going to be convinced by what you "said" anywhere. I can't believe you anymore. After all, when you make claims, you don't mean "everywhere, always". So perhaps it DOES necessarily remove existing file-based, class-based, or function-based modularity... and it just happens to be somewhere or somewhen that your claim doesn't apply (after all, your claims seem to apply only when and where they are true, which might not be what is implied by the statement... I wouldn't want to "make something up out of the blue"). So, I'm asking you to convince me. I suspect you'll run into a few problems regarding independent maintenance that you're ignoring with all your HandWaving 'claims' and BrochureTalk, but I'm sure I can't convince you of these problems except by letting you run into them. --AD

Consider this:

 table: functions
 ----------------
 funcID
 funcName
 classRef      // f.k. to "classes" table
 nameSpaceRef  // f.k. to "nameSpaces" table
 fn_contents   // program code
 etc...
 (constraint: funcName+classRef+nameSpaceRef must be unique)

 table: func_aspects
 -------------------
 funcRef    // f.k. to "functions" table
 aspectRef  // f.k. to "aspects" table

 table: class_aspects
 --------------------
 classRef   // f.k. to "class" table
 aspectRef  // f.k. to "aspects" table

This represents a template for use with a "typical" current language. A code editor could tie into it such that one would never need to touch actual files.
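For illustration only, here is one way the schema above could be realized and queried; this sketch is not part of the original proposal, and the sample function names, aspect names, and contents are invented. The final query shows the "bring together" operation: pulling up every function tagged with a given aspect.

```python
# Sketch of the aspect-tagging schema in SQLite. Table and column names
# follow the example above; the sample rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE functions (
    funcID       INTEGER PRIMARY KEY,
    funcName     TEXT,
    classRef     INTEGER,   -- f.k. to "classes" table (omitted here)
    nameSpaceRef INTEGER,   -- f.k. to "nameSpaces" table (omitted here)
    fn_contents  TEXT,      -- program code
    UNIQUE (funcName, classRef, nameSpaceRef)
);
CREATE TABLE aspects (
    aspectID   INTEGER PRIMARY KEY,
    aspectName TEXT UNIQUE
);
CREATE TABLE func_aspects (
    funcRef   INTEGER,  -- f.k. to "functions" table
    aspectRef INTEGER   -- f.k. to "aspects" table
);
""")

# Hypothetical sample data: one function tagged "sql" and "logging",
# another tagged only "logging".
conn.execute("INSERT INTO aspects VALUES (1, 'sql'), (2, 'logging')")
conn.execute("""INSERT INTO functions VALUES
    (10, 'saveOrder', 1, 1, 'UPDATE orders ...'),
    (11, 'logError',  1, 1, 'writeLog(...)')""")
conn.execute("INSERT INTO func_aspects VALUES (10, 1), (10, 2), (11, 2)")

# The "bring together" query: all code regions carrying a given aspect.
rows = conn.execute("""
    SELECT f.funcName, f.fn_contents
      FROM functions f
      JOIN func_aspects fa ON fa.funcRef = f.funcID
      JOIN aspects a ON a.aspectID = fa.aspectRef
     WHERE a.aspectName = 'logging'
     ORDER BY f.funcName
""").fetchall()
print(rows)  # -> [('logError', 'writeLog(...)'), ('saveOrder', 'UPDATE orders ...')]
```

A real editor would presumably hide such queries behind the find-lists and QueryByExample screens mentioned below, rather than exposing raw SQL.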
A "make" step would generate files and run the (file-centric) compiler/interpreter. A developer could be completely isolated from "files". They only see name-spaces, classes, functions, and aspects; and access code through a CrudScreen interface that has find-lists, QueryByExample, and so forth. Thus, the "old" groupings are still there; they are just not file-based. --top I note you didn't provide any of the clarification I specifically requested (integrating independently developed code, reusing libraries, and independently testing code). Nice misdirection. I look at your tables and see a FileSystemAlternatives (ugh, plural problem) in which files are uniquely identified by: funcName+classRef+nameSpaceRef, and for which each file is an executable script. I also see a language that doesn't readily support data, types or sharing services (e.g. global registries, etc.) between projects and within projects, and that likely makes invoking functions ridiculously verbose. ''Where is this in that topic?'' I look at your question and wonder: Does TopMind honestly believe that all alternative FileSystem''''''s must be listed in the C2 WikiWiki page entitled 'FileSystemAlternatives' in order to qualify as such? * ''Sorry, you lost me.'' * Short explanation: a FileSystem alternative does not need to appear in the C2 WikiWiki topic 'FileSystemAlternatives' in order to '''be''' a FileSystem alternative. Your question "Where is this in that topic?" implies you believe otherwise. If you do believe it, it's a rather stupid belief. If you don't believe it, it's a rather stupid question. The questions a person asks can say much about that person. Based on yours, I suspect you were lost long before I got involved. * ''How about a clarification on "I look at that and see a FileSystemAlternative...". What is "that"? Is it something above, or something in the topic FileSystemAlternatives? Is "see" literal or figurative such as "envision"? 
I find too many possible but different interpretations. Are you suggesting I am making an FSA, or that it makes you think of one you've seen in the other topic or in your head?'' * "See" is quite literal. Your suggestion, your set of tables, is an alternative FileSystem - one that, as I had already clarified, has 'files' uniquely identified by funcName+classRef+nameSpaceRef. A FileSystem isn't much - just a system of names that offers the ability to create, edit, and delete contents and other meta-data at the scope of those names. You think you're getting away from FileSystem''''''s. With that suggestion, you aren't. * [Indeed. Top, it appears you haven't defined a FileSystem alternative, but an alternative FileSystem. It's still a FileSystem, implemented on top of a relational schema.] * ''For one, files are usually not used at the function level. Second, the funcName+classRef+nameSpaceRef "key" is mostly to accommodate *existing* file-oriented languages, and thus will mirror that in some ways. (A DB-friendly language may use a different idiom altogether.) But the label "file system", whether it fits or not, does not change the nature of it. It has abilities that existing file systems do not, and adding those abilities to a file system would turn that FS into a database of some sorts. It's a super-set of a file system.'' ** Different languages organize and divide code at different levels. Many languages favor a 'class level' (like Java) or an 'object level' (like Smalltalk), or 'block' level (like Forth) or a functor level (for Oz/Mozart and ML). Function level is a viable option, allowing functions to more readily be coded independently and shared among projects, but has a high expense in terms of verbosity and also has costs in terms of 'closing' functions (as opposed to 'open' functions like MultiMethods and such). 
If you did more than dabble in PLT, you'd be aware that the 'FileSystem' as recognized ''from within the language (and IDE)'' only rarely corresponds to the 'FileSystem' provided by the OS - e.g. you don't name filenames when importing Java classes or ML modules, and there would be no issue at all if the source for these units was stored in a database instead of a filesystem. ** ''It was merely an example geared toward a "typical" language, as already stated. Nowhere did I say it was generic. (Making it generic would require more indirection. Either that, or allow some of the schema to be custom-tuned per language as long as it satisfies certain constraints.)'' ** If you're really interested in coming up with better ways to do things, I strongly recommend you learn what has already been done, tried, failed, succeeded. A good place to learn is LambdaTheUltimate. ** ''ArgumentFromAuthority. Anyhow, why do you keep focusing on this? Tracking existing language structure is NOT the "meat" of my suggestion. It is a distraction. The meat is open-ended categories per segment (plenty of meta-data room) and database-like querying and view creation of such info.'' ** Making a strong recommendation to a neophyte in a field doesn't even qualify as an 'argument', much less ArgumentFromAuthority. Perhaps I should add some elementary logic courses to the recommendation. Anyhow, I keep "focusing" on this because YOU keep "focusing" on it; I am, after all, '''replying to your questions''' and '''reviewing your assertions'''. If you stop asking questions and stop making errors in your assertions (which prevent me from accepting them), then I'll stop responding. One simple mechanism: you stop focusing on it, I stop focusing on it, because you can't ask questions or introduce new fallacies if you keep your mouth shut. That's a simple concept, so even you should understand it. ** Personally, I believe the "it is a distraction" comment on your part has got it right.
As I noted above, your whole example seems to be a misdirection from the section's heading subject, which regarded only concerns of 'code reuse' and 'independent development' (including maintenance) and asked for some user stories regarding these... something you responded to with distraction tactics by producing the whole "consider this" section, which even you regard as irrelevant and not worth "considering". ** ''I don't know what you are rambling about. Please stick to technical issues. I don't want to argue about arguing anymore. Let's take one at a time. Please show an example of my suggestion stopping code reuse.'' ** Please show me an example of your suggestions allowing code reuse. You've been suggesting repeatedly that code shouldn't be organized into separate components for separate responsibilities, so first assume people follow that suggestion and don't organize their code in that manner, putting that responsibility as much as possible on the tool. In general, the reusability is inversely proportional to the number of responsibilities a component has. So, I think it quite reasonable to raise code reuse as a concern. Further, the 'meta-data' associated with code regions might or might not be able to transfer between projects. So I'm concerned about the practical reusability of the whole 'code' (including hand-produced organizing factors, like metadata), and I'd appreciate seeing how that is as 'easy' as modules in the system you are proposing. ** ''My suggestion is not about reuse either way. It does not hinder it, and does not necessarily help it either, other than maybe making it easier to find similar regions using the meta data to avoid reinventing the wheel.'' ** You keep repeating stuff like "it does not hinder it" as though it is true merely because you said so. Whatever. Go ahead and believe your own BrochureTalk.
I can tell this idea won't go anywhere with you, anyway, since if you ''actually cared'' about bringing the idea to fruition then you wouldn't hesitate to investigate concerns. ** ''If you envision a specific problem in the model in your head, but do not describe in sufficient detail where this model flubs in your head, I cannot help you adjust the model in your head to match the model in my head (or confirm a real problem). If you do not wish to discuss this any further until you have a full working system at your fingertips, then just say so and leave the discussion in a polite manner. --top'' ** Every time I raise an issue in such a manner that you actually understand it, you drop a supposed feature (as with inlining). Since you're barely wrapping your head around many of the issues I'm raising, I'm only left with this thought that there are many more supposed features that exist only in your head. I, frankly, believe you are going about this all wrong: I think YOU are at fault for asserting these features exist without first proving they exist in the general case, whereas you think I'M at fault for not properly disproving the features you've been claiming. If people were to follow your philosophy regarding this, I could claim Santa Claus exists until you disprove it, and drug companies could sell experimental drugs that are claimed to cure various maladies until FDA proved them unsafe or ineffective. That philosophy doesn't work elsewhere in the professional world; I can't imagine why you think it works for you. I suggest that YOU be the person to politely leave the discussion, or at least make it clear that these things are 'wishlist' features. Perhaps we should move this entire discussion, along with the original page, to: 'SeparationAndGroupingAreArchaicInTopsDreams' until you can demonstrate your assertions. ** ''I base my suggestion primarily on the code and code styles I actually encounter. That's where most tools come from originally.
I never promised that it would work in all niches all the time in all cases. '''It doesn't have to be perfect, only better than what exists now.''' And in-lining was a bonus, not a prerequisite. And the limitations you find would require either special languages or complex almost-AI code analysis to "fix". I am trying to find solutions that work on typical and common languages, not invent new languages. And, you have not found any down-sides that kill the '''main goal''': to associate aspects and track interweaving aspects of code chunks. If you are confident you can find enough holes, go for it. But I won't believe you until you actually produce the holes. You have 10 times more confidence in your hole-finding abilities than I do. The rate and severity of holes you found so far are normal and expected. I see no reason to fear. --top'' ** I have no issue with your "main goal". I think it's a fine one. But you also keep making HandWaving statements '''in bold''' and asserting that it is the job of other people to 'kill' your claims. I think that's unprofessional. You have 10 times more confidence in your wild claims than I do. I see no reason to expect a useful, working project to emerge from all your BrochureTalk. ** ''You asserted that my suggestion sucked, to paraphrase, thus you have at least some burden to show the suckage. Anyhow, we should keep each other in check, hopefully in a diplomatic way.'' * ''That being said, please clarify "makes invoking functions ridiculously verbose". What is doing the "invoking"? '''The example is not an interpreter''' (maybe in another generation we can consider it).'' * One function invokes another, of course. You may be imagining you can change how code is logically organized without changing how functions are 'invoked', but that is because you've not yet attempted to take these ideas of yours and actually implement them. 
You'll need to make tradeoffs in areas that you've been glossing over with HandWaving and BrochureTalk - e.g. language decisions to accommodate keeping those 'region' annotations up-to-date while actively maintaining code, compromises between looser grouping of code and verbosity (as evidenced through formulaic repetition of common 'import' statements or full dotted-path annotations), etc. The 'verbosity' I mention is due to one such trade-off that I anticipate (based on my experience with PLT and language design) would be required to make the language actually work with the function-definitions each being cordoned off into its own little 'file'. ** ''Why don't you spend as much effort with specifics and examples as you do talking down to me? Show you are smart, don't just claim and lecture me. You have not shown any specific "verbosity". It is a mental unicorn until caught on film. In fact, '''my suggestion does not have to change the actual code at all'''. Zilcho! --top'' ** You seem to think it is my responsibility to provide greater rigor in attacking your HandWaving BrochureTalk than you've provided in propping it up in the first place. Your mistake. I want to see the rigor for that bold-faced statement you've just made. Provide the "specific examples", "Show you are smart, don't just claim and lecture me", do what you're asking of me, lest you be a bold-faced hypocrite. I've already asked for examples of that claim '''regarding code reuse and independent development'''. Show me how to maintain the region annotations in my database when someone else is producing incremental releases from their own systems, which I can't access. *** ''Please flesh out your scenario with more specifics. Who is changing what, when, and why?'' *** Consider two parties, Brynhildr and Grunn. Brynhildr writes media codecs, and Grunn is writing a media player. 
Grunn wants to use Brynhildr's media codecs, and obtains license to do so, but Brynhildr's not about to start integrating her codebase with Grunn's. Media codecs change rapidly over time, so Grunn needs to regularly update her media player with the new codecs. When debugging, Grunn's media player occasionally has bugs that cause crashes deep inside Brynhildr's codebase, and so Grunn decides to load her database with 'region' information about Brynhildr's code. Unfortunately, it seems that every time Brynhildr comes out with a new incremental release, Grunn's region descriptions become invalid. Instead, every update requires Grunn to reproduce this data. By hand. *** Of course, one could fix this by making more than "zilcho" changes to the code, and annotating the code with some form of attributes. *** PageAnchor: outside-code - ''Such is probably technically difficult no matter what. That is not the kind of situation where my suggestion would help, I agree. I generally assume most code will be changed *inside* the system. If parts are changed outside of the system, then it's difficult to know where to keep and apply the meta-data. Marker codes, such as "segmentID", can be embedded in comments, but most likely outside vendors won't go along with that. One possibility is to match the best we can at the function level based on same-named files and paths from the vendor's last version. But the bigger picture is that you've only pointed out a minor limitation that all known systems also suffer from. '''It won't be worse than them for those code parts'''. Besides, why would we need to often debug vendor code that we are not supposed to touch anyhow? You seem to be stretching for problems. It's good that you identify such I suppose, but you do it in such a rude and round-about way.'' *** RE: ''I generally assume...'' - well, that's the problem right there, isn't it? You assume. You generalize.
You make declarations about what is true for everyone based on what is true in that tiny corner of the programming world in which you've been living. And, in doing so, you end up making bold declarations about inlining of code in an IDE that can't be done if polymorphism or open functions are involved. *** In any case, supposing the code is updated independently of the region data, you can expect it to easily fall out-of-date... even if you're only changing code *inside* the system. *** ''Reply at page-anchor outside-code-2. As far as polymorphism and inlining, that was already discussed earlier. Inlining is not a necessary feature and if certain coding styles make it difficult, it's not a show-stopper.'' *** I can make up more examples, but I really consider shooting user-stories at my ideas to be my job (to help both prove my ideas and filter them), and I (perhaps unrealistically) expect the same from your ideas and you. ** ''However, there may be benefits to making changes that move the burden of classification to the new system. For example, take the suggestion that triggered this all: "put all SQL in separate module/unit so that it is together". With my approach, you don't have to do that because the tracking system can "bring together" all SQL code under your fingers. Thus, you can relax from using the language's division techniques. This makes *less* verbosity from the code's perspective because there are fewer functions to declare. (This excludes functionalization to reduce duplicate code.) --top'' * I've also anticipated and expressed concerns about other problems with such organization of code at the actual editing level (regarding code reuse and independent maintenance, as per the header of this section) but I don't feel you're willing to explore them. Perhaps if you were walking the walk, instead of merely talking the talk, you'd actually be more capable of and more interested in hearing the concerns of others. * ''Projection.
No examples and no specifics on your part. Walk the walk instead of acting like a know-it-all. Murder me with specifics; kill me with specifics; stab me with specifics; but don't do vague hand-wavy talk-downs.'' * I've got my own projects where I'm "walking the walk". This one is your baby. You raise her. I'll "murder you with your specifics" after you '''actually have specifics'''. But all you have now is vague HandWaving and BrochureTalk, so all I can respond to you with are broad warnings based on my experiences. I can't provide a rigorous counter-argument until you have a rigorous argument. I'll wait. * ''If you have specific questions about it, then simply ask.'' * I did. I'll repeat them: ''Can you run through a few UserStories of integrating independently developed code, reusing libraries, and independently testing code units in your system? (E.g. consider integrating 3rd party support for encryption, and 3rd party support for decoding and displaying video files.)'' --AD * ''Those are titles, not details. --top'' * That's a dodge, not an answer. ''And those 3 dimensions are primarily only to fit it to existing languages, not be the entire classification system.'' That really doesn't matter. What matters is that, at some point, you've got a unique identifier (URI) for a block that carries the 'contents'. Even if you represented program code (sequences, expressions, function-calls, data) directly in the database, all you'd be doing is representing more formally structured data at that point - the ability to represent data structures in files is something that has been promoted as a feature of the KillerFileSystem, but it does mean giving up on the PowerOfPlainText. ''Anything more complex/flexible than a file system with meta-data abilities is probably at least border-lining on '''being''' a database. It then becomes a matter of WHICH KIND of database is used. Back to the ol' navigational-versus-relational fights.
You appear to be agreeing with me without knowing it. Juicing up a file system to add what I ask for is producing for you a database. --top''

------

PageAnchor: outside-code-2

I'm not sure what you are envisioning. File dates can be used to detect outside changes (for the code that is stored in tables instead of files) if we wanted that. One would generally not "register" the vendor's code in the system, or at least mark it as being read-only from our code-manager's perspective. My preliminary configuration for your scenario would put the vendor's code in regular file folders (because you have not identified a need to manage them through our tracker because they are used as-is), but our own company's code in the tracker, and thus in the code repository database instead of files. The build sequencer would place or copy the generated files adjacent to the vendor's codebase text as needed. For example, assume that in the code build that is to target the compiler, we target a folder called "build_B", and the vendor's code is copied to a sub-folder called "vendor_X" under the build_B folder.

 build_sequencer (table)
 -----------------------
 sequenceID
 ordering
 sourceType       // function, module, namespace, aspect, filepath, etc.
 sourceName
 destinationFile
 includeFilter    // list of aspects to include (blank=all)
 excludeFilter    // list of aspects to exclude
 etc...

Example Contents:

 sourceType  sourceName    destination
 ----------  ----------    -----------
 module      internal_foo  [root]builds/build_b/[same].lang
 filepath    c:/outsider   [root]builds/build_b/vendor_x

Actually, it might be best to use the OS's command line or script language for file copies rather than reinvent it for our tool. But it is shown here as part of the builder "language" to simplify the example.
In reality, it may look more like:

 sourceType  sourceName    destination
 ----------  ----------    -----------
 module      internal_foo  [root]builds/build_b/[same].lang
 execute     "c:/vendor.bat [root]/build_b/vendor_x"

It just runs a given OS command with a command-line parameter that is substituted by our tool. (The alignment is off in the example due to the length of the command line.) --top

------

'''OOP as a solution?'''

If we were to go back to the heart of oop (I am this, and I have these qualities, and I can do these things) life would be so much easier... * ObjectOrientedProgramming only helps with SeparationOfConcerns for ''unrelated'' concerns. It doesn't help with CrossCuttingConcern''''''s. So some people are investigating various forms of AspectOrientedProgramming or multi-dimensional code, whereas TopMind is aiming (with this topic) to make BigBallOfMud easier to process in an IDE by providing a multi-dimensional view on the code. * ''I'll invoke GreencoddsTenthRuleOfProgramming on this. An AspectOrientedProgramming solution with sufficient power to search, re-display, and manage all those aspects would either reinvent a database or use a database.'' ** AOP ''isn't'' about search and display or aspect 'management'. Those features could usefully be part of the IDE, but AOP and other SeparationOfConcerns mechanisms are about allowing the programmer to add, remove, and manipulate project features without invasively modifying the source code. Methinks you have a big misunderstanding of what AOP does. * ''Further, my suggestion could be used with '''existing languages'''. Building aspects into languages may be useful, but it would limit our language choices. Maybe in the future most languages will have aspects of some kind built in. Until then, we can use better code managing tools for existing languages. Tons of cross-cutting aspects are going to be a BigBallOfMud no matter what. Databases are one of the best tools for managing large volumes of info.
It's a matter of managed balls versus non-managed balls, not the existence of balls. --top'' ** I have my doubts about your assertion that we're going to have a BigBallOfMud "no matter what", but I do agree that support isn't readily available in most languages today, and that I'd rather have my mud well organized. ** ''Please show me ANY tool that tames BigBallOfMud, such as many-to-many relationships, that is not DB-like. You claim my approach is the wrong way, let's see the right way. (GOF patterns are growing into BBMs.) --top'' ** 'DB-like' is such a vague property that I haven't an idea where to start. In any case, I didn't claim your approach is "wrong" - I've stated several times that having the ability to track code with the facilities you suggest is useful. If you thought my description that "TopMind is aiming to make a BigBallOfMud easier to process" was some sort of insult, you're in error. The main place I have issue is with your illogical insistence that "because my approach is useful, separation and grouping must not be useful... they're even 'archaic'". That argument really doesn't follow. ** ''I thought you were calling my technique a BigBallOfMud, not the problem itself. As far as "archaic", hard-wired and/or single/limited-dimensioned separation and grouping *are* archaic. So is rolling collection engines from scratch. Better hardware is gradually allowing us to start paying homage to collection engine reuse and EverythingIsRelative relationships. --top'' ** I do agree that the goal to support 'arbitrary-dimension' and 'ad-hoc dimension' separation is a fine goal, but neither your techniques nor those I favor are yet available to the common programmer. You can call 'single/limited-dimension separation' "archaic" when it ceases to be a relatively common means of organizing code. ** ''Common = non-archaic? Anyhow, you seem to be caught up in the definition of archaic (as you view it). 
It's a somewhat fuzzy term, and trying to force it into some precise meaning comes across as a bit obsessive. Let people have opinions. If you want to start a topic to explore that term, that's fine by me, but I doubt it will lead to a pleasant consensus.''
** You should not call 'archaic' that which is still in common use today. Look the word up in any dictionary. Are you going to resort to HumptyDumpty "words mean whatever I want them to mean" equivocation fallacies like you usually do?
** ''That's bull. COBOL is something that most practitioners agree is "archaic", yet it is still very common. The world's economic infrastructure depends on COBOL. My interpretation is mainstream, so you can take your humpty and stuff it up your dumpty.''
** COBOL would not be considered 'archaic' if new COBOL projects were started on a common basis. After all, LISP was invented before COBOL but is not considered 'archaic'. If you are going to use COBOL as the measure of what you call 'archaic', you should still wait until it is no longer common to start new projects with single/limited-dimension code organizations before calling them 'archaic'. Abuse of words only hurts your case, and attempting to justify your abuse of words through bullshit analogies only makes you appear a raving lunatic, so you'd do better to stop.
** ''Here's somebody else using "archaic COBOL": http://redout.org/ipb/index.php?s=11b3c95715f09b7e700605dac9342ae9&showtopic=12796''
* ''For example, what if we want to see whether we are using an existing aspect or a new aspect, to avoid unnecessary synonyms while coding a new section? For a code editor to do that, it would have to keep track of all prior aspects (or search them from scratch). It would have to build or maintain a list of all usage points for each aspect. Should we invent a gimungus "list processor" for this? Or should we use an existing tool with lots of collection-handling and persistence idioms already built in?
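(The aspect tracking described here - recording which aspects exist and where each is used - maps naturally onto a small relational schema. A minimal sketch using Python's built-in sqlite3; the table and column names are hypothetical, not from any existing tool:)

```python
import sqlite3

# In-memory DB for illustration; a real tool would use a persistent file.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE aspect (name TEXT PRIMARY KEY, descript TEXT);
    CREATE TABLE usage_point (
        aspect   TEXT REFERENCES aspect(name),
        filename TEXT,
        line_num INTEGER
    );
""")

# Register aspects and their usage points as code chunks are tagged.
db.execute("INSERT INTO aspect VALUES ('logging', 'trace/debug output')")
db.execute("INSERT INTO usage_point VALUES ('logging', 'billing.src', 120)")
db.execute("INSERT INTO usage_point VALUES ('logging', 'reports.src', 45)")

# Before coining a new aspect name, search existing ones to avoid synonyms.
matches = [row[0] for row in
           db.execute("SELECT name FROM aspect WHERE name LIKE ?", ('log%',))]
print(matches)  # → ['logging']

# The "list of all usage points for each aspect" is a plain GROUP BY away.
counts = dict(db.execute(
    "SELECT aspect, COUNT(*) FROM usage_point GROUP BY aspect"))
print(counts)  # → {'logging': 2}
```

(The searching, sorting, grouping, and summing idioms come for free from the query engine rather than being hand-rolled in the editor.)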
We will want searching, sorting, grouping, summing, and reporting features '''no matter what''' for non-trivial code tracking. Either the language reinvents a DBMS or uses an existing one to deliver these. Pick wisely. Please tell me how to have a *good* aspect management system withOUT DB-like features. --top''
** AOP isn't about aspect management any more than procedural programming is about procedure management. Code management systems will almost always be DB-like in some way or another, especially once support for multiple simultaneous programmers, persistence, versioning, etc. gets involved.
** ''So you are in some ways agreeing with me.''
** In some ways, yes.

Can't we just get back to basics?

''This is getting off topic, but I've seen very few coded demonstrations of OOP making the code clearly "better", except in narrow circumstances/niches. ArgumentsAgainstOop''

-------

'''Multi-Scoping'''

In most languages, variable scope is determined in a more-or-less hierarchical fashion. However, perhaps scoping could also be set-ified in the language. A given code block could potentially have multiple scopes by listing which scope aspects apply to it:

 code_unit foo {
   scope: blerg, znog, foo;
   regularStuff(...)
   ...
 }

The priority for any overlaps would depend on which scope is listed first. It may remind some of FORTRAN "common" blocks, but hopefully it is more natural and flexible than that.

''I'm somewhat curious how such a language would operate. It makes some sense for '''global''' variables to be 'set-ified', but such variables aren't hierarchically organized in most languages, so I don't imagine you're speaking of them. That leaves the lexically and dynamically scoped variables from method calls, which tend to be instantiated once for each instance of a call and then cease to exist when the call is no longer ongoing. How would this 'multi-scoping' apply to such variables?
What does it mean for one procedure to have access to a variable that would otherwise be scoped within another procedure?''

''Or is this just an idea you're throwing at the wall to see if it sticks?''

Think of each scope as a kind of global associative array. You can only access the array if you mention the array's name in the "scope" clause list, except that you don't need array syntax to access the members. Conflicts could perhaps be settled by scope-clause mention order. This would not necessarily replace current scoping mechanisms, but rather complement them. Scoping declaration could be done in such a way:

 var thing scope foo;

--top

''I'm not certain what it means to think of a ''local'' scope with its limited lifespan as a ''global'' associative array.''
* I think there's a misunderstanding here. The named scopes would not be local, but rather "static". The above "thing" declaration is declaring a static variable that is similar to a global, except that it is "hidden" unless a given routine mentions scope "foo" in its scope clause.
** Ah. Well, I suppose one can selectively 'unhide' hidden global vars, or treat functions as namespaces.
** ''Yes, there are indirect ways to half-emulate it. Note that function-space could also potentially participate in scoping sets.''

''A typical call-stack looks something like the following, with downwards being higher in the stack:''

 THREAD_INIT (OS data): vars 'pfnThreadProc' 'pUserArg'
   ThreadProc: vars 'pUserArg', 'result'
     TaskA: vars 'A', 'B', 'C'
       Helper1: vars 'arg', 'H1', 'H2'
         Helper1: vars 'arg', 'H1', 'H2'
           Helper1: vars 'arg', 'H1', 'H2'
             TaskAFunctor: vars 'arg', 'D', 'E'
               TaskAFunctorHelper: vars 'F'

''In a running system, there could also be a number of such call-stacks. In a functional program, one might never return to some of them (continuation passing style). In some languages, like Lisp, there can be some 'special' variables that are normally available in all later scopes.
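(For what it's worth, the "named scopes as hidden global associative arrays" reading can be half-emulated in an ordinary language today. A minimal Python sketch; all names are hypothetical, and conflicts are settled by scope-clause mention order as suggested above:)

```python
# Each named scope is a global associative array. A routine sees its
# members only if it lists that scope; earlier-listed scopes win conflicts.
_scopes = {
    "blerg": {"thing": 1, "x": "from blerg"},
    "znog":  {"x": "from znog", "y": 2},
}

def in_scopes(*scope_names):
    """Decorator: inject the union of the named scopes as a mapping."""
    def wrap(fn):
        def call(*args, **kw):
            merged = {}
            for name in reversed(scope_names):  # later-listed scopes lose
                merged.update(_scopes[name])
            return fn(merged, *args, **kw)
        return call
    return wrap

@in_scopes("blerg", "znog")   # the "scope:" clause of the code unit
def foo(scope):
    # 'x' exists in both scopes; "blerg" is listed first, so it wins.
    return scope["x"], scope["y"]

print(foo())  # → ('from blerg', 2)
```

(This only emulates the "static, hidden-unless-mentioned" reading of named scopes; it says nothing about the call-stack case raised below it, since the injected mappings outlive any particular call.)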
But, before I could even think of applying this idea of yours in such situations, I can't figure out how it applies even to a simple procedural-language call-stack. How would you go about doing so?''

-------

Incidentally, WikiWiki has more or less the same issue: lots of content that is difficult to search and inspect. A partial solution was to create category topic tags. These do help, but their granularity is often too coarse. If we go with a ParagraphWiki and use similar tagging techniques, except via a database, then it tends to resemble the kind of contraption I envision. --top

''Agreed, a finer-grained wiki is an interesting idea in many ways. I'm not certain how practical the 'paragraph' granularity is. I'd like to see a GraphWiki, perhaps with support for SemanticWiki tasks.''

Related wiki engine topics: ExtendingTheWikiParadigm, FlikiBase

-----

See Also: SeparationAndGroupingAreFundamentalConcepts

----

CategoryScope

DecemberZeroEight