What would information technology (IT) be like without academics? Would it impact software as much as hardware?

----

From CrossToolTypeAndObjectSharing: I suspect that a lot of things we use would have eventually been "discovered" organically [without academics]. Even IBM-card processing machines kind of resembled relational and DB operations, in that one machine might filter, another group and sum, another join, another union, another sort (sort and join may have used the same machine), etc. But without a parallel world to test on, it's only speculation for either party. It's my opinion that academics over-emphasize their importance in IT. Perhaps it's human nature to magnify our role's importance, regardless of what it is. -t

''It's unquestionable that many discoveries would be made without academics, and indeed many important discoveries are made outside of academia. The RelationalModel could easily have been invented at a kitchen table, and in fact was invented by DrCodd whilst he was at IBM's San Jose research lab, so it's (a trivial) debate as to whether it originated inside or outside academia. Popular documents don't say, but it could as easily have been inspired by looking at paper spreadsheets, or by pure whim, as by deep mathematical thought.''

''What characterises academic work is not the origin of inventions -- which can, and often do, come from anywhere -- but the rigour with which inventions are critiqued, compared, tested, and evaluated, and the degree to which the implications of these are explored or applied to other fields. As such, the real impact of academia on the RelationalModel is not where it came from, but the way in which subsequent academic work has proven (for example) that relational expressions can be transformed in specified ways without changing their semantics. This has a serious impact on automated optimisation, which is of obvious pragmatic value, but one that can only be guaranteed to be correct via appropriate mathematical analysis.''

''The non-academic alternative would presumably be pure inspiration followed by exhaustive testing. This process may miss out on valuable transformations, and thereby limit opportunities for optimisation, and it cannot guarantee that there aren't transformations that will, under obscure or non-obvious conditions, produce erroneous results that brute-force testing might fail to reveal.''

The history of relational suggests it was not "academic testing" that projected relational, but its ability to express typical queries in compact ways compared to the NavigationalDatabase''''''s of the day. In other words, brevity of expression. Witnesses talk about "query shoot-outs" with regard to brevity. More than one person pursued it, so different accounts may differ. The account I read emphasized its EconomyOfExpression, but others may have liked other features of it. Further, automated optimization probably required too much horse-power in relational's early days.

''I didn't write that academia "projected relational" (whatever that means). Please read what I wrote -- in particular, the fact that I wrote "'''for example'''". To my knowledge, Oracle 2 supported automated optimisation as early as 1980, not that it's relevant to what I wrote.''

We'd probably have to test a parallel world without automated optimization (or alternative optimization) to compare on that factor.
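To make the transformation-and-optimisation point above concrete, here is a minimal sketch -- in Python with invented tables, column names, and data, rather than any particular DBMS -- of one such semantics-preserving rewrite: pushing a selection (filter) through a join. Filtering before the join and filtering after the join return the same rows, but the pushed-down form examines fewer tuples, which is exactly the kind of rewrite an automated optimiser depends on being provably safe.

 # Sketch only: tables, column names, and data are invented for illustration.
 employees = [
     {"emp_id": 1, "name": "Ann",  "dept_id": 10},
     {"emp_id": 2, "name": "Bob",  "dept_id": 20},
     {"emp_id": 3, "name": "Cleo", "dept_id": 10},
 ]
 departments = [
     {"dept_id": 10, "dept_name": "Sales"},
     {"dept_id": 20, "dept_name": "Audit"},
 ]

 def join(rs, ss, key):
     """Naive nested-loop join on a shared key column."""
     return [{**r, **s} for r in rs for s in ss if r[key] == s[key]]

 def wants_dept_10(row):
     """Predicate referencing only the employees side, so it can be pushed through the join."""
     return row["dept_id"] == 10

 # Filter AFTER the join ...
 late = [row for row in join(employees, departments, "dept_id") if wants_dept_10(row)]
 # ... versus filter BEFORE the join (the pushed-down form an optimiser would prefer).
 early = join([e for e in employees if wants_dept_10(e)], departments, "dept_id")

 assert late == early   # same relation either way; the second form does less work

The academic contribution is the proof that such a rewrite is always safe under stated conditions (here, that the predicate references only one side's attributes); brute-force testing can only confirm it for the cases actually tried.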
----

I could write efficient algorithms without learning O(n log n) etc. Just time the damn thing. I could write a useful language without understanding the concept of TuringComplete. I haven't used LinearAlgebra to help me program yet. However, learning Computer Science in school helped develop in me a passion for programming, and that is the most beneficial outcome of academia. It's the experience of learning that is more valuable than the content (in this area).

''"I could write efficient algorithms without learning O(n log n) etc. Just time the damn thing."''

''I've no doubt you could write them. However, timing them with a stopwatch only gives you an absolute measure of '''one implementation''' in '''one case''', which is dependent (obviously) on numerous variables. BigOh gives you a universal relative measure of asymptotic performance, allowing you to effectively compare one algorithm against another, entirely independent of implementation.''

* That's not quite true. One can graph the time of a given implementation under different loads and see how "sharp" the curve is (a rough sketch of this appears below).
* ''That's simply an empirical (and unreliable) way of determining BigOh, which is (I presume) not what the original author meant.''
* Please clarify "unreliable". Big-O is not perfect either, because removed constants may not really be constant, especially if things like internal caching and poor garbage-collection algorithms muck things up. My point is not that BigOh has no value; it is that the time and effort invested in BigOh research, application, and writing buys marginally small value compared to the practical payoff of analyzing and timing algorithms. I can almost guarantee a programmer would be better if they completely skipped college and instead dove into extremely complex business programming with cross-cutting concerns, multi-level security/logging/transactional/validation logic, and constantly changing requirements (none of which is covered in any depth in college, yet all of which is absolutely essential).

''In the "real world", both BigOh and run-time analysis are important. In terms of selecting algorithms, BigOh can be crucial, allowing you to trivially discard the O(n!) algorithm in favour of the O(n^2) one when n is predicted to be large, or perhaps choose the O(n!) algorithm because it's easier to implement and n will be, at most, 3.''
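As for "just time the damn thing" and graphing the curve, here is a rough sketch of that empirical approach -- the sort routine, input sizes, and worst-case data are all invented for illustration. Time an implementation at doubling input sizes and look at the ratio between successive timings: ratios near 2x suggest roughly linear growth, ratios near 4x suggest roughly quadratic. As noted above, caching, garbage collection, and constant factors make this unreliable compared to a BigOh analysis, but it is cheap to do.

 import time

 def bubble_sort(xs):
     """Deliberately O(n^2) -- a stand-in for whatever implementation you want to measure."""
     xs = list(xs)
     for i in range(len(xs)):
         for j in range(len(xs) - 1 - i):
             if xs[j] > xs[j + 1]:
                 xs[j], xs[j + 1] = xs[j + 1], xs[j]
     return xs

 prev = None
 for n in (250, 500, 1000, 2000):              # doubling "loads"
     data = list(range(n, 0, -1))              # reverse-sorted: worst case for bubble sort
     start = time.perf_counter()
     bubble_sort(data)
     elapsed = time.perf_counter() - start
     note = f"{elapsed / prev:.1f}x previous" if prev else "baseline"
     print(f"n={n:5d}  {elapsed:.4f}s  ({note})")
     prev = elapsed
 # Ratios settling near 4x as n doubles hint at quadratic growth; near 2x, linear.

Whether that kind of measurement is "good enough", or whether the guarantees of a proper BigOh analysis are needed, is precisely the disagreement above.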
''As for programmers being '''better''' without college, I doubt it. Assuming they knew how to program in the first place, they'd probably be adequate for producing ad-hack simple applications, but they would have neither the breadth of technical knowledge nor the independent learning skills required to be effective in a variety of roles, especially as the industry evolves, nor would they have the academic skills (critical thinking & evaluation, rigour, etc.) needed to make sound rational decisions. The latter, of course, would be fine assuming they'll never be moved into a decision-making role...''

* I don't think my development style would differ much if I had never gone to college. While college did a decent job of describing how computers work "under the hood", it provided at best a minor contribution as far as software organization principles go. It supplied me with ideas and possibilities, but did nothing to test them or offer effective ways to test them. For example, my "Software Engineering" textbook went on and on about the virtues of trees, both category trees and processing trees (StepwiseRefinement). Only through trial and error did I discover that sets were usually a better choice than trees. I kept having to fudge my trees to fit exceptions to the rule. One day I said, "screw this, I'm going to use sets this time", and was pleasantly surprised. (The author was an IBM-IMS fan, I think.) Software engineering had, and still has, DisciplineEnvy. Unless one focuses on a few narrow metrics, nobody has tamed the enormity of the measurement problem; and without decent measuring, there cannot be "hard" science. -top

''Regarding what your college covered: yes, if it didn't cover complex business programming in all its manifestations, it should have. Some colleges and universities do. More should.''

See also AcademicRelevance

----

MarchZeroNine