The PrincipleOfLeastPower is the notion that a subsystem/language/etc. which is designed for a certain task (as opposed to being a general-purpose system/language) should have sufficient capability to do that task, and no more.

'''Advantages of the PrincipleOfLeastPower include:'''

* Security. If users have the ability to configure a system with a general-purpose programming language, they have much greater power to do mischief. SeparationOfDataAndCode is a long-standing security BestPractice. (Related: PrincipleOfLeastAuthority.)
* Ability to reason about and manipulate the settings. If a subsystem/language is not TuringComplete, then a program can reason about its effects without actually running it as a program. If a subsystem/language is TuringComplete, doing so is (in general) equivalent to the GeneralHaltingProblem, which means that its behavior cannot in general be predicted without actually running it through to completion (if in fact it completes). (See RicesTheorem; a sketch of this contrast appears below.)

'''Disadvantages:'''

* Loss of flexibility/extensibility. You can only specify things that the designers of the system intended; with a more general-purpose approach one can add new capabilities.
** But be aware that PrincipleOfLeastPower can be leveraged to improve flexibility/extensibility in a tiered or stratified language where one gains 'power' as one rises through the strata. Such stratification allows one to take advantage of LiberatingConstraint''''''s in lower layers without removing any power from the overall system. There is a cost in terms of 'immediate' convenience, but flexibility is maintained and composability (especially black-box composability, where you don't have the source) is enhanced. A simple example of this approach is EffectTyping. A more complex example is my five-layer system described in QuestForThePerfectProgrammingLanguage. -- DavidBarbour
** ''I generally agree. The goal may be to be a part of a larger system (stack of tools) and be the best part possible for its role, rather than be the entire system. But this does add the burden of being integration-friendly.''
** No, no, it '''reduces''' the burden of being integration-friendly. '''To integrate one system with another is to compose two systems.''' Languages layered or stratified based on PrincipleOfLeastPower are more readily composable for two reasons: (1) you can make useful guarantees about the properties of composition of lower layers without knowing the exact details of the code being composed, and (2) you have many options regarding the stratum at which your system will compose with the language, and you may thus choose the most appropriate one based on what level of 'power' your system introduces. Anyhow, '''integration''' becomes easier. '''Implementation''', on the other hand, is a bit more difficult; a language with many strata will generally have a few syntactic and semantic primitives for each stratum. -- DavidBarbour
* Actually applying PrincipleOfLeastPower requires determining a ''minimum'' set of capabilities or features required for a subsystem to perform its task. As such, it ''literally'' becomes an optimization problem in the mathematical sense (that of seeking minima and maxima in general, as opposed to a focus on computation time/space). Just like hand-coded time/space optimizations, applying PrincipleOfLeastPower tends to take additional effort and affects flexibility and extensibility.

As it is an optimization problem, one might be well served by the RulesOfOptimization: OptimizeLater - it is better to use too much power and get a working, elegant solution to start with, and doing so helps you learn exactly how much power is required. Then aim at the 'bottlenecks': apply PrincipleOfLeastPower only where the greatest reductions can usefully be achieved. As a language design principle, of course, one is limited by issues of compatibility with the code people write while you're deciding how little power you can get away with, so perhaps some BigDesignUpFront is called for, or at least limiting distribution.
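To make the "ability to reason" advantage above concrete, here is a minimal Python sketch. The schema and settings shown are hypothetical, not from any particular system. A data-only ConfigurationFile can be exhaustively validated before use, and the check always terminates; an executable configuration can, in general, only be understood by running it:

 import json

 # A data-only ConfigurationFile: just structure, no behavior.
 CONFIG_TEXT = '{"port": 8080, "log_level": "info", "workers": 4}'

 # A hypothetical schema: field name -> (expected type, validator).
 SCHEMA = {
     "port":      (int, lambda v: 1 <= v <= 65535),
     "log_level": (str, lambda v: v in ("debug", "info", "warn", "error")),
     "workers":   (int, lambda v: v >= 1),
 }

 def validate(text):
     """Statically check a config against the schema.
     This check always terminates, because the config is data, not code."""
     config = json.loads(text)
     for key, value in config.items():
         if key not in SCHEMA:
             raise ValueError("unknown setting: %s" % key)
         expected_type, ok = SCHEMA[key]
         if not isinstance(value, expected_type) or not ok(value):
             raise ValueError("bad setting: %s = %r" % (key, value))
     return config

 print(validate(CONFIG_TEXT))  # {'port': 8080, 'log_level': 'info', 'workers': 4}

 # Contrast: if the config were a script we exec()'d, no such total check
 # is possible -- predicting its behavior is the GeneralHaltingProblem.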
'''Examples of the PrincipleOfLeastPower include:'''

* XmlDataTypeDeclaration''''''s and XmlSchema''''''s. These MetaLanguage''''''s (one is a subset of XML proper; the other an XML application) both can be used to describe the structure of a valid XML document; they contain only the features necessary to do that. See XmlIsaPoorCopyOfEssExpressions for discussion of this.
* Use of ConfigurationFiles (which contain raw data, in some form) to specify configuration information for a subsystem. ConfigurationFiles are scanned/parsed by a program to set up the system. ConfigurationFiles of this sort contain no executable code.
* SeparationOfDataAndCode in general.
* ProtocolStack''''''s (''e.g.'' the SevenLayerNetworkModel).
* Use of DomainSpecificLanguage''''''s (which may or may not be TuringComplete); see InterpreterPattern, and the sketch below.
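As an illustration of the last point, here is a minimal InterpreterPattern sketch in Python for a hypothetical arithmetic DSL. Because the language has no loops, recursion, or assignment, every program trivially terminates, and a tool can analyze a program (here, listing the variables it reads) without running it -- exactly what RicesTheorem rules out for a TuringComplete language:

 # AST for a tiny, deliberately sub-TuringComplete expression DSL:
 #   expr ::= number | name | ("+" | "*", expr, expr)
 Num = lambda n: ("num", n)
 Var = lambda s: ("var", s)
 Add = lambda a, b: ("+", a, b)
 Mul = lambda a, b: ("*", a, b)

 def evaluate(expr, env):
     """Interpret an expression. Always terminates: each call
     consumes one node of a finite tree."""
     tag = expr[0]
     if tag == "num": return expr[1]
     if tag == "var": return env[expr[1]]
     a, b = evaluate(expr[1], env), evaluate(expr[2], env)
     return a + b if tag == "+" else a * b

 def free_vars(expr):
     """Analysis without execution -- possible precisely because
     the language is not TuringComplete."""
     tag = expr[0]
     if tag == "num": return set()
     if tag == "var": return {expr[1]}
     return free_vars(expr[1]) | free_vars(expr[2])

 e = Add(Var("base"), Mul(Num(3), Var("step")))
 print(free_vars(e))                          # {'base', 'step'}
 print(evaluate(e, {"base": 10, "step": 2}))  # 16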
'''Successful counterexamples''' (to show that extra power is a GoodThing to have - at least sometimes):

* PostScript. Certainly, a TuringComplete programming language is not needed to describe images for rendering on a screen or printer. The design of PostScript has been a smashing success.
** '''Incorrect; PostScript is TuringComplete because it needed to be. LaTeX is not TuringComplete (I think), and it does a very good job, but it cannot do all the things that PostScript does.''' ''Wrong: LaTeX is TuringComplete, since TeX is. Because it needs to be.''
*** PostScript probably doesn't need to be TuringComplete. I would love to see a proof of that statement. I think PostScript is TuringComplete only because it's easy to get there without ever thinking about it, and it's easy to shrug it off and say "hey, it ain't harming anyone."
** I quite understand why TeX needs to be. I meant that the LaTeX extensions by themselves, not including TeX primitives, are not TuringComplete, IIRC. To parse them is to understand them completely. :-) Non-TuringComplete text/page/document description languages are very handy, because one can e.g. fairly trivially strip them down to plain text for further processing.
* Unix rc files (executable configuration scripts). Some Unix admins love these, some hate them. Much of the configuration info in a Unix (or Unix-like) system is found in a set of executable shell scripts (which by necessity run with root privileges) in the /etc directory; many of these files have names like "rc.whatever". Changing many configuration parameters requires modifying these scripts directly. Convenient for the Unix guru; a RoyalPain for those trying to write graphical configuration/system administration tools for Unix.
* Full programming languages used as macro languages for applications. Examples include VisualBasic being used to write macros for MicrosoftOffice; LispLanguage in EmacsEditor; PerlLanguage and PythonLanguage in BbEdit; shell escapes in many Unix utilities.

VB's capabilities have often been misused to create "Word viruses" (and a word processor should ''not'' be a breeding ground for such a thing). So far, I have yet to hear of an Emacs virus. Given that many people use Emacs as a shell rather than just an editor, such a thing could potentially be destructive, assuming one could be written. ''See EmacsLispVirus.''

* JavaLanguage and JavaScript used for client-side applets. Unfortunately, this has become primarily a way to deploy annoying advertisements, but Java is one of the few languages suitable for running untrusted code.

TimBernersLee wrote an article called "PrinciplesOfDesign" (http://www.w3.org/DesignIssues/Principles.html) which describes the PrincipleOfLeastPower.

A similar principle is described in ConceptsTechniquesAndModelsOfComputerProgramming (p323; this is the Mozart/Oz book -- '''highly''' recommended), under the name "Principle of Least Expressiveness": "When programming a component, the right computation model for the component is the least expressive model that results in a natural program."

* Note that it doesn't say to '''always''' use the least expressive model possible; that would result in unnatural (awkward, hard to write, hard to read) programs in at least some cases.
* Note that it says '''model''', not '''language'''. It can be counterproductive to be forced to switch languages often in order to get more or less expressive models, although at times switching languages is most effective, depending on which languages are available.
* Note that they are talking about general models of computation, in particular including at least:
** Declarative sequential model (strict functional programming, deterministic logic programming, dataflow/logic variables, higher-order procedures). Component behavior is independent of the rest of the program.
** Declarative concurrent model (explicit threads, by-need/lazy computation, data-driven concurrency, demand-driven concurrency). Almost as simple as the declarative sequential model.
** Declarative model with exceptions: allows nondeterminism to be visible.
** Message-passing concurrent model (declarative plus ports/communication channels). Allows nondeterminism, and allows nondeterminism to be restricted to particular components.
** Stateful model (declarative model extended with explicit state). Allows components to keep history. Typical of sequential OO programming.
** Shared-state concurrent model (declarative extended with both explicit state and threads). Typical of concurrent OO programming.
** Relational model (declarative extended with search). Search space explored by testing choices until the result is satisfactory. Encompasses Prolog-style logic programming.
** Constraint programming and still other models are discussed elsewhere in the book.
* All of these models are TuringEquivalent, demonstrating that there is much more to the principle than distinguishing between TuringEquivalent and sub-TuringEquivalent models. See also ModelsOfComputation, TuringTrap, TuringTarpit. (A small sketch of the declarative/stateful contrast follows this list.)
** Oh yes, absolutely; that's what the book is '''about'''. They barely touch on sub-TuringEquivalent issues. It doesn't even talk about the Chomsky hierarchy, mentioned at the top of the page, which is appropriate for examining sub-TuringEquivalent mechanisms.
** Quote: "...and briefly explaining what new expressiveness each concept brings. All models are Turing complete, i.e., they are equivalent in computing power to a Turing machine. However, Turing completeness is only a small part of the story. The ease with which programs can be written or reasoned about differs greatly in these models. Increased expressiveness typically goes hand in hand with increased difficulty to reason about programs." (appendix D.3 "Concepts", p846)
** Unlike most largely-handwaving (even if correct) wiki discussions about expressiveness, they are quite specific, as the list quoted above from the book indicates.
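The models above are Oz's, but the basic contrast can be suggested in Python with a hypothetical running-total component (not from the book). The declarative version's behavior depends only on its inputs, so it can be tested and reasoned about in isolation; the stateful version keeps history, so reasoning about it requires knowing every call that came before:

 from functools import reduce

 # Declarative (stateless) component: output depends only on input.
 # Two calls with equal arguments always agree -- easy to reason about.
 def total(amounts):
     return reduce(lambda acc, x: acc + x, amounts, 0)

 # Stateful component, as in the stateful model above: keeps history,
 # so its replies depend on every call made so far.
 class Register:
     def __init__(self):
         self.balance = 0
     def add(self, amount):
         self.balance += amount
         return self.balance

 print(total([5, 10, 20]))   # 35, every time
 r = Register()
 r.add(5)
 r.add(10)
 print(r.add(20))            # 35 -- but only given the two prior calls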
----

Also, PrincipleOfLeastPower implemented via permissions can prevent a bug from going out of control. ''And if implemented with the ObjectCapabilityModel, it can do so without a centralized trust authority, in a composable manner, and without the expensive proofs and challenges of identity required to establish permissions.'' (A minimal capability-style sketch appears at the bottom of the page.)

----

See ProductivityRant, LittleLanguage, IssuesForLanguageDesigners (adds to the above discussion)

See ModelsOfComputation, TuringTrap, TuringTarpit, PrincipleOfLeastAuthority
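----

To make the ObjectCapabilityModel point above concrete, here is a minimal, hypothetical Python sketch (in Python the capability discipline is a convention, not enforced by the language). A component is handed only the capability it needs, so a bug inside it cannot reach anything else:

 class AppendOnlyLog:
     """A narrow capability: grants appending, and nothing else."""
     def __init__(self):
         self._lines = []
     def append(self, line):
         self._lines.append(line)

 def buggy_report_writer(log):
     # This component received only an append capability. Even when it
     # misbehaves, the worst it can do is append junk -- it was never
     # handed file handles, network sockets, or a way to delete data.
     log.append("report: ok")
     log.append("oops " * 3)  # the bug: junk output, nothing worse

 log = AppendOnlyLog()
 buggy_report_writer(log)  # damage is confined to what the capability allows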