GoalBasedProgramming is an as-yet under-developed programming paradigm that is '''consequent/executive'''. (Relative to the other major paradigms: imperative is '''mechanical/executive''', functional is '''mechanical/evaluative''', and constraint/logic is '''consequent/evaluative''' - see ThereAreExactlyThreeParadigms for associated discussion.)

The fundamental operations required for GoalBasedProgramming are:
* (a) the description of goals
* (b) the description of constraints on action
* (c) at least one ''executive'' statement indicating to the compiler or interpreter that it is to fulfill those goals within the given constraints
** ''So, how is (a) different from (c), or can (c) be implicit? By (a), do you mean describing/implementing the details of the goals, or simply stating the goals to be achieved? If you mean the latter, then isn't that the same thing as (c), which basically is the goal of a program?''
** You could say that (a) is stating goals that one ''might'' wish to achieve... which is not the same as saying that you actually wish to achieve them. The distinction is useful if you wish to have FirstClass goals and the ability to manipulate the set of goals being achieved by the program (adding goals, removing them, etc.). I think it reasonable that you can't have GoalBasedProgramming without FirstClass goals any more than you can have FunctionalProgramming without FirstClassFunctions. You could make (c) implicit so long as there were some means of 'quoting' goal-descriptions that aren't to be fulfilled immediately and some means of 'including' sets of goals that were previously quoted, but then one still has (a) and (c) (in the form of quoting and including). (A sketch of what FirstClass goals might look like follows this list.)
** ''Ok, so (a) means part of the domain model -- the model of the actions available to the constraint solver. (b) then is another part of the domain model -- the modeling of restrictions. (c) then is the program itself. So basically GBP really is simply (1) a domain model (probably libraries), and (2) an application of the domain model to a goal (the program). Technically, both aspects could be considered parts of a single domain model (with respect to the program), since (2) is simply a constraint on program execution, semantically no different from (1). Of course one would want to separate the domain model from its application so that the domain model can be reused (ProgrammingInTheLarge).''
** Goals aren't really 'actions available to the constraint solver'. Goal descriptions would include the logic for determining whether a goal has been achieved. Constraining actions by describing a limited set of legal actions (or a language for them) would be part of (b). You say that (a) and (b) can be merged, and they could be... but constraints (statements of 'always' or 'never', permissions and rules) are not really the same as 'goals' and need special consideration whether or not you drop them into the same bucket, so I believe it worthwhile to distinguish them. Even if both (a) and (b) constrain the potential set of valid programs, they do so in semantically distinct manners. Goal descriptions trim possible programs only by eliminating those that fail to achieve certain end-results (where you go); the 'constraints on action' trim programs that violate intermediate properties (how you get there). As far as 'libraries' go, yes, I expect one could use (a) and (b) to aid in the construction of domain-related libraries.
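A rough sketch of what FirstClass goals might look like (rendered in Python purely for illustration; none of these names come from an existing GBP system): goals are ordinary values that can be described, collected, added, and removed without being pursued, and only an explicit executive call asks the system to fulfill whatever currently sits in the active set.

 # Illustrative only: goals as first-class values, plus a stand-in executive.
 from dataclasses import dataclass
 from typing import Callable, Dict, List

 State = Dict[str, int]

 @dataclass
 class Goal:
     name: str
     achieved: Callable[[State], bool]   # logic for deciding whether the goal holds

 # (a) 'Quoted' goal descriptions -- goals one *might* wish to achieve.
 x_is_43 = Goal("x = 43", lambda s: s.get("x") == 43)
 y_is_7 = Goal("y = 7", lambda s: s.get("y") == 7)

 # The active goal set is plain data and can be manipulated freely.
 active_goals: List[Goal] = []
 active_goals.append(x_is_43)   # 'including' a previously quoted goal
 active_goals.append(y_is_7)
 active_goals.remove(y_is_7)    # goals can be dropped again

 def pursue(goals: List[Goal], state: State) -> State:
     """(c) The executive step: a stand-in for 'magic'/'satisfy'.
     Here it merely applies a canned plan; a real GBP system would have
     to construct the plan itself, within the constraints of (b)."""
     for goal in goals:
         if goal.name == "x = 43":
             state["x"] = 43      # hard-coded 'plan', for illustration only
     return state

 state = pursue(active_goals, {})
 assert all(g.achieved(state) for g in active_goals)

The whole burden of GBP falls on 'pursue'; everything else above is just the bookkeeping that makes goals first-class.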
A potential approach simply leverages 'assert' (for preconditions and postconditions) and a fictional executive called 'magic'. Another author suggests use of 'satisfy', which may be clearer. The 'magic' version:

 assert(42 = x);
 magic;
 assert(43 = x);

The 'satisfy' version:

 assert(42 = x)
 satisfy(43 = x)

In the 'magic' version, the first assert is a regular assert, since no 'magic' precedes it in its scope; the second is a postcondition for 'magic'. A compiler could fulfill the postcondition by replacing 'magic' with code that sets x to 43. (Syntax note: inside the asserts, '43 = x' is an equality test, not an assignment.) For the 'satisfy' version, the 'satisfy(43 = x)' statement itself might be replaced by an assignment of 43 to x. ... Or, of course, it might be replaced with 'Kill all humans, then set x = someHideouslyInefficientCalculationThatReturns43()'. The need to constrain actions - (b) above - becomes obvious when thinking about ''all possible'' ways the compiler could decide to get x set to 43. (A toy sketch of such a constrained search follows the list below.)

In addition to the listed primitives, there are some requirements if one wishes GoalBasedProgramming to be even moderately ''efficient'' with regard to expression, evaluation, and execution:
* (a) Declarative, high-level logical postconditions - essentially queries of the sort you'd find with ConstraintLogicProgramming. The 'declarative' aspect means that the interpreter does not need to worry about how the mechanical (and procedural) evaluation of the assertion parameter will affect the postcondition, and known declarative evaluation facilities are simply more efficient than procedural reasoning.
* (b) Declarative, high-level constraint languages. A constraint can, for example, be a condition (state) that must occur, a condition that must not occur, an action (event) that must occur, an action that must not occur, a temporal condition, a sequence of conditions, a parallel condition, various contingencies with special 'rules' about how to handle them, etc.
* (c) The ability to offer both procedural and declarative plans or strategies for accomplishing the postcondition. This gives the compiler a fast option - a pre-fabricated plan to accomplish the goal - but still makes it clear to the compiler that it ''is'' free to find its own way (if it desires). It also makes it easy for the compiler to ''drop'' a strategy should it be deemed a failure. Multiple strategies can be offered, and it is up to the compiler to analyze them, fill in the details, and figure out which ones might work and which need to be scrapped (in addition to adding any strategies or plans of its own, should the compiler find itself with some extra time).
* (d) The ability to declare heuristics regarding which 'plans' are better or worse than others. These help the planner decide which plans to favor. Default, largely generic heuristics should also be provided.
* (e) A support-agent for the compiler that has (at least) an AssociativeMemory of previous attempts to build plans, the search-paths taken, failures, and successes, and that can suggest a strategy likely to be 'good enough' just by glancing at the problem (through 'experience'). A goal of this agent would be to memoize situations imprecisely enough that successive edit-compile-test cycles (even with slight changes) tend to produce better plans each time, spending less time re-computing exactly what was computed on the previous cycle, starting from a 'learned' good-enough strategy, and spending time trying to improve it only if the task warrants the priority and there is time to spare.
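Returning to the '43 = x' example, here is a toy sketch of a 'satisfy' executive (Python used purely as a stand-in; the action names, the 'forbidden' rule, and the search bound are all invented for illustration). It enumerates action sequences of increasing length, discards any sequence containing a forbidden action - the (b) constraints - and accepts the first one whose final state meets the postcondition - the (a) goal.

 # Illustrative only: a brute-force, constraint-trimmed 'satisfy' executive.
 from itertools import product
 from typing import Callable, Dict, List, Tuple

 State = Dict[str, int]
 Action = Tuple[str, Callable[[State], State]]

 # The action vocabulary available to the planner.
 def set_x(value: int) -> Action:
     return (f"x := {value}", lambda s, v=value: {**s, "x": v})

 def increment_x() -> Action:
     return ("x := x + 1", lambda s: {**s, "x": s.get("x", 0) + 1})

 ACTIONS: List[Action] = [set_x(0), set_x(42), increment_x(),
                          ("kill_all_humans", lambda s: {**s, "humans": 0})]

 def forbidden(name: str) -> bool:
     # (b) constraints on action: some ways of reaching a goal are never legal.
     return name == "kill_all_humans"

 def satisfy(postcondition: Callable[[State], bool],
             start: State, max_len: int = 3) -> List[str]:
     """Return the names of an action sequence whose result meets the goal."""
     for length in range(1, max_len + 1):               # iterative deepening
         for plan in product(ACTIONS, repeat=length):
             if any(forbidden(name) for name, _ in plan):
                 continue                               # trimmed by (b): how you get there
             state = dict(start)
             for _, step in plan:
                 state = step(state)
             if postcondition(state):                   # trimmed by (a): where you end up
                 return [name for name, _ in plan]
     raise RuntimeError("no plan found within the search bound")

 # satisfy(43 = x): finds, e.g., ['x := 42', 'x := x + 1']
 print(satisfy(lambda s: s.get("x") == 43, {}))

Nothing here scales, of course - it is the one-million-monkeys search discussed further down the page - but it shows the division of labor: the goal trims end-results, the constraints trim the ways of getting there, and everything in requirements (c) through (e) exists to keep the search from being this dumb.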
Such a support-agent isn't strictly part of the language definition, but it would be an essential part of the interpreter's environment.

Essentially, GoalBasedProgramming starts with ConstraintLogicProgramming, but focuses on building plans or strategies to accomplish a goal. Unlike ConstraintLogicProgramming, the strategy usually can't involve trying every possibility - trying even one thing might change the situation, which would affect state and require re-planning.

GoalBasedProgramming is also discussed in ThereAreExactlyThreeParadigms and CritiqueOfIntentionalProgramming.

----

''Ever checked out ConstraintImperativeProgramming? Alma, Turtle, and Kaleidoscope are languages of that kind. You can get ideas from them. I don't like the way your code works, though -- the semantics of assert() become overloaded. Look at DesignByContract, which should be integrated with GoalBasedProgramming. Also, instead of using a magic() procedure and asserts, why not create a new statement: the satisfy() statement, which invokes a constraint solver on a boolean constraint passed to it? For example:''

 satisfy (x = 4)   // this will assign 4 to x

I hadn't heard the phrase 'constraint imperative programming' before, but I'll certainly run some searches and check it out. I thank you for pointing me towards them. I invite you to create the ConstraintImperativeProgramming page, along with pages for AlmaLanguage, TurtleLanguage, and KaleidoscopeLanguage, if you already know much about these.

In any case, DesignByContract and GoalBasedProgramming are not conceptually identical and should not be merged or integrated. DesignByContract is an approach to ''verifying'' potential solutions by means of asserting preconditions and postconditions. GoalBasedProgramming aims to ''automate the creation'' of solutions by asserting the same things, freeing the programmer from concerns regarding 'strategy'. As far as the issue with overloading 'assert' goes - I certainly agree that there are better ways of going about it; something like 'satisfy(4=x)' or various other postconditions would certainly do the job. Ultimately the same set of concerns applies, though - restricting strategy with rules and heuristics, controlling side-effects, figuring out how to specify what isn't allowed to be changed (the inverse FrameProblem), invariants that must be maintained across the operation, etc.

''Thanks for the feedback. As far as DesignByContract is concerned, is there some negative aspect I am overlooking in integrating it with GoalBasedProgramming? I can only see positive benefits from integrating the two, and it would be a nearly seamless semantic fit! Anyway, I'll see if I can dig up a couple of references to ConstraintImperativeProgramming, and some examples of code in the languages.''

They aren't semantically identical, only related. DesignByContract approaches program design in terms of the '''verification''' of human-written strategies based on stated preconditions and postconditions. GoalBasedProgramming is the '''automation''' of strategy based on human-written goals. The difference is critical. Linking them is okay, but combining them is not. Besides, there's the whole 'this phrase really is trademarked' issue for "Design by Contract".

''Ok, I understood the differences the first time I read this page. What do you mean by "linking" them? How is combining them not okay? How else are you going to accomplish GBP if not by modeling the domain that you solve constraints over when you invoke magic()?
For imperative operations, HoareLogic (or something similar to it) is perfect for modeling imperative domain semantics and providing the constraint satisfier with the knowledge base it needs to utilize them. You shouldn't feel limited by the trademarking on "Design by Contract". DbC may be a trademarked term, but that doesn't trademark its ideas: it is grounded in HoareLogic. They don't own the right to design things based on HoareLogic. I envision the execution of a GBP language to be similar to Prolog.''

In general, WikiWordsAreConcepts; combining two different concepts into one page will only force people to be more ambiguous and less specific when they name the concept. And linking them refers to including the name 'GoalBasedProgramming' in the 'DesignByContract' page and vice versa.

''Ok, I get it: we had a misunderstanding. Probably my fault. When I said "combine" them, I didn't mean on the wiki, I meant in an implementation of a language -- in practice. You don't even have to use the term DbC: just provide automatic compile-time checking of the Hoare clauses for logical consistency, and you have implemented the essence of DbC.''

Ah, yes. That sort of combination is quite reasonable, especially since all the logic necessary for DbC-style verification would already (necessarily) exist as part of the GBP support. The domain will, of course, possess some explicit or implicit modeling, potentially as part of a database for an inference engine that models a great many other domains. But the fact that we model domains is hardly the critical aspect of DesignByContract - it's common in explicit form to all approaches to ConstraintAndLogicProgramming, and in implicit form to everything else. As such, the question isn't pivotal or relevant to any argument for combining GoalBasedProgramming with DesignByContract.

''I am not sure if this explains it, but when using DbC clauses, one has the epiphany that the pre- and post-conditions actually are a part of the target domain model! For example:''

 proc Uncheck(CheckBox c)
   require c.IsChecked;
 begin
   c.IsChecked := False;
 end;

''can be rewritten as:''

 proc Uncheck([checked]CheckBox c)
   c.IsChecked := False;
 end

''As we can see, the pre- and post-conditions are critical to the correctness and rigor/thoroughness of the domain model implementation.''

HoareLogic is not unique to DesignByContract. Even if it is good, or even 'perfect', it isn't a reason to combine DbC with GBP. However, it is also not 'perfect' for GBP. HoareLogic handles some aspects of the semantics for a constraint solver, but does not include (for GBP) the necessary support for rules and strategies about which programs are acceptable (e.g. there is no clean way to say: you aren't allowed to kill your opponent before making the next move), and neither is there support for heuristics for determining which programs are 'better' than others (e.g. no clean way to say: efficiency is good!; no way to deal with fuzzy risks and anticipated consequences; etc.). I.e. it is, at best, a partial solution.
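As a concrete (and purely illustrative) rendering of the Uncheck example above, here is the same contract written in Python with the precondition and postcondition checked at run time, DbC-style; the procedure name, the CheckBox stand-in, and the assert-based checking are assumptions of this sketch, not anything from the languages discussed.

 # Illustrative only: the Uncheck contract checked at run time.
 class CheckBox:
     def __init__(self, checked: bool = True):
         self.is_checked = checked

 def uncheck(c: CheckBox) -> None:
     assert c.is_checked, "precondition: c must be checked"       # require c.IsChecked
     c.is_checked = False
     assert not c.is_checked, "postcondition: c is now unchecked"

 box = CheckBox(checked=True)
 uncheck(box)     # precondition holds, so this succeeds
 # uncheck(box)   # calling it again would violate the precondition and raise

The contrast with GBP is visible even here: a human wrote the body and the contract merely verifies it, whereas a GBP executive would be handed only the pre- and post-conditions (as domain knowledge) and asked to produce the body.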
''Well, it doesn't seem like a hard thing to create a language that combines both HoareLogic and PredicateLogic. Then you '''can''' say things like "player cannot kill opponent before making next move" -- you can state them as tautologies in predicate logic. You can also include time constraints into the goal to satisfy, if you want efficiency! For example:''

 always[minimize now()]   // global constraint that says "always seek to minimize the current time for all points of program execution"

Such things don't fit into the HoareLogic. With an appropriate set of extensions, I am certain you could eventually reach a logic sufficient for describing program goals. Issues like "player cannot kill opponent..." are more difficult to handle due to the inverse FrameProblem introduced whenever one tries to constrain actions.

''How don't they fit? You can always add pre- and post-conditions that basically imply something like "this action has taken time-order N-squared". Another way to do it is to say time(action A) > time(action B). But basically, optimization might simply involve exposing the efficiency semantics of actions, strategies, or whatnot to the constraint solver.''

HoareLogic is based on preconditions and postconditions. It doesn't provide a means for describing heuristics and optimizations (such as 'minimize') that require ''comparative'' application to distinct solutions. Nor does it provide support for spatial or global constraints (such as 'always'). Thus, such things don't fit into the HoareLogic... not even if temporal assertions are supported. In any case, there is nothing 'simple' about exposing heuristics and strategies to the constraint solver; I've explored the concept quite a bit in my own time, and I don't believe you're aware of just how deep that rabbit hole goes.

At the moment I consider GBP well beyond the state of the art, requiring StrongAi as described in ThereAreExactlyThreeParadigms and CritiqueOfIntentionalProgramming. The question 'how are you going to accomplish it?' seems moot. But since it was a NamelessConcept found in those other two locations, it deserves a dedicated WikiWord so that its capabilities and limitations (were it to exist) can be described, along with the reasons it doesn't and won't exist prior to the emergence of StrongAi (which would largely absorb and obviate GoalBasedProgramming).

You state you envision GBP as similar to Prolog, and there would be some similarities... but also quite a few differences. Prolog only needs to locate values, and thus it can backtrack at will (locating a value has no inherent side-effects excepting time, space, and energy for computation). Backtracking isn't usually possible when dealing with ''real actions'', and BrownianMotion gets you nowhere fast, so instead one must create plans, execute a partial plan, then (usually) determine whether things seem to be going as planned and whether to continue or re-plan. (A rough sketch of such a loop appears below.) In GBP, the planning to accomplish goals, the handling of contingencies, and re-planning on the fly in response to observations (i.e. non-static GBP) would all have a significant impact on the execution process. These are things that simply have no counterpart in the Prolog environment.

''I brought up Prolog because it allows both imperative-style and logic-style coding, and essentially provides the semantics of preconditions (as in DbC). It serves as a proof of concept that the benefits of GBP and DbC can be reaped together, and I mentioned it all on the assumption that you thought DbC and GBP could not possibly be combined.''

Ah, well, it seems you've resolved that misunderstanding. I was talking about how the WikiWords oughtn't be combined.
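To make the contrast with Prolog concrete, here is a rough sketch of that plan/execute/observe/re-plan loop (Python again, with invented action names, costs, and a random 'blocked' observation standing in for an uncooperative world; none of it comes from an actual GBP implementation). Note that the 'minimize'-style heuristic is applied ''comparatively'', across whole candidate plans, rather than being stated as a pre- or post-condition.

 # Illustrative only: choose the cheapest candidate plan, execute it step by
 # step, and re-plan whenever an observation contradicts the plan.
 import random
 from typing import Callable, Dict, List, Tuple

 State = Dict[str, int]
 Step = Tuple[str, int]                      # (action name, estimated cost)

 def candidate_plans(state: State) -> List[List[Step]]:
     # A real planner would generate these from goals and constraints;
     # here they are canned alternatives.
     return [[("walk", 10), ("walk", 10)],
             [("drive", 3), ("park", 2)]]

 def plan_cost(plan: List[Step]) -> int:
     # The 'always[minimize ...]' heuristic, applied by comparing whole plans.
     return sum(cost for _, cost in plan)

 def execute(step: Step, state: State) -> State:
     _name, cost = step
     new_state = dict(state)
     new_state["time"] = new_state.get("time", 0) + cost
     if random.random() < 0.2:                # the world may not cooperate
         new_state["blocked"] = 1
     return new_state

 def run(goal: Callable[[State], bool], state: State) -> State:
     while not goal(state):
         plan = min(candidate_plans(state), key=plan_cost)   # prefer cheaper plans
         for step in plan:
             state = execute(step, state)
             if state.get("blocked"):         # observation contradicts the plan:
                 state.pop("blocked")         # abandon the remainder and re-plan
                 break
         else:
             state["arrived"] = 1             # plan completed as expected
     return state

 print(run(lambda s: s.get("arrived") == 1, {}))

Unlike Prolog's backtracking, a step that has already been executed cannot be undone; when an observation contradicts the plan, the only options are to continue anyway or to abandon the remainder and plan again from the new state.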
----

Note: DeclarativeMetaprogramming comes very close to GoalBasedProgramming. They may even be identical for forms of DeclarativeMetaprogramming that allow for flexible runtime construction of procedural code formed on the basis of runtime observations and code-time policy. However, it would be difficult to justify 'static' DeclarativeMetaprogramming as readily qualifying.

----

''Hmm, if you can make x equal to 43, and it will be equal to it... why would you be asserting it?''

A necessary part of GoalBasedProgramming is relaying the 'goal' by use of the language. In this case, 'assert' is performing double duty, specifying a postcondition for 'magic'. If there is some confusion, though, the '''x = 43''' expression is an equality test, not a statement of assignment. Another author suggests use of 'satisfy', which may be clearer.

''I.e. I see kind of a chicken-and-egg problem. If we can make the program do anything we want, then the program would have to know everything about everything... every algorithm in the world. Or would it have to offer some form of limited, known magic only? That still begs the question: are there some more real-world examples that would make more sense than the assert example?''

Technically, the compiler of the language can always utilize the one-million-monkeys-on-one-million-typewriters approach to solving the problem - brute-force '''search''' over all possible algorithms that can be described by strings of length N or less, followed by looking at all strings of length N+1 or less, and so on - then look at each proposed solution and use heuristics and proofs to determine whether it fulfills the specified goal. So 'knowledge' isn't a fundamentally '''necessary''' thing, at least not so long as you don't give a damn about compile time.

In a more realistic scenario, the language implementation will utilize a combination of known strategies, heuristics, programmer 'suggestions', memory of previous compiles (so it doesn't waste time searching for the same solution twice), and (of course) some limited amount of search. In those cases where it finds what seems to be a set of suitable solutions, these solutions can be further tweaked (e.g. in response to proof attempts, or via genetic-programming techniques) in order to seek a more 'optimal' solution. In those cases where it fails to find what seems to be a suitable solution, the compiler will politely request that the programmer provide some additional suggestions or strategies, or even give real-time advice on the search - at which point it could, using an associative memory, 'remember' this search and use it for future compiles and for the creation of strategies when it encounters similar cases in the future.

A better example than the 'assert' above can be found in the CritiqueOfIntentionalProgramming (which intends to make it clear that there is no 'real' magic going on here).

----

See Also: DeclarativeProgramming, DeclarativeMetaprogramming, FunctionalProgramming, ImperativeProgramming, PrologLanguage

JuneZeroEight