(Based on discussion in WhatIsIntent) People will not agree on base definitions.

''It seems as though you confuse communication errors with model errors. LaynesLaw is a problem because people in disagreement about definitions simply aren't speaking the same language and are fighting about the language with which to discuss the meat rather than about the meat. One solution is to change languages, but that puts a huge learning burden on all people who wish to observe the conversation. It's fine to point out that communication problems exist - especially in English - but I'm still unclear as to how this is relevant to any meaningful argument. Words used to discuss concepts may be relative or arbitrary, but that doesn't make the concepts being discussed relative or arbitrary.''

I disagree. Any language based on abstractions and approximations will have the same problem. A UsefulLie is not necessarily a perfect lie or tool.

''Any language based on abstractions and approximations will lose information or allow for model errors when applied to a universe about which we possess incomplete information. But since the "same problem" being discussed in this topic is 'communication errors', not 'model errors', you are incorrect in your assertion that any language will have this problem. Your mention of UsefulLie''''''s clearly indicates that you are still confusing the two. It is possible to achieve perfect communication of imperfect models.''

You don't know that. Modeling differences can produce communication errors.

''Modeling differences only exist because of communication errors. It is true that some languages, such as English, cannot perfectly communicate models and will thus result in modeling differences that may result in further communication errors. But other languages, including maths and programming languages, can essentially achieve perfect communication of the model. This doesn't mean the model being communicated will be correct, but it does mean there will be no modeling differences; any error can be fully blamed upon the model itself rather than upon communication thereof.''
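To illustrate that last claim, here is a minimal sketch in Python (the function, constant, and numbers are invented for this example, not taken from the discussion). The model it encodes is deliberately naive, yet every reader of the code receives exactly the same model; whatever is wrong with its predictions is a modeling error, not a communication error.

 # Hypothetical sketch: a deliberately naive model, communicated exactly.
 # Naive 'useful lie': assumes a constant fall speed, ignoring acceleration
 # and air resistance. Every reader of this code receives the same model.
 def predicted_fall_time(height_m):
     fall_speed_m_per_s = 5.0   # assumed constant; part of the (imperfect) model
     return height_m / fall_speed_m_per_s
 print(predicted_fall_time(20.0))   # 4.0 - precisely communicated, possibly wrong

Whether a constant 5 m/s is a reasonable assumption is a dispute about the model; what the function computes is not in dispute.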
''People communicating in a vague language such as English often reduce vagueness via active listening: asking appropriate questions and then looking for the answers, responding with examples or analogies ('so it's like ...') or at least predicting some and then confirming them, or even solving problems and providing answers that have a very low probability of being correct unless the model was communicated successfully. Proper communication in English requires that the listener meet the speaker half way, for it is by doing so that arbitrary levels of precision are obtained and communication error is reduced in an otherwise vague or ambiguous language.''

''The vagaries of English do not entitle persons to call just any interpretation of another person's words 'correct'. Instead, such interpretations, when held in an internally consistent manner, must further be confirmed against the support mechanisms: predictions, examples, analogies, problems and solutions, etc. Even then, there may be some modeling differences, but the error due to modeling differences can be reduced to a degree that it isn't significant compared to the degree of error inherent to the model itself.''

My observation is that the biggest problems are usually related to the application of the model to the real world rather than to flawed (internally inconsistent) models. Software magnifies this because most of the contentious issues are not related to connections to the real world, but rather to internal organization. "Wrong output" is usually far easier to settle than internal-organization issues.

''Perhaps application of a model to the real world is a problem. But it is not a LaynesLaw problem or a communication problem. Indeed, even if there is contention as to exactly how one ''should'' go about applying a model (policy), there are languages that do allow one to perfectly communicate exactly how one ''shall'' go about applying the model.''

''RE: "flawed models (internally inconsistent)" - I'm under the impression you didn't catch my meaning with regards to modeling errors. Any reasonable model will only be flawed in the external sense, such that it makes predictions that are either imprecise or inaccurate. Models that are flawed in the 'internal' sense are flawed ''independently'' of the universe in which they are applied: either they can't make any useful or falsifiable predictions, or they can make predictions but predict contradictory things such that no matter what you observe the model is simultaneously wrong and right. When I discussed 'modeling errors' above, I was talking only about the external errors - i.e. the "wrong output" stuff. I won't deny that the other errors may exist; I simply wasn't giving them any thought.''

''As far as application being "the biggest problems", I have my doubts. I imagine it depends on the model, but most models are designed for application to the real world and thus avoid making application particularly difficult. Coming up with a 'correct' model (one that has a high degree of accuracy and precision) or an 'efficient' model (one that requires fewer resources for computation) or a 'simple' model (one that is easier to communicate, teach, or implement) may often be a greater problem. If your observation is that application of the model is "the biggest problem", it might be because you haven't been doing the work on developing or implementing the model.''

-------

People will not agree on base definitions. The best you could achieve is to have both parties agree on the root abstractions, and then build precise derivations based on that. But there are at least two problems with this:

* Not likely to happen for anything non-trivial.
** ''It has happened. Programming languages and mathematics are both examples of non-trivial sets of root abstractions that allow for precise derivations.''
** That's true only within the artificial universe it creates. Linking to the outside world (inputs and outputs) is usually where the problems are.
** ''Eh? I said nothing about what happens "within" the artificial universes (anything 'within' an artificial universe isn't really 'communication'). Programming languages and mathematics allow perfect communication '''of''' artificial universes '''between''' things in the real universe. And "linking to the outside world" ''is not'' a 'communications error' problem. It is quite possible to perfectly communicate erroneous modeling data.''
** Please clarify "between things". And let me restate that. It is ''applying'' such models to the external world where the problems usually start. AllAbstractionsLie to some degree.
* Other readers may not agree with the agreements made between two parties.
** ''Abstractions are not propositions, and so 'agreement' is only relevant insofar as achieving successful communication is necessary. Disagreeing with abstractions is an illogical, irrational, and ultimately irrelevant behavior that is of no logical consequence. Disagreeing with propositions is another matter.''
** Please clarify. If somebody uses a flat plate as a representation of the Earth in a small simulation, one has every right to complain about whether it is a "proper" abstraction. Or are you a flat earther?
** ''Abstractions include concepts such as 'round', 'flat', 'plate', 'baseball', and 'unicorn' held by themselves or combined into further abstractions ('flat baseball'). Logical propositions include such things as 'The Earth is round' or 'The Earth is flat' or 'A plate adequately represents The Earth for purpose of this simulation'. 'The Earth' is a name, which is neither an abstraction nor a proposition. Only with logical propositions can one usefully disagree. In English, the available abstractions are often vague, but if the two parties "agree on the root abstractions, and then build precise derivations based on that", then all they have are precise abstractions with which they may later express logical propositions, commands, queries, names, even more abstractions, etc. Because the two parties have not yet agreed upon any logical propositions, there is no statement with which it is sensible to 'disagree'.''
** It is the same thing, just worded as a proposition.
** ''Relevantly, 'it' (by which I assume you refer to the proposition "A plate adequately represents The Earth for purpose of this simulation.") has been '''properly categorized''' as a 'proposition'. When you said "complain about it as a proper '''abstraction'''", you were using the word 'abstraction' incorrectly. And this is significant in light of what you said earlier: if two people are just getting started and have agreed upon a set of root abstractions and ways to derive more, but have not (yet) made any agreements on propositions (such as common assumptions or axioms), then they have essentially agreed upon definitions but have not yet agreed upon what is a 'correct' model. And this is not an unusual position to be standing at; it happens all the time when working with new languages (including technical jargons and working definitions). It is much better than remaining in a position of arguing over definitions.''
** Also, one can change one's agreement upon finding problems. For example, if a person initially agrees that abstraction X is sufficient to represent real-world object Y in a model, but it later turns out that X is not a sufficient abstraction of Y, then the person may dismiss the model. Hopefully they give valid reasons and a better abstraction can be found. It is not uncommon to find flaws with base assumptions or modeling abstractions after some experimentation. For example, often a 2D model of a sphere (a circle) is sufficient for heat dissipation simulations. However, it is not sufficient for all situations, especially if there is a lot of rotation-related convection. But the 2D model is cheaper.
** ''I don't disagree with what you say here. It's only important to realize that there is a major difference between agreeing on abstractions and agreeing that a particular abstraction can be applied in a particular circumstance. E.g. there is a significant difference between the ''abstraction'' 'vehicles greater than three meters tall' and the ''proposition'' that 'vehicle with VIN blahblahblah is greater than three meters tall'. When you said "agree on the root abstractions and derivations", at least in the literal sense, this implies only agreeing on a set of 'root abstractions' and 'derived abstractions'. It doesn't mean coming to any agreements of the sort you describe above - no axioms, common assumptions, measured values, or other sorts of propositions.''
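To make the abstraction/proposition distinction concrete, here is a minimal Python sketch (the class, function, and sample values are invented for illustration; the VIN placeholder follows the one used above). The predicate encodes the ''abstraction'' 'vehicle greater than three meters tall'; applying it to a particular, named vehicle yields a ''proposition'' that can actually be true or false.

 # Hypothetical sketch: abstractions versus propositions.
 from dataclasses import dataclass
 @dataclass
 class Vehicle:                # abstraction: the concept 'vehicle' (as used here)
     vin: str
     height_m: float
 def is_tall_vehicle(v):       # derived abstraction: 'vehicle greater than three meters tall'
     return v.height_m > 3.0
 # Agreeing on the definitions above is agreeing only on abstractions; there is
 # nothing yet to dispute. A proposition appears once an abstraction is applied
 # to a named thing, and only then is there something to agree or disagree with:
 some_vehicle = Vehicle(vin="blahblahblah", height_m=2.4)   # made-up measurement
 print(is_tall_vehicle(some_vehicle))   # False - the proposition, not the abstraction, turns out right or wrong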