(See also: DynamicTyping, DavidThomasOnTheBenefitsOfDynamicTyping, PhlipOnBenefitsOfDynamicTyping) Also see BizarroStaticTypingDebate, BenefitsOfDynamicTypingDiscussion, WhenIsManifestTypingConsideredaGoodThing, IsDynamicTypingSufficientlyEfficient, RuntimeTypeMutability.
----
List of potential or claimed benefits:
* Simpler languages
* Smaller source code size because of fewer and/or shorter declarations
* Quicker to read because there is less type clutter to distract and stretch one's view
* Easier to implement limping if needed by the domain (see LimpVersusDie).
* No compile step, meaning quicker turnaround
* Can pass variables/objects between routines/modules without having to know or declare their type
* Source code can better resemble pseudo-code rather than bureaucratic mumbo-jumbo
* No need to learn the distinction between a static versus dynamic way to do similar actions in a given library. One way only.
* Easier to test or debug units because creating or faking interfaces is easier (less code)
* DynamicLanguageLint can get you many of the benefits of static typing without the costs
* MetaProgramming is easier and simpler because you don't have a Nanny State typing system complaining
* Better reuse because less likely to be tied to complex, heavily-defined structures/types and easier to mock up replacements. (DynamicTypingAndReuse)

Please include discussion and disagreements below, not in the list itself.
----
This is very easy to reason about. The set of dynamically typed programs is a straightforward superset of the set of statically typed programs. Suppose we have two languages, LS and LD. They have the same syntax and semantics, except that LS is statically typed and LD is dynamically typed.

[[The set of dynamically typed programs is also straightforwardly a subset of the set of statically typed programs, by the embedding described farther down this page.
The straightforward embedding of a statically typed language into a dynamically typed language requires reimplementing the type checking of the static language. How else could you capture type-directed behavior like the "type classes" in HaskellLanguage?]]

[[Below it is stated that the implementation of LD performs a static analysis, and can emit the diagnostic "I cannot prove that this program is free of type errors". This implies that LD is using SoftTyping, ''not'' DynamicTyping.]]

What is the difference between them? It is this: all LS programs are valid LD programs, but the converse is not true. Therefore LD makes more kinds of programs expressible.

''This is absurd. Assuming both languages are TuringComplete, both must have an equal number of expressible programs. What you're saying here is that LD can express that same number of programs in more ways. In essence, you've proven that dynamic typing makes things more complex by adding more ways to do the same thing. I actually like dynamic languages but I think this is not the way to demonstrate they are in some way better than static languages.'' -- BrianRobinson

'''Expressible is not the same as computable. To express something is to write some sequence of symbols to make a well-formed program. Turing completeness tells us that all sufficiently powerful languages can compute the same functions, but not necessarily by expressing the same program in all of them. What I'm saying here is that the same program is actually expressed. The actual program written in LS is also a program of LD, without modification. The same expression.'''

* This assertion ignores TypefulProgramming, where the exact meaning of the expression varies based upon static types - especially the types of operations described ''after'' the operation in question. It is important to consider not just whether the program in LS is '''a''' program of LD, but also whether it is '''the same''' program in LD as it was in LS.
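A rough sketch of the LS/LD idea, using Python purely as an illustration (an assumption of mine; the thread names no concrete languages): the same annotated source is acceptable to an external static checker such as mypy, and also runs unchanged under the ordinary dynamic interpreter, which ignores the annotations.

```python
# Sketch of the LS/LD claim: the annotated function below is simultaneously
# an "LS" program (a static checker such as mypy can analyze it ahead of time)
# and an "LD" program (the ordinary interpreter ignores the annotations and
# relies on run-time checks instead).

def double(x: int) -> int:
    return x * 2

# Fine in both modes: the annotations hold.
print(double(21))        # prints 42

# Accepted only in "LD" mode: a static checker would reject this call, but
# the dynamic run time happily computes a repeated string instead.
print(double("ab"))      # prints abab
```

The same text is "the same expression" in both modes; only the decision to reject or run the unprovable call differs.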
Both LS and LD can actually be the same language implementation, LSD. The only difference between the two languages is a boolean configuration flag. Prior to runtime, LSD processes the source code and performs deep inference to find type errors. When it finds type errors, it rejects the program. When it fails to find type errors, it has to decide whether the program is in one of two categories: known to be free of type errors, or not known to be free of type errors. And here is where the configuration flag comes in: if no type errors were inferred, but the program is not known to be free of type errors, and the flag says "behave like LS", then the program must be rejected. But if the flag says "behave like LD" then the program can be run anyway. However, there must still be safety; therefore, when behaving like LD, the implementation must associate run-time type information with each object, so that errors can be caught. When behaving like LS, the type information can be discarded, since the program is proven to be free of type errors.

* Note that the notion that the source code is statically preprocessed also violates many of the supposed benefits of DynamicTyping, such as easier support for scripting. Is DynamicTyping being defended here? Or is LD something else entirely? (SoftTyping has been suggested.)
** Why does scripting need to be treated specially and differently? Scripting is still programming, and should be done without errors, shouldn't it? There are examples of "scripting" languages out there which are strongly typed, such as Oberon, which is interpreted and garbage collected. I don't understand why people think scripting somehow needs to be dynamic more so than other programming. Is it to save time? To save keystrokes? Why just for scripting and not for regular programs too? What's the big deal with scripting needing to be dynamic? Explain why.
** ''The reasons are listed near the top. The tradeoffs need to be weighed.
Supplying lots of extra info so that the compiler can potentially warn about or prevent some problems is nice, but the extra verbiage and details have downsides, as already described. I cannot explain all of it; it's just experience from using both styles. If you like types, fine, go work for a company that uses lots of type-oriented programming and be happy there, but you won't get very far being an annoying evangelist here unless you present a blockbuster objective proof.''
** Uhm, actually the type system is not just for the compiler, nor is it for optimizations. It is for humans. When I see a function that only accepts an enumeration (clRed, clBlue) as the incoming parameter, I know that the function has a contract that only accepts red and blue as the parameter. Without this contract, I could accidentally send in clYellow in the slot. The type system is a human contract, not just for the compiler. You make the mistake of thinking that strongly typed languages are for compiler optimization or the compiler's brain. Type systems are primarily useful for humans. The fact that static/strong typed languages are generally fast is a bonus. Premature optimization was not the goal.
** ''I meant the compiler warning humans, not itself. I agree that typing can often serve as a kind of documentation that would otherwise probably need to be in comments if using a type-lite approach. But again there are many factors involved. One does lose some nice features by going type-lite, but that doesn't necessarily mean there are more downsides than upsides. The hardest part about technology decision making is weighing many tradeoffs.''

''It should be noted that for a sufficiently flexible type system, there will be programs for which TypeInference is undecidable. This has been proven in the literature.''

Note that LS has absolutely no safety advantage with respect to LD, because the same static analysis is done.
The only difference is that LD accepts some programs after emitting the diagnostic "I cannot prove that this program is free of type errors", whereas LS adds "and therefore I'm rejecting it because I have no way to ensure its safety once it is executing".

''Except that most languages LD '''don't''' perform the "deep type inference" (or even shallow type inference, for that matter) which you speak of - most of 'em will accept code like the following (in PseudoCode):''

 define a := "This is a string";
 define b := a * 74.329;

''In other words, languages with dynamic typing (many of 'em, at least) depend on the runtime to catch ''all'' type errors.''

''It should also be noted that TypeInference is being confused with TypeChecking. Ensuring that a programmer's annotations are consistent is a far easier problem than automatically generating type annotations to a program at compile time. The latter is still an area of active research.''

The only reason for such rejection is so that the LS implementor does not have to support a run-time typing mechanism, and implement the complex optimization strategies for dynamic programs. Valid LS programs can be spun into blind machine code with no run time type checks, and no extra bits of representation in any object.

''In some cases, it goes further. In some application areas, such as MissionCritical software, proving the absence of type errors is the key concern; not the performance improvements associated with StaticTyping.''

All religious debates between static and dynamic typing advocates, at least the clueful ones, hinge on whether or not this is a reasonable tradeoff. One side insists that the flexibility is not needed, and that it is inefficient; it is often observed, however, that programmers introduce their own error-prone, clumsy and inefficient dynamic typing hacks in static programs.
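The PseudoCode above behaves the same way when transcribed into a real dynamic language; a sketch in Python, where the type error surfaces only when the offending line actually executes:

```python
# The PseudoCode transcribed into Python: no analysis happens ahead of time,
# so the mismatch is detected only when the multiplication is executed.

a = "This is a string"

try:
    b = a * 74.329           # multiplying a string by a float is meaningless
except TypeError as exc:
    print("caught at run time:", exc)
```

Note that `"x" * 3` is legal Python (string repetition), so the runtime complains only about the float; which operations are "type errors" is itself a per-language decision.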
Also, modern dynamic languages have good optimizing compilers, and provide ways to add declarations to code "hotspots" to assist the compiler.

''Likewise, most statically-typed OO languages include some level of dynamic-typing feature, even if it is just typesafe downcasts (which either demonstrate the conversion is correct, or fail gracefully).''

There is a deeper issue in that when you have LD which accepts programs that cannot be proven to be error-free, you will tend to write such programs all the time; in fact, write mostly such programs! Therefore the diagnostic "I cannot prove that this program is free of type errors" will be noted, but ignored by the users of the language. Advocates of static typing want to modify other people's behavior so that they do not write such programs, and do not ignore that diagnostic. This is a sign of some psychological illness: identifying some mathematical subset, such as programs which cannot be proven to be type-error-free, as "supreme evil", and wanting to control other people's behavior with respect to this evil.

''Illness? I don't think so. Perhaps there are some out there who think that DynamicTyping is evil and should be banned... but I'm not in that camp. Both have their uses.''

One reason that the diagnostic is easily ignored is that it carries no information which the programmer didn't know already. Suggesting specific spots where declarations might be added would be far more useful.
----
''There'' '''must''' be some advantages of dynamic type checking. What in the world are they?

'''If you don't have it, you will badly reinvent it'''

Or worse, someone will '''really''' badly reinvent it, and you will have to work with it. See GreenspunsTenthRuleOfProgramming, ExampleOfGreenspunsTenthAtWork.

'''The programming language is simplified'''

Static typing adds to the complexity of the programming language.
When you define a new language, do you want to focus your effort on the type system or on giving the programmer the functionality he needs? (Sometimes a static type system makes a language harder to extend, as with Java and generic classes.)

'''It can help with refactoring:'''
* As someone who has only become enlightened recently (and still immersed in a statically-typed world), I cannot be entirely specific. I have noticed that I waste a '''lot''' of time when refactoring fiddling with the types of things. I'm a greenhorn at this, and would be overjoyed to see this comment replaced with some more solid answers. (One example can be found in the first half of KentBeck's book TestDrivenDevelopment. A sizable portion of the refactoring he goes through in the money example is due to Java's typing. -- BilKleb)

This is more about the lack of static typing, not the addition of dynamic typing. A lack of static typing can help you to refactor incorrectly! Suppose your refactored design requires something as simple as an extra method in one interface. Without static typing, you would modify each instance of client code so that it called the new method. You would then need to modify each object that might be used by that client. Which objects are they? Oh dear... With static typing, add the new method to the interface. None of the implementors will compile until you've provided them with an implementation of the new method. It's that easy!

''But UnitTest''''''s, if used, will also catch the (generally small number of) problems that static compilation will catch, in addition to catching logic and other errors. My experience with a DynamicTyping language (Smalltalk) is that a very small percentage of run-time errors are due to mismatched types... maybe 1 or 2%.''

But the programmer has to write unit tests manually, and those tests can contain errors. How do you test the tests to make sure they are even right?
For each and every program you have to write tests, rather than just writing a compiler once and only once that does the checking for you. Your suggestion of writing unit tests is an improvement to programming how? It is not an improvement, it is a step backward. A similar argument could be used to advocate using C's pointer to char, instead of using automatic strings. Why would you want to do something manually? It makes no sense. Why would you want to manually allocate strings and write code for that to verify your strings are correctly being used? Why wouldn't you want to progress and use something more safe and sound? Why do you want to step back in time and make things MORE difficult by having static typing reinvented using potentially faulty unit tests? Why wouldn't you prefer to have a tool do the work for you?

''You are making the false assumption that heavy typing can replace most or all of what unit tests do.''

Actually no, nothing stops you from doing unit tests in addition to letting static automation also catch bugs. If you can automate something such as checking the types, then it should be automated, not done manually using unit tests. Unit tests should be saved for stuff that can't be automated by a compiler or interpreter. Furthermore, one should actually try to write correct programs using one's brain, because unit tests will never ever cover the entire codebase for all possibilities. The possibilities are endless in programs. Consider testing a compiler to make sure it adds and subtracts all values properly: you would have to test ALL the numbers in the entire world (limited by how many bits your CPU has), which is practically impossible. Tests are always only a sample, and are never complete.

''There are a lot of other factors involved, such as code size. Bloated code causes mistakes also. We'd probably have to see the difference it makes by looking at it case by case to settle this. Further, use TheRightToolForTheJob.
Static typing is better for SystemsSoftware but not for custom applications in my experience. This is because the customer not knowing what they want is often the bottleneck in custom software, requiring one to be a nimble coder; but this is not the case for SystemsSoftware, where BigDesignUpFront is the usual practice.''

'''It can help you develop with fewer distractions:'''
* The problem with types is that they force you to say things that you don't know are true, just to get the compiler to bless your program. "Here is the interface between these two objects" is a good example of the kind of thing you are forced to say before you know it's true. The interface between two objects invariably changes. I like not having to specify it when it is sure to change. Instead, I can say "This object needs an object like that in order to work". I know this is true (for now) because the code demands it. When the code changes and the interface changes, I change the code (which I know needs to change), but I don't have to change anything else. -- KentBeck, on ExtremeProgrammingWithTypes

''But strong static typing can help you run your program with fewer error-message distractions, since some of those errors would have been caught at compile time. Although, I can see that many won't be caught at compile time, and one disadvantage of the compiler moaning is that it causes code to fill up with casts or hacks to escape the type system. Consider, though, that languages with strong/static typing such as freepascal/delphi have many dynamic emergency features such as Variants and Array of Const (pass as many different types into a single procedure or method as parameters). I consider languages that give you an escape system, just in case you need it, compromises... maybe a better compromise than always having just dynamic types. Another way to emulate dynamic typing is through a struct or a record with an enumeration field indicating the kind of type currently set.
It requires more work than a truly dynamically typed language or a built-in Variant type (which is a dynamic type in fpc/delphi), but the record trick works when I need it.''

* I find that the longer compile times cancel out the time saved by catching stuff earlier in the process. Also, "suspicious code checkers", similar to C's "Lint" utility, can be employed with dynamic languages to catch many silly typos. Thus, it is not necessarily an either/or argument.

But on the other hand, you have to manually remember which things ''do'' need to be changed. A static type system is a way of automatically being reminded of such things. Why do things manually?

'''You can have a polymorphic interface without "defining an interface":'''
* This is what happens when you realize, "oh, so this would work with any class that supports the 'print' method." If two classes have method(s) with compatible signatures and semantics, then you can use one in place of the other - without having to define a formal interface class and change both classes to "implement" it. ''This can be a good thing if you can't or don't want to change one of the classes, or if the classes don't have a convenient common ancestor.''

''You mean StructuralSubtyping? That's orthogonal to dynamic typing: it may be most famous as a feature of things like PythonLanguage and RubyLanguage, but it is also available in purely statically typed languages like ObjectiveCaml.''

But templates/generics do the same thing, without sacrificing the compile time check to ensure that all usages ''do'' have the correctly named method available to them.

''Depends on the implementation of generics. C++ templates let you do whatever you want; errors are only detected when a template is instantiated and the compiler notices that the type being used doesn't support a specific operation requested of it.
EiffelLanguage, on the other hand, provides bounded polymorphism: you specify a base type that all type parameters must subtype, and it prevents you from using any operations not defined on that base type.''

Sacrificing? This check can still be made in a dynamically-typed language; it's only that the compiler may or may not have sufficient information to produce a definitive answer.

'''Compilations go much faster'''
* This becomes more true as projects get larger. With a statically typed language every class that gets compiled must find things out about every class it derives from, every class it uses for member variables and every class used for locals and parameters. With dynamic typing, all the compiler cares about is super classes. When dependency chains start getting long, compile times increase dramatically.
** But take note that the statically, strongly typed freepascal and delphi compilers are faster than nearly any compiler in the world, and produce fast programs according to shootouts and benchmarks. Compile time can depend on module reliance, as each module can be compiled separately.
** Indeed; Haskell takes ''forever'' to compile software, compared even to GCC; but, of that time, type inferencing and verification is but a fraction of a second. Most of the work the compiler is doing is code transformations ''after'' it's already proven type safety. -- SamuelFalvo

But that just defers the work to runtime, which degrades performance. What's worse is that while the compiler has to do this work once, the runtime has to keep doing it over and over again. Sure, the runtime can crib notes to make it go faster on successive calls, but it's never as fast as a compiler can make it (is it?). What would be really great is if we had fast dynamic-typing-style compiles during development and then we could switch over and do a statically typed build just before deployment.

''You can already do this in dynamic languages which support optional type declarations.
CommonLisp and DylanLanguage, for example, do this.''

''(Well, I kind of like the way the Dolphin tutorial workspace invites you to compute ''200!''...)''

''Re: "compiler has to do this work once". Not during debugging and testing. One may have to recompile many times.''

The compiler generally keeps the compiled pieces of the program in memory though. So compilations do go fast. Usually, you just compile the unit or file with the changed source code, not the entire program.

'''It helps you better apply the OnceAndOnlyOnce principle:'''
* It could be argued that TypesAreRedundant

But without static typing, each time you implement an interface, you are providing a redundant "definition" of it.

'''How is that different from static typing, where you declare conformance to an interface, and then code a redundant definition of it that supplies the method bodies?'''

Each time you call through the interface from client code, you are adding more redundant (but probably partial) definitions of the interface.

''Calls are not definitions; and anyway they are manifested in static and dynamic code.''

So where is the single definition of what the interface actually is? Nowhere! A single interface definition, to which all implementors and clients refer, is a ''perfect'' example of the OnceAndOnlyOnce principle. (Note: this is something that is missing from C++ templates, because they are unbounded, but in practice it isn't a problem because everything gets checked at compile time anyway.)

'''It can allow a simplification on the users' side at the expense of a little wizardry on the implementation's side:'''
* Typical Smalltalk editing environments can more easily integrate spiffy features. For example, near-instantaneous compilation to bytecodes, because the compiler doesn't need to make sure that each method call is valid. When you change an interface, nothing else in the system needs to be recompiled.
But on the other hand, a single definition of a given interface will tell the compiler which things need to be recompiled. If you edit the interface, then you should have changed all clients/implementors accordingly - so the compiler checks for you. This works best when interfaces are minimal (one or two methods each). A given object will likely implement several such interfaces to build up a useful feature set. The Java standard library has many elegant examples of this.

* The Perl analog of yacc, "Parse::Rec''''''Descent", accepts a grammar definition in the form of a string and generates classes for each of your rules as it parses.
** That's eminently typeable in a properly dependently typed language, such as Cayenne or Epigram. It could also be done in Haskell, Tim Sheard's Omega or DependentMl if the string were replaced by a suitable PhantomType. Once you start to do really complicated things in this area, dependent types could potentially save effort because they can rule out behaviours that are difficult to eliminate by unit testing. -- RobertFurber

''There are more useful programs than there are useful statically typed programs.''
* Almost all type systems are decidable and hence must reject programs that are valid but cannot be proven valid under the type system. This is more or less unfortunate depending on what the type system allows. Take reading from a file for an example of where type systems get in the way. There are two normal events that can occur when reading from a file: you get some data (say, a character) back or you get to the end of the file. In a dynamically typed language you can either return a character or a special EOF symbol to indicate these two events. Most statically typed languages don't allow this, instead requiring side-effects on parameters to store the data and return a status code from the function (e.g. CeeLanguage and JavaLanguage).

When I first read "file" I thought you were going to have a much better example.
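The dynamic convention described in the bullet above can be sketched in Python; `read_char` and the `EOF` sentinel are hypothetical names chosen for illustration, not part of any library:

```python
# Sketch of the dynamically typed convention: the same function returns either
# a one-character string or a distinct EOF marker, and the caller dispatches
# on which kind of value it received.
import io

EOF = object()   # unique sentinel playing the role of the "special EOF symbol"

def read_char(stream):
    c = stream.read(1)
    return c if c else EOF

stream = io.StringIO("hi")
chars = []
while True:
    c = read_char(stream)
    if c is EOF:
        break
    chars.append(c)
print("".join(chars))   # prints hi
```

The one return slot carries two unrelated kinds of value, which is exactly what a sum type like Haskell's Maybe Char expresses statically.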
Here's the statically typed HaskellLanguage answer: Maybe Char. And the characterization of "most statically typed languages" is obviously false; even C returns a "special EOF symbol" rather than the chicanery you describe. Furthermore, CeeLanguage and JavaLanguage are not the best examples of static typing (they may well be among the worst). Standard Haskell's type system is decidable and almost completely reconstructible (inferable); with extensions you can have more power but (potential) undecidability, and finally you can always use/simulate dynamic typing when your type system fails. So while the point is true, I don't see it as much of a point. -- Darius
----
A fascinating article on the topic, "Why Dynamic Typing": http://www.chimu.com/publications/short/whyDynamicTyping.html
----
For the opposite question, see: WhenIsManifestTypingConsideredaGoodThing
----
Since I first read this stuff (a week or so ago) I've had some refactoring work to do, and as I was doing it, I continually found reasons to thank my development environment for being very strongly type safe at compile time. I would estimate that it saved me hours and enabled me to make some bold changes, secure in the knowledge that the compiler would identify all the other necessary changes that resulted from them. The original list of points makes me think of someone saying "I don't have to bother going to the toilet. I can just sit here and crap in my pants. It saves a whole lot of effort!" -- DanielEarwicker

''OK, so you do not want to crap in your pants. Does that mean that you need some mechanism to prevent you from doing so, or do you rely on your own skill and judgement?'' Note that 99% of the animal kingdom gets by rather nicely with these rules. Indeed, during 99% of human history, this was the case. Perhaps YAGNI still applies. -- StephanHouben

99% of animals crap on themselves? It's truer to say that most take a lot of trouble to do it somewhere out of the way from where they have to live.
YouAintGonnaNeedToilets? I hope we can leave that as a personal decision for each WikiWiki citizen to make for themselves! It's up to you!
----
''I would estimate that it saved me hours and enabled me to make some bold changes, secure in the knowledge that the compiler would identify all the other necessary changes that resulted from them.''

Do you have UnitTest''''''s? If you took the static typing away, would they catch these errors? If not, what do you do when you make non-type-related changes?

''I do have UnitTest''''''s. The only really sure way to know that they would catch all the errors that the static type system catches is to perform a thorough static analysis of them... which is exactly what a static type system does for me! Do you know if your UnitTest''''''s are a complete validation of type-related mismatches in your code? If you have no static type system, the answer is either "no" or "yes, because my program is very short and simple." Are you trying to sell me the idea of manually hand coding UnitTest''''''s instead of using static typing to take care of that part for me? Thanks, but no thanks... A static type system is just a way of automating the parts of UnitTest coding that it is theoretically possible to automate. The remainder has to be hand coded as UnitTest''''''s, but why add to that burden? The two techniques fit together - see UnificationOfStaticTypesAndUnitTests.''

You've divided up the UnitTest universe into Type Tests and Behaviour Tests. Under what conditions would the Behaviour Tests alone not be sufficient? If the behaviour is correct, how could the types be incorrect? I've never seen anyone write Type Tests in a dynamically typed language, and I've never seen a case where they would have been of any use.

''Obviously if you have a complete set of behaviour tests, then you don't need type tests. But of course, if the behaviour is perfect, you don't need behaviour tests either.
These things are redundant mechanisms for checking correctness in whatever you're doing. Any harm in adding another one? Type safety really honestly doesn't waste any time or effort; it saves it in bucketloads. And it checks the type-correctness of your UnitTest code as well, so it speeds up the writing of correct UnitTest''''''s.''

If the behaviour is perfect, you still need tests so that when you change the behaviour, you can make sure it's still perfect. Redundancy has diminishing returns. What percentage of errors do you suppose are type errors? What percentage of errors do UnitTest''''''s typically catch? You just said, "Static typing has no cost, and it helps, so why not use it?" That's the question that all those reasons at the top of the page are trying to answer. Static typing ''does'' have a cost.

''A salient point. Solutions have costs; when dealing with costs, as KentBeck stated in XPE, the best thing to have is ''options''. Familiarity with the BenefitsOfDynamicTyping is one way to have options in this case. Contrasting Java and Smalltalk - the Chimu article referenced above appears to do that in an interesting way, though to be honest I have only skimmed it yet - could be a good way too. Another way that might be interesting - what if the polarized participants on this page switched roles and tried to argue the opposite point of view? (I'm being idealistic now...)''

I'm willing to try that. How about a BizarroStaticTypingDebate?

I can't see how a statically typed language can force you to use its type system. Watch a rookie C++ programmer at work! They have no problem getting round it. It's always optional. A programmer always has the option to sling everything into an associative array, and I do that where dynamic extensibility is the main requirement. But there's a pattern in my experience: whenever a rookie programmer in the place I work "side steps" the type system, the result is a disgusting design!
One minute you have a beautiful polymorphic system; the next minute, someone has added a "type field" to the class that has to be checked everywhere to handle special cases...
----
Don't confuse the typing axes ''manifest--latent'' with ''weak--strong''. LispLanguage is '''strongly''' typed, but with '''latent''' types. C++ is '''weakly''' typed, but '''manifest'''. That's why you can cast a pointer to any type you like. Whether or not it'll work depends on the semantics of your program. You can't change the type of a Lisp object - all objects know their types. (Forget ''change-class'' for now - that's a different issue.)
----
My experiences as a recent DynamicTyping convert: I was trying to implement a really complex algorithm in Java. (If you want to know, it's a seriously tweaked MarkovChainer, which alters how it looks up the next word depending on how common the previous words are in the input text.) I couldn't do it. The domain was pretty unknown to me, so I was hoping I could implement the simplest version and then slowly refactor my way into the really subtle stuff. But every little refactoring seemed to involve me touching eight classes, just changing types in method signatures, and the slowness of it - combined with the EssentialComplexity of the problem - made it pretty much impossible. I gave up.

Months later, I decided to try coding the thing in RubyLanguage. (I believe that you can't really learn a language until you do something difficult with it.) And things moved a lot faster. I would focus my attention on just one class, messing around with interfaces without worrying about the interactions between the classes. I just worried about how that one class talked to its own UnitTest''''''s. This is impossible in a statically typed language like Java - change the method signature in one class and you have to go in and change every other class that uses it just to get the thing to compile.
But with Ruby I could try 20 different variations of (say) T''''''upleMaker, not worrying about how I was breaking M''''''arkovChainer, W''''''ordRanker, etc. for the time being. Then when that T''''''upleMaker had settled down, I could run all the UnitTest''''''s, and hunt down the 30 or so errors to change the client classes where needed.

''You said that with Java, when you change one class, you can't get the other classes to compile, and that you have no such problem with Ruby because it doesn't tell you. But in fact, all your other Ruby classes still can NOT run. So what is the difference between Java reminding you of what you haven't corrected, and Ruby letting you go on until the program dies when you run it? When you work with Ruby, you only work with the class you are modifying and its UnitTest. So why not do that with Java? Why compile the other classes if you don't use them? You don't always have to use javac *.java; you can choose to compile just some classes. But you compile the other classes to be SURE that everything involved knows of the change. Isn't that good?''

The Java compiler will flag a lot of things as errors. Not all of these are actually problems with the program. Many of them will simply be hoops you need to jump through to assure the compiler's type system. Secondly, the compiler's blessing in no way guarantees that the program is semantically correct. It only guarantees that you're not breaking the somewhat arbitrary rules of the type system. You still need to test it.

''It isn't impossible; rather, you need to use a good Java refactoring tool (e.g., Eclipse) that allows you to easily change all references in a heartbeat. I started off in the 70s with dynamically typed languages: I worked as an APL developer. Then lots of Lisp, Prolog, Focus, and Smalltalk in the 80s. However, with the introduction of really powerful refactorings, and code templates in Eclipse or IDEA for Java, I now seldom miss dynamically typed languages.
That said, I think Python and Ruby are very fine languages. -- CraigLarman''

I'm a big believer that ContinuousRefactoring is a very good way to find the right solution, and that the need for discovery-through-refactoring increases as the problem gets more complex. But you need a language that lets you refactor continuously, and dynamic typing seems essential to that.

Interesting corollary: Before I started using UnitTest''''''s and other XP practices, I didn't have to worry about the hard stuff nearly as much, since I was spending so much more time on the easy stuff. Perhaps it's a sign of success when much of your programming time is spent on hard stuff - it shows that your skills of reuse are getting better. -- francis

: A good editor can help immensely with this as well, however. I'm no huge fan of static typing, but having an editor which can rename methods, change parameters/parameter types, create stubs, etc., is a tremendous help when dealing with static types... come to think of it, what would it be like if one could have an editor which could ''hide'' all the static typing details? You'd probably still have to deal with types as far as using libraries, but internally, shouldn't it be possible to create interfaces, methods, etc. behind the scenes in response to a new unexpected method call? -- WilliamUnderwood (younger)

It seems to me that static/dynamic typing is primarily a user-interface issue. When you _want_ to specify types, it's annoying to not have that safety net. But when you don't care about types, working with a static type system is stifling. In either situation, a good programming environment will help a lot. -- EdGroth

: I'm almost of the opinion now that it's actually the opposite: you use static typing when you _don't_ care about types; you just let the computer deal with it. When you use dynamic typing, you're taking over responsibility for these issues from the compiler.
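That shift of responsibility can be sketched concretely. In the Python fragment below (function names are illustrative, not taken from any post above), the first version relies on duck typing and defers all type discipline to callers and tests; the second shows the programmer manually doing the job a static compiler would otherwise have done:

```python
# Option 1: duck typing -- any type discipline is left to callers and tests.
def total_length(items):
    return sum(len(item) for item in items)

# Option 2: the programmer takes over the compiler's role with an explicit check.
def total_length_checked(items):
    if not all(isinstance(item, str) for item in items):
        raise TypeError("total_length_checked expects strings only")
    return sum(len(item) for item in items)

print(total_length(["ab", "cde"]))          # 5
print(total_length_checked(["ab", "cde"]))  # 5
```

Neither version is "untyped": the first simply discovers type errors at whatever point `len` happens to fail, while the second reports them at the call boundary - which is roughly the trade being debated here.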
In a concrete sense it's a question of indirection, in that a dynamically typed system will always have at least one layer of indirection more than the equivalent statically typed system. A smart runtime might be able to perform some magic to move the cost of that indirection (JIT compilation, optimizing based on accessibility, etc.), but the cost is always there. -- WilliamUnderwood (older, wiser?)

----

Shouldn't we make a distinction between dynamic typing and type-free languages? Type-free languages do not carry any internal type indicators/flags along with variables. Their values are more or less just strings as far as the language is concerned. I prefer type-free over dynamic typing.

''Doesn't sound like that useful of a distinction to me; it sounds more like implementation details. Whether an object is represented by a bunch of bits and a vtable, or by a string - some operations are supported by some objects and not by others. You can assign meaning to the expression "5" * "4" - most folks will assume that this is "20". Multiplying "cat" with "foo" is another matter. Ignoring bizarre languages which '''do''' define such multiplication, attempting to multiply two strings in this fashion will either result in an error or UndefinedBehavior (preferably the former). Even in the absence of a declarative type system and type annotations, there is a clear difference between those objects which can be multiplied together, and those which cannot. And that distinction is an example of a type.''

{So any distinction could be called a "type". That is just a label.}

The one gotcha you have to be aware of is using the right kind of compares. This is why Perl, for example, has "==" for numbers and "eq" for strings. Personally, I wouldn't use that particular syntax if given a choice, but the concept is there.

''If you have to worry about how to spell the equality operator - that's another type distinction.''

{Typos are typos. What is your point? It is better documentation also.
You don't have to hunt in the declarations to see what will happen. Further, a database or remote service could change the type of an ID from numeric to string, for example, without requiring us to recompile our code. Plus, they make this kind of thing less ambiguous:}

 a = 5
 b = "foo"
 if a > b ....

versus

 if a s:> b

''Where "s:" means compare as string. We can also add other letters for optional case sensitivity control and for ignoring leading and trailing white spaces. It is hard to cram all that into the concept of "types".''

As far as why I prefer type-free over static typing, it is because it simplifies the code, and I believe ThereAreNoTypes.

''The same can be said for dynamic typing as compared to static typing (or more correctly, ManifestTyping).''

Types are too artificial, especially for dynamically changing things. They make you spend too much time trying to force the world into taxonomies which are not really there.

''But there are taxonomies. There will always be taxonomies - and non-artificial ones, at that. They might not be hierarchies - and hierarchical type systems are limiting, I'll agree. But taxonomies do exist, whether you like it or not.''

{Yes, but they tend to be situation-specific and fragile. Global taxonomies are nearly non-existent in real-world things, especially those created by humans as opposed to natural laws. And creating local in-code micro-taxonomies for everything is a waste of code and time. Typing tends to assume some grand, global, stable classification for stuff, and this is not the case. Why hard-wire your code around something so fragile, situational, and temporal? It might be livable, and arguably beneficial, for '''simple base type''' concepts like numbers and strings, but it does not extrapolate well to more complex and dynamic things. Types take too much set-up work.
If they are generally only relative and temporal, then that set-up work is not justified.}

Related to "type-free": PowerOfPlainText

----

In my (limited) experience, DynamicTyping means more typing - on the keyboard - from a software engineering point of view. Imagine a classic static SQRT function:

 double sqrt(double x) {
     return ...; // implementation irrelevant
 }

What's the general design contract of this method?

 // in: a double
 // out: a double
 // semantics: square root

This static version validates the in and the out part. Validating the semantics is up to the unit tests. Dynamic version (pseudocode of course):

 sqrt(x) {
     if (x instanceof Double) {
         return ...; // implementation irrelevant
     }
     throw new TypeMismatch("Parameter x to sqrt must be Double");
 }

Clearly, to implement the same contract with the same strength, a lot more typing is needed. Of course DynamicTyping people don't write the typecheck, meaning they don't respect design contracts. Which leads to obscure errors coming from deep inside your program (doesNotUnderstand anyone?), breaking apart implementation hiding.

----

''This is very easy to reason about. The set of dynamically typed programs is a straightforward superset of the set of statically typed programs.''

But there's also a simple transformation from a dynamically typed program to a statically typed one: Introduce a single type Univ which is a tagged union of all primitive types, extend all operations to operations over Univ, and tag all literals with their type. All but some of the tagging would be done in a library and thus be essentially invisible. In that sense, the set of dynamically typed programs is a strict subset of the set of statically typed programs. Actually, dynamic typing is static typing with only one type, and is therefore no typing at all. Maybe there are also other, true benefits?
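The Univ construction just described can be made concrete. This Python sketch (names invented for illustration) shows what the embedding amounts to: every value carries a tag, and every operation is extended to check tags at runtime - which is essentially what a dynamically typed implementation does under the hood anyway:

```python
# A sketch of the "single universal type" embedding: values carry a tag,
# and each operation is extended to dispatch on the tags at runtime.
class Univ:
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def univ_add(a, b):
    # "Extend all operations to operations over Univ": check tags, then compute.
    if a.tag == b.tag and a.tag in ("int", "str"):
        return Univ(a.tag, a.value + b.value)
    raise TypeError("cannot add %s and %s" % (a.tag, b.tag))

print(univ_add(Univ("int", 2), Univ("int", 3)).value)      # prints 5
print(univ_add(Univ("str", "4"), Univ("str", "2")).value)  # prints 42
```

In a genuinely static host language, the tag checks would live in a library, so most of this machinery would be invisible to the programmer - the point made above about the embedding being "essentially invisible".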
''To be more correct about it: Dynamically typed programs are a trivial case of static typing, where all terms are assigned the same (universal) type, and certain operations are checked at runtime for correctness. It's not hard to implement a dynamic typing framework in a statically-typed language - you can even do it in C. Of course, the typechecks at runtime are what static typing fans object to, both for performance and correctness reasons (primarily the latter).''

''By introducing more complicated type systems, it is possible to reduce the number of runtime typechecks, by proving that a given term always has a specific type (or its subtypes). The ideal is to eliminate all runtime typechecks; indeed, many statically-typed languages disallow dynamic typechecks (and perform TypeErasure, making them impossible anyway).''

''However, there are programs (including real-world ones) and useful language features for which static typing is not possible. Static typing zealots seem to think that such programs/features should be avoided completely. Dynamic typing zealots seem to think that this is cause to throw the baby out with the bathwater and abandon static type systems, under the logic that if they can't be perfect, they're useless.''

''Needless to say, both camps of extremists annoy me. :) -- ScottJohnson''

----

After reading all that, I'm noticing a theme that doesn't seem to be explicitly stated. The difference between StaticTyping and DynamicTyping is not that they do anything substantively different; it's merely a difference in where and when the type checking is done. They're both typing, are they not? There are a few arguing that ThereAreNoTypes; this is not the same thing as DynamicTyping. Calling it DynamicTyping implies types exist, does it not?

''Many seem to use "typeless" and "dynamic" typing kind of interchangeably, or at least consider no typing part of dynamicness because it is often context-based in a conceptual (human viewpoint) sense.
(Related: TypelessVsDynamic)''

Anyway, in a DynamicTyped language, the responsibility for any desired type checking is moved to the programmer, and this implies that it's done at runtime. As shown above, this results in code being put into every class to check type safety; and not only that, since it's all done at runtime, it is now the responsibility of UnitTests to check that type checking is being done where desired. This is a bunch of additional code when type checking is desired. In a StaticTyped language, the compiler does type checking, as well as making optimization decisions based on the type. People argue, however, that this limits you. Does it, really? In Java, you can take and return Object, and use reflection to "bypass" the compile-time type system. And types can be converted from one to another with polymorphism. However, this is a bunch of additional code when DynamicTyping is wanted. In summary, the real difference is that StaticTyping gives the compiler more information, thus allowing it to catch more errors, as well as make better decisions about optimization. The rest all just reeks of different flavors of SyntacticSugar to me.

-------

One place strong typing bloats up code is in cases where a given function/method is just passing a variable on to another routine. If we are not going to do anything with it other than pass it on, then it makes little sense to have to care what its type is.

 // weak typing
 function foo(x) {
   ...
   if (condition) {
     passAroundStuff(x);
   }
   ...
 }

 // heavy typing
 function foo('''myType''' x) {
   ...
   if (condition) {
     passAroundStuff(x);
   }
   ...
 }

''Actually the strong typing (why rename it to "heavy typing"?) makes sure that you don't call the wrong function which only accepts, say, an integer and not strings. The passAroundStuff is now very dangerous because we don't know what it is accepting as a parameter. Now there is no contract. Without a contract you've lost valuable information about the software project.
This is going a step backward, not forward. It is like using '''untyped pointers''', passing around data without any '''contract'''. Why someone would want to use dangerous untyped pointers is beyond me, when they could otherwise avoid it. Avoiding the typing means a potential for bugs and errors. That's the '''whole point''' of type safety, to protect you from making silly mistakes. Humans make mistakes all the time.''

''Actually, I know why you renamed it to heavy typing. Heavy typing is a pejorative and is an attempt at insulting strong typing, because heavy means difficult, hard, annoying to lift, big, ugly, etc., whereas "strong" implies a more positive tone. By picking the word "heavy" you've now gained some ego and pride, defending weak solutions against strong ones.''

No, you don't know. You cannot read minds. No human has ever demonstrated that they can read minds directly without special equipment. Don't pretend like you can; it makes you look foolish or immature, or like you have excess AspergersSyndrome social skill problems. If I really wanted to make it pejorative, I would have used "nanny typing", borrowing from "nanny state", which conservatives often use as a pejorative against alleged excess government (ironically, the same conservatives who themselves want to legislate abortion, marriage flavors, and marijuana laws, and fund a big military).

''I agree politically with you. A conservative wishes to push his bible nanny crap on people (church becomes government) even though, ironically, a true conservative is against government (libertarian anarchism, a really old conservative idea).''

I used "heavy" to indicate a style of relying on types. "Strong" tends to mean that the compiler/interpreter strongly enforces types. One can make "heavy" use of types without expecting a compiler or interpreter to ask for it.

* ''Well, without actually checking the types, what good is the type system then?
You could heavily comment your source code with fake types (there is a page on this wiki somewhere about this) and that would also be fake typing, and you could do that heavily, but that kind of defeats the purpose of typing, which is to do type checking. Many strongly typed languages have ways to escape the strong typing when you really need to, such as using variants (basically a variable that can hold multiple types, like dynamic typing) or using untyped pointers, which allow you to pass anything to a function as a parameter. So the idea is to be safe by default, and escape the safety only when you need to. It is silly, IMO, to be dangerous by default, and have almost no safety at all (stringly typed or blobbly typed is madness, IMO).''

* Maybe dynamic languages are just not for you. If they make you uncomfortable, don't use them. My preference is type-free languages that don't have or don't rely on a "type tag" for variables, but I can see use for occasional types in order to interface with other type-heavy systems a bit smoother. I agree it's possible to use statically-typed languages dynamically, but they usually are not built well for that purpose and are awkward in that mode.

* ''I don't think there are type-free languages, because I think most definitely ThereAreTypes. There are types of apples, and math has types. How would you do a simple addition or subtraction operation without basic math types?'' http://www.purplemath.com/modules/numtypes.htm

* Without a formal and objective definition of "types", it's hard to say; "types of apples" is a human-created concept that exists only in human minds (and may exist in different ways in different heads). Apple DNA doesn't know or care how similar it is to other DNA. It just does what it does in a mechanistic way. Related: LimitsOfHierarchiesInBiology.

* ''Are you saying there are no number types and math is wrong too? Types of apples is a very useful concept. Don't you have different colors of socks?
Or do colors not exist either, and are they just a UsefulLie? Realize that this is an extremely childish viewpoint you have, that everything is just muddy water and we can't differentiate between anything. Please stop DodgingTheIssue: are you seriously saying that math has been wrong all this time, and there are no number types as listed at http://www.purplemath.com/modules/numtypes.htm''

* Other animals see entirely different "colors". Most of the color models (if not all) we use are merely approximations, UsefulLies. Different light-bulbs can make colors seem to change, and in different ways for different persons, because of the way spectrum profiles overlap. Why do you mistake human abstractions for absolute truths? You are doing a Catholic Church and making Earth the center of the universe. The universe doesn't give a shit about human abstractions/simplifications. They are in human heads only, and different heads can use different approximations. Many times we agree on the same abstraction/model, but not always. I have no idea what point you are trying to make by repeating the URL to the numbers webpage.

Yes yes, we know that "strong typing" is supposed to help leverage the machine to "protect" us from grave dangers. I've used both for many years and feel that which is best depends on the type of application. PickTheRightToolForTheJob. Type-heavy code is harder to read, which causes mistakes in itself. I sometimes compare it to a car so chock-full of padding and airbags that it blocks your vision, such that you have accidents simply because you can't see the road as well.

* ''Maybe you should look into the Go Programming Language, which claims to be a language with sound typing but that is still as easy to use as a dynamic language, and is garbage collected. The Go programming language does not appear to be overly verbose with lots of crap in it. You say type-heavy code is hard to read...
Some of the hardest code to read is actually PHP code; there are tons of examples of PHP spaghetti messes out there. Often, giving a contract for your function makes the function easier to read, because you can refer to the contract. If you were reading a house architecture contract, you would want things to be declared clearly, without playing guessing games about whether they mean inches or millimetres. Wouldn't you want a strongly typed contract stating that they measured the wood that holds up the house in meters or feet? What if they just didn't declare the type of measurements at all in the contract and left it up to the person reading? Wouldn't that be silly? That's what I find in a lot of PHP code: what is this variable that magically appeared here? Why wasn't it declared? What are the restrictions and constraints?''

* I am not a fan of PHP either, as far as dynamic languages go. But without seeing what you saw, I cannot comment. It may just be bad programmers writing bad code, which can be done in any language.

* ''What languages are you a fan of that allow you to do what you need to do? I don't care if you hate all languages and want to write your own: that is a valid position too. My ideal language would be something that actually supports tables and a query language, but also supports procedural programming and extended procedural programming (OOP).''

Type-laden code also takes longer to change. If you work in a domain that changes the code often, then requiring more keyboard work to change it around costs time and money. It can be roughly compared to the '''Japanese Zero''' in WW-II battles. For a while, Zero pilots had an edge because their planes were so light and nimble that they were able to take down the heavier, well-armored American planes by outmaneuvering them. Eventually Americans learned to successfully use multi-plane team strategies against them, but still had a disadvantage one-on-one and so usually avoided such situations.
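The keyboard-work point connects back to the pass-through example earlier on this page. In a dynamic language, a forwarding routine declares nothing about the type of what flows through it, so changing that type touches no signatures (the names below are illustrative):

```python
# A pass-through routine: it neither inspects nor declares x's type,
# so callers can change what they send without touching this code.
def forward_if(condition, x, sink):
    if condition:
        sink(x)

collected = []
forward_if(True, 42, collected.append)        # an int passes through
forward_if(True, "hello", collected.append)   # so does a string
forward_if(False, object(), collected.append) # condition false: nothing forwarded
print(collected)  # [42, 'hello']
```

The static-typing counterargument above still applies, of course: nothing here documents or enforces what `sink` can actually accept.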
I would note that in the real world, when contractors subcontract to other contractors, etc., projects can get very expensive because of the overhead of writing, checking, and managing contracts. You can spend more on lawyers than engineers and laborers. It's red tape. A similar pattern can be found in type-heavy source code.

''The Zero example is an interesting choice. While nimble, they achieved that by not protecting the pilot (no armour or self-sealing fuel tanks in most) and taking much longer to build (all in one piece, versus wings and fuselage separately). And even so, within a few years, more powerful allied aircraft could take them on easily. Maybe there's a rough comparison there to improved static type systems? -- ScottMcMurray''

----

I have almost 30 years of working with a language where you could only define arrays and nothing else (no type declaration at all). Along with many other projects in other languages, I completed over 1,000 projects, big and small, in this language, and I never missed defining variables at all. Procedures were implemented in the language such that the whole calling space was available and there was no return code. Some code could be called as a function, but no return was provided (I returned data by putting it in a table). These limitations caused no problems in programming or maintaining the code. At one customer location, a complete manufacturing system was created, including Order Entry, Invoicing, WIP, Raw Material, Scheduling, EDI, AR, etc., for a company with over 300 employees. I totally get the discussion about static versus dynamic variables and function parameter passing, but I miss hearing from real programmers who have vast experience with both types of languages. I currently am completing a new language that has about 50,000 lines of C. The language I am creating has both static and dynamic typing and totally polymorphic parameters for its functions.
You define what a function can expect as far as parameters, but the caller can substitute variables of different types or leave any out if they wish. If you are the only person using the function, then you can assume that you are calling the function correctly, or you can check the parameters for type and correctness with a single line of code. If you only call the function once and you created it, would you need to check that you used the parameters correctly? No functions are global, so checking all calling instances is quite easy. Messages that are not functions are required for concurrently executing "Server Objects" to communicate. These Servers have interfaces that use a MAP notation, which means that interfaces can change drastically without any change to existing messages (and no need to recompile). All modern IDEs do code finishing for you, so that you already have the type and sequence of all functions and object and structure variables as well. I rarely ever make type errors, and I rarely ever need to globally change the type, sequence, or number of function parameters, because most of my functions are only called once. (The functions I use a lot are a much smaller set than what I have in total.) It is possible that if some people code many lines without compiling, type could be an issue, but I rarely code more than a few lines without a compile. In the language I am creating, no program takes longer than milliseconds to compile, regardless of the size of the system, so hundreds of compiles an hour are of no consequence. I check all my code after all changes by actually running it. I might also make a "unit test", but most of the time just running it with a few examples is enough. In my C project, the fact that the syntax and parameter checks are correct doesn't mean my program is correct at all. Every professional programmer knows these things. -- DavidClarkd

''It's unrealistic to think you can reliably validate complex software by running only a "few examples".
Validating a programming language, for example, requires running thousands of lines of code in individual tests. A minor change to the lexer or code generator, for example, can easily (and subtly) break parts that "a few examples" won't catch. It's also far more time-consuming, error-prone, and requires considerably more effort to manually run a few tests than to simply hit the "go" button on a test suite that runs hundreds or thousands of tests, and watch a green bar go to 100% or (if there's an error) turn red.''

What techniques you use to validate your software depend on many things, including the audience, complexity, and kind of software you are creating. My comment about testing software by just running it was in connection with "application code", not my language project. My language has a very robust error system, and I am in the process of creating a large automated "unit test" system that will be used to validate the language.

''My comment applies equally to "application code". Trivial, throwaway applications may not warrant UnitTest''''''s, but any non-trivial application would certainly benefit, if only to automate the testing that you'd otherwise waste time and effort doing manually.''

True enough, but most of the application code I have written was made up of small stand-alone programs. If the application was non-trivial, then appropriate testing was created. I am not a fan of "one size fits all". All problems aren't of the same complexity or risk. Testing code requires different levels of effort depending on the project. Most of the application code I have written was done directly on the running production system, with no full-day interruption for any reason (including the hardware) in over 25 years on microcomputer hardware. My current system allows quick online changes for cosmetic issues, personalized redirection for projects lasting hours, and a test system that offers longer-term isolation of a developing module or group.
All levels of change can be accomplished on the production system while it runs, including schema changes, interface changes, new functions, etc. The old idea of having a separate test site and infrequent, formalized versioning will become a thing of the past. The world changes all the time, and so must our software systems.

----

CategoryLanguageTyping, CategoryTypingDebate