[From WhenIsXpNotAppropriate] "Pure ExtremeProgramming might not be optimal when... '''Strict Upward Compatibility is Required:''' Your ability to refactor will be severely limited by the business requirement that new products be upward compatible with previous releases."

----

''Frameworks, when released to 3rd parties, are often subjected to this restriction.''

I don't do XP, but I've done incremental development of framework code, and found it quite useful. It's hard to know, up front, exactly how all the proposed features will work together until you've tried them with one or two applications in a "friendly" environment. Of course, once a framework is released to a 3rd party, strict upward compatibility is usually a requirement -- and a severe restriction on refactoring.

Wouldn't research projects benefit from the increased flexibility of XP? How can you know, up front, what the requirements are, or what design will work, if you're pushing out into uncharted territory?

Of course, descriptions of real-world experiences showing the limitations of XP on real projects would be far more useful than my idle speculation. ;-> -- JeffGrigg

''I don't buy this - make a rule that old tests have to run forever, plus upward compatibility is always a business trade-off - just ask Apple.''

Yes, the test idea is key: if a company wants to maintain and enhance a product over time, even if it's a library, it needs some flexibility to add features. So real products in the real world publish a "documented interface" of functionality the customer can count on being there for the next few versions. If you use "undocumented" or "deprecated" functionality, then your application may just break with the next release of the library -- shame on you.

ExtremeProgramming doesn't seem to provide a clear way to define this interface to automated external clients. Conventional development would treat this as a design activity, with heavy documentation and draconian change control procedures. Perhaps "the XP way" would be to evolve the customer interface for a while, then "freeze" it at some major release point, and somehow try to control changes after that... by use of FunctionalTest''''''s, for example. -- JeffGrigg

That sounds like a good idea. When an interface is used internally it only needs UnitTest''''''s, but the second you release it to an external party it becomes a feature, which needs FunctionalTest''''''s. Some random thoughts:

*Could this be achieved by just moving tests from the unit test suite to the functional test suite?
*FunctionalTest''''''s don't have to pass 100% all the time. Is this somehow related to how a published interface may change over time? Is this relevant?

-- AndersBengtsson

I don't think the XP community has addressed this, but I would think that once a functional test succeeds, you should make sure it never fails again in the future. If FunctionalTest''''''s fail after a release, then the system is regressing.

Certainly, if you want continued conformance to a published interface, Automated''''RegressionTesting is a good thing: I guess you would call such tests FunctionalTest''''''s. Given their level of detail, they might come from, or look like, UnitTest''''''s. (At some point, we start splitting hairs about definitions...) -- JeffGrigg
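
To make the idea of "freezing" a published interface with FunctionalTest''''''s concrete, here is a minimal sketch in Java with JUnit. The PriceCalculator class, its volume-discount rule, and the test names are all invented for illustration; the point is only that once such a test ships with a release, it must never fail again, so it documents the upward-compatible interface and catches regressions when the internals are refactored.

 // Minimal sketch (invented names): a released library class and a
 // functional test that pins its published behavior so refactoring
 // stays upward compatible across releases.
 import junit.framework.TestCase;

 // The published library class (details invented for illustration).
 class PriceCalculator {
     public double totalFor(int units, double unitPrice) {
         double total = units * unitPrice;
         // Documented rule: orders of 100 units or more get a 10% discount.
         return (units >= 100) ? total * 0.9 : total;
     }
 }

 // Promoted from the unit test suite to the functional/regression suite
 // when the interface was released to 3rd parties; from then on it must
 // never be allowed to fail.
 public class PublishedInterfaceTest extends TestCase {
     public void testVolumeDiscountIsStableAcrossReleases() {
         PriceCalculator calc = new PriceCalculator();
         // Documented interface: 100 units at $2.00 each costs $180.00.
         assertEquals(180.00, calc.totalFor(100, 2.00), 0.001);
     }
 }

Whether such a test lives with the UnitTest''''''s or the FunctionalTest''''''s then becomes mostly a question of who runs it and who is allowed to change it, which is where the hair-splitting above comes in.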