This is a problem I've run into in my last three projects, and I've found some patterns and AntiPattern''s'' that might be helpful. I found these patterns working on online services, but I believe they're also applicable to IT/MIS environments. Building TestEnvironments for Y2K testing is going to be critical! Please let me know if you've run into any of these patterns. I may need to start by rambling; sorry. -- PaulChisholm

Some software can be tested anywhere. (ExtremeProgramming, at least at ChryslerComprehensiveCompensation, presumes all classes can be unit tested with the push of a button. ''In VisualWorks this can be done in any image anywhere. In Gemstone it's a bit trickier.'' -- Ron) Other software depends on outside elements. (Example: Software that uses a database needs a database. Sounds simple? Such software typically needs to know the name of the host to connect to, the account to connect as, and the password to connect with. Another example: Web CGI scripts need a Web server.) Such software needs a Test Environment in which to run.

AntiPattern: '''Test Environment Will Look Exactly Like The Real Environment.''' Oh, no, it won't. Host names (and IP addresses) will probably be different. (Example: the "test to destruction" performance test had better connect to its own database server, not some production server!) The size of the database will be different in test and production. (I know of a system where every instance of the database was thirty gigabytes; a lot of software was ''never'' unit tested because there was no instance of the database available for unit testing! This did ''not'' happen at my current job.)

Pattern: '''Test Environment Will Look Exactly Like The Real Environment Except For X'''. Identify what will change between test and production. (It had better be data!) Parameterize it. Put the parameters in a configuration file (see below, and the first sketch at the end of this page).

'''FakeTheSideEffects'''. One system I was involved with connected to an outside host (whose name came from other data, not a Configuration File) and updated data on that host. Uh oh; how do you test that? A co-worker (Rob Johnson) found a solution: write a module that ''either'' updated a host or "faked it," logging all the information that would have been sent. He tested both paths of this module under controlled circumstances, then used it in "fake" mode with a variety of real-world hosts. (The "real or fake" switch was controlled by a Configuration File; see the second sketch below.) This would have been harder if a single test needed to cause a side effect and then later depend on it. One thought: have a series of translation rules that control which system gets updated?

'''Installation Test''': Setting up or updating a Test Environment is complex and error prone. ''Therefore:'' Script your installation. Test your installation scripts (and plan to do so!). Use IdempotentDesign so installation scripts can safely be re-run (see the third sketch below). Develop ''un''installation scripts, and test them, too!

'''Pristine Environment:''' Un-installation is never perfect. Installing, then un-installing, can leave side effects that further installations can (accidentally) depend on; an installation that worked fine in one environment (test) can then crash and burn in another (production). Oops. Get to an environment where the side effects never happened. Bad news: I recognize the need for Pristine Environments, but I've never found a good way to produce one. (See ProducingPristineEnvironments.)
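----
Some sketches of the patterns above, in Python. Everything in them (file names, keys, paths, helper names) is invented for illustration; none of it comes from the systems described.

First, '''Except For X''' boils down to: everything that differs between test and production is data, read from one configuration file.

 # Hypothetical sketch: all test-vs.-production differences live in one file.
 import configparser

 def load_environment(path):
     """Read host/account parameters from a configuration file."""
     config = configparser.ConfigParser()
     config.read(path)
     env = config["environment"]
     return {
         "db_host": env["db_host"],          # differs between test and production
         "db_account": env["db_account"],
         "db_password": env["db_password"],
     }

 # test.ini and production.ini hold the same keys with different values;
 # the code itself never changes between environments.
 params = load_environment("test.ini")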
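Second, '''FakeTheSideEffects''' as I understood Rob Johnson's solution: one module, two paths, and the switch comes from the Configuration File.

 import logging

 class HostUpdater:
     """Either performs a real update, or logs what would have been sent."""

     def __init__(self, fake, send_function):
         self.fake = fake            # from the Configuration File; True in test
         self.send = send_function   # function that causes the real side effect

     def update(self, host, data):
         if self.fake:
             # Fake mode: record everything that would have gone out.
             logging.info("FAKE update to %s: %r", host, data)
         else:
             self.send(host, data)   # real mode: actually update the host

 # In the test environment the Configuration File sets fake to True:
 logging.basicConfig(level=logging.INFO)
 updater = HostUpdater(fake=True, send_function=lambda host, data: None)
 updater.update("some-real-host", {"balance": 42})

Test both paths under controlled circumstances; then run in "fake" mode against real-world hosts and inspect the log.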
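Third, IdempotentDesign for installation scripts: every step checks whether its work is already done, so the whole script is safe to re-run, and there's a matching un-installation path.

 import os
 import shutil

 APP_DIR = "/opt/myapp"    # hypothetical install location

 def install():
     """Safe to re-run: each step is a no-op if its work is already done."""
     os.makedirs(APP_DIR, exist_ok=True)            # no-op if it exists
     target = os.path.join(APP_DIR, "myapp.conf")
     if not os.path.exists(target):                 # don't clobber a live config
         shutil.copy("myapp.conf", target)

 def uninstall():
     """Remove everything install() created; test this path, too!"""
     shutil.rmtree(APP_DIR, ignore_errors=True)     # no-op if already gone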
'''Configuration File''': (details forthcoming)

(much more to follow)
----
CategoryTesting