''From XpFaq ...''

'''How ya test a UI? (GuiTesting)'''

* I've got a specific user story and I just don't know how to test it.

You write the inner UnitTest''''''s first, and anything you change gets instrumented with new unit tests for full coverage. I really need everyone to realize that you write the "business layer" object first, ''then'' the database and its archives and the UI. Leave that for last. Run it with an icon on the desktop that says "runMyProject". You are on site to join a UI to an algorithm. The algorithm is the hard, original part that you are getting paid for; the UI will use a widget library finished by someone else, so it's not original. Start with the algorithm.

* Fairly simple stuff: the customer would like an opening screen when the application runs, with the ability to turn it off if desired.

Only after your logical module is sound do you tackle adding a UI. You need the ability to set or inspect the most interesting variables found deep inside said logic box, preferably via a log file. Then you trace them on a screen that the users can see.

* The programming language is Java. The opening screen is going to be a GIF with some text written on top of it.

Noted.

* Here are the tests that I want to do:

For each one, draw its card first, and only if the OnsiteCustomer agrees you should. To do TestDrivenDesign you must always write test code before you write function code. Research it. When you "get" it, you will achieve XP.

* a) that the proper text is displayed. Some of the text is copyright and version info. I want to make sure that I get it right.

Test the text before it crosses from the logic unit to the UI. (A sketch of such a test appears at the end of this section.)

* b) that the opening screen displays when it is supposed to.

Add that to the text tests.

* c) that the opening screen disappears after the proper time delay.

Instrument the control that should have gone away after that delay.

* d) that the opening screen doesn't display when it is not supposed to.

By now the UnitTest''''''s will have flushed out any remaining bugs, and you will probably have thought of 3 to 7 times as much testing. CodeUnitTestFirst.

* I can obviously visually determine if the tests pass, but how can I do this in an automated way? Can I rely on some of the Java variables that indicate whether a Window is visible or not? That was going to be my first take on it, but I'm not sure if that's good enough. What happens if the state says it's visible, but it really isn't? That is, do I assume that the Java library has been tested, or am I testing that at the same time?

Assume the Java library has been tested: rely on its visibility state, and visually confirm it once by looking at the screen; most shops don't bother to instrument below that level. The trick is to keep the nascent UI out of your face here, leaving it for the latest of the "phases".

* How have others tested GUI things like this? And this is a simple GUI problem! 8-)

They have, yes - including using test-first on the primordial UI controls themselves.
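For the opening-screen story above, here is a minimal sketch, in JavaUnit (JUnit 3) style, of what "test the text before it crosses to the UI" might look like. The SplashScreenModel class and all of its names are hypothetical, invented here for illustration; the point is only that the copyright text, the on/off switch, and the time delay all live in a plain logic object that the tests can drive without ever constructing a window.

 import junit.framework.TestCase;

 // Hypothetical logic-layer object for the opening screen: it knows
 // what to show and when, and never touches the widget library.
 class SplashScreenModel {
     static final long DISPLAY_MILLIS = 3000;
     private final boolean enabled;
     private long elapsed = 0;
     SplashScreenModel(boolean enabled) { this.enabled = enabled; }
     String text() { return "MyProject 1.0 (c) 2001 MyCompany"; }
     void tick(long millis) { elapsed += millis; }
     // The UI polls this; the tests pin its behavior down first.
     boolean visible() { return enabled && elapsed < DISPLAY_MILLIS; }
 }

 public class SplashScreenModelTest extends TestCase {
     public void testProperTextIsDisplayed() {       // test (a)
         assertEquals("MyProject 1.0 (c) 2001 MyCompany",
                      new SplashScreenModel(true).text());
     }
     public void testDisplaysWhenItShould() {        // test (b)
         assertTrue(new SplashScreenModel(true).visible());
     }
     public void testDisappearsAfterTheDelay() {     // test (c)
         SplashScreenModel splash = new SplashScreenModel(true);
         splash.tick(SplashScreenModel.DISPLAY_MILLIS);
         assertFalse(splash.visible());
     }
     public void testStaysHiddenWhenTurnedOff() {    // test (d)
         assertFalse(new SplashScreenModel(false).visible());
     }
 }

The thin UI layer that paints the GIF and consults visible() is then almost too simple to break, and whether the widget library really shows the window is the library vendor's problem, as argued above.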
----
'''Do you ''literally'' code the tests, or do you just pseudocode them, since, e.g., the specific names of the code to be tested haven't been defined yet?'''

You really code them. Then you try to compile them, and feign surprise when the target things don't exist yet. This implies you are using one button on your IDE to verify you really ''finished'' coding the tests. Repeat, fixing anything the compiler asks for, until all errors are gone. Then repeat against all assertion failures. Then repeat against the entire test suite. At each point you are testing as close as possible to the point of change, thence the next larger region, thence the next larger territory. Recall that, traditionally, most new bugs appear where code was most recently changed; XP exploits this bias to conquer it. At each point you use a tool, either the compiler or your testing code, to determine when to move to the next level, not your brain cells. Brain cells are a much more precious resource; use them wisely. -- PhlIp

''PhlIp, on the XpMailingList, also wrote the snappy description below:''

* draw a UserStory
* cut it up into EngineeringTask''''''s
* write one line of test code
* compile it; your target object has no method yet, so the compile fails
* add the method; leave its innards blank
* compile & run; if you get "all tests passed", you were not ... assertive ... enough in the test code
* add an assertion requesting the expected behavior
* compile & run. Bang!
* add one line to the method
* compile & run. Bang!
* add one line to the method
* compile & run. Pass!
* add any further assertions you can think of for this engineering task to the test code
* repeat that tiny cycle for each assertion
* examine the code & refactor if needed (per the book Refactoring, you test at each intermediate step during a refactor)
* repeat for each EngineeringTask
* try to refactor
* integrate
* repeat for each UserStory

----
'''How do you write acceptance tests? How are they different from unit tests?'''

See AcceptanceTestExamples.

From what I understand, AcceptanceTest''''''s are derived from UserStories, and are thus comparatively high-level. UnitTest''''''s are derived from EngineeringTask''''''s, and are thus comparatively low-level. Both should use AutomatedTesting if at all possible.

----
How would one go about testing while prototyping? In my current (non-XP) project, I cannot know beforehand whether what I create will be accepted the way I imagined it, or whether it will need to be radically changed after it has been presented to the customer. (As soon as they see it, they can decide whether they like it or not; discussions without a working prototype tend to be unproductive in this research project.) Should I write a lot of tests for code that will probably be thrown away anyway? If I do, I am wasting resources and lowering my own motivation. If I don't, the chances are that I will move on and the code will continue to exist without tests.

I guess it would be reasonable to write tests after a piece of code has been "accepted" - that is, once it has been evaluated by the customer and has not changed substantially for some time. However, XP insists on the test-first-code-later approach, on "test everything that can possibly fail", on "a user story is only finished when all the tests run 100%", and so on.

A more general and tricky question would be: is eager testing always good? Are there strong economic reasons to skip testing altogether for certain types of code, or in certain stages of development? I believe there are, and that the decision should be derived from a comparison of effort and profit (rather than by following some "extreme rule of thumb"). Unfortunately, while the cost of testing is clearly measurable, the gains are not.

Oh well, I guess the usual answers apply: "use your own professional judgment" and "there is no silver bullet". Which is a profound way of stating "I don't know". --JPL

----
''Moved from UnitTest''

Two questions:

1) If UnitTest running time gets too long, how does one approach trimming and optimizing the tests so that no coverage is lost? My understanding of unit test writing is from CodeComplete, which discusses writing the tests from both control and data points of view.

2) Do UnitTest cases get refactored along with the production code? -- RonGarcia

1) My experience is that very long running tests, and those that require manual verification (GuruChecksOutput), are run less often than the other tests. The ExtremeProgramming advocates say that their tests run so fast that they can always run all of them at a whim. Others, like MicrosoftCorporation, split the tests into groups, running slow and expensive batches at night or over weekends.

2) Yes, it makes sense to refactor JavaUnit (for example) code from time to time to make it better meet your needs. But you should avoid redesigning the testing library and your production code at the same time: ordinary humans, like us, can really only do one thing at a time. ;-> -- JeffGrigg

UnitTest''''''s need refactoring to improve their documentation value and to keep them prepared for changes that result from changes in production code. Note that there is a distinct set of bad smells, and an additional set of test-specific refactorings, involved. See RefactoringTestCode. -- LeonMoonen

Are there any specific examples of these ''distinct'' CodeSmellsInUnitTestCode?

----
Question: If you code unit tests up front, how can you code them when you do not yet know the interface of the thing you're testing? -- Serge Beaumont

You make it up as you go along, which is what you would be doing anyway. At least, it's what I would be doing anyway. -- AlastairBridgewater

Question: How much overhead is there in moving unit tests around when you refactor between classes? Is it a drag, or does it not happen much? -- TomAyerst

It happens occasionally, but when it does, it only adds about 20% to the effort of refactoring. -- KentBeck

Recall that one must refactor the UnitTest''''''s to prove the functionality moved from one class to another, and only then move the functionality. Such a unit test, like any XP unit test, pulls one in the correct direction; it is not make-work added after a fix effort. -- PhlIp

----
Question: How should unspecified edge conditions be dealt with? For example, a method that parses a line of CSV, given a blank string as input. It's an edge case, so we would usually want to add a test for it right away, but there's not yet any basis for deciding whether this should parse as 0 fields or as 1 blank field. Should I...

* Treat the behavior as undefined, so no test?
* Treat the behavior as undefined, but have a test containing just a comment to that effect?
* Choose one of the reasonable behaviors arbitrarily, and write the test to indicate and enforce the choice? (A sketch of this option follows the list.)
* Make it an exception case, and test for the exception?
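For what it's worth, here is a sketch of the third option, again in JavaUnit (JUnit 3) style. The CsvParser class is hypothetical, invented here for illustration, and the decision that a blank line parses as one blank field is deliberately arbitrary; the test exists to document and enforce the choice until the OnsiteCustomer rules otherwise.

 import junit.framework.TestCase;
 import java.util.Arrays;
 import java.util.List;

 // Hypothetical parser under test.
 class CsvParser {
     List<String> parseLine(String line) {
         // A negative limit keeps trailing blanks, so "" yields one
         // blank field and "a,b," yields three fields.
         return Arrays.asList(line.split(",", -1));
     }
 }

 public class CsvParserEdgeCaseTest extends TestCase {
     private final CsvParser parser = new CsvParser();

     // Deliberate, documented choice: a blank line is ONE blank field,
     // not zero fields. If the customer ever rules otherwise, this is
     // the test to change.
     public void testBlankLineParsesAsOneBlankField() {
         assertEquals(Arrays.asList(""), parser.parseLine(""));
     }

     public void testTrailingCommaYieldsTrailingBlankField() {
         assertEquals(Arrays.asList("a", "b", ""),
                      parser.parseLine("a,b,"));
     }
 }

Whichever option you pick, the test turns an unspecified edge into a specified one, and removing that ambiguity is worth more than the particular answer chosen.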
----
See also UnUnitTestableUnits, MockObject.
----
CategoryExtremeProgramming CategoryFaq