I have never been a particularly morally upright programmer; I am as happy as anyone to skip half the phases of the design methodology, but I cannot understand how people can write code without testing it. If I don't do UnitTest''''''s, how do I know when I am finished? Don't other people get a real buzz when they see their code run, even if it is on a silly example? -- JohnFarrell

''See http://www.shoooes.net/ for an example of the aforementioned buzz of seeing your app work, in progress!''

----

If you '''are''' doing UnitTest''''''s, how do you know you're finished? Let's say you test for 3 hours without finding another bug. Are you finished? Does the code have a sufficiently low count of yet-to-be-discovered bugs for your business needs? Do you stop when you get tired of testing, or when some arbitrary deadline has arrived? And how do you know that your tests were effective, rather than just a waste of time? -- JeffGrigg

In ExtremeProgramming, you know you're finished because your UnitTest''''''s cover the functionality described in your EngineeringTask''''''s. But this is like asking "When do you know there are enough features?" Somebody has to decide how much functionality to put in, and somebody has to decide how much to test. With XP, at least there are specific rules for what UnitTest''''''s to write. -- BrentNewhall

----

I'll take that question, Alex. First, let's observe that if you don't test, your situation is uniformly worse. The questioner doesn't offer a better alternative; he just raises issues without answers.

Now let's turn to those issues. If you write tests with your brain engaged, they'll test for the kind of defects you historically make. They'll test the edge behavior of the classes, and so on. Testing isn't about spending time; it's about translating your requirements into tests. So when your tests pass, everything you understand about your requirements works.

As for knowing whether your tests are effective: the fewer complaints you get after release, the more effective your tests were. When you do get complaints, you of course (a) write a test that exposes that bug, (b) write all the related tests you can think of for the offending object, and (c) improve your testing behavior to cover things like whatever that turned out to be. Testing is one of the few computing practices that actually works.

----

Yes, testing works. And there are a number of good techniques that will help you make good tests. Some are...

* TestBoundaryConditions - values at or near the values that change the behavior of the program should be checked.
* CodeCoverageTest - were all the statements in the code tested?
* BasisPathTesting - were all the possible paths through the code tested?

But writing all the code and then doing ad-hoc testing may not be the most effective approach. We all know that we should make our test plans before writing the code. Building UnitTest''''''s and setting up RegressionTesting early in a project can pay big dividends. The common practice of writing all the code and then testing until one runs out of patience or time does not produce predictably good results. -- JeffGrigg

Strawman, Jeff. The definition of UnitTest''''''s here is in no sense as ad hoc as the "common practice" to which you refer.

''Let me try to provide the XP UnitTest alternatives to the above list.''

* TestBoundaryConditions versus Test First Programming - Not sure how XP addresses boundary conditions - see below.
* CodeCoverageTest versus Test First Programming - Code statements are only added to cause a test to be passed; i.e., untested statements are impossible (see the sketch just below this list).
* BasisPathTesting versus Test First Programming - Code paths are only added to cause a test to be passed; i.e., untested paths are impossible.
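To make the test-first cycle referred to above concrete, here is a minimal JUnit-style sketch. The IntStack class and its method names are invented purely for illustration; the point is only that the test is written first, and code is added solely to make that failing test pass.

 // IntStackTest.java - hypothetical test-first example (JUnit 4 style).
 import org.junit.Test;
 import static org.junit.Assert.*;

 // Written first: this test cannot even compile, let alone pass,
 // until an IntStack class with push() and top() exists.
 public class IntStackTest {
     @Test
     public void topReturnsTheLastPushedValue() {
         IntStack stack = new IntStack();
         stack.push(42);
         assertEquals(42, stack.top());
     }
 }

 // Written second, and only as much as the test demands - the
 // "simplest thing that could possibly work" is a single stored value.
 // Further tests (two pushes, pop, empty stack) would force a real
 // implementation; until those tests exist, neither does that code.
 class IntStack {
     private int top;
     public void push(int value) { top = value; }
     public int top()            { return top; }
 }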
Aside - I have to admit my response for TestBoundaryConditions seems very weak. An XP UnitTest seems geared towards verifying algorithmic integrity but not branching integrity. For example, TestBoundaryConditions would require a test for the lowest possible value in a range and the highest possible value in a range, i.e., to catch off-by-one errors. An XP UnitTest seems to imply that a single test of any value in the range would suffice. Is this significant, or am I missing something? -- WayneMack

''The first, single test inside the range shows that the algorithm around that range is correct. If you need a change in behaviour once 'n' hits '10', you need a test to prove that the change occurs at '10', and not '9' or '11'. And of course, if you only define a test for one point in the range, you haven't really tested the algorithm yet... the simplest thing that could possibly work is to return a canned result. In effect, you don't'' get ''to put in that boundary condition until you've written the failing test for it. -- WilliamUnderwood''
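As a rough illustration of that point (the class name and the discount rule are hypothetical), a boundary at 'n' = 10 gets pinned down by one test just below it, one exactly at it, and one just above it; a wrong comparison operator then fails one of the three.

 // DiscountBoundaryTest.java - hypothetical boundary-condition tests (JUnit 4 style).
 import org.junit.Test;
 import static org.junit.Assert.*;

 public class DiscountBoundaryTest {
     // Invented rule for the example: 10 or more items earn a 5% discount.
     private double discountFor(int quantity) {
         return quantity >= 10 ? 0.05 : 0.0;
     }

     @Test
     public void noDiscountJustBelowTheBoundary() {
         // Fails if the boundary is accidentally set one too low (e.g. ">= 9").
         assertEquals(0.0, discountFor(9), 0.0001);
     }

     @Test
     public void discountStartsExactlyAtTheBoundary() {
         // Fails if the boundary is accidentally set one too high (e.g. "> 10").
         assertEquals(0.05, discountFor(10), 0.0001);
     }

     @Test
     public void discountStillAppliesAboveTheBoundary() {
         assertEquals(0.05, discountFor(11), 0.0001);
     }
 }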
----

If you are testing to find bugs, then I'd say the number of bugs that I code is small, so testing is a waste of time; better to let the system tests uncover any bugs. If you are testing as part of the design process, then you couldn't code without testing.

''"Programmers are incapable of testing their own code." Sadly, I have been told this to my face by both programmers and managers.''

I would say it's a true saying to some extent. But that doesn't mean what they can test isn't worth the effort.

''The problem I have with the above statement is that it becomes an excuse to push off testing onto someone else. I have real problems with the belief that someone who knows nothing about a system (i.e., Independent Test) is better qualified to test it than someone who is intimately involved with the system. If the developer does not know what the system should do, how could he possibly implement it?''

In defense of the statement, "Programmers are incapable of testing their own code":

* Programmers often CantSeeTheForestForTheTrees (like anyone). They're so intimately familiar with the code that they can't come up with a set of tests that covers all the code's functionality.
* Testers should be devious. They should come up with all sorts of devious tests for security holes, library interface assumptions, UI problems, and any number of other issues. Developers often don't have that training or skill.

That said, if the statement is used as an excuse to shift responsibility, then it's a bad thing. If it's used as part of a thoughtful response to the question of who does testing, then it can be a good thing. -- BrentNewhall

''Why should the tester be any more knowledgeable about what the program should do than the programmer? Was the tester better informed of the target use of the software than the developer? Is the tester more intimate with the actual code and its possible failure modes than the programmer who wrote it? Isn't saying that "Developers often don't have that training or skill" shifting responsibility?''

----

I think everyone has to test, and excuses shouldn't be accepted. However, there will always be unanticipated ways to break the code.

I used to write AppleTalk network device drivers and servers for PCs. We had the official test framework sanctioned and used by Apple. Even though we passed the tests, it didn't take long for obscure bugs to crop up that were due to semi-legal network use by commercial apps. By far the most meaningful test was to run every major Mac application against our servers. That kind of thing leads lazy programmers to claim they can't test their code, but that's just a lame excuse. -- AndrewQueisser

I can relate to building something to a published specification only to find that some of the major players violate the specification in unpredictable ways. I would say that interoperability testing, which is limited by available resources, is drastically different from operational correctness testing. In the first case you are experimenting to discover things you do not know; in the second case you are verifying things that you do know.

----

I would also like to ask the opposite question, "How can you test without coding?"

''Obviously, the test fails until the correct code is in place.''

----

Re: "Programmers are incapable of testing their own code."

I've said that phrase to both managers and other programmers - but with one additional key word: "'''Professional''' programmers can't test their own code." Of course you can (and should) test your own code, but any professional organization should have more than one set of eyes on a product before sending it out. Professional writers have proofreaders, copy editors, line editors, etc. to check that their work is sufficiently professional and does what it's supposed to. As programmers, we usually don't have that luxury. But we should. Being the only person to test or QA your work makes it amateur work. No matter how good you are, you'll never see the bug that you don't see, and you're very unlikely to notice that you've misinterpreted a requirement that seems perfectly clear to you. -- RayTaylor

----

Although this page is primarily intended to highlight the benefits of TestDrivenDevelopment, which I agree with and practice myself, I can't help but respond to the original question by stating the not-quite-so-obvious: formal methods. FormalMethods needn't be a super-heavyweight process. If you program in ForthLanguage, you're already familiar with HoareTriple''''''s, except that in Forth they're called ''stack effects''. You find them documented in ''stack effect diagrams'', like so:

 : FIND ( key table -- record# T | F ) ... ;

In Hoare notation, you would write { key table } FIND { (key in table) => record# T | (key not_in table) => F }, although using the proper mathematical symbols.

In essence you're relying a bit on DesignByContract; every procedure, function, et al. has a weakest PreCondition which ''must'' be satisfied for correct operation. Combined with the desired results of a procedure or function, you can use the weakest PreCondition of each program step or loop body to derive whether or not your code ''even can'' function correctly in the first place. Assuming it can, the result of your efforts will be the weakest PreCondition under which your procedure ''will'' work. No two ways around it.

I often use TDD for business logic, and HoareTriple''''''s for work involving I/O or the UI (where TDD becomes impractically difficult). I know I'm finished when a UserStory's AcceptanceTest runs to completion.
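For readers who have not seen a contract written out in code, here is a small, hypothetical Java sketch of the same idea; the sorted-table lookup and its assertions are invented for illustration and are not the Forth word above. The Hoare triple's precondition and postcondition simply become explicit checks around the body.

 // ContractedFind.java - hypothetical DesignByContract-style sketch of a FIND.
 // PreCondition (assumed): the table is non-null and sorted in ascending order.
 // PostCondition: returns an index i with table[i] == key, or -1 if key is absent.
 public class ContractedFind {
     public static int find(int[] table, int key) {
         assert table != null : "precondition: table must not be null";
         for (int i = 1; i < table.length; i++) {
             assert table[i - 1] <= table[i] : "precondition: table must be sorted";
         }

         int raw = java.util.Arrays.binarySearch(table, key);
         int index = (raw >= 0) ? raw : -1;

         assert index == -1 || table[index] == key : "postcondition violated";
         return index;
     }

     public static void main(String[] args) {
         // Run with "java -ea ContractedFind" so the assertions are actually checked.
         System.out.println(find(new int[] {2, 3, 5, 7, 11}, 7));   // prints 3
         System.out.println(find(new int[] {2, 3, 5, 7, 11}, 4));   // prints -1
     }
 }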
I may not be able to test everything exhaustively (Selenium isn't terribly fast at exhaustively testing every combination of input parameters, which is why it's best applied as a scenario tester); use of FormalMethods covers that aspect.

Example: a recent webapp I'm working on went from initial concept to first alpha release in only 13 days. 14 user stories completed, and not one unit test written. Heresy, I hear you scream! Yes, but consider that this particular webapp is very nearly all UI code - not the easiest thing to test. Ergo, most of my code is written with proof annotations and pre- and post-conditions, all documented and checked by hand.

If I can deliver 14 stories in 13 business days using FormalMethods techniques in an iterative development process, then the claim that FMs are too heady, slow, burdensome, documentation-centric, or otherwise an impediment to productivity is patently and demonstrably false. Since I'm hand-checking my code, I won't dismiss the possibility that I've missed something, but I have a high degree of confidence it will work as expected under load.

----
CategoryTesting