This discussion started from TriangleXpMeetingTwo, in which DuffOmelia suggested the topic "(D) Why do we use stupid languages (XSLT, Ant, ...)". I was very unhappy to have missed the meeting. However, I've tracked XSLT and its predecessor DSSSL for years, and would hardly call them "stupid" languages. What gives? Is it how programmers misapply them that is stupid, or is this merely a testament to INML (it's not my language)? Anyone from the meeting want to clue me in? -- MitchAmiano

''The topic title was intentionally contentious ;) The issue - as you surmised - was that there are languages (XSLT and Ant were the two examples we discussed) which tend to be overused and are very difficult to test. There is an xUnit hybrid for XSLT, but from the account given, you end up writing a significant amount of test code for relatively simple transforms. For that same example, the transform could be written and tested much more easily in Ruby. The point about Ant was similar: how can you test an Ant script? It's easier to write build scripts in a HighLevelLanguage that can be tested. I'm paraphrasing some of the points from DuffOmelia, who suggested the topic. As it happens, the project I work on uses Ant extensively (overuses?), with scripts that are hundreds if not thousands of lines long. I'm not sure how simply our build could be written in a high-level language, but I accept the point that our build is not testable. We also use XSLT in places, and we don't have tests for that either.'' -- AlexChapman

Grammar production systems are more or less black boxes. Invasive clear-box testing a la XsltUnit doesn't seem to make much sense to me, because of the potential for heisenbugs and the ugliness/verbosity of the resulting codebase. When testing XSLT, I tend to take the approach of sketching out what the inputs and output should look like (in text files), then using that to drive the design of the stylesheet. As I finish off changes to a transformation, I run the transformation and either compare visually or run the cases through diff/cmp to verify that the output matches what was expected. I generally wouldn't try to verify that the underlying XSLT engine is working properly unless it is new and untested. In terms of jUnit, I don't see a direct apples-to-apples comparison between XSLT and Java, so "stupid" may very well fit the result of trying to apply the same techniques to both. -- MitchAmiano
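To make that workflow concrete, here is one way it might look in code - a rough, hypothetical sketch in Java (since jUnit came up), using only the standard javax.xml.transform API. The file names (report.xsl, testcases/input1.xml, testcases/expected1.xml) are invented for illustration, and a real check would probably normalize whitespace before comparing. The point is simply that "run the transform over a known input and diff against the expected output" is a few lines of code, not a framework:

 import java.io.File;
 import java.io.StringWriter;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 import javax.xml.transform.Transformer;
 import javax.xml.transform.TransformerFactory;
 import javax.xml.transform.stream.StreamResult;
 import javax.xml.transform.stream.StreamSource;

 public class TransformGoldenCheck {
     public static void main(String[] args) throws Exception {
         // Run the stylesheet over a known input document.
         Transformer t = TransformerFactory.newInstance()
                 .newTransformer(new StreamSource(new File("report.xsl")));
         StringWriter actual = new StringWriter();
         t.transform(new StreamSource(new File("testcases/input1.xml")),
                     new StreamResult(actual));

         // Compare against the expected output that was sketched out by hand.
         String expected = new String(
                 Files.readAllBytes(Paths.get("testcases/expected1.xml")), "UTF-8");
         if (expected.equals(actual.toString())) {
             System.out.println("PASS: input1");
         } else {
             System.out.println("FAIL: input1 -- output differs from expected");
         }
     }
 }

Each additional case is just another input/expected pair in the directory, and when a case fails, diff shows exactly which part of the output drifted.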
''Is there something special about DeclarativeLanguage''''''s (XsltLanguage, JakartaAnt, StructuredQueryLanguage, etc.) that makes them HardToTest?''

(No, the problem is with languages that lack support for type-checking and testing; XsltLanguage ''could'' have been designed with internal checks and the ability to add 'verification' unit-test input/output, but it wasn't.)

The DeclarativeLanguage / ProceduralLanguage distinction is flawed, but in each of the examples provided you aren't writing a program so much as configuring one. Given a (presumed working) black box and two inputs, the output is presumed correct by induction (even if one or both of the inputs is invalid, in which case the black box - presumably working correctly - spits out errors). What is it about the behavior that you want to be assured of through test-first? I'd want to know about the issues that cause me the most headaches and the (perhaps wrong) assumptions that I'm most likely to make: element structures and attribute value ranges. Those I'd prefer to handle through DTDs/Schemas and/or by writing a separate transform file with nothing but test cases in it, and check the transforms themselves by looking at the result.

''Pardon an Xslt/Ant beginner... but what's hard about specifying an input and an output, and verifying that the script generates that? I understand the usual difficulty of such tests being brittle if done at too large a granularity; is this the only issue? Perhaps some tool of the form "notify me if this ever changes, for my approval" would work (GuruChecksOutput, but I think it's appropriate in this case). -- WilliamUnderwood''
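In the same hypothetical Java vein, a "notify me if this ever changes" tool can be little more than a saved copy of the last approved output. The file layout below (an approved/ directory holding .approved and .received files) is made up for illustration; the idea is only that when the current output differs from the blessed copy, the new version gets parked next to it and a human is asked to look at the diff before it becomes the new expected output:

 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;

 public class GuruChecksOutput {
     // Compare the current output of a script against the last approved copy.
     // If it differs, save it as a .received file and ask a human to bless it.
     public static void check(String name, String currentOutput) throws Exception {
         Path approved = Paths.get("approved", name + ".approved.txt");
         Path received = Paths.get("approved", name + ".received.txt");

         String approvedText = Files.exists(approved)
                 ? new String(Files.readAllBytes(approved), "UTF-8")
                 : null;

         if (currentOutput.equals(approvedText)) {
             System.out.println("OK: " + name + " matches the approved output");
         } else {
             Files.createDirectories(received.getParent());
             Files.write(received, currentOutput.getBytes("UTF-8"));
             System.out.println("CHANGED: " + name + " -- review " + received
                     + " and, if it looks right, rename it to " + approved);
         }
     }
 }

The same check applies whether currentOutput came from an XSLT transform or from capturing what an Ant target produced. It doesn't remove the brittleness mentioned above - every deliberate change to the output still needs a human pass - but it turns "how do we test this at all?" into "someone glances at a diff when it moves", which is often enough.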