I am (rhetorically) building an eCommerce solution which, when it goes live, expects about 10,000 hits per month. However, the client is spending money on marketing, so we expect hits to increase dramatically in the future. DoTheSimplestThingThatCouldPossiblyWork dictates that all thoughts of the future are to be discarded. So I develop the application using a non-scalable architecture, because it's simpler to do so, and it works NOW. This saves the client bundles of money, all of which is lost six weeks later when millions of people discover that they can't log onto the eCommerce server and defect to a competitor's site, driving the client out of business. (However, this is good financial practice for the client, because she can now sue me for malpractice and get more than her company was originally worth.)

I like the ideas presented here, but they are dangerous. They don't seem to work logically within the architectural domain. It would be nice to organically grow an architecture based upon RealTimeRequirements, but in the real world it is both necessary and practical to predict the future. If the future can be accurately predicted, then we can build for it now, save money in the long term, and perhaps even gain a business advantage over our competitors.

Predicting the future can be risky. XP holds that the practice is so flawed that the risks outweigh the benefits, and therefore one should never do it. Classically, programmers tend to add GoldPlating to their work in the form of FutureProofEnhancements, and these often end up looking like a 1970s HomeOfTheFuture.

I say that these are both extreme positions (and that's a bad thing). We should be identifying and examining risks all of the time. If we aren't adept at doing so, we should hire people who are. RiskAnalysis should allow a more accurate and less risky decision on whether to apply DoTheSimplestThingThatCouldPossiblyWork or GoldPlating. -- BryanDollery

----

There is still a business tradeoff to be made: feature risk vs. scalability risk. The best the programmers can do in this situation is surface the costs and benefits of addressing both risks, and give the business decision makers a rich (and evolving) set of alternatives for addressing both risks. -- KentBeck

----

How many hits you need to support is a '''business''' decision. Business should have told you how many hits they were expecting, and when. If they didn't, you should have asked. DoTheSimplestThingThatCouldPossiblyWork and YouArentGonnaNeedIt refer to how you implement the requirements business gives you, not to those requirements themselves. -- JohnBrewer

----

I don't think Kent and John are being clear. Bryan, you are absolutely right that you should be worrying about scaling your system. John and Kent are both saying that the business people should make scalability one of the stories. Once you tackle a story that says "Handle one million hits a day", then you have to make your system scalable. There is absolutely no contradiction with DoTheSimplestThingThatCouldPossiblyWork or any other aspect of XP.

There is only one way in which this is not handled perfectly by XP. Sometimes business people do not realize that scalability is a problem. Good developers will ask them questions, raise issues, and educate them if necessary. If the business people say they will worry about it later, that is their judgement, but it is your job to make sure that they know it is something they should worry about. So, stories do not always flow from business to development. Sometimes development suggests stories to business.
But stories are still the responsibility of business. -- RalphJohnson

----

Right. It's a BusinessResponsibility to figure out how much the system needs to scale and when. But it's a DevelopmentResponsibility to make sure that business addresses the issue. And if development doesn't push back when business tells them they only need 10,000 hits a month in the above example, then I indeed think development has committed malpractice. -- JohnBrewer

----

''Surface the costs and benefits of addressing both risks'' has got to be 100% right. As is developers being proactive in asking the question if the business seems likely to neglect scalability risk.

But let's say the technology being used is pretty new and scalability issues aren't so easy to predict. In this case ''surfacing'' may require building a version that allows ongoing simulation/testing of scalability, as far ahead of the "real thing" as possible. I know the term SpikeSolution has been used in XP, but I'm talking here about an early interim version, not delivered to final end users (in the above case anyway), but definitely growing into the real thing, in a way that minimizes risk in an area of uncertainty. Is there a term in XP for this? (See ExtremeProgrammingSummary under SpikeSolution for the last thing I remember reading / writing on this, which seems far from conclusive on the right term.) -- RichardDrake

"Is there a term in XP for this?"

''The end of Iteration 1.''

It would presumably only be "the end of Iteration 1" if this risk were considered paramount compared to others.

This got me thinking again about "architecture" and "modeling". This phrase came to me waiting for a train this morning: traditional object architecture is concerned with modeling ''the real world''; EvolutionaryArchitecture is concerned with modeling '''risk'''. "Architecture" so redefined would certainly include all unit tests, any TestFramework, and any SimulationFramework (perhaps leading to RealTimeMonitoring for qualities such as scalability). The piece that "models the real world" in a boring static sense remains the SimplestThing possible. I even started to wonder if these two "non-PC" words couldn't thus be usefully rehabilitated.

----

Lots of good issues here. In the specific example, I think there is a case to be made that a system designer has an ethical duty to point out the limitations of a system. Failing that, get a good scope contract and sign-off like the big consultancies do. I think there is a great deal of craft experience required to build evolvable architectures. -- RichardHenderson

----

See also: ElicitingRequirements