You've got six developers and eight months to ship an MIS application. This worries the project manager (it's a tight deadline, and he's ''never'' seen a project comfortably make its deadline). He recognizes that before "real coding" he needs to: (1) get a complete list of requirements and do domain analysis, (2) build prototypes and get user feedback on the GUI design, and (3) design the application. Reasoning that "hey, they've built similar things before," he assumes all three can proceed simultaneously.

After two months, the project has a prototype GUI (with lots of built-in assumptions about how the rest of the application will be built), a generic "architecture" for a "back end" (designed without the GUI in mind and without a complete list of requirements), and a list of requirements. But, reasons the project manager, this is okay, because it looks vaguely client-serverish (and will look even more so once the developers insert a "translation" layer between the GUI and what they are beginning to think of as "the real application"). Hell, it's all on one machine, but it's a genuine "3-tier" architecture, and isn't that a good thing?

And so development commences. And design continues, because now there's a complete list of requirements. And because the back end actually has some coherent logic to it, changes forced by the requirements document tend to get pushed into the GUI or the middle layer ("since they were pretty much designed by kludge, it doesn't matter"). The result is a half-baked application that barely does what it is supposed to, has lots of logic in the widgets, and only just ships on time (if it makes schedule at all).

The project manager sees all of these flaws, and they reinforce the decisions he made (outlined in paragraph 1): after all, even with his developers' time fully utilized, he barely got the dang thing out the door.

That's part of what I mean by TooMuchGuiCode.
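
To make "lots of logic in the widgets" concrete, here is a minimal hypothetical sketch (Java Swing; the class, fields, and pricing rule are invented for illustration, not taken from any real project) of the kind of code that ends up in the GUI when requirements changes keep getting absorbed there:

 // Hypothetical illustration of TooMuchGuiCode: a business rule living
 // inside a Swing widget's event handler instead of in the back end.
 import javax.swing.*;
 import java.awt.event.*;

 public class OrderPanel extends JPanel {
     private final JTextField quantityField = new JTextField(5);
     private final JButton submitButton = new JButton("Submit");

     public OrderPanel() {
         add(quantityField);
         add(submitButton);
         submitButton.addActionListener(new ActionListener() {
             public void actionPerformed(ActionEvent e) {
                 // Validation and pricing are hard-wired into the widget,
                 // so any change to the requirements forces a change here.
                 int qty = Integer.parseInt(quantityField.getText().trim());
                 if (qty <= 0) {
                     JOptionPane.showMessageDialog(OrderPanel.this, "Invalid quantity");
                     return;
                 }
                 double total = (qty > 100) ? qty * 9.50 : qty * 10.00; // "bulk discount" rule
                 JOptionPane.showMessageDialog(OrderPanel.this,
                         "Order total: $" + total); // the "real application" never sees this rule
             }
         });
     }
 }

The point is not the Swing details: it is that the discount rule and the validation belong in the back end, but because the back end was designed before the requirements existed, the path of least resistance was to bolt them onto the widget.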