This term relates to the current BuzzWord '''ScaleAbility'''. There are situations where adding more power to one machine, or replacing it with a more powerful one, reaches the limit of what is physically or economically feasible. Then, and I say ''only then'', you should think about putting the load to be handled on more than one machine. But remember, there are other ways to ''scale'' a system:

* use better algorithms or data structures
* avoid unnecessary overhead
* reduce the number of system calls (crossing the component chasm)
* avoid acquiring concurrent resources that require locks (a sketch of this idea appears at the bottom of the page)

Sharing the load of a system across several computers requires duplicating the system and adding a component for coordination (the LoadBalancer, or see the MasterSlavePattern in PatternOrientedSoftwareArchitectureOne). If the split system requires some global state, you need yet another component that coordinates access to that global state. Often this is implemented as a database dealing with the issues of transaction control, etc.

As you can see, just by saying ''let us scale the system'' you enter a much higher level of complexity. You get a system consisting of more computers that performs worse (at least in terms of user perception) because of the latency of the additional communication and the coordination overhead involved. Please think of that the next time you read the word ''ScaleAbility'' or multi-tier. That doesn't mean SeparationOfConcerns is not a good concept, but it shouldn't be used blindly to distribute functionality across deep chasms.

------

The big mainframe computers (I never used one), like all singular systems, have an enormous benefit: if something goes wrong you know where to look (or who to blame).
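------

To illustrate the last point in the list above (avoiding locks), here is a minimal sketch in Python. The function name and the event-counting task are made up for illustration; the point is only the technique: instead of having every thread serialize on one shared lock, give each worker private state and merge once at the end.

 import threading

 def count_events(events, workers=4):
     # One private slot per worker: thread i only ever touches totals[i],
     # so no lock is acquired on the hot path.
     totals = [0] * workers

     def work(i):
         for _ in events[i::workers]:
             totals[i] += 1

     threads = [threading.Thread(target=work, args=(i,)) for i in range(workers)]
     for t in threads:
         t.start()
     for t in threads:
         t.join()
     # A single merge step replaces per-event locking.
     return sum(totals)

Contrast this with guarding one shared counter by a threading.Lock, where every increment has to cross the lock and all threads serialize on it.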
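To make the coordination component concrete: in its simplest form the LoadBalancer mentioned above is just a dispatcher that hands each request to the next of the duplicated machines. Below is a minimal round-robin sketch (again in Python, with hypothetical names, and assuming stateless requests; anything touching global state would additionally need the coordinating database described above).

 import itertools

 class RoundRobinBalancer:
     def __init__(self, backends):
         # Endless rotation over the duplicated machines.
         self._backends = itertools.cycle(backends)

     def dispatch(self, request):
         backend = next(self._backends)
         # This indirection is where the added latency comes from:
         # every request now makes an extra hop through the balancer.
         return backend.handle(request)

Even this trivial dispatcher shows where the complexity creeps in: the balancer itself becomes one more component that can fail, and every request pays for the extra hop.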