A system that has many processors (usually MPP systems start at 16 CPUs). Typically the machine is divided into nodes, each with one to four CPUs and some local memory. It is this per-node memory that sets MPP apart from SymmetricMultiprocessing: an SMP has one shared memory for the whole system, whereas an MPP does not. This makes programming for MPP more difficult; it isn't that much different from programming a bunch of machines on a network to work together, which is actually how an increasing number of supercomputers are built: a large number of single- or dual-processor Linux machines connected by a special, very fast network (or at least a well laid-out ordinary network). The ConnectionMachine Mark-I had 16384 processors, although it was designed to be expanded to 65536.

The most successful MPP applications have been for problems that can be broken down into many separate, independent operations on vast quantities of data. In data mining, there is a need to perform multiple searches of a static database. In artificial intelligence, there is a need to analyze multiple alternatives, as in a chess game.

Often MPP systems are structured as clusters of processors. Within each cluster the processors interact as in an SMP system; it is only between clusters that messages are passed. Because operands may be addressed either via messages or via memory addresses, some MPP systems are called NUMA machines, for Non-Uniform Memory Access.

The cost of inter-node communication is fairly high, and its speed fairly low. For this reason the search is always on for algorithms that can run independently on lots of separate processors, communicating only occasionally. Google, for example, uses thousands of cheap PCs rather than a special-purpose parallel machine. It's an interesting, although perhaps ultimately pointless, debate as to when thousands of networked machines can be regarded as "a supercomputer".

------

See ConcurrentProgramming, DistributedComputing
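
------

The "independent work, occasional communication" pattern described above is what message-passing libraries such as MPI are built for. A minimal sketch, assuming an MPI implementation such as MPICH or Open MPI (compiled with mpicc, launched with mpirun): each process sums its own slice of a range independently, and a single MPI_Reduce at the end is the only inter-node communication.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which node am I? */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many nodes in total? */

      /* Stand-in for "operations on vast quantities of data":
         each node works on its own chunk of the range [0, N). */
      const long N = 1000000;
      long chunk = N / size;
      long start = rank * chunk;
      long end   = (rank == size - 1) ? N : start + chunk;

      long local_sum = 0;
      for (long i = start; i < end; i++)
          local_sum += i;

      /* The only message passing: combine the per-node results on rank 0. */
      long total = 0;
      MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("total = %ld\n", total);

      MPI_Finalize();
      return 0;
  }

The same program runs unchanged on an SMP box, an MPP machine, or a rack of networked PCs; only the cost of that one MPI_Reduce call differs, which is why algorithms that communicate rarely are so prized.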