In TransactionProcessing matters, transactions must be designed carefully depending on the production constraints they will face. This page lists some best practices for transaction design.
----
'''In Unix-like contexts, when using a TransactionProcessingMonitor'''

The main idea is that the design of a transaction depends on the traffic it must handle. For transactions running at less than 50 TPS (TransactionsPerSecond), there are usually not many issues. For transactions running between 100 and 500 TPS, you will face severe performance issues that you must absolutely consider at design time. For transactions expected to run at 500 to 1000 TPS or more, you should consider completely different approaches. For instance:
* For low-traffic transactions, running at only a few TPS, you can produce a quite complex and ideal CeePlusPlus design with patterns, many classes and interfaces, dynamic allocations and so on. Usually this kind of design is reserved for administration transactions.
* When you are handling several hundred TPS, you must design transactions for performance and reliability first. For instance, all the ''new'' and ''delete'' calls and memory copies should be avoided at all costs; even when coding in CeePlusPlus, your transaction will look like a primitive CeeLanguage program, because it needs to be basic and efficient (see the sketch at the bottom of this page). The database also becomes a real issue at this transaction rate: you can rarely do what you need without partitioning the tables and doing some tuning on the database.
* When you reach transaction rates that exceed 1000 TPS, you should design the transaction in a very specific way. You may not want to, or may no longer be able to, use a database in the transactional path because of performance issues. Most companies invent complex schemes to update the database, possibly after the transaction has completed. Very often, the companies able to sustain this load have in-house technical expertise and custom solutions closely adapted to their business, and the development framework in this context is hugely restrictive (mandatory use of certain verbs, reserved memory areas, etc.).

The fact is that TransactionProcessing is quite a strange area, because beyond a certain transaction rate (more than 500 TPS) some phenomena become unpredictable:
* A design that works on paper may not work at all, and you will troubleshoot for days with domain experts to figure out what the issue is;
* A significant change in traffic can have unexpected results (some applications run better at 1000 TPS than at 300!);
* You can hit system-level problems quite quickly, such as protocol stack tuning and handle limit configuration issues.

In transaction processing:
* Single-threaded application code is usually preferred to multi-threaded application code (see TransactionProcessingMonitor).
* Transactions cannot leak memory (imagine leaking 3 bytes per transaction at 1000 TPS and you can foresee the moment of the crash). They must at least be checked for leaks (cf. RationalPurify), and you should do CodeCoverage on them to ensure the error paths are all implemented and tested.

To be done:
* Explain the design issues of bouncing transactions.
----
See also:
* TransactionProcessingMonitor
----
CategoryTransactionProcessing
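----
As an illustration of the "no ''new''/''delete'' in the transaction path" advice above, here is a minimal CeePlusPlus sketch of a statically preallocated buffer pool reused across transactions. The names and sizes (TxnBuffer, kPoolSize, process_transaction) are invented for the example and do not come from any particular TransactionProcessingMonitor; a real monitor would track free/busy buffers and would avoid even the input copy shown here.

 #include <array>
 #include <cstddef>
 #include <cstdint>
 #include <cstdio>
 #include <cstring>
 
 // Illustrative fixed-size transaction buffer: everything the transaction
 // needs is allocated once at startup, so the per-transaction path does
 // no heap allocation at all.
 struct TxnBuffer {
     std::uint32_t id;
     char payload[256];
 };
 
 // Static pool sized at build time (placeholder size for the example).
 constexpr std::size_t kPoolSize = 64;
 static std::array<TxnBuffer, kPoolSize> g_pool;
 static std::size_t g_next = 0;
 
 // Hand out buffers round-robin for the sake of the example; a real
 // monitor would manage free/busy state explicitly.
 TxnBuffer& acquire_buffer() {
     TxnBuffer& buf = g_pool[g_next];
     g_next = (g_next + 1) % kPoolSize;
     return buf;
 }
 
 // Per-transaction work: fill the preallocated buffer and process it in
 // place. No new/delete, no ownership transfer, nothing to leak.
 void process_transaction(std::uint32_t id, const char* input) {
     TxnBuffer& buf = acquire_buffer();
     buf.id = id;
     std::strncpy(buf.payload, input, sizeof(buf.payload) - 1);
     buf.payload[sizeof(buf.payload) - 1] = '\0';
     std::printf("txn %u: %s\n", static_cast<unsigned>(buf.id), buf.payload);
 }
 
 int main() {
     for (std::uint32_t i = 0; i < 5; ++i) {
         process_transaction(i, "debit account / credit account");
     }
     return 0;
 }

The point is that the per-transaction path only touches memory set up once at startup, so there is no allocator contention at high TPS and no per-transaction allocation to leak.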