'''Problem definition:''' Multithreaded programming is difficult; more so than many other aspects of programming.

* It makes programs non-linear. One no longer knows which statement will be executed next in time. Unfortunately, programs in most languages (e.g. C/C++, Java) are written as if they were linear: one statement followed by the next. Moreover, traditional programming is taught as if it were linear. Thus, it is hard for programmers to think in this paradigm.
* It explodes the state space of a program combinatorially. With a much larger number of possible states, a program becomes difficult to understand, and thus difficult to write and verify. It also becomes difficult to test, because there are so many states to cover.
* It makes the program non-deterministic (but see DeclarativeConcurrency). This adds to the testing problems.
* Poor synchronization strategies can seriously slow down a program, especially on systems whose synchronization objects incur high OverHead (e.g. Windows).
* There are few adequate debugging tools available. Without decent debugging tools, you're almost as badly off as if you had none. So, ForgetTheDebugger.
* Many languages, especially C++, don't support multithreaded programming at any level. Java is a shining exception because it has built-in support for monitors, but even those aren't bulletproof, or adequate in every case (see JavasInsecureParallelism).
* Most environments provide synchronization primitives in the form of bare C calls. They could better be expressed as objects, especially in C++, where you can make the constructor/destructor do lock/unlock balancing for you (see the sketch after this list). ''And yes, I have done that, and now my C++ environment has some rather nice synchronization objects. And the cute bit is that because they are not language primitives, but user implemented, I can replace them with equivalent classes to assist in testing my concurrent code.''
* There are well-known and well-proven patterns for handling synchronous failures, such as ExceptionHandling and ResumableException. A significant advantage here is that the 'failure' as such is often centralized relative to procedural code (though handling failures when dealing with logic programming can be an interesting challenge). Asynchronous failures open an entirely new arena of failure classes, including decentralized failures such as deadlock, RaceCondition''''''s, failure of a thread or process (that might be holding locks or semaphores), and (for optimistic synchronization) livelock. It is generally unclear where these errors should be handled and who is responsible for defining the policy for handling them.
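The constructor/destructor idea above can be made concrete with a small C++ sketch. This is only an illustration, not anyone's actual library: the class names are invented, and it assumes a pthreads mutex underneath.

 #include <pthread.h>

 // A bare pthread_mutex_t wrapped as an object.
 class Mutex {
 public:
     Mutex()  { pthread_mutex_init(&mutex_, 0); }
     ~Mutex() { pthread_mutex_destroy(&mutex_); }
     void lock()   { pthread_mutex_lock(&mutex_); }
     void unlock() { pthread_mutex_unlock(&mutex_); }
 private:
     pthread_mutex_t mutex_;
 };

 // Scope guard: the constructor locks, the destructor unlocks, so every
 // exit path (early returns, exceptions) releases the lock.
 class ScopedLock {
 public:
     explicit ScopedLock(Mutex& m) : mutex_(m) { mutex_.lock(); }
     ~ScopedLock() { mutex_.unlock(); }
 private:
     Mutex& mutex_;
 };

 Mutex balanceMutex;
 long balance = 0;

 void deposit(long amount)
 {
     ScopedLock guard(balanceMutex);   // locked here
     balance += amount;
 }                                     // unlocked here, automatically

Because ScopedLock is an ordinary user-defined class rather than a language primitive, it can be swapped for an instrumented equivalent during SynchronizationTesting, exactly as described above.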
----
'''Strategies for effective synchronized programming'''

''From SunirShah's work term report 2 (cf. note on his name page)''
* WriteYourOwnThreadingPackage
* SynchronizedSmartPointers
* AvoidThreadsForOptimizations
* CoarseGrainLocking (or BigTimeSynchronization)
* SynchronizedTightGroupOfClasses
* TrafficCop
* ProofOfCorrectness (ProofOfIncorrectness)
* SynchronizationTesting

''From PaulMcKenney's "Parallel Patterns for Synchronization on Shared-Memory Multiprocessors", available at http://c2.com/ppr/mutex/mutexpat.html''
* CodeLocking
* ParTition
* DataOwnership
* ParalleliZation
* ReaderWriterLocking
* CriticalSectionFusing

''Other''
* SemaphoresForMutualExclusion
* WaitFreeSynchronization
* Use a database (AtomicConsistentIsolatedDurable and ParallelNeedScenario)
* SoftwareTransactionalMemory (in-process transactions) or TransactionalActorModel (multi-process transactions)
* PromisePipelining
* SendReceiveReply (blocking) or handling replies ''as'' promises (unites ActorsModel, allows PromisePipelining)

----
'''Ways to keep a data structure in shared memory in a consistent state, even when multiple threads try to update it'''

There are many techniques for guaranteeing that readers will see a consistent state for such a data structure. Summarizing and quoting from "Obstruction-Free Synchronization: Double-Ended Queues as an Example" by Maurice Herlihy, Victor Luchangco, and Mark Moir, http://www.cs.brown.edu/people/mph/HerlihyLM03/main.pdf

* "A synchronization technique is ''wait-free'' (WaitFreeSynchronization) if it ensures that every thread will continue to make progress in the face of arbitrary delay (or even failure) of other threads."
* "A synchronization technique ... is ''lock-free'' (LockFreeSynchronization) if it ensures only that some thread always makes progress. While wait-free synchronization is the ideal behavior (thread starvation is unacceptable), lock-free synchronization is often good enough for practical purposes (as long as starvation, while possible in principle, never happens in practice)."
* "A synchronization technique is ''obstruction-free'' (NonBlockingSynchronization) if it guarantees progress for any thread that eventually executes in isolation. ... (Pragmatically, it is enough for the thread to run long enough without encountering a synchronization conflict from a concurrent thread.) Like the wait-free and lock-free conditions, obstruction-free synchronization ensures that no thread can be blocked by delays or failures of other threads. This property is weaker than lock-free synchronization, because it does not guarantee progress when two or more conflicting threads are executing concurrently."
* A synchronization technique has "obstructions" or "locks" ("is blocking") if, as long as one thread holds the lock on some object, no other thread can make any progress updating that object, even if the thread holding the lock is delayed or halted. This includes CodeLocking, CoarseGrainLocking, SemaphoresForMutualExclusion, semaphores, mutexes, monitors, etc. (I think SendReceiveReply is also blocking, right? ''It is, though you can tweak it to handle replies as promises.'')

(EditHint: do we have 4 pages for the above 4 "types" of synchronization?)

----
Synchronization usually requires GlobalConsensus. (2004-05-17: moved text to GlobalConsensus)

----
See also ExtremeProgrammingChallengeFourteen, RaceCondition, DeadLock

This is all very well and good for threads, but does anyone have good ideas for NetworkSynchronization?

''SendReceiveReply. A network connection is a kind of well-synchronised message queue anyway...''

Some of the ideas at WaitFreeSynchronization seem relevant.

----
On a multiprocessor system, there are only 3 ways to design a subroutine intended to modify an object in shared memory (right?):
* BalkingPattern: The subroutine returns in finite time, with a message as to whether it was successful or not.
* GuardedSuspension: When the subroutine returns, the object has been successfully modified. (But it may wait indefinitely for other threads, possibly on other processors.) (Basically a SpinLock plus a BalkingPattern.)
* WaitFreeSynchronization: The subroutine succeeds in finite time, no matter what the other processors are doing.

----
(moved from CodeLocking -- I'm pretty sure this is *not* an example of CodeLocking. What kind of synchronization is this? Move it to the relevant page.) ''(Is WaitFreeSynchronization the right place for this?)''

We have the problem that we're writing in a language with no internal locking constructs - C. We have objects that can be in multiple linked lists, and occasionally have to be removed from all the linked lists, and then the object itself needs to be removed. What pattern can we use for this reasonably common(?) problem?

In a bit more detail, what we are working on is an IrcServer. There is a mapping of channels a user is on, and of users on a channel, both pointing to a struct which contains the information about that user on that channel. Now we might have looked up the user by "users on this channel" if, for instance, the user was booted from the channel, or we might have looked up "channels this user is on" if, for example, the user has quit. How can we remove the object from both linked lists without DeadLock''''ing? -- PerryLorier

* ''Create a dummy node '''E'''. Label it "Empty".''
* ''Change the code that runs down the lists to ignore pointers to the '''Empty''' node.''
* ''When you have a node '''X''' that you want to remove, modify the pointers in your lists to point at '''E''' instead of user node '''X'''.''
* ''Once '''X''' has been deleted from every list, you can delete user node '''X'''.''
* ''You could then remove the inert node '''E''' from your lists. Alternatively, you can have your normal traversing code short-circuit past such nodes as it goes.''

''Now all your operations are either no-ops or single writes and therefore atomic. Unless I haven't understood your problem, which is possible.''
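A minimal sketch of the dummy-node recipe above, written in C-compatible style since the question is about C. The struct layout and names are invented for illustration, and it assumes that a single aligned pointer write is atomic on the target machine and that only one removal is in flight at a time (otherwise you would want one Empty marker per removal rather than a single shared one).

 /* Hypothetical record type: each Membership sits in two singly linked   */
 /* lists, "users on this channel" and "channels this user is on".        */
 struct Membership {
     struct Membership *nextOnChannel;  /* list 1 link */
     struct Membership *nextForUser;    /* list 2 link */
     /* ... per-user, per-channel data ... */
 };

 /* The dummy node E.  Traversal code ignores any node whose address is
    &empty, but still follows its next pointers. */
 static struct Membership empty;

 /* Detach x from the channel list by swinging the predecessor's link to E.
    E first inherits x's successor, so a reader that lands on E can keep
    walking; each step is a single pointer write.  The user list is handled
    the same way through nextForUser. */
 void detachFromChannelList(struct Membership **predLink, struct Membership *x)
 {
     empty.nextOnChannel = x->nextOnChannel;  /* E points past x */
     *predLink = &empty;                      /* predecessor now bypasses x */
 }

 /* Once x has been detached from every list it is on, x itself can be
    freed, and E can be unlinked later (or simply skipped forever by the
    traversal code). */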
MichaelWill: C has no internal locking constructs, but you can use the pthreads library, which offers semaphores, mutexes, etc.

----
http://en.wikipedia.org/wiki/Double_checked_locking_pattern

----
CategoryConcurrency CategoryConcurrencyPatterns