Moved from DeadLock.

How do biological cellular systems avoid DeadLock and RaceCondition with so many processes in parallel?

''Many times they don't... Technically, positive feedback can reinforce a "negative" response. There are many levels of control in a bio system... Higher-level processes usually monitor the progress of lower ones to keep things under control. -- gl''

* Deadlock frequently does occur in biological systems; the usual result is the death of the organism (or at least the failure of some part of the organism). For example, it is quite possible to withhold oxygen from an individual long enough that normal regulation of breathing stops working - even if the oxygen deprivation ceases, the individual will die without active resuscitation. (If, at this point, artificial respiration is provided, the individual may recover.) Concrete examples of this phenomenon are pulled out of swimming pools all the time. If you rescue a drowning person (who has stopped breathing) but then, after removing them from the water, just leave them alone on the ground, they will probably die. Give that person CPR afterwards, and he/she has a chance at survival - depending on how much brain damage has occurred due to oxygen deprivation.

** ''That is at the macro level. I'm looking to understand DeadLock more at the micro level, as it occurs in cells: how do these nano-components coordinate themselves to function properly despite the lack of centralized control and DeadlockAvoidance schemes?''

*** It seems that in many cases cells just live with the collision. In fact, biology tends to rely on multiple messages to govern the degree of activity. For example, a hormone is used to send a message, and receptors react depending on how many molecules of the hormone they sense. In neurons, multiple messages tend to reinforce the output signal level. Biology tends to thumb its nose at OnceAndOnlyOnce, perhaps because it uses lots of small machines working in tandem instead of a few big ones. Also, maybe deadlocks look different when you are not Boolean-based.

Interesting - are there patterns there that could be applied to improve software design? For instance, the discussion of ManyShortMethodsPerClass: what is the typical ratio of functions per "object" in a cell? After all, nature often gives good engineering pointers, having had millions of years to adapt its solutions.

''Wow. No kidding there... compiler/interpreter/translator/transcribing technologies are used in the holistic DNA/genetic/cell control stuff. The biggest problem is that what we have today IS highly evolved. Consider the chain involved in engineering CPUs, for example. The better the chip, the better the software that can design the chip, and then the feedback loop is set up. Not much of the 8008 left (just barely remnants of the 8080, Z-80).''

Right, but despite the complexity of our tools, we still have problems knowing how to use them. Cells are tolerant of deadlock, but can we make software that allows deadlock and still functions? (A rough sketch of one approach follows below.) We are still arguing about whether a class should have <= 10 methods or > 10 methods, and how complex each method should be. It seems to me a cell is analogous to a complex ObjectOriented software application, so can we imitate some of that architecture in our designs? AlanTuring was interested in CellularAutomata from the beginning, but now that we have a better understanding, can we feed forward ideas from nature about how cells work to make our software better?
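Picking up the question above about software that allows deadlock and still functions: here is a minimal sketch, assuming Python's standard threading module, of the "wither and get re-queued" strategy mentioned later on this page. All names (task, resource_a, resource_b) are made up for illustration. A task that cannot obtain everything it needs within a short time gives up what it holds, backs off, and tries again later, rather than blocking forever.

 import threading
 import random
 import time
 
 # Two shared "resources" needed by two concurrent tasks in opposite order -
 # the classic recipe for deadlock.
 resource_a = threading.Lock()
 resource_b = threading.Lock()
 
 def task(name, first, second):
     """Acquire both resources; if the second one cannot be acquired in time,
     release the first, back off briefly, and retry instead of blocking."""
     while True:
         with first:
             # acquire(timeout=...) returns False instead of waiting forever
             if second.acquire(timeout=0.1):
                 try:
                     print(f"{name}: got both resources, doing work")
                     return
                 finally:
                     second.release()
         # Potential deadlock detected (by timeout); tolerate it and retry.
         print(f"{name}: could not get second resource, backing off")
         time.sleep(random.uniform(0.01, 0.1))
 
 t1 = threading.Thread(target=task, args=("task-1", resource_a, resource_b))
 t2 = threading.Thread(target=task, args=("task-2", resource_b, resource_a))
 t1.start(); t2.start()
 t1.join(); t2.join()

This is closer to the biological picture: collisions and even transient deadlocks are allowed to happen, and the system recovers because nothing holds on forever. The random back-off is what keeps two stubborn tasks from colliding the same way indefinitely.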
And of course then feed back software that designs better hardware, leading to increased knowledge of biology, better medicine...

''Wow. Getting deep here... I think so. The primary key is building in an interpretation engine such that the code can produce new code that is then executed a la LISP (or perhaps P-Code or whatever the heck Java uses). ''Forth'' has some nice ideas also. Parallel processing is the way to go in the future. (Nice bio analogy - many cells doing similar work.) Any given task CANNOT deadlock. It MUST safeguard against this condition. Most mutually dependent processes can probably be detected automatically. Detect a potential deadlock and prevent it from happening. Have alternate paths to take (exception handling) so that something reasonable and productive is always going on.''

Thanks. But I'm not thinking of imitating cells literally, i.e. having "genetic code" that is then "transcribed" to P-Code (RuntimeReflectionIsaDesignSmell?), but more abstractly. Say the cell's main "packages" are Cytoplasm, Nucleus, Mitochondria, Ribosomes, etc.; within those are the "classes", i.e. the Nucleus has Chromosomes, which again are a kind of package, then genes, then DNA, proteins, etc. If you were to draw a high-level class diagram of this architecture and enumerate the methods that each low-level class performs (the DNA, RNA, protein classes, etc.), is there a pattern of each class/object having few methods, or many, or a mix? Then, from that abstract view, does it tell us anything about the "load" that a functional unit should have in general? We could explore it in other domains, i.e. the structure of a transportation system (GridLock?), an economy, or an ecosystem and their functional units; I just think cells are an easier metaphor. Why CAN a biological system allow deadlock (which was stated at the top) while a man-made system CANNOT?

''I need to think about this and get back to you. About reached my WikkiMax for the present. BTW, I do understand that deadlock does occur in bio processes, but if a particular resource is not available, the requesting process usually halts right there and then withers and dies if not given the proper encouragement. Perhaps it is re-queued at a later date...''

[Biological systems, it should be pointed out, do not have anything resembling SharedStateConcurrency; if anything, they use a message-passing mechanism, with molecules of various sorts as the "messages". Something to consider...]

MessagePassingConcurrency? Still, within each "object" or message-creating entity, if you identify the processes the entity carries out and the functions it uses to send and receive, is there a typical number? 1, 3, 10, 20? Recent reading suggests that a lot of what was thought to be "junk" DNA appears to be functional, so we are still learning; but based on what we know, do objects in cells highly specialize, and if so, does that limit each entity's behaviour (number of methods)?

To make it more concrete, consider the following snapshot of a very small subset of activity in the cell:

: DNA replication is initiated when a protein encoded by the gene ''dnaA'' binds to the 9-mers and forms a protein core around which the DNA coils. This coiling of the DNA stimulates the region containing the three 13-mers to unwind. A number of enzymes bind to the unwound DNA before replication can start.

So in abbreviated UmlAsciiArt we seem to have something like:

 dnaA_Protein *Protein# ''(IsA protein)''   [..., fold(), bind(), unwind(), ...]
 DNA *MoleculeAssembly#                     [..., unwind(), coil(), stimulate(), replicate(), ...]
 13Mers *DNA@ ''(Composition or PartOf DNA)''   [..., unwind(), ...]
 9Mers *DNA@                                    [..., unwind(), ...]
 Enzyme                                         [..., bind(), ...]

So even if we consider just these five classes, how many methods (and, for that matter, attributes) does each have if we try to list all the most important ones? Do methods vary significantly in behaviour when the parent class already has them, such as unwind()? Can we show an instance of DeadLock in this specific process (replication)?

----

The above as a sequence in abbreviated UmlAsciiArt:

 dnaA_Protein--bind()->9Mers
 DNA--coil()->dnaA_Protein
 DNA--stimulate()->13Mers
 13Mers<-unwind() ''(message to self)''
 Enzyme--bind()->13Mers

What would DeadLock look like as a sequence in general, and could it happen in this process? I realize this is greatly simplified; see http://www.sb.fsu.edu/~hongli/BCH5425/note4.html.

----

The situation is even more complicated in human cells. Human chromosomes are not replicated linearly from end to end, but rather concurrently from hundreds (thousands?) of replication origins. Each of these origins "fires" semi-independently of the others to replicate the genome in parallel, analogous to having 300 people each copy one page of a 300-page book. It isn't super-important that all the origins fire simultaneously, but it is crucial that '''none''' of the origins fires more than once per cell division. To continue the book analogy, origin re-firing would be similar to winding up with a new book that has two page 217s. The cell solves this problem by initially having all the origins turned off. Then they are all (or most of them are) primed to fire. Then the primer is destroyed. Then a signal is sent that fires the primed origins. The primed origins fire and then immediately return themselves to the unprimed state. A sensor detects when all the copying is finished and resynthesizes the primer, enabling the cycle to continue.

* some gory details from a leader in the field can be found at http://www.mskcc.org/mskcc/html/10235.cfm
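That prime-once, fire-once protocol reads a lot like a one-shot latch. Here is a minimal sketch in Python of the same idea; the names (Origin, license, fire) are hypothetical, chosen only for illustration. Origins can fire only while they hold a "license", firing consumes the license, and new licenses are handed out only after a completion check, so no origin fires twice in one cycle even though the firing signal arrives many times in parallel.

 import threading
 from concurrent.futures import ThreadPoolExecutor
 
 class Origin:
     """A replication origin that may fire at most once per licensing cycle."""
     def __init__(self, ident):
         self.ident = ident
         self.licensed = False          # the "primed" state
         self.fire_count = 0
         self._lock = threading.Lock()  # makes consuming the license atomic
 
     def license(self):
         """Prime the origin (done for all origins while firing is impossible)."""
         self.licensed = True
 
     def fire(self):
         """Fire if and only if a license is held; firing consumes the license."""
         with self._lock:
             if not self.licensed:
                 return False           # already fired (or never primed) this cycle
             self.licensed = False
         self.fire_count += 1           # ...replicate this stretch of the genome...
         return True
 
 origins = [Origin(i) for i in range(300)]
 
 for o in origins:                      # phase 1: license everything
     o.license()
 
 # Phase 2: the "fire" signal arrives repeatedly and concurrently,
 # yet each origin still fires exactly once.
 with ThreadPoolExecutor(max_workers=16) as pool:
     for o in origins:
         for _ in range(3):
             pool.submit(o.fire)
 
 assert all(o.fire_count == 1 for o in origins)
 # Re-licensing would happen only here, after a "copying finished" check.

Notice that this is not DeadlockAvoidance at all; it is the same consume-the-message trick the molecular machinery uses. The license is effectively a molecule that gets used up, so a second firing attempt simply has nothing left to bind to.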
Complex behaviors are tough to coordinate, but is ''deadlock'' really a problem one might expect? Cellular components aren't really locked down for any particular use. They may be primed to allow a certain set of behaviors, but in that case it's clear what happens next. Aside from that, the only thing I can think of is the blocking caused by physical presence, which is fine so long as the components separate again.

----

Cellular processes rarely involve locking mechanisms of any sort. Atomicity is achieved through the fact that occupying a receptor or an enzyme precludes other molecules and signals from doing the same. But it is almost a truism that these interactions are extremely short-lived; many of the most powerful toxins work precisely by introducing locks into physiological mechanisms - cyanide or carbon monoxide permanently binding to ATPases or hemoglobin, a neurotoxin binding permanently to a nerve ending or to the enzyme which consumes a neurotransmitter, and so on. I guess you could say that many of the systems within the body follow the producer-consumer model, with the blood as the producer and the cells as consumers. Since the body actually works on physical material that can only be in one place at a time, there is no way for a group of cells or subcellular processes to break this abstraction (which isn't an abstraction in the real world) and vie for a consumable substance with another cell or process. Everything is either in one place or another. This isn't something that can literally exist within a software system, so it must be modelled - which leads to incorrect implementations and bugs like deadlocks.
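Tying this back to the MessagePassingConcurrency remark above, here is a minimal producer-consumer sketch, assuming Python's standard queue and threading modules; all names (Molecule, bloodstream, cell) are hypothetical, for illustration only. The "substance" is an ordinary object handed through a queue, so at any moment it is held by exactly one party; consumers never contend over shared state, they just wait for the next message, which is roughly how molecules-as-messages sidestep SharedStateConcurrency.

 import queue
 import threading
 
 class Molecule:
     """A unit of 'substance' - say, one molecule of glucose or a hormone."""
     def __init__(self, kind):
         self.kind = kind
 
 bloodstream = queue.Queue()          # the only channel between producer and consumers
 STOP = object()                      # sentinel telling a consumer to shut down
 
 def producer(n_molecules, n_consumers):
     for _ in range(n_molecules):
         bloodstream.put(Molecule("glucose"))
     for _ in range(n_consumers):     # one stop signal per consumer
         bloodstream.put(STOP)
 
 def cell(name):
     consumed = 0
     while True:
         item = bloodstream.get()     # blocks until a "message" arrives
         if item is STOP:
             break
         consumed += 1                # metabolize; the molecule exists nowhere else now
     print(f"{name} consumed {consumed} molecules")
 
 consumers = [threading.Thread(target=cell, args=(f"cell-{i}",)) for i in range(3)]
 for c in consumers:
     c.start()
 producer(n_molecules=100, n_consumers=len(consumers))
 for c in consumers:
     c.join()

There are no locks visible to the application code, and nothing for two cells to deadlock over: a molecule is either still in the bloodstream or inside exactly one cell, which is the "one place at a time" property the paragraph above says we otherwise have to model by hand.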