''We have 4 or so titles on this topic. I think it is time to consolidate, or at least make clear divisions.''

People anthropomorphize. They treat inanimate things as if they were people. Programmers, even good, clear-thinking, logical programmers, anthropomorphize about programs. They say things like, 'The program thinks that you want to . . . '. Well, strictly speaking, the program thinks no such thing. It doesn't think; it merely executes instructions. Similarly, classes don't have responsibilities. They're bits of code. People have responsibilities; code just executes.

* ''The distinction is questionable... especially if 'thinking' is an emergent property of executing instructions. It's rather difficult to justify a claim that neurons 'think', but not so difficult when given a whole, active brain. SeeAlso: ChineseRoom -- AnonymousDonor''

So why do sane, rational programmers engage in this sort of garbage about programs 'thinking' and 'having responsibilities'? They do it because it works. It helps them to think about the programs. People have evolved to deal with the complexities of people interacting, and they can use those innate skills to deal with complex programs by anthropomorphizing about them. So one of the reasons OO is good is not that it's more logical, but that it's more human. Programmers, even very geeky ones, are still human, and programming is a human activity. -- StephenHutchinson

It might work for simple things, but for heavy-duty stuff it is most likely a distraction. Here's what a HumbleProgrammer had to say: "The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity." (EwDijkstra, EWD 709)

* ''Same HumbleProgrammer said: "Object-oriented programming is an exceptionally bad idea which could only have originated in California." -- LuizEsmiralha''

----

Surviving in groups is "heavy duty stuff". Increases in human group size co-evolved with brain structures for language and symbolic reasoning. There's nothing immature about using those brain structures to write software. -- EricHodges

''There's something immature about remaining anchored in anthropomorphic symbols when it has long been proven that humans can perform at a high level of abstraction. It is immature when better approaches are available for tackling particular problems. For example, in concurrent programming humans have a natural tendency to put an observation order on the history of things, and therefore to reason in terms of execution histories. As Dijkstra has observed, this is not very effective and is most of the time misleading. The claim that "this is how humans think" is in the general case just bunk. Humans also think in terms of algebras, logic(s), relational calculus, Petri nets, predicate transformers, and in many other ways. -- CostinCozianu''
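To make the concurrency point above concrete, here is a minimal Java sketch; the class name RaceSketch and the iteration counts are invented for illustration, not taken from this discussion. Read as a story ("one thread counts, then the other"), the code seems bound to print 200000; the interleaved reality is usually a smaller number.

 public class RaceSketch {
     static int counter = 0;  // shared and deliberately unsynchronized

     public static void main(String[] args) throws InterruptedException {
         Runnable work = () -> {
             for (int i = 0; i < 100_000; i++) {
                 counter++;  // not atomic: a load, an increment, and a store
             }
         };
         Thread a = new Thread(work);
         Thread b = new Thread(work);
         a.start();
         b.start();
         a.join();
         b.join();
         // Narrative reasoning predicts 200000; a typical run prints less,
         // because the threads' load/increment/store steps interleave.
         System.out.println("counter = " + counter);
     }
 }

The single-observer, story-ordered reading is exactly the intuition that fails here; reasoning correctly about this program takes interleavings or invariants, not one imagined history.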
You're assuming that anthropomorphizing prohibits thinking at a high level of abstraction. Applying social intelligence doesn't necessarily prevent us from using other kinds of intelligence. And evolutionary psychology suggests that these other ways of thinking are rooted in social intelligence. Calling it "immature" seems like an attempt to shame people away from thinking about it or doing it. There's a lot of research about social intelligence. I don't think it's "just bunk". -- EricHodges

''"A lot of research" doesn't prove a thing. There's lots and lots of research all over the place. EwDijkstra had results to show for his theories, and you bet he attempted to shame people away from using anthropomorphic terminology. He took his mission as an educator very seriously. Furthermore, see OoIsNotAnthropomorphic. -- CostinCozianu''

Calling it bunk doesn't prove anything, either. Shame isn't needed. If it doesn't work, we should be able to express that without resorting to emotional tactics. I'm glad Dijkstra wasn't my educator. -- EricHodges

Comparison with other fields may be helpful. Some biologists avoid anthropomorphic language at all costs, and the end result is that they say the same thing in a more technically accurate but far less direct and understandable way. Computer science would do well to skip that movement. When we say the computer ''wants input'', we all know we mean it ''has been instructed to request and receive input before its operation will continue'', so why would we use the longer phrase? -- AnonymousDonor

There's a fear that anthropomorphizing will lead people to imagine software is more intelligent than it really is. That fear is well founded, and people should guard against it, but my experience is that most programmers are well aware of how intelligent their software can be. It's usually programmers who provide a reality check for customers in this area. -- EricHodges

----

I think invoking EWD may be an example of QuotingNotThinking, or at least quoting out of context; either way, the original poster hasn't supported his thesis that OO is anthropomorphic. The only examples cited so far are:

* The program thinks that you want to... ''(no less clear than "the current state of the program requires that the user...")''
* ...classes don't have responsibilities. ''Inanimate objects quite often have responsibilities; corporations, coffee mugs, governments, and my brake linings each have responsibilities. They're the expected behaviors that a human has assigned to them.'' {But do physical modeling techniques extrapolate well to non-physical or less-physical domains, like finance or law?}

----

I stick with what I said: classes don't have responsibilities. There is, in my mind at least, a difference between 'an expected behavior' and a 'responsibility'. Legal entities, that is, people or collections of people like corporations, have responsibilities; animals and inanimate objects don't. To say 'a mug has a responsibility to hold coffee' might be fine in a poem, but in ordinary prose it stretches the meaning of the word responsibility past its breaking point; in fact, as a use of language it's downright irresponsible! ;-) -- StephenHutchinson

----

There seem to be two or more somewhat distinct issues here. First, does OO better support "social-centric" models? Second, do social-centric models make for better software? The second invites the question of whether social modeling is a crutch for those who cannot grasp more powerful concepts. In other words, should software fit our minds, or should we alter our minds to fit software's needs? On the flip side, one could also argue that to make applications more user-friendly, we need social models that better map to user thinking. But what about non-human applications, such as chemistry modeling with no UIs, or batch jobs? Perhaps if we better divide these issues, the arguments might be better organized. Just a thought.

''I think OO enables the application of social intelligence to software development more than other paradigms I've used. I don't think that makes better software. Instead it makes software development easier, because we have brains well suited to modeling and predicting social scenarios. Compare it to running across a field using the long muscles of the legs versus pulling yourself across by your fingertips. Both methods accomplish the same task, but one requires less effort because of the way our bodies operate. We evolved legs for running. We evolved brains for surviving in groups. -- EricHodges''
----

Is ResponsibilityDrivenDesign part of OoIsAnthropomorphic? I remember somebody suggesting it was, but I cannot find the original suggestion. I did a write-up that was removed from this topic, probably because I failed to establish a relationship. -- top

''No, ResponsibilityDrivenDesign is not "part of" OoIsAnthropomorphic. ResponsibilityDrivenDesign can be used without anthropomorphizing OO. One can anthropomorphize OO without using ResponsibilityDrivenDesign. They are orthogonal.''

Then what ''are'' the anthro features of OO? You have "self-handling nouns", which are kind of like little machines or even little people, but is that it?

''Some OO features that encourage the application of evolved social intelligence to design (a code sketch follows the list):''

* ''The distinction between public and private data. Human brains are good at keeping track of who knows what about whom, what information is secret, who shares those secrets, etc. These skills are vital to survival in groups with dominance hierarchies.''
** Yes, but it is relative. (I bet you hate me using that word by now.) You tell some people some things but not others. And when you are really drunk... Plus, languages like SmallTalk don't make a big deal about public/private. Does that make ST less anthro?
* ''The distinction between public and private behavior. Human brains are also good at keeping track of who can do what to whom. (Again, for survival within dominance hierarchies.)''
** Please clarify. Anybody can do just about anything to anybody; it is a matter of probability, at least the way I think about it. OO tends to hardwire these rules, and that is unnatural, thus less anthro.
* ''Separation of interface and implementation. As it says on your home page, "... to me it is better abstraction to simply ask for a cup of strong coffee, assign a priority, and let something else take care of who or what actually does the task." You have a brain that easily builds models on the knowledge that something will happen without details about how it will happen.''
** Something functions have been doing for a long time.
* ''Association of data and behavior. Human brains are good at figuring out what information is needed to accomplish a certain task and, vice versa, what tasks can be accomplished with certain knowledge.''
** I am fuzzy on this. May I ask for clarification?
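Here is one way the four features above can look in code: a minimal, hypothetical Java sketch. CoffeeService, Barista, and every other name in it are invented for illustration, not taken from the discussion.

 // Separation of interface and implementation: the caller asks for
 // coffee and assigns a priority, without knowing who or what does it.
 interface CoffeeService {
     void brew(String style, int priority);
 }

 class Barista implements CoffeeService {
     // Private data: hidden state that outsiders cannot read,
     // like a secret kept inside a group.
     private String secretRecipe = "extra fine grind";

     // Public behavior: anyone may ask for coffee.
     public void brew(String style, int priority) {
         grind(style);  // data and behavior live together in one object
         System.out.println("Brewing " + style + " at priority " + priority);
     }

     // Private behavior: only the Barista itself may perform this step.
     private void grind(String style) {
         System.out.println("Grinding (" + secretRecipe + ") for " + style);
     }
 }

 class Cafe {
     public static void main(String[] args) {
         CoffeeService service = new Barista();  // caller sees only the interface
         service.brew("strong coffee", 1);
     }
 }

Whether it helps or misleads to read this as "the barista keeps her recipe secret, and anyone may ask her for coffee" is precisely what is being debated above.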
''You asked for some features, so I gave you some. Arguing about them is pointless.''

* Why is it "pointless"? A look at pros and cons is a good thing. That is what wiki is for.

Above, I noticed somebody said, "So why do sane, rational programmers engage in this sort of garbage about programs 'thinking' and 'having responsibilities'? They do it because it works." This seems to make that snipped content of mine relevant after all. -- top

''The ability to think fuzzily, and to draw correlations that don't actually exist, is one of the great strengths of the human mind, and it's why we aren't computers. Taking advantage of that strength is useful. TerryPratchett makes this a fairly common theme in his books, where various entities are repeatedly surprised by the human capacity for self-delusion, and by how well that capacity actually *works* for humans. Bet you didn't think that page would be OnTopic any time soon!'' -- ChrisMellon

But that does not answer the question, "Is OO more anthropomorphic than other paradigms?" Most paradigms can model UsefulLie''''''s. Or am I misreading the implications of the topic title?

----

CategoryObjectOrientation