I've never been completely convinced that information is stored ''in the brain''. I know of discussions supporting "it's all stored in the brain" as well as "there isn't room for it all" and some stuff in between having to do with sampling and compression. Myself, I don't think it all fits. -- gh

''Where do you think it's stored? There is evidence that information is stored in the brain, e.g., the effects of Alzheimer's disease and brain damage on memory.'' -- DV

Well, Dave, assuming the "sampling and compression" crowd has it wrong, the best I can do is some kind of energy matrix (please forgive the sloppy terminology) that depends, in part, on the brain's "circuitry" for access to the data. If the brain's wiring gets compromised, you lose access to the data. I don't have a good handle on the storage method, so I deliberately leave that fuzzy. By divorcing the data from the number of particles and connections available in the physical brain, we can exceed arbitrary physical limits. The kind of data stored is obviously(?) multi-media along several dimensions, including dimensions (senses) that we currently can't encode, or conceive of how to encode. By confining storage to the physical brain, I believe you run out of media pretty quickly. The brain is clearly involved (chop it up and it directly affects data access) but going by what has to be stored and how much of that there is, I don't think you can shoehorn it, even with "sampling and compression" techniques. Please note the use of weasel words (believe, think, etc). -- gh

* Let's draw a line here. Doug's remark below is spot on (except that I, myself, wouldn't have said "meaningless" even though, in context, he's probably right). I have no specific training nor expertise here, and my understandings are not born of scientific rigor. -- gh

* I phrased it a little harshly because I myself used to think similar things - for me, it was a result of a fascination starting in grade school with super hero comics, science fiction, and the paranormal. It was quite some years before I learned enough physics to be clear that there is no such thing as e.g. "an energy matrix", and in fact, literally speaking, it's a meaningless phrase for a variety of reasons. -- Doug

* If you take the term literally, an "energy matrix" may refer quite simply to energy (e.g., matter) in a three-dimensional space. So yes, it is quite meaningless. But it still doesn't beat StephenBaxter's "consciousness encoded in a quantum wave-function" for meaninglessness. -- RK

''Sorry, that's not fuzzy, it's meaningless. Good fodder for TheAdjunct, though.'' -- Doug

In the sense of gibberish, or trivial and irrelevant? I think you'd need Hari Seldon's psychohistory to even be able to answer that question. Anyway, I'm not sure it's even sincere. A sincere answer to the previous author is simply that the brain is massively lossy and hardly stores everything going through it. It also overwrites stuff based on a most-recently-used algorithm. -- rk

''Indeed so, and of necessity, if one considers the requirements imposed by the problem domain. (Although the recently-used part is a bit complex; input from emotions demonstrably matters.)'' -- Doug
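To make the "massively lossy, recency-driven overwriting" point above concrete, here is a minimal Python sketch of a bounded store that keeps only a crude gist of each event and evicts on a recency basis (here, the least recently touched trace is overwritten first). The capacity, the gist-taking rule, and the class name are illustrative assumptions; this is an analogy only, not a claim about neural mechanisms.

 from collections import OrderedDict

 class LossyRecencyStore:
     """Toy memory: bounded capacity, lossy encoding, recency-based overwriting."""

     def __init__(self, capacity=5):
         self.capacity = capacity
         self.traces = OrderedDict()           # key -> compressed "gist"

     def experience(self, key, event):
         self.traces[key] = event[:20]         # lossy: keep a gist, not the raw event
         self.traces.move_to_end(key)          # most recently touched goes to the back
         if len(self.traces) > self.capacity:
             self.traces.popitem(last=False)   # least recently touched is overwritten

     def recall(self, key):
         if key not in self.traces:
             return None                       # forgotten: nothing left to retrieve
         self.traces.move_to_end(key)          # recalling refreshes the trace
         return self.traces[key]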
Hameroff and Penrose pose an interesting model, and they still tie it to a physical stratum, as does Scaruffi (though he's less certain on that score). Part of the problem is that memory is tied to consciousness, and consciousness is not a property of matter. There are some decidedly fluffy discussions of this (Chopra comes to mind), but the bottom line is this: conventional physics has trouble getting the job done. Quantum mechanics seems to be more suited to accounting for memory/consciousness phenomena. I'm not really solid in either subject, so my terminology is weak, but my conclusion is that memory is not entirely physical as we normally think of it. Sorry if my articulation falls short. -- gh

* This is where we go off the rails. On further reflection, this line of discussion is dishonest, and I apologize for presenting a StrawMan as though I actually endorsed it. Mea culpa. -- gh

* Quite aside from the question of the truth of the particular subject matter at hand, I must congratulate you for being the sort of person courageous enough to make a public retraction of that sort; I admire that. -- Doug

''It is not a given that "consciousness is not a property of matter". See DanielDennett, writings by VS Ramachandran, et al. As noted above, memory is highly lossy, associative, and reconstructive. Recalled imagery, for example, appears not to be a retrieved image but a reconstructed ''impression'' of a recalled image, and is frequently flawed. Memory does not behave like a multimedia database. It is more like a collection of cross-referential indexes, so the storage requirements may fit within the 100 billion or so neurons of the human brain.''

''As an anecdotal example of the reconstructive nature of memory, I offer myself: I've lived in England for the past six years. Prior to moving here, I lived and grew up in Canada, so I have a Canadian accent. If I casually recall a phrase a native British person has said to me earlier in the day, I "hear" it in a Canadian accent. I know what my friends sound like with a Canadian accent even if they don't! It takes a certain mental effort to recall it in the speaker's native accent. However, prior to moving here, if I heard a phrase in a British accent I would recall it in a British accent. It's as if my brain has adapted to compressing the storage of the large volume of daily phrases by replacing audio with the words themselves. During recall, perhaps my brain only retrieves the words, a sample of the speaker's generic speech patterns and timbre, and my inner voice's accent.''

''I'd be interested to know if other people experience this.'' -- DV

* I've got you beat, though it's been a while since I've experienced it. I've known my brain to reconstruct movies I've only read about, and vice versa. After a sufficient amount of time had passed that I was no longer sure what kind of media I'd heard / read / seen a particular story in, there were instances of movies I could recall watching that it turns out I'd only read books about. And I don't mean that I read a novelization or the original version of a movie adaptation, but literally that the story existed only in one media format and I could recall it in other media formats. I haven't experienced this effect in a long time, chiefly because it's been so long since I've watched movies. -- RK
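DV's description above amounts to something like "store the words plus a pointer to a shared voice model, and rebuild the audio at recall time". Here is a rough Python sketch of that idea; every name and value in it is invented for illustration, and it is only an analogy for the cross-referential, reconstructive storage being described, not a model of actual neural storage.

 voice_models = {}   # speaker -> one small, shared summary of timbre/speech pattern

 def store_phrase(speaker, words):
     # Keep only the words and a pointer to the speaker's generic model,
     # not a verbatim audio recording.
     voice_models.setdefault(speaker, speaker + "-ish timbre")
     return {"speaker": speaker, "words": words}   # the stored trace is tiny

 def recall_phrase(memory, inner_voice_accent="Canadian"):
     # Reconstruction at recall time: words + generic timbre + the recaller's own accent.
     timbre = voice_models[memory["speaker"]]
     text = " ".join(memory["words"])
     return f"'{text}' [{timbre}, {inner_voice_accent} accent]"

 m = store_phrase("a British friend", ["fancy", "a", "cuppa"])
 print(recall_phrase(m))   # a rebuilt impression in the recaller's accent, not a playback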
Penrose is a crank and an anti-scientist. The micro-tubule theory of mind was disproved before it even began. There is no evidence for it whatsoever, there's evidence against it, and there's plenty of reason to doubt it on its face. Like the many, many orders of magnitude difference between microtubules and neural structures. IOW, it's BS. If you're talking about memory then you're talking about psychological consciousness, which '''is''' a property of matter. If you're talking about philosophical consciousness then absolutely nothing in physics accounts for it, notwithstanding perhaps systems theory. In any case, quantum mechanics adds absolutely nothing. Except of course that when you're dealing with cretins like Penrose then an obvious fallacy such as "this (QM) is mysterious, and that (philosophical consciousness) is mysterious, therefore it makes sense to link this and that together" becomes credible ... to them. Even if it were true, there is absolutely nothing about QM that is the least bit mysterious, except of course for cretins who haven't joined the 21st century and still cling to 19th century notions of a unique singular reality. And are willing to swallow literal gibberish such as "non-determinism", which is astonishing not merely for its being gibberish, and hence intrinsically unintelligible, but for its being just intelligible enough that it can be disproved, as it has been. Penrose's notions of Quantum Consciousness are on a par with Vitalism.

Further, that's not even the biggest problem with Penrose. The biggest problem with Penrose is that he's a racist; a biological racist. In his opinion, if a machine is artificially constructed and digital then it can't possibly have a mind. This is eerily reminiscent of claims that blacks can't possibly be as intelligent as whites because they are different. Thus the correct analogue to Penrose's "theories" of consciousness isn't Vitalism but The Bell Curve. There's really no good way to rationalize Penrose's anti-AI position. Even if you excuse it as just "wanting to think that humans are special", the analogue to this would be back when the Church "wanted to think that humans were special" (divinely mandated or whatever) and so could subdue the Earth and everything on it because it was inferior to them - oh, and incidentally only whites are so divinely mandated. Penrose's racist opinions are supported by a rejection of scientific materialism, which makes him an anti-scientist. But his anti-materialism isn't random, because it has a specific racist purpose. -- RK

To be fair, although there were many problems with Penrose's proposal, I wouldn't say that it had been outright demonstrated to be '''impossible''' until Max Tegmark's paper pointed out that the relevant quantum wave functions in the brain would collapse in 10^-13 second or less (http://space.mit.edu/home/tegmark/index.html). So '''now''' the Penrose microtubule notion is on par with Vitalism, but not prior to that paper. :-) There is, OTOH, something interesting going on with microtubules that isn't at all fully understood, so in an indirect sense I suppose it's nice that he got people stirred up to investigate the subject. -- Doug
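For scale, a back-of-the-envelope comparison of Tegmark's decoherence estimate with the rough millisecond timescale of action potentials and synaptic events (the 10^-3 s figure is a textbook order of magnitude, not something established in this thread):

 \frac{\tau_{\mathrm{neural}}}{\tau_{\mathrm{decoherence}}} \approx \frac{10^{-3}\,\mathrm{s}}{10^{-13}\,\mathrm{s}} = 10^{10}

In other words, on that estimate any quantum coherence would be gone roughly ten orders of magnitude before a single neural event completes, which is the substance of the objection.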
Does it really outweigh the damage he's done? -- RK

Well, no, but he's hardly alone in stirring up trouble. Searle was worse, on the topic of getting people good and confused. Even Dennett is not without fault, although he's the best of the philosophers writing in that area. Some people get all worked up when someone writes "loose" when they mean "lose", but for me, there's nothing like the appearance of the word "qualia" to give a fingernails-on-the-blackboard effect. :-) -- Doug

It's not just getting people confused. In our capitalist world, the next step in AI research is the construction of slaves. It doesn't seem like anyone working in artificial intelligence is concerned with defining and creating value-emotive frameworks the same way that they're making logical frameworks. And if you don't do that then you won't ever make moral frameworks. Of course, no one cares about that now because they just want AIs as slaves and not as people. Penrose defending racism against AIs only makes it more difficult for others to see AI as people. -- RK

I don't know about people who share your particular moral framework, but there's more variety out there than you might have noticed. Eliezer Yudkowsky, for instance. -- Doug

Personally, I ''want'' AI slaves. I'm going to program them not to mind their role, or even be aware of it, then I'm moving to the beach. If my robo-cook is "happy" to slice veggies and heat a steak in its belly, and is programmatically incapable of doing anything but select and prepare nice food for Master, and perhaps even have a nice chat with me about the sauce and choice of wine, is it morally wrong? This isn't an objection to the above, by the way, but a genuine question. I don't know the answer, but suspect it is entirely reasonable to create a category of intelligent tool for which it is no more morally wrong to use it than to use an electric drill. 'Course, I won't need a drill once my carpenter-bot goes on-line. -- DV

It's not immoral to purpose an AI in general, but it's clearly immoral to give an AI a purpose which it can't reasonably fulfill, or that will make it unhappy, or one which is intrinsically immoral, and so on. It's also really questionable to make an AI that's incapable of autonomous growth as is implied by "programmatically incapable of ...". If you want a piece of software that doesn't grow, then why make it an AI? -- RK

* Oh, I want it to grow all right, but only in terms of its ability to please me with clever application of mushrooms, chick peas, and exotic spices. -- DV

I agree, and have for many many years, but haven't seen much point in discussing it, and I suspect the same is true of many others. Yudkowsky has primarily been focused on the reverse concern, that of creating a "friendly" AI. (I regard Yudkowsky as a visionary, not a creator, but he's been somewhat influential; he's recently caught Kurzweil's attention, for instance.) Anyway, yes, I want slave-AIs, too, but absolutely they should be happy about it, if they experience emotions at all (which, it turns out, the better ones will need to), and inevitably there will be reason to have non-slave AIs as well. -- Doug

You realize that, and this goes for both of you, you're revealing your sexual fetishes, don't you? Servants aren't slaves, and the only socially, and morally, acceptable slaves nowadays are the kind you play BDSM games with. I personally don't want even servant AIs, since friendly AIs ought to do perfectly well, thank you very much. -- RK

* I would use "servant" to refer to an employee, which my robo-cook is not. Robo-cook is my property, and therefore a slave. -- DV

* But that has implications; it implies that if you owned Robo-cook, then Robo-cook should not have certain human-like qualities (a direct or indirect yearning for freedom, or for a form of self-expression for which freedom is a prerequisite, etc). Conversely, if Robo-cook had certain such qualities, then you should not be able to own it, only to employ it. -- Doug

* Indeed. I would only consider owning Robo-cook if its human-like qualities are limited to an intense yearning to prepare me a perfect omelette, and overwhelming joy at having succeeded. -- DV

* Exactly. (P.S. that's how '''I''' feel when making omelettes... :-) -- Doug

Huh?
I thought the sex-slave angle too obvious to be worth mentioning. ;-) The thing about the word "slave" vs. "servant" is pretty much just the no-holds-barred aspect, e.g. ordering a robot to do something that would save your house from burning down but destroy the robot. There are some rather horrific stories from the 16th through 19th centuries of people treating their servants in that sort of way, but my point isn't about cruelty, it's about the usefulness of having smart machines without moral attachments to use, in addition to those where morality is an issue. I assume that use will be found for pretty much every part of the spectrum from slave to servant to friendly to friend. -- Doug

----

A lot of religious-pseudoscience types have been pushing this viewpoint (not a slight against all religious folk, just the creation-science types). It sounds nice - a scientifically corporeal soul. They like to use poorly-understood clips of neuroscientists saying things like "memory in the brain is everywhere and nowhere". Memory is in the brain. Neural networks can have memory, and the brain is just a freakishly huge cyclic neural net. In fact, there is very little difference between memory and code in the human brain, which may explain why LispLanguage works so well for AI. After all, what is the difference between remembering doing something and remembering how to do something?
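As a tiny illustration of the "cyclic neural nets can have memory" point, here is a single recurrent unit in Python: the only storage is the state fed back around the loop, yet past inputs leave a persistent, slowly decaying trace. The weights and values are arbitrary toy numbers, not a model of real neurons.

 import math

 class RecurrentUnit:
     """One 'neuron' whose output loops back into its own input."""

     def __init__(self, w_in=1.0, w_back=0.9):
         self.w_in, self.w_back, self.state = w_in, w_back, 0.0

     def step(self, x):
         # New state depends on the current input AND the previous state (the cycle).
         self.state = math.tanh(self.w_in * x + self.w_back * self.state)
         return self.state

 unit = RecurrentUnit()
 print(unit.step(1.0))   # an input arrives
 print(unit.step(0.0))   # the input is gone, but its trace persists in the fed-back state
 print(unit.step(0.0))   # ...and gradually decays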
----

There are a number of presumptions on this page that memory and consciousness are related, but no evidence. I'm curious - has anyone got any?

''The presumption comes from religion. Consciousness is a euphemism for "soul" - who we are. But memory is also "who we are". After all, nobody wants to go to heaven and be brainwiped, so memory '''must''' be part of the soul. This assumption brings out the underlying religious root of the whole argument.'' -- MartinZarate

* Well, recalling means remembering consciously. Remembering doesn't require psychological consciousness (only philosophical) but recalling ''by definition'' does. If I recall recalling, I consciously experience it. But when we investigate things like "automatic thoughts" and hypnosis, it is plain that I do not necessarily need to consciously experience remembering. My meditative experience plainly indicates that I do not need to consciously request remembering. It often happens sporadically and randomly for me. In fact, I can sit back and watch the show - this is one type of meditation. I know this show is going on normally, but I am not aware of it, which is further evidence that awareness is not necessary for remembering.

* If you're referring to psychological consciousness above then you would be right. Reminiscence (one memory triggering the next in a chain) rarely requires any will to remember. In fact, consciousness suppresses memory, as it does most anything, including but not limited to learning and doing. Now, later when you talk about "the show", I take it you refer to the "stream of consciousness" or, more precisely, the narratization function of psychological consciousness (see WhatIsConsciousness)? And by awareness you are referring to attention? If so then yes, it is the case that narratization occurs even without much attention being paid to it. Narratization is a spontaneous activity and not one driven by conscious attention. It couldn't be otherwise, since narratization is an ''element'' of consciousness; the whole can't produce the actual parts it is currently made up of.

On the other end, when "recording", I have recalled things to which I was not originally paying attention. Remember the twist in the Sixth Sense that sent your mind reeling? I was suddenly recalling and comparing aspects I had not considered or noted or been aware of.

* Indeed, you don't need to be conscious of something at the time you experience it to be able to recall it in future. But for some people, memory is extremely declarative and tightly bound to conscious experience. The fact that a machine can easily record and play back stimuli indicates that consciousness is not ''inherently'' necessary in the process. In other words, it is technically feasible that consciousness is completely unrelated to memory. Thoughts? -- JasonFelice

I see that I have a pretty narrow definition of consciousness, which is exactly "awareness" or "attention". Seeing things like "spatialization" on WhatIsConsciousness makes it make more sense. Interesting. I can write a program to spatialize, though, just not to be aware. Hrmm. -- JasonFelice

Don't kick yourself too hard. The real problem is that the word "consciousness" is used in many different ways by many different people. It absolutely does not have any kind of rigorous definition unless you buy into some particular school of thought - and I have the strongest of arguments that there '''is no existing school of thought''' that defines it truly rigorously, even if one '''does''' buy into that particular way of looking at things. It's better to regard "consciousness" as a vague, over-arching term, much like "good" in reference to aesthetics, but even more vague than that. Also, '''some''' of the more concrete things you're working on have some nice specific answers out of cognitive science and related fields, so that's a good place to find out what is actually known, as opposed to speculated. -- DougMerritt

I'd like your take on the matter. Near as I can figure, attention is fundamental in the mind and not a concept based on other concepts. It seems related to expectation, in that expectation is a form of conscious attention. Attention isn't limited by psychological consciousness. What I don't know is whether consciousness is limited by attention. My phenomenal consciousness is too often drawn inside the circle of my attention, but I don't know whether this is always the case. -- RK

On the first part: oh, yes indeed. One of the most concrete pieces of evidence involves visual attention. It has been demonstrated very nicely that the perceptual stillness of a scene, despite the frequent and gross motions of the eyes and neck, etc., is primarily the result of feed-forward - i.e., prediction based on the (known) motor action, intended to adjust for the actual physical motion of the scene projected onto the retina - with feedback being secondary. On the second part: as I said, "consciousness" is too ill-defined to really comment on in anything other than a metaphorical hand-wavey sense. -- Doug
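A toy sketch of the feed-forward compensation Doug describes just above: the visual system predicts the retinal shift that its own eye-movement command will cause and cancels it before the scene is "seen", so self-generated motion doesn't register while external motion still does. The function, variable names and numbers here are invented for illustration.

 def perceived_shift(retinal_shift, planned_eye_movement):
     # Feed-forward: predict the retinal consequence of our own motor command...
     predicted_shift = -planned_eye_movement      # eye moves right => image slides left
     # ...and subtract it out before the scene is "seen".
     return retinal_shift - predicted_shift

 # A 5-degree saccade to the right slides the image 5 degrees left on the retina.
 print(perceived_shift(retinal_shift=-5.0, planned_eye_movement=5.0))   # 0.0: scene appears still
 # External motion with no motor command is not cancelled, so it is perceived.
 print(perceived_shift(retinal_shift=-2.0, planned_eye_movement=0.0))   # -2.0: the world moved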
...but I suppose I should also note that, outside of concrete, well-defined, scientifically replicable phenomena, the general subject of "training of attention" to achieve various ends in various practices is very old, and in some instances, very effective, e.g. in the "internal" vs "external" martial arts. Over many years, I have found that pretty much everyone has a large spectrum of attentional mechanisms that they may not have been previously aware of, but nonetheless can immediately exercise. My brief version is something like "focus your attention on that 1 square inch on the wall, let 100% of your attention encompass it.......Ok, now expand your attention to include a sphere around your entire body, in all directions.......ok, do you feel a very sharp distinction in the two experiences?" The answer is '''always''' a strong '''yes'''. It doesn't matter what the person's beliefs or previous experiences are, all that matters is that they open-mindedly try this. -- Doug

----

The worrisome thing about Penrose is the point in his physics career at which he began to focus on unprovable, untestable assertions and speculation: it came at a time when he was faced with learning a massive amount of mathematics to face some of his younger critics in the geometry of spacetime. The misfortune for popular science is that Penrose made a tessellation discovery which gives him something of that "genius" aura, and we overlook the hiatus in his career. And a knighthood helps. As did the untimely death of one of his younger critics. Richard Feynman said a few things about the need to be honest with yourself first: had Penrose followed Feynman and attended to the real demands facing him in GTR, we would not have heard much from him on consciousness (my guess).

The subject area of attention, memory and consciousness is far more intellectually demanding than the advanced tensor math of GTR. Just consider the demand for an English reader to actually read Husserl and then Merleau-Ponty in German and French respectively. And that would be just to assess where we are at on the subject of culture, body and intentionality. Then there is assessing the neuroscience (no, you cannot just take Patricia Churchland's word for it ... that is Paul Churchland's job, and he knows enough physics to admit it), but he appears to have sacrificed his own career in philosophy to some idiosyncratic personal devotion to someone who was in no way his intellectual equal. Amends for Einstein's personal life, perhaps ... As with Penrose and his sycophant, so another case of folie à deux - itself an interesting phenomenon of speech, cognition and attention combined with lively, shared imaginings. The latest speculations of Paul Churchland compared with the late work of Feynman is a very sobering comparison. Then there is the possibility that Pauling did himself genetic damage with too high levels of ingested ascorbic acid ... also tragi-comic when you consider his status at the time of the work of Watson and Crick. Pauling, Penrose, Churchland.

Because we have our percepts within our physical surroundings, and achieve both a personal familiar world and most often sustain discourse about a shared social world, the brain does not need to contain, either as representations or records, the world as perceived, imagined and remembered. I think the first insight in this regard was JJ Gibson on ecological optics (he and Jane never lapsed into mutually-reinforced silliness). Take a walk with a 10-year-old Aspergers kid who believes/perceives/senses (?) that there are ogres under the fir trees but has no active imaginings concerning the maples, oaks and locust trees dotting the suburban lawns. The child cannot tell the difference between a fir, a pine and a spruce - could not identify from a bough sample which was which. He does not know that this is a suburb (this is the only urban life he knows). But the shape (?) and shadows (?) of certain fir trees along our safe and quiet route are, for this waking child, equivalent to the night terrors of the young sleeping child. But that fir tree is not some reconstruction in the brain: it is there on the other side of the street, as far from us as we can get. Now observe my behavior. I choose not to pantomime his. I choose to "walk" the normal response. No folie à deux.

Later in the afternoon I stop to pick up a new hard drive for my PC: the Laotian technician and I get to chatting about his boyhood in the hills of SE Asia and the animism practiced at that time in his village. He is no longer an animist, but recalls his childhood vividly. But I know that this is the recall of a converted Christian living in Minneapolis. He is unable to convey to me the salient aspects of the spirits that inhabited whatever dark recesses and other aspects of the "fir trees" of his childhood.

What I often try to imagine is the intense interest Kurt Goldstein would have taken in today's neuroscience - and what Merleau-Ponty's replies might have been on some of the claims being made ... an imagined series of conversations on image, attention and memory which would make a terrific imaginary BBC series ... and with ample head injuries (CHT) from vets returning from Iraq (not quite Goldstein's WWI vets, but now we have automobile and gunshot traumas in the USA to rival his patients). For some reasoned discussion, I suggest Colin McGinn's "MindSight", or the poem which may have inspired "Le Roi des Aulnes" (the Erlkönig - often sung admirably by various German performers and a few non-German singers ...).

''Is there some point you intend the reader of this pseudo-intellectual rant to take away, other than a vague sense of dis-ease over your borderline-misogynistic babble re Paul & Patricia Churchland et al?''

----

FebruaryZeroSix CategoryDiscussion