Mathematical Model Suggests That Human Consciousness Is Noncomputable
KentuckyFC (1144503) writes "One of the most profound advances in science in recent years is the way researchers from a variety of fields are beginning to formulate the problem of consciousness in mathematical terms, in particular using information theory. That's largely thanks to a relatively new theory that consciousness is a phenomenon which integrates information in the brain in a way that cannot be broken down. Now a group of researchers has taken this idea further, using algorithmic theory to study whether this kind of integrated information is computable. They say that the process of integrating information is equivalent to compressing it. That allows memories to be retrieved but it also loses information in the process. But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay. By assuming that the process of memory is non-lossy, they use algorithmic theory to show that the process of integrating information must be noncomputable. In other words, your PC can never be conscious in the way you are. That's likely to be a controversial finding, but the bigger picture is that the problem of consciousness is finally opening up to mathematical scrutiny."
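The lossy-recall premise at the heart of the summary is easy to simulate. A minimal sketch in Python, where Gaussian noise merely stands in for whatever loss "integration" would introduce (an assumption for illustration, not the paper's actual model):

```python
import random

def recall(memory, noise=0.05):
    """Lossy recall: each retrieval returns a slightly perturbed copy."""
    return [x + random.gauss(0, noise) for x in memory]

original = [1.0, 2.0, 3.0, 4.0]
trace = list(original)
for _ in range(1000):          # recall and re-store, a thousand times
    trace = recall(trace)      # the recalled copy replaces the stored one

drift = sum(abs(a - b) for a, b in zip(original, trace))
print(f"total drift after repeated lossy recall: {drift:.2f}")
```

If recall is lossy *and* the recalled copy is written back, the trace random-walks away from the original; that drift is exactly what the researchers claim real memory doesn't exhibit, and what several commenters below argue it does.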
Ghost in the machine? (Score:3)
Nope, just a bad copy of it.
no Ghost_no "singularity"_only sci-fi (Score:5, Interesting)
Then you're in science fiction land...woo hoo! I like scifi as much as the next /.er, but your imaginings of the possible existence of a civilization that can fully digitize continuous data are worthless to a **scientific discussion**.
That's the problem. Hard AI, "teh singularity", and the "question of consciousness" are so polluted in the literature by non-tech philosophers throughout history that the notion of ***falsifiability*** of computation theory gets tossed aside in favor of TED-talk style bullshit.
Falsifiability kills these theories *every time* and hopefully this research in TFA will help break the cycle.
To be science, a claim must be testable: it must be a premise capable of being proven or disproven. "Hard AI" proponents like Kurzweil and the "singularity" believers ignore this part of science.
So happy to see this research
Re: (Score:3)
I'm not sure I agree. I think building an OS with virus checking incorporated into the design, for instance, would be a form of "self preservation". Or a computer/robotic arm combination that recognizes a screwdriver and will not let one get near. Moreover, I could point out humans that don't appear to have any concept of self-preservation, which calls into question whether this would be a rigorous requirement for a "truly human computer".
Likewise, a robot that nudged you and said "let's play catch. Ple
Re:no Ghost_no "singularity"_only sci-fi (Score:4, Insightful)
To put it bluntly, this entire study is worthless as science. We don't know how the human mind works. Should we ever know, we'd then have the oh-so-fun task of disentangling accidents of biology from fundamental underlying limits. And because we don't know how the human mind works, we have no way of knowing whether a particular model represents it accurately or at all (however, any theory that claims human memory is in any way perfect is certainly off to a bad start), thus any conclusions based on it are firmly in the land of wild mass guessing.
Well, the complexity of behaviour of the Universe has been increasing since at least the Big Bang in a virtuous circle. Is there some reason why the trend would stop, either now or at some future point? If not, then it seems like singularity would be the inevitable result.
Anti-AI isn't science, it's just the ancient belief about the supernatural specialness of human soul, typically dressed in arguments from lack of imagination [wikipedia.org] and often seasoned with a helping of ego [wikipedia.org]. Nature has no way of telling between "artificial" and "natural", after all, so it's incapable of allowing natural intelligent creatures (us) yet disallowing artificial ones.
Re:no Ghost_no "singularity"_only sci-fi (Score:4, Insightful)
Computation is insufficient to solve all problems, yes. The questions are: is anything capable of solving all problems? That is, is there something beyond computation? And if there is, does the human mind include it? And if it does, is it something essential, or does it just give you an extra edge in some special situations?
So far, no one has demonstrated any ability of the human mind that couldn't be replicated through computation. That, of course, doesn't mean none exists. Knowing how the mind works would presumably allow us to enumerate all its capabilities and settle the matter.
And now we're back to meaningless rhetoric.
Re: (Score:3)
Then you're in science fiction land...woo hoo! I like scifi as much as the next /.er, but your imaginings of the possible existence of a civilization that can fully digitize continuous data are worthless to a **scientific discussion**.
That's the problem. Hard AI, "teh singularity", and the "question of consciousness" are so polluted in the literature by non-tech philosophers throughout history that the notion of ***falsifiability*** of computation theory gets tossed aside in favor of TED-talk style bullshit.
Uh... excuse me? Why are you ranting on about something GP never even said?
Here's some "falsifiability" for you: repeatable experiments have been done on these different creatures, and a subset of species DO in fact exhibit self-recognition in controlled studies. Now, it may be only an assumption, but it is a pretty damned good assumption, that self-recognition is a precursor to consciousness. (It is actually more than just an assumption; but we have only one example of a "conscious" brain so it's hard t
Re: (Score:3)
*The neural network of the human brain can be atomized down to neurons and their connections.
Re:no Ghost_no "singularity"_only sci-fi (Score:5, Funny)
I hereby bestow upon you a Ph.D. in Pedantry.
Re: (Score:3)
Can a Ph.D. be bestowed by an individual? :P
Your comment presupposes that a "Ph.D. in Pedantry" exists. If such a degree did exist, I'm sure many people around here would have attained one (if not been granted multiple honorary doctorates).
Perhaps this calls for a new Slashdot achievement -- the Ph.D. in Pedantry. Once one achieves it, one gains the ability to mod posts as "pedantic" (since someone with a Ph.D. is obviously an official arbiter in the field). The fun thing about the "pedantic" mod is that it could serve as either +1 or -1, whic
Re: (Score:3)
Sure. There are 7 different types of consciousness - 3 below human and 3 above human. This is no accident. I will leave it to you to also experience the joy of (re)discovery of what they are. Your life will never be the same again. Lucid Dreaming is a great springboard to Meditation, which is usually the fastest way, but use whatever techniques and religion(s) you feel help.
By 2024 first (public) contact will happen. Technically, contact already happened thousands of years ago but on a limited level
can i play? (Score:3)
2024 is not a date or time. In the multiverse it is a place.
That year, 2024, is the point in space/time where the natural progression of human consciousness & technology & science converge and we will take a step forward equivalent to the first humans to make artwork or speak language...only this is not an inward step, but an outward one.
Conspiracy theorists talk about "predictive programming" and it's bunk of course, but humanity has known this all along. The parallel is all humans who will be aliv
Memories do decay (Score:5, Informative)
But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay.
Memories do decay upon recall. People misremember something and convince themselves that the misremembered notion was correct.
Re:Memories do decay (Score:5, Funny)
I'm pretty sure you are remembering that wrong.
Re: (Score:2, Informative)
There is actually a physiological basis for memories decaying upon recall, and there's a separate process called reconsolidation that needs to be initiated at a synaptic level in order to prevent memories from progressively degrading with activation (that is, it reconstitutes the memory after activation). You can selectively block this reconsolidation process during a small time window using protein synthesis inhibitors or electroconvulsive shock. The result is that these treatments will leave unactivated m
Re: (Score:2)
Memories do decay upon recall.
Nonsense. I mean, I can still recall every square centimeter of that 1976 Farrah Fawcett poster in excruciating detail. Over the last 30-odd years, I've literally recalled it some tens of thousands of times with absolutely no degradation in quality.
Good thing too, because for some reason I'm now almost completely blind. (see username)
Re: (Score:2)
Yeah, I don't feel like reading this whole thing because it reeks of some engineer trying to be an expert on the brain without bothering to dig into what's already been discovered. We've been studying the mind for thousands of years. Don't think that knowing about computers will make you more of an expert than people who've studied the subject.
Remembering both enhances and corrupts memory. You could compare "remembering" to "opening and resaving a media file with a lossy format" specifically because the
Re:Memories do decay (Score:5, Informative)
Exactly right. Neuroscientists have shown memories are distorted every time you use them; thus memories that are recalled frequently are less accurate than those infrequently recalled. [citation [northwestern.edu]]
Re: (Score:3)
Early non-medicinal PTSD treatments were desensitization, where you recall the memory in a calm and non-threatening situation. Turns out, just recalling them is like getting them off the shelf and putting them back. So there are faster ways to achieve the same thing.
Remembering things, and interrupting the storage process, seems to reduce the strength of a traumatic memory.
citation [smithsonianmag.com]
That link only touches the surface of the changing part, but it's a starting point.
As time goes on, your arguments can fall ap
Re: (Score:2)
Which is wrong, unless you read it with information loss and then store the retrieved version with more information loss. If information is lost on storage it does not mean it is lost on retrieval. If it is lost on retrieval it does not mean it is lost on storage. Even if it's lost on both actions, it does not degrade the stored version more unless you remove it and re-store it.
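The distinction being drawn here can be made concrete. A toy sketch, with a hypothetical quantize() standing in for any lossy step:

```python
def quantize(x, step=0.1):
    """A stand-in lossy operation: throws away sub-step detail."""
    return round(x / step) * step

stored = 3.14159

# Lossy retrieval alone never touches the stored version.
for _ in range(100):
    _ = quantize(stored)       # read a degraded copy, then discard it
print(stored)                  # 3.14159, unchanged

# Retrieval followed by re-storing the retrieved copy degrades it once,
# then it sits at a fixed point; it only keeps degrading if each cycle
# adds fresh loss (e.g. noise on top of the quantization).
for _ in range(100):
    stored = quantize(stored)
print(stored)                  # 3.1
```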
Re: (Score:3)
Protip: The brain isn't a computer.
Obviously it is, as it demonstrably can be used for computing.
However, it isn't a very reliable computer, nor necessarily Turing complete.
Re: (Score:2)
Why shouldn't the computer have decay? You can have read/write errors, you can compute a lossy encoding if you want to... and I would just implement the brain at a level low enough that the neurons are modeled. How the decay in the signals between neurons works can be observed, and will be observed better in the future.
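The textbook low-level abstraction for this is the leaky integrate-and-fire neuron, where the membrane potential decays toward rest between inputs. A minimal sketch, with illustrative (not physiological) parameters:

```python
def lif_step(v, current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    """One Euler step of a leaky integrate-and-fire neuron: the membrane
    potential leaks toward rest, and crossing threshold spikes and resets."""
    v += (-(v - v_rest) + current) * dt / tau
    if v >= v_thresh:
        return v_rest, True        # spike, then reset
    return v, False

v = 0.0
for t in range(60):
    current = 1.5 if t < 30 else 0.0   # drive it, then watch the decay
    v, spiked = lif_step(v, current)
    if spiked:
        print(f"spike at t={t}")
```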
Re: (Score:2)
They were saying that the act of retrieving memory would erode the memory.
Right - it's not the act of recollection that causes the memory to decay.
Memories are not a sequence of visual images like a film reel; they're associations between "symbols" representing the things you experienced at the time the memory was formed. The more often you think about the memory, the stronger those associations become, and the more permanent this memory - however, the initial impression is not guaranteed to be a perfect record, so details that are incorrectly recorded initially will become rein
Re: (Score:2)
Even if human consciousnes is based on method X, that doesn't mean that consciousness has to be based on method X. Remember, the original Turing machine thought expe
"never" is a rather strong word... (Score:2)
Memory is non-lossy? Research suggests otherwise. (Score:5, Informative)
The idea that retrieving memories repeatedly causes them to gradually decay is discussed in a Radiolab episode.
http://www.radiolab.org/story/91569-memory-and-forgetting/
Eyewitness accounts have been proven to be wrong over and over again. The assumption of a non-lossy memory is just false.
Re: (Score:3)
Wrong. Your short-term memory can hold 7 digits easily, without effort. It therefore is not so complicated that it changes before you put it back.
Remembering the order of events over several days, on the other hand, does not fit in short term memory in a coherent fashion. So you may gradually put several things on a week long trip on the same day. When telling the story, someone else who was there says "No, it was the next day, because [reason]". You didn't have a forceful enough memory to record
Re: (Score:2)
Let's play a little game. Go to, say, DeviantArt, and pick a random picture. With that picture right in front of you, can you describe it in such detail that I can find it? Or will the game end with me picking a random image that might, with some luck, bear some resemblance to the scene you described?
Eyewitness accounts are difficult because making a useful description is hard, even with
Memory is more like dynamic RAM. (Score:5, Interesting)
Re: (Score:2)
I've read studies that suggest the brain is designed to remember what's useful to it, and forget what isn't or what's harmful.
The same study stated that psychoanalysis, by forcing the patient to constantly recall painful memories (what you call refresh), interferes with the brain's natural ability to heal by forgetting, and maintains the patient's problem - and their dependency on the psychoanalyst in their search for a cure.
They really should pay attention to other fields (Score:2)
My PC cannot be conscious the way I am (Score:4, Interesting)
Because I'm a human being and it's a PC. Duh...
I think machines will eventually acquire their own form of consciousness, totally separate from ours. And I reckon that's just fine, and much more exciting in fact than trying to replicate our humanity in hardware that's just not compatible with it.
Re: (Score:3)
Even if machines eventually acquire a form of consciousness, how would *we* know? Who would believe a machine's claim to be conscious?
Well, you can't really prove you're conscious. I don't even mean proving it to me, I mean you can't prove it to yourself.
What if every decision you make is made before you realize it? What if what you think of as consciousness, what you think of as your decision making process, is merely a byproduct of packaging that decision up for dissemination to other parts of your brain that need to know about it, but weren't involved in the making of the decision. Maybe you didn't even make the decision for the rea
Sounds Non-Deterministic to me (Score:2)
A good analogy (Score:2)
Talking about whether a computer can think is like talking about whether a submarine can swim.
Trying to duplicate the mechanical details may be a waste of time. The fact that we can't duplicate the mechanical details today doesn't mean we never will.
worse than physicists (Score:2)
> By assuming that the process of memory is non-lossy
What a fucking strange way to start. Memories are recursive: really old memories you don't directly remember; you remember remembering them.
Memory Non-Lossy? I beg to differ. (Score:2)
"But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay. By assuming that the process of memory is non-lossy."
Really? I can barely remember last friday night. Let alone my circumcision 50 years ago. What was that girl's name who slapped me in my face? Or punched me... it's so hazy.... Caroline? Katy? Maybe it was Jeffery..... so fuzzy.... I had her number written on my hand.... oops right palm....
Memory non-lossy my ass...
Re: (Score:2)
Best memory loss example, ever [youtube.com].
misapplied mathematics (Score:5, Insightful)
Illogical Distinction (Score:2)
How is the brain not a computer? Pfft...ridiculous conclusions.
Retrieving memories causes decay? (Score:5, Interesting)
Ouch. Just. Ouch. No. Noooo. NOOOOO.
There is so much wrong with this statement I don't even know where to start. It implies that the memory is overwritten with the memory of recalling the memory, which is a huge and ridiculous assumption. Memory likely works much more like ant paths. The details that are recalled more frequently are reinforced, and can be remembered longer. It could also be compared to a caching algorithm; details used more often are less likely to be lost, or need fewer hints to retrieve them.
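The ant-path analogy fits in a few lines of Python; the reinforce/fade constants here are invented purely for illustration:

```python
# Details that get recalled are reinforced; the rest fade, like ant trails.
strengths = {"her face": 1.0, "the song playing": 1.0, "the weather": 1.0}

def recall(details, reinforce=0.5, fade=0.1):
    for d in strengths:
        if d in details:
            strengths[d] += reinforce                        # fresh pheromone
        else:
            strengths[d] = max(0.0, strengths[d] - fade)     # trail evaporates

for _ in range(10):
    recall({"her face"})       # the detail we keep returning to

print(strengths)   # 'her face' reinforced; the others faded toward zero
```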
And then using this assumption to declare something non-computable demonstrates a lack of understanding of the concept of computability. The only way that consciousness could be non-computable would be if there is a supernatural element to it. Otherwise, the fact that it exists means it must be computable.
Re: (Score:2)
Irrational Numbers are Supernatural
The Universe is Dependent upon The Cosmological Constant
Pi is an Irrational Number
The Cosmological Constant is Irrational because it contains Pi which is also Irrational.
The Universe Must be Supernatural.
Re: (Score:2)
Roger Penrose [wikipedia.org] (for one) is vehement in his insistence that consciousness is non-computable, possibly quantum in nature. Certainly there are other ways that consciousness could be non-computable without being supernatural.
Re: (Score:2)
And his fascination with that crackpot theory is why he, frankly, hasn't done any significant work in 20 years.
It's based on assuming there exists a new type of particle, which we currently have no evidence to believe exists, that interacts with a part of the neuron whose functions are already known, in order to cause a quantum superposition, despite the fact that it's been shown there's no way such a state could maintain coherence at anything close to the temperature the brain operates at.
Re: (Score:2)
It's not true that it has to be supernatural to be noncomputable, unless you agree that physics itself is computable. The jury is still out on that one (although I believe that it will turn out to be true).
Re: (Score:2)
Nonsense. Just because something exists and is not "supernatural" doesn't mean that it must be computable. Take the halting problem for instance. There is no Turing Machine that is able to take any possible TM and input and determine whether the inputted TM will eventually halt or go into an infinite loop when run with the given input. This
Re: (Score:3)
It implies that the memory is overwritten with the memory of recalling the memory, which is a huge and ridiculous assumption.
However, the notion that memory is overwritten by recollection actually does have experimental support. The idea isn't ridiculous; it's just repugnant, because it implies that our grasp on reality isn't as firm as we'd like to believe it is.
The only way that consciousness could be non-computable would be if there is a supernatural element to it. Otherwise, the fact that it exists means it must be computable.
Not necessarily. One way consciousness could be non-computable would be for it to be non-deterministic.
In any case this is all fuzzy; not only is "supernatural" a fuzzy word, the discussion of "computable" is fuzzy too. What would it mean for consciousness to be "computa
I thought memories do decay (Score:5, Insightful)
That allows memories to be retrieved but it also loses information in the process. But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay.
I remember hearing a radiolab episode on NPR talking about how memories actually get modified every time you recall them.
http://www.radiolab.org/story/91569-memory-and-forgetting/
Maybe the radiolab episode is completely wrong, but I don't think it's fair to assume memories are lossless without providing some evidence of this.
otherwise, retrieving memories repeatedly would c (Score:2)
> otherwise, retrieving memories repeatedly would cause them to gradually decay
So, I guess this was never observed in real humans?
a bunch of silly assumptions (Score:2)
That said:
1) Most memory researchers believe it IS lossy. Specifically, each time you access a memory you change it, losing original information.
2) Not all computers have to use only mathematical equations and algorithms. Specifically, there are quantum computers that do not work that way. While I am not an expert on such things, I highly doubt that the rather limited definition they are usin
Sounds like complete bullshit. (Score:2)
There seems to be a step missing from A (that's not how memory works) to B (therefore uncomputable). The premise that memory isn't lossy sounds like rubbish, even IF it's perhaps not simply a question of 'read errors'.
I recently watched this talk, Modeling Data Streams Using Sparse Distributed Representations [youtube.com], which seems to be able to represent memory in a layered and lossy way perfectly fine in a computer.
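The lossy-but-useful flavor of sparse distributed representations can be shown with a toy union trick (this is not code from the linked talk; the dimensions are arbitrary):

```python
import random

N, BITS = 2048, 40            # dimensionality and active-bit count (arbitrary)

def sdr():
    """A random sparse distributed representation: a small set of active bits."""
    return frozenset(random.sample(range(N), BITS))

def overlap(a, b):
    return len(a & b)

memories = [sdr() for _ in range(10)]
store = frozenset().union(*memories)   # lossy "memory": union of all patterns

# Stored patterns are fully contained; novel ones overlap only by chance.
# False positives creep in as the union fills up -- that's the lossiness.
print(overlap(memories[0], store))     # 40: every bit of a stored pattern
print(overlap(sdr(), store))           # ~8: chance-level for a novel pattern
```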
Memories do decay upon recall (Score:2)
Memories decay upon recall. [wired.com] Your brain basically alters the memory slightly each time. This can be used to erase or alter memories.
No need for math model (Score:2)
As always, the truth is in the Bible:
Genesis 1:27
God created man in His own image, in the image of God He created him; male and female He created them; but man is not a machine, for God did not look like a beige box PC.
singularity (Score:2)
If this is true, what does that mean for wankers like Kurzweil and the fantasy of the 'Singularity'?
Sounds like utter bullshit (Score:4, Interesting)
Here's a critique [arxiv.org]. (It's on arXiv; no need to sign up for "Medium".)
The paper isn't impressive. It makes the assumption that human (other mammals', too?) memory isn't compressed, and is somehow "integrated" with all other information. We've been through this before. Last time, the buzzword was "holographic". [wikipedia.org]
The observation that damage to part of the brain may not result in the loss of specific memories still seems to confuse many philosophers and neurologists. That shouldn't be mysterious at this point. A compact disk has the same property. You can obscure a sizeable area on a CD without making it unreadable. There's redundancy in the data, but it's a lot less than 2x redundant. The combination of error correction and spreading codes allows any small part of the disk to be unreadable without losing any data. (Read up on how CDs work if you don't know this. It's quite clever. First mass-market application of really good error correction.)
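CDs actually use cross-interleaved Reed-Solomon coding, but the basic property, losing a whole chunk and rebuilding it from modest redundancy, can be demonstrated with plain XOR parity. This sketch recovers any single lost block at 1.33x redundancy:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length blocks byte-wise."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_blocks(data)                # stored alongside the data

lost = 1                                 # scratch out an entire block
survivors = [blk for i, blk in enumerate(data) if i != lost]

recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost]
print(recovered)                         # b'EFGH'
```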
Is that what it proves? (Score:2)
Sounds more like you can't separate the human consciousness from the memories. I thought we already knew that. Perhaps there was a theory why until now.
Conscious phenomenon != complex processing (Score:2)
Stop it. Just stop it, people.
Memory doesn't work that way. It's a live feedback loop that reinforces itself through the conscious mind. There is some lossy drift, but stuff that maps to the real world is indeed corrected, if lossily. Ancient stuff from when you were a kid (Gee, what did Koogle taste like?) drifts and drifts.
Something from when you were a kid,
like Orange Julius taste, drifts but may suddenly be reset when you stumble across one at a mall somewhere (or Dairy Queen, whoever bought them). Hi
If it's not computable... (Score:3)
Recalled memory _is_ lossy (Score:3)
Repeatedly recalling an event, as when telling a story, re-stores a subtly _altered_ copy of the memory. This has been shown by many experiments on the plasticity of human memory.
No it isn't (Score:5, Insightful)
Just Kolmogorov Complexity...and religious intent? (Score:3)
The argument for compression essentially describes Kolmogorov Complexity. [wikipedia.org] The idea is that the K.C. of something (and everything can be reduced to a binary string) is the length of the shortest program (if you look at it algorithmically) that can describe that object (reproduce that binary string) and then stop. In TFA, the example is reducing the description of an infinite sequence of numbers to a finite program that calculates the odd primes and adds one to each. The number pi is infinite in length and looks random, but is not complex, since there are small programs that calculate it; an infinite, truly random string of numbers would have infinite K.C., because the shortest program would be "print <the infinite string itself>". The K.C. of an object is not computable (it's related to the Halting Problem [wikipedia.org]); essentially, you never know if you have the shortest program to describe an object.
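TFA's odd-primes-plus-one example is worth seeing in code: an infinite sequence described losslessly by a program a few lines long. The program's length gives an upper bound on the sequence's K.C.; what's uncomputable is knowing you've found the minimum. A sketch:

```python
from itertools import count, islice

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def odd_primes_plus_one():
    """TFA's example: a short, lossless description of an infinite sequence."""
    return (p + 1 for p in count(3) if is_prime(p))

print(list(islice(odd_primes_plus_one(), 8)))   # [4, 6, 8, 12, 14, 18, 20, 24]
```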
So here are some observations:
a) the whole premise rests on the assumption that the brain is a Turing-complete computer, i.e. the brain is a computer too. So if the brain is a computer, why couldn't other Turing-complete computers mimic it? In fact, K.C. theory uses the idea that there is a Universal Turing Machine that can mimic all other Turing machines. If the brain is not a Turing machine, then you can't really make any comments about its compression abilities, etc., because algorithmic theory is grounded on the Turing assumption; ... it just seems like a pointless example; BUT
b) TFA implies that compression is lossy. Well, not all compression is lossy, and the example provided (prime plus 1) is not lossy at all; it's perfect. So what is the point of that example, except to suggest that memory must be perfect compression?
c) the assumption that memory is/must be perfect compression seems extremely flawed. Memory is not perfect and most memory seems to degrade over time (see witness reports, personal experience, etc.)
So ... the whole paper seems riddled with discontinuities or inaccuracies. Really it seems like it would have been better to say:
"The brain compresses information in a lossy fashion. We don't know how. Assuming a Turing process is occuring, then the brain is looking for the best compression it can but it can never know if it has the best or not. A computer will be in the same boat." BUT
THE FEAR
Basically the article is making a (flawed?) claim that "machines can never be conscious." The argument plays very well to a religious and research-oriented crowd. First: machines can never be made in the image of man. We are not gods. Second: there is no requirement to consider ethics in AI. No matter how conscious the AI seems, it is not, _can't be_, conscious. Therefore, should you create a robot that walks, talks, acts, and feels like a human... well, it isn't conscious, so do with it as you will.
Bad syllogism (Score:5, Insightful)
Baloney. What a stupid argument. Here it is, summarized:
1. Here is one mathematical model of a way that memories could work.
2. This method would be computable.
3. But that would mean memories degrade the more you remember them.
4. But memories don't degrade the more you remember them.
5. Therefore memories are not computable.
Assignment for the student: find the flaw in this argument.
Re:Bad syllogism (Score:5, Funny)
The flaw is as follows: the summary is missing a crucial step, which would read as such: "6. Profits!".
Re:Bad syllogism (Score:5, Insightful)
The flaw is as follows: the summary is missing a crucial step, which would read as such: "6. Profits!".
They are missing an even more fundamental step: "0. Define consciousness." The definition they give, "a property of a physical system, its 'integrated information'," is a definition I have never heard before, and one I doubt most people would agree with. Before you try to explain something, you need to have a definition that people accept, and you also have to have a consensus that the phenomenon actually exists. There is some evidence that consciousness is an illusion, and that people make decisions unconsciously and then rationalize them after the fact [about.com]. Arguing about "consciousness" is like arguing about "free will" or arguing about whether people have a soul.
Re:Bad syllogism (Score:4, Insightful)
There is some evidence that consciousness is an illusion, and that people make decisions unconsciously, and then rationalize them after the fact [about.com].
But how could we rationalize about stuff if we weren't conscious?
Re:Bad syllogism (Score:4, Interesting)
An interesting argument is that it's basically the same way we do anything else.
Numerous studies have shown that if you, for example, watch someone moving their arm, you partially understand this by using the same area of your brain that deals with your own arms. Same with emotion - microexpressions, where a fleeting, subtle echo of the expressions on others' faces aids your understanding - and Botox can actually impair your ability to perceive the emotions of others.
Consciousness - or more accurately, the illusion of a self - can reasonably be understood as the reuse of an evolutionary device originally used to understand others' actions. When applied to ourselves, this guesses our 'intent' from internal actions, and provides reasons and justifications for actions, which may be entirely specious.
For example, direct brain stimulation does not 'feel' like an external input - it feels like a 'natural' thought that you had - and people will often rationalise reasons for the most unusual behaviour due to direct brain stimulation, rather than the simple answer 'you applied a pulse of electricity to my brain' - because that's not how it feels.
http://brainsciencepodcast.com... [brainsciencepodcast.com] - is interesting on this exact topic.
Re: (Score:2)
Actually, John Conway and Simon Kochen at IAS have a really interesting argument about free will [wikipedia.org]. Lecture videos here [princeton.edu].
Re:Bad syllogism (Score:4, Insightful)
The error is in step 5. It should be:
5. Therefore, that mathematical model is incorrect.
They found a contradiction, so the model must be revised.
Re:Bad syllogism (Score:5, Interesting)
In fact, it's pretty clear that 4. is incorrect. There was a fascinating recent study.
There is a drug that you can give somebody (or in this experiment, a rat) that will prevent it from creating new memories. They trained the rat to solve a maze, and it did it just fine. They gave the rat the drug, and it solved the maze perfectly. Once. After that, it couldn't do it again.
Implying that when you remember something, the very process of remembering removes the original memory, and it has to be created again. It will be different the second time, colored by your current experience. The more times you remember something, the more you are remembering the previous memory, not the original event.
A reference is
Re:Bad syllogism (Score:5, Funny)
A reference is
I think you remembered your reference once too often. ;-)
Right-O (Score:3)
Your cite of the "recent" study fits with my memory from the old school. There are several kinds (at least two) of memory: long- and short-term; one is chemical, the other electrical. Each reference to the protein carrying the memory rewrites it to include the information from the new conscious understanding and context, thus changing the protein when it is recreated. I am surprised that this method of decoding/recoding has not been looked into.
Re: (Score:3)
Well, they have serious problems with even
0. The assumptions on which their model is based.
FTFS:
They say that the process of integrating information is equivalent to compressing it. [...] By assuming that the process of memory is non-lossy [...]
Re: (Score:2)
Baloney. What a stupid argument. Here it is, summarized: 1. Here is one mathematical model of a way that memories could work. 2. This method would be computable. 3. But that would mean memories degrade the more you remember them. 4. But memories don't degrade the more you remember them. 5. Therefore memories are not computable.
Assignment for the student: find the flaw in this argument.
You cannot blame the theory when the data doesn't match! That is denialism!
Re:Bad syllogism (Score:4, Insightful)
You can, however, blame ignorant fucktards who don't understand the data OR the theory who go around acting like self-righteous assholes when a scientific theory intrudes on their ideological leanings.
Re:Bad syllogism (Score:4, Informative)
They might as well have just used some Schneier Facts in place of the paper: "SHA-256 is a hash algorithm, and not reversible." "Bruce Schneier uses SHA-256 as a compression algorithm for Alice and Bob's shared secret." "Therefore Bruce Schneier is not computable, except by himself."
It would have taken about ten minutes to email anybody in the psych department and this all could have been avoided. Good Work!
Re: (Score:3)
1. something ...memories degrade the more you remember them.
2. something else
3.
4. But memories don't degrade the more you remember them.
5. Therefore memories are not computable.
I just read your post and was going to reply but I forgot what point you were making. I kept thinking about it too long. What really pissed me off though is that you had the nerve to insult my mother or my religion or something. Just know for the rest of my life, I'll be keeping an eye on you, and you'd better be looking over your shoulder.
People who say stupid things piss me off. Yeah, it doesn't compute, I know.
It's a 'lossy' system by design_no flaw to detect (Score:2)
You start off-kilter and just careen into a ditch of dumbness...
The researchers did *not* start with that at all...
Here's where they started, from TFA:
"cannot be broken down" but it can be modeled in a way that proves the theory
Here's how, note the distinction, from TFA:
Re: (Score:2)
Exactly. This is like the intelligent design argument. "This problem is complex, so we're going to propose that it can't EVER be solved. Let's discuss where we're all going for lunch."
Re: (Score:3)
Are brains Turing machines?
Re: (Score:2)
There's no Enya music in the background.
The halting problem is a counterexample (Score:3)
Everything is computable given the right models and starting conditions.
"Does the Turing machine with a given description halt?" That's been proven not computable on a Turing machine. And we lack a model more powerful than a Turing machine.