AI Science

Mathematical Model Suggests That Human Consciousness Is Noncomputable 426

KentuckyFC (1144503) writes "One of the most profound advances in science in recent years is the way researchers from a variety of fields are beginning to formulate the problem of consciousness in mathematical terms, in particular using information theory. That's largely thanks to a relatively new theory that consciousness is a phenomenon which integrates information in the brain in a way that cannot be broken down. Now a group of researchers has taken this idea further, using algorithmic information theory to study whether this kind of integrated information is computable. They say that the process of integrating information is equivalent to compressing it. That allows memories to be retrieved but it also loses information in the process. But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay. By assuming that the process of memory is non-lossy, they use algorithmic information theory to show that the process of integrating information must be noncomputable. In other words, your PC can never be conscious in the way you are. That's likely to be a controversial finding, but the bigger picture is that the problem of consciousness is finally opening up to mathematical scrutiny."
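The lossy-vs-lossless distinction the summary leans on is easy to make concrete. Below is a toy Python model (illustrative only, not the paper's formalism; all names and parameters are invented): a "recall" that re-stores a quantized version of a slightly perturbed trace drifts steadily, while a lossless compress/decompress round-trip stays exact no matter how often it is repeated.

import random
import zlib

random.seed(0)

def lossy_recall(trace, step=4):
    # Toy reconsolidation: each recall perturbs the trace a little,
    # then re-stores a quantized (lossy) version of what was retrieved.
    noisy = [x + random.uniform(-step, step) for x in trace]
    return [step * round(x / step) for x in noisy]

original = [float(i) for i in range(32)]
trace = original
for _ in range(50):
    trace = lossy_recall(trace)
drift = sum(abs(a - b) for a, b in zip(original, trace)) / len(original)
print(f"mean drift per sample after 50 recalls: {drift:.2f}")

# A lossless round-trip, by contrast, is exact however often it repeats:
data = bytes(range(32))
for _ in range(50):
    data = zlib.decompress(zlib.compress(data))
assert data == bytes(range(32))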
  • by ArcadeMan ( 2766669 ) on Thursday May 08, 2014 @04:00PM (#46952881)

    Nope, just a bad copy of it.

    • by VernonNemitz ( 581327 ) on Thursday May 08, 2014 @04:30PM (#46953271) Journal
      "Non-computable" does not mean "non-copy-able". In other words, consider the sort of consciousness associated with recognizing oneself in a mirror. Humans are not the only animals that can do that. Among those that can are quite a few other primates, dolphins, elephants, some species of birds (certain parrots), and even the octopus. So, think about that in terms of brain structure: Birds have a variant on the basic "reptilian brain", elephants and dolphins have the "mammalian brain" extension of the reptilian brain, chimps and gorillas have the "primate brain" extension of the mammalian brain, and the octopus brain is in an entirely different class altogether (the mollusk family includes clams and snails). Yet Nature found ways to give all of those types of data-processing equipment enough consciousness for self-recognition. And after you include however-many extraterrestrial intelligences there might be, all across the Universe, well, anyone who thinks "no variant of computer hardware will ever be able to do that" is just not thinking clearly.
      • by globaljustin ( 574257 ) on Thursday May 08, 2014 @04:48PM (#46953477) Journal

        And after you include however-many extraterrestrial intelligences there might be, all across the Universe, well,

Then you're in science fiction land...woo hoo! I like scifi as much as the next /.er but your speculations about the possible existence of a civilization that can fully digitize continuous data are worthless to a **scientific discussion**

That's the problem. Hard AI, "teh singularity", and the "question of consciousness" are so polluted in the literature by non-tech philosophers throughout history that the notion of ***falsifiability*** in computation theory gets tossed aside in favor of TED-talk style bullshit.

        Falsifiability kills these theories *every time* and hopefully this research in TFA will help break the cycle.

To be science, it must be testable: it must be a premise capable of being proven or disproven. "Hard AI" proponents like Kurzweil and the "singularity" believers ignore this part of science.

        So happy to see this research

We'll never make a truly human computer (or maybe "natural computer" is a better term) because we can't make it, first and foremost, desire self-preservation. We can build a robot that plays catch, but we can't make it want to play catch. Do we even know why we want to play catch (deep down I think it is motivated by the desire to procreate)? And thank FSM we can't build evolving machines, because computers are logical and not forgetful, and would very likely enslave us the first chance they got.
          • I'm not sure I agree. I think building an OS with virus checking incorporated into the design, for instance, would be a form of "self preservation". Or a computer/robotic arm combination that recognizes a screwdriver and will not let one get near. Moreover, I could point out humans that don't appear to have any concept of self-preservation, which calls into question whether this would be a rigorous requirement for a "truly human computer".

            Likewise, a robot that nudged you and said "let's play catch. Ple

        • by ultranova ( 717540 ) on Thursday May 08, 2014 @10:12PM (#46955743)

Then you're in science fiction land...woo hoo! I like scifi as much as the next /.er but your speculations about the possible existence of a civilization that can fully digitize continuous data are worthless to a **scientific discussion**

To put it bluntly, this entire study is worthless as science. We don't know how the human mind works. Should we ever find out, we'd then have the oh-so-fun task of disentangling accidents of biology from fundamental underlying limits. And because we don't know how the human mind works, we have no way of knowing whether a particular model represents it accurately or at all (however, any theory that claims human memory is in any way perfect is certainly off to a bad start), thus any conclusions based on it are firmly in the land of wild mass guessing.

To be science, it must be testable: it must be a premise capable of being proven or disproven. "Hard AI" proponents like Kurzweil and the "singularity" believers ignore this part of science.

Well, the complexity of the Universe's behaviour has been increasing since at least the Big Bang, in a virtuous circle. Is there some reason why the trend would stop, either now or at some future point? If not, then it seems like a singularity would be the inevitable result.

Anti-AI isn't science, it's just the ancient belief in the supernatural specialness of the human soul, typically dressed in arguments from lack of imagination [wikipedia.org] and often seasoned with a helping of ego [wikipedia.org]. Nature has no way of telling "artificial" from "natural", after all, so it's incapable of allowing natural intelligent creatures (us) while disallowing artificial ones.

Then you're in science fiction land...woo hoo! I like scifi as much as the next /.er but your speculations about the possible existence of a civilization that can fully digitize continuous data are worthless to a **scientific discussion**

That's the problem. Hard AI, "teh singularity", and the "question of consciousness" are so polluted in the literature by non-tech philosophers throughout history that the notion of ***falsifiability*** in computation theory gets tossed aside in favor of TED-talk style bullshit.

          Uh... excuse me? Why are you ranting on about something GP never even said?

          Here's some "falsifiability" for you: repeatable experiments have been done on these different creatures, and a subset of species DO in fact exhibit self-recognition in controlled studies. Now, it may be only an assumption, but it is a pretty damned good assumption, that self-recognition is a precursor to consciousness. (It is actually more than just an assumption; but we have only one example of a "conscious" brain so it's hard t

        • by h5inz ( 1284916 )
Maybe if you read the article that the Slashdot summary refers to, you would also read a little of the criticism of the claim that human consciousness is not computable. The criticism pretty much destroys it, and it appears that the "definite proof of non-computability" translates to "err..me thinks its not possible for humans to make it" (to mimic the function of a human brain). Maybe I should point some out:

*The neural network of the human brain can be atomized down to neurons and their connections.
  • Memories do decay (Score:5, Informative)

    by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Thursday May 08, 2014 @04:04PM (#46952939) Homepage Journal

    But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay.

    Memories do decay upon recall. People misremember something and convince themselves that the misremembered notion was correct.

    • by Anonymous Coward on Thursday May 08, 2014 @04:07PM (#46952983)

      I'm pretty sure you are remembering that wrong.

Ok, take the memory of reading this statement. Then remember it, and remember it again. Have you started to forget the statement? Keep doing that until you do. Giving up because you're bored isn't forgetting.
    • Re: (Score:2, Informative)

      by Anonymous Coward

      There is actually a physiological basis for memories decaying upon recall, and there's a separate process called reconsolidation that needs to be initiated at a synaptic level in order to prevent memories from progressively degrading with activation (that is, it reconstitutes the memory after activation). You can selectively block this reconsolidation process during a small time window using protein synthesis inhibitors or electroconvulsive shock. The result is that these treatments will leave unactivated m

    • Memories do decay upon recall.

      Nonsense. I mean, I can still recall every square centimeter of that 1976 Farrah Fawcett poster in excruciating detail. Over the last 30-odd years, I've literally recalled it some tens of thousands of times with absolutely no degradation in quality.

      Good thing too, because for some reason I'm now almost completely blind. (see username)

    • Yeah, I don't feel like reading this whole thing because it reeks of some engineer trying to be an expert on the brain without bothering to dig into what's already been discovered. We've been studying the mind for thousands of years. Don't think that knowing about computers will make you more of an expert than people who've studied the subject.

      Remembering both enhances and corrupts memory. You could compare "remembering" to "opening and resaving a media file with a lossy format" specifically because the

That is because after the memory starts to fade, we start remembering our recollections of that memory, not the memory itself. Each iteration likely has details that weren't recalled, so they get supplemented with someone else's recollection, or simply by imagination filling in the gaps. Since this process isn't observable, it is hard to tell where the memory changed and how.
    • Re:Memories do decay (Score:5, Informative)

      by hey! ( 33014 ) on Thursday May 08, 2014 @05:55PM (#46954079) Homepage Journal

      Exactly right. Neuroscientists have shown memories are distorted every time you use them; thus memories that are recalled frequently are less accurate than those infrequently recalled. [citation [northwestern.edu]]

Early non-medicinal PTSD treatments included desensitization, where you recall the memory in a calm and non-threatening situation. Turns out, just recalling memories is like taking them off the shelf and putting them back, so there are faster ways to achieve the same thing.

        Remembering things, and interrupting the storage process, seems to reduce the strength of a traumatic memory.

        citation [smithsonianmag.com]

        That link only touches the surface of the changing part, but it's a starting point.

        As time goes on, your arguments can fall ap

  • but then again, odds are anyone making such a claim will be long dead before it could be proven wrong.
  • by Anonymous Coward on Thursday May 08, 2014 @04:05PM (#46952957)

The idea that retrieving memories repeatedly causes them to gradually decay is discussed in a Radiolab episode.

    http://www.radiolab.org/story/91569-memory-and-forgetting/

    Eyewitness accounts have been proven to be wrong over and over again. The assumption of a non-lossy memory is just false.

    • by frog_strat ( 852055 ) on Thursday May 08, 2014 @04:23PM (#46953167)
Hmmm. Do you find yourself occasionally having to re-learn your address or phone number?
    • Eyewitness accounts have been proven to be wrong over and over again. The assumption of a non-lossy memory is just false.

      Let's play a little game. Go to, say, DeviantArt, and pick a random picture. With that picture right in front of you, can you describe it in such detail that I can find it? Or will the game end with me picking a random image that might, with some luck, bear some resemblance to the scene you described?

      Eyewitness accounts are difficult because making a useful description is hard, even with

  • by Kazoo the Clown ( 644526 ) on Thursday May 08, 2014 @04:06PM (#46952969)
    Not retrieving memories is what causes them to decay. Ever hear of refresh?
    • I've read studies that suggest the brain is designed to remember what's useful to it, and forget what isn't or what's harmful.

The same research stated that psychoanalysis, by forcing the patient to constantly recall painful memories (what you call refresh), interferes with the brain's natural ability to heal by forgetting, maintaining the patient's problem and their dependency on the psychoanalyst in their search for a cure.

  • Remembering something is like reading a DRAM bit. You read it, and then you re-write it. This is why memory is fallible. http://www.smithsonianmag.com/... [smithsonianmag.com]
  • by Rosco P. Coltrane ( 209368 ) on Thursday May 08, 2014 @04:08PM (#46952987)

    Because I'm a human being and it's a PC. Duh...

I think machines will eventually acquire their own form of consciousness, totally separate from ours. And I reckon that's just fine, and in fact much more exciting than trying to replicate our humanity in hardware that's just not compatible with it.

  • Isn't noncomputable the same as saying non-deterministic? There are lots of non-deterministic computer operations where the result is based on a database query or a call to a web page where you can't know in advance what the result will be and you also don't really know how long it will take to get the information (if at all).
  • Talking about whether a computer can think is like talking about whether a submarine can swim.

    Trying to duplicate the mechanical details may be a waste of time. The fact that we can't duplicate the mechanical details today doesn't mean we never will.

  • > By assuming that the process of memory is non-lossy

What a fucking strange way to start. Memories are recursive; really old memories you don't directly remember, you remember remembering.

  • "But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay. By assuming that the process of memory is non-lossy."

Really? I can barely remember last Friday night. Let alone my circumcision 50 years ago. What was that girl's name who slapped me in my face? Or punched me... it's so hazy.... Caroline? Katy? Maybe it was Jeffery..... so fuzzy.... I had her number written on my hand.... oops right palm....

    Memory non-lossy my ass...

  • by Mr. Slippery ( 47854 ) <tms&infamous,net> on Thursday May 08, 2014 @04:13PM (#46953037) Homepage
    One of the most profound advances in bullshitting in recent years is the way researchers from a variety of fields are beginning to misuse mathematical terms in order to give their ideas a facade of intellectual responsibility. Since no one has yet come up with an agreed-upon definition of what this "consciousness" is as an objective observable phenomenon, trying to talk about it in mathematical terms is nothing more than intellectual masturbation.
  • How is the brain not a computer? Pfft...ridiculous conclusions.

  • by Verdatum ( 1257828 ) on Thursday May 08, 2014 @04:18PM (#46953111)
    "retrieving them repeatedly would cause them to gradually decay"

    Ouch. Just. Ouch. No. Noooo. NOOOOO.

    There is so much wrong with this statement I don't even know where to start. It implies that the memory is overwritten with the memory of recalling the memory, which is a huge and ridiculous assumption. Memory likely works much more like ant paths. The details that are recalled more frequently are reinforced, and can be remembered longer. It could also be compared to a caching algorithm; details used more often are less likely to be lost, or need fewer hints to retrieve them.

And then using this assumption to declare something as non-computable demonstrates a lack of understanding of the concept of computability. The only way that consciousness could be non-computable would be if there were a supernatural element to it. Otherwise, the fact that it exists means it must be computable.
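    The ant-path/caching analogy above can be sketched in a few lines of Python (a toy model; the class and all parameters are invented for illustration): details recalled often gain strength, while unrecalled details fade each time step and eventually drop out.

    class ReinforcementCache:
        """Toy memory: strength grows with recall, fades with time."""

        def __init__(self, decay=0.9, floor=0.1):
            self.strength = {}   # detail -> current trace strength
            self.decay = decay
            self.floor = floor   # traces weaker than this are forgotten

        def store(self, detail):
            self.strength[detail] = 1.0

        def recall(self, detail):
            # Recalling a detail reinforces it, like ants re-marking a path.
            if detail in self.strength:
                self.strength[detail] += 1.0
                return True
            return False

        def tick(self):
            # One unit of time passes: every trace fades, the weakest vanish.
            self.strength = {d: s * self.decay
                             for d, s in self.strength.items()
                             if s * self.decay >= self.floor}

    mem = ReinforcementCache()
    mem.store("locker combination")
    mem.store("hotel room number")
    for day in range(30):
        mem.recall("locker combination")     # rehearsed daily
        mem.tick()
    print(mem.recall("locker combination"))  # True: reinforced
    print(mem.recall("hotel room number"))   # False: decayed away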

    • Irrational Numbers are Non-Computable.
      • Irrational numbers are supernatural.
        • The Universe is Natural
          Irrational Numbers are Supernatural
          The Universe is Dependent upon The Cosmological Constant
          Pi is an Irrational Number
          The Cosmological Constant is Irrational because it contains Pi which is also Irrational.
          The Universe Must be Supernatural.
The only way that consciousness could be non-computable would be if there were a supernatural element to it.

      Roger Penrose [wikipedia.org] (for one) is vehement in his insistence that consciousness is non-computable, possibly quantum in nature. Certainly there are other ways that consciousness could be non-computable without being supernatural.

      • And his fascination with that crackpot theory is why he, frankly, hasn't done any significant work in 20 years.

It's based on assuming there exists a new type of particle, which we currently have no evidence for, that interacts with a part of the neuron whose functions are already known, to cause a quantum superposition; and it's been shown there's no way such a state could maintain coherence at anything close to the brain's operating temperature.

    • by Dimwit ( 36756 )

      It's not true that it has to be supernatural to be noncomputable, unless you agree that physics itself is computable. The jury is still out on that one (although I believe that it will turn out to be true).

The only way that consciousness could be non-computable would be if there were a supernatural element to it. Otherwise, the fact that it exists means it must be computable.

Nonsense. Just because something exists and is not "supernatural" doesn't mean that it must be computable. Take the halting problem, for instance. There is no Turing Machine able to take any possible TM and input and determine whether the inputted TM will eventually halt or loop forever when run on the given input. This
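      For concreteness, here is the standard diagonalization behind the halting problem, sketched in Python. The halts() oracle is deliberately a hypothetical stub; the construction shows why no real implementation can exist.

      def halts(program, data):
          # Hypothetical total oracle: True iff program(data) eventually
          # halts. The paradox below shows it cannot actually be written.
          raise NotImplementedError("no general halting oracle exists")

      def paradox(program):
          # Feed a program to itself. If the oracle says paradox(paradox)
          # halts, loop forever; if it says it loops, halt at once.
          # Either answer the oracle could give is wrong: contradiction.
          if halts(paradox, paradox):
              while True:
                  pass
          return "halted"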

    • by hey! ( 33014 )

      It implies that the memory is overwritten with the memory of recalling the memory, which is a huge and ridiculous assumption.

      However the notion that memory is overwritten by recollection actually does have experimental support. The idea isn't ridiculous, it's just repugnant because it implies that our grasp on reality isn't as firm as we'd like to believe it is.

The only way that consciousness could be non-computable would be if there were a supernatural element to it. Otherwise, the fact that it exists means it must be computable.

      Not necessarily. One way consciousness could be non-computable would be for it to be non-deterministic.

      In any case this is all fuzzy; not only is "supernatural" a fuzzy word, the discussion of "computable" is fuzzy too. What would it mean for consciousness to be "computa

  • by TsuruchiBrian ( 2731979 ) on Thursday May 08, 2014 @04:19PM (#46953129)

    That allows memories to be retrieved but it also loses information in the process. But they point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay.

I remember hearing a Radiolab episode on NPR talking about how memories actually get modified every time you recall them.

    http://www.radiolab.org/story/91569-memory-and-forgetting/

    Maybe the radiolab episode is completely wrong, but I don't think it's fair to assume memories are lossless without providing some evidence of this.

  • > otherwise, retrieving memories repeatedly would cause them to gradually decay

So, I guess this was never observed in real humans?

  • First, I do agree with the result - that consciousness is not definable via mathematical equations and algorithms.

    That said:

1) Most memory researchers believe it IS lossy. Specifically, each time you access a memory you change it, losing original information.

2) Not all computers have to only use mathematical equations and algorithms. Specifically, there are quantum computers that do not work that way. While I am not an expert on such things, I highly doubt that the rather limited definition they are usin

There seems to be a step missing from A (that's not how memory works) to B (therefore uncomputable). The premise that memory isn't lossy sounds like rubbish, even IF it's perhaps not simply a question of 'read errors'.

    I recently watched this talk, Modeling Data Streams Using Sparse Distributed Representations [youtube.com], which seems to be able to represent memory in a layered and lossy way perfectly fine in a computer.

  • Memories decay upon recall. [wired.com] Your brain basically alters the memory slightly each time. This can be used to erase or alter memories.

  • As always, the truth is in the Bible:

    Genesis 1:27

    God created man in His own image, in the image of God He created him; male and female He created them; but man is not a machine, for God did not look like a beige box PC.

  • If this is true, what does that mean for wankers like Kurzweil and the fantasy of the 'Singularity'?

  • by Animats ( 122034 ) on Thursday May 08, 2014 @04:41PM (#46953383) Homepage

Here's a critique [arxiv.org]. (It's on arXiv; no need to sign up for Medium.)

The paper isn't impressive. It makes the assumption that human (and other mammals'?) memory isn't compressed, and is somehow "integrated" with all other information. We've been through this before: last time, the buzzword was "holographic". [wikipedia.org]

The observation that damage to part of the brain may not result in the loss of specific memories still seems to confuse many philosophers and neurologists. That shouldn't be mysterious at this point. A compact disc has the same property. You can obscure a sizeable area on a CD without making it unreadable. There's redundancy in the data, but it's a lot less than 2x. The combination of error correction and spreading codes allows any small part of the disc to be unreadable without losing any data. (Read up on how CDs work if you don't know this. It's quite clever: the first mass-market application of really good error correction.)
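    The less-than-2x-redundancy point is easy to demonstrate with a scheme far simpler than the cross-interleaved Reed-Solomon coding CDs actually use (a toy illustration, not CIRC): one XOR parity block over k data blocks lets you lose any single block and still recover everything, at a cost of only 1/k extra storage.

    from functools import reduce

    def xor_blocks(blocks):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    def add_parity(blocks):
        # Redundancy is 1/k of the payload, well under the 2x of mirroring.
        return blocks + [xor_blocks(blocks)]

    def recover(coded, missing_index):
        # XOR of all surviving blocks reconstructs the single erased one.
        return xor_blocks([b for i, b in enumerate(coded) if i != missing_index])

    data = [b"hello world!", b"and some mo!", b"re test data"]  # equal-length blocks
    coded = add_parity(data)
    assert recover(coded, 1) == data[1]  # block 1 "scratched", still recoverable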

  • "By assuming that the process of memory is non-lossy..."

    Terrible assumption.

    Anecdotal evidence that individuals may be capable of accurate recall is directly contradicted by evidence that even witnesses who are absolutely certain of what they saw in fact only recalled those specific items which somehow drew their attention at the time of an event.

  • Sounds more like you can't separate the human consciousness from the memories. I thought we already knew that. Perhaps there was a theory why until now.

  • Stop it. Just stop it, people.

Memory doesn't work that way. It's a live feedback loop that reinforces itself through the conscious mind. There is some lossy drift, but stuff that maps to the real world does get corrected, if lossily. Ancient stuff from when you were a kid (gee, what did Koogle taste like?) drifts and drifts.

Something from when you were a kid, like Orange Julius taste, drifts but may suddenly be reset when you stumble across one at a mall somewhere (or Dairy Queen, whoever bought them). Hi

  • by Alejux ( 2800513 ) on Thursday May 08, 2014 @05:38PM (#46953939)
    Then it's magic. If the brain is formed by neurons that work within a certain logic and mathematical model, then it's computable.
  • by Antique Geekmeister ( 740220 ) on Thursday May 08, 2014 @07:54PM (#46954861)

Repeatedly recalling an event, as in storytelling, re-stores a subtly _altered_ copy of the memory. This has been shown by many experiments on the plasticity of human memory.

  • No it isn't (Score:5, Insightful)

    by multi io ( 640409 ) <olaf.klischat@googlemail.com> on Thursday May 08, 2014 @08:29PM (#46955091)
I hardly understand a goddamn word of TFA and have never heard of "integrated information theory", but I know that TFA's proposition must be false because the brain is based on the laws of physics, which are computable. Q.e.d.
The argument for compression essentially describes Kolmogorov Complexity. [wikipedia.org] The idea is that the K.C. of something (and everything can be reduced to a binary string) is the length of the shortest program (if you look at it algorithmically) that can describe that object (reproduce that binary string) and then stop. In TFA, the example is reducing the description of an infinite sequence of numbers to a finite program that calculates the odd primes and adds one to each. The digits of pi go on forever and look random, but pi is not complex, since there are small programs that calculate it; an infinite, truly random string of digits would have infinite K.C., because the shortest program would amount to "print" followed by the infinite string itself. The K.C. of an object is not computable (it's related to the Halting Problem [wikipedia.org]); essentially, you never know whether you have the shortest program that describes an object.

    So here are some observations

a) the whole premise rests on the assumption that the brain is a Turing-complete computer, i.e. the brain is a computer too. So if the brain is a computer, why couldn't other Turing-complete computers mimic it? In fact, K.C. theory uses the idea that there is a Universal Turing Machine that can mimic all other Turing machines. If the brain is not a Turing machine, then you can't really make any comments about its compression abilities, etc., because algorithmic theory is grounded on the Turing assumption;
b) TFA implies that compression is lossy. Well, not all compression is lossy, and the example provided (prime plus 1, see the sketch after this list) is not lossy at all; it's perfect. So what is the point of that example except to suggest that memory must be perfect compression? ... it just seems like a pointless example; BUT
    c) the assumption that memory is/must be perfect compression seems extremely flawed. Memory is not perfect and most memory seems to degrade over time (see witness reports, personal experience, etc.)
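    To make the prime-plus-one example concrete: the infinite sequence 4, 6, 8, 12, 14, 18, ... is fully specified by a program a few lines long, so its Kolmogorov complexity is small even though the string it denotes never ends. A sketch with a naive primality test:

    from itertools import count, islice

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def odd_primes_plus_one():
        # A finite program denoting an infinite sequence: its length is an
        # upper bound on the Kolmogorov complexity of the whole sequence.
        return (p + 1 for p in count(3, 2) if is_prime(p))

    print(list(islice(odd_primes_plus_one(), 6)))  # [4, 6, 8, 12, 14, 18]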

    So ... the whole paper seems riddled with discontinuities or inaccuracies. Really it seems like it would have been better to say:

    "The brain compresses information in a lossy fashion. We don't know how. Assuming a Turing process is occuring, then the brain is looking for the best compression it can but it can never know if it has the best or not. A computer will be in the same boat." BUT

    THE FEAR

Basically the article is making a (flawed?) claim: "Machines can never be conscious." The argument plays very well to a religious and research-oriented crowd. First: machines can never be made in the image of man. We are not gods. Second: there is no requirement to consider ethics in AI. No matter how conscious the AI seems, it is not, _can't be_, conscious. Therefore, should you create a robot that walks, talks, acts, and feels like a human ... well, it isn't conscious, so do with it as you will.
