
Mathematical Model Suggests That Human Consciousness Is Noncomputable

KentuckyFC (1144503) writes "One of the most profound advances in science in recent years is the way researchers from a variety of fields have begun to formulate the problem of consciousness in mathematical terms, in particular using information theory. That's largely thanks to a relatively new theory that consciousness is a phenomenon that integrates information in the brain in a way that cannot be broken down. Now a group of researchers has taken this idea further, using algorithmic information theory to study whether this kind of integrated information is computable. They say that the process of integrating information is equivalent to compressing it. That allows memories to be retrieved, but it also loses information in the process. They point out that this cannot be how real memory works; otherwise, retrieving memories repeatedly would cause them to gradually decay. By assuming that the process of memory is non-lossy, they use algorithmic theory to show that the process of integrating information must be noncomputable. In other words, your PC can never be conscious in the way you are. That's likely to be a controversial finding, but the bigger picture is that the problem of consciousness is finally opening up to mathematical scrutiny."
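The "compression is lossy" half of the summary's argument can be sketched in a few lines (an illustration of the general idea, not anything from the paper itself): a lossy code is many-to-one, so distinct originals collapse to the same stored form and exact retrieval becomes impossible.

```python
def compress(x, levels=16):
    # Lossy: maps 256 possible byte values onto only 16 codes,
    # so many distinct inputs share the same compressed form.
    return x * levels // 256

def retrieve(code, levels=16):
    # The best reconstruction is the midpoint of the bucket;
    # the exact original value is unrecoverable.
    return code * 256 // levels + 256 // (2 * levels)

# Two different "memories" collapse to one code...
assert compress(192) == compress(203) == 12
# ...so retrieval returns the same reconstruction for both.
print(retrieve(compress(203)))  # 200, not 203
```

The summary's move is then: since real recall doesn't seem to degrade this way, memory integration can't be lossy compression, and (so the authors argue) must be noncomputable.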
This discussion has been archived. No new comments can be posted.

  • by Kazoo the Clown ( 644526 ) on Thursday May 08, 2014 @04:06PM (#46952969)
    Not retrieving memories is what causes them to decay. Ever hear of refresh?
  • by Rosco P. Coltrane ( 209368 ) on Thursday May 08, 2014 @04:08PM (#46952987)

    Because I'm a human being and it's a PC. Duh...

    I think machines will eventually acquire their own form of consciousness, totally separate from ours. And I reckon that's just fine, and in fact much more exciting than trying to replicate our humanity in hardware that's just not compatible with it.

  • by Verdatum ( 1257828 ) on Thursday May 08, 2014 @04:18PM (#46953111)
    "retrieving them repeatedly would cause them to gradually decay"

    Ouch. Just. Ouch. No. Noooo. NOOOOO.

    There is so much wrong with this statement I don't even know where to start. It implies that the memory is overwritten with the memory of recalling the memory, which is a huge and ridiculous assumption. Memory likely works much more like ant paths. The details that are recalled more frequently are reinforced, and can be remembered longer. It could also be compared to a caching algorithm; details used more often are less likely to be lost, or need fewer hints to retrieve them.

    And then using this assumption to declare something non-computable demonstrates a lack of understanding of the concept of computability. The only way that consciousness could be non-computable would be if there were a supernatural element to it. Otherwise, the fact that it exists means it must be computable.
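The ant-path/caching analogy above can be sketched as a toy model (entirely hypothetical, just to make the mechanism concrete): each recall *reinforces* a trace, while unused traces decay, so frequently recalled details survive and neglected ones fade.

```python
class TraceMemory:
    """Toy reinforcement model of recall: recall strengthens a
    trace; the passage of time weakens every trace."""

    def __init__(self, decay=0.5, threshold=0.1):
        self.strength = {}
        self.decay = decay          # per-tick weakening factor
        self.threshold = threshold  # below this, the detail is lost

    def store(self, key):
        self.strength[key] = 1.0

    def recall(self, key):
        s = self.strength.get(key, 0.0)
        if s < self.threshold:
            return False            # trace too weak: retrieval fails
        self.strength[key] = s + 1.0  # recall reinforces the trace
        return True

    def tick(self):
        # time passing weakens every trace
        for k in self.strength:
            self.strength[k] *= self.decay

m = TraceMemory()
m.store("rehearsed")
m.store("neglected")
for _ in range(4):
    m.recall("rehearsed")  # the rehearsed detail keeps getting reinforced
    m.tick()
print(m.recall("rehearsed"), m.recall("neglected"))  # True False
```

Note this is the opposite of the summary's premise: in this model, retrieval strengthens rather than degrades the memory.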

  • Re:Bad syllogism (Score:5, Interesting)

    by Thagg ( 9904 ) on Thursday May 08, 2014 @04:30PM (#46953277) Journal

    In fact, it's pretty clear that 4. is incorrect. There was a fascinating recent study.

    There is a drug that you can give somebody (or in this experiment, a rat) that will prevent it from creating new memories. They trained the rat to solve a maze, and it did it just fine. They gave the rat the drug, and it solved the maze perfectly. Once. After that, it couldn't do it again.

    The implication is that when you remember something, the very process of remembering erases the original memory, and it has to be created again. It will be different the second time, colored by your current experience. The more times you remember something, the more you are remembering the previous recollection, not the original event.

    A reference is

  • by Animats ( 122034 ) on Thursday May 08, 2014 @04:41PM (#46953383) Homepage

    Here's a critique []. (It's on arXiv; no need to sign up for "Medium".)

    The paper isn't impressive. It makes the assumption that human (and other mammalian?) memory isn't compressed, and is somehow "integrated" with all other information. We've been through this before; last time, the buzzword was "holographic". []

    The observation that damage to part of the brain may not result in the loss of specific memories still seems to confuse many philosophers and neurologists. That shouldn't be mysterious at this point. A compact disc has the same property. You can obscure a sizeable area on a CD without making it unreadable. There's redundancy in the data, but it's well under 2x. The combination of error correction and interleaving (spreading) codes allows any small part of the disc to be unreadable without losing any data. (Read up on how CDs work if you don't know this. It's quite clever: the first mass-market application of really good error correction.)
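The interleaving-plus-parity idea can be demonstrated with a toy (3,2) XOR-parity code (a deliberately simplified illustration, not the actual CIRC code CDs use): parity adds only 1.5x redundancy, and interleaving spreads each codeword's symbols apart so a contiguous "scratch" erases at most one symbol per codeword, which parity can rebuild.

```python
def encode(data):
    # group bytes into codewords of 2 data symbols + 1 XOR parity (1.5x size),
    # then interleave so each codeword's symbols sit far apart in the stream
    assert len(data) % 2 == 0
    words = [(data[i], data[i + 1], data[i] ^ data[i + 1])
             for i in range(0, len(data), 2)]
    n = len(words)
    stream = [0] * (3 * n)
    for i, w in enumerate(words):
        for j in range(3):
            stream[j * n + i] = w[j]   # symbol j of codeword i -> slot j*n+i
    return stream, n

def decode(stream, n, erased):
    # a burst of up to n consecutive erasures hits each codeword at most
    # once, and XOR parity rebuilds any single missing symbol
    out = bytearray()
    for i in range(n):
        slots = [j * n + i for j in range(3)]
        syms = [stream[s] for s in slots]
        lost = [j for j in range(3) if slots[j] in erased]
        if lost:
            j = lost[0]
            syms[j] = syms[(j + 1) % 3] ^ syms[(j + 2) % 3]
        out += bytes(syms[:2])
    return bytes(out)

stream, n = encode(b"SLASHDOT")   # 8 data bytes -> 12-symbol stream
burst = set(range(3, 7))          # "scratch" 4 consecutive symbols
for p in burst:
    stream[p] = 0                 # the damaged symbols are unreadable
print(decode(stream, n, burst))   # b'SLASHDOT'
```

Real CDs use cross-interleaved Reed-Solomon codes, which tolerate much longer bursts, but the principle (interleave, then correct isolated erasures) is the same.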

  • by globaljustin ( 574257 ) on Thursday May 08, 2014 @04:48PM (#46953477) Journal

    And after you include however-many extraterrestrial intelligences there might be, all across the Universe, well,

    Then you're in science fiction land...woo hoo! I like scifi as much as the next /.er, but speculation about the possible existence of a civilization that can fully digitize continuous data is worthless to a **scientific discussion**.

    That's the problem. Hard AI, "teh singularity", and the "question of consciousness" are so polluted in the literature by non-tech philosophers that the notion of ***falsifiability*** from computation theory gets tossed aside in favor of TED-talk style bullshit.

    Falsifiability kills these theories *every time* and hopefully this research in TFA will help break the cycle.

    To be science, it must be testable: a premise capable of being proven or disproven. "Hard AI" proponents like Kurzweil and the "singularity" believers ignore this part of science.

    So happy to see this research

  • Re:Bad syllogism (Score:4, Interesting)

    by queazocotal ( 915608 ) on Thursday May 08, 2014 @07:03PM (#46954551)

    An interesting argument is that it's basically the same way we do anything else.
    Numerous studies have shown that if you, for example, watch someone moving their arm, you partially understand the movement by using the same area of your brain that deals with your own arms. The same goes for emotion: microexpressions, the fleeting subtle echoes on your own face of expressions on others' faces, aid your understanding. Botox can actually impair your ability to perceive the emotions of others.

    Consciousness, or more accurately the illusion of a self, can reasonably be understood as the reuse of an evolutionary device originally used to understand others' actions. When applied to ourselves, it guesses our 'intent' from internal actions and provides reasons and justifications for those actions, which may be entirely specious.

    For example, direct brain stimulation does not 'feel' like an external input; it feels like a 'natural' thought you had. People will often rationalise reasons for the most unusual behaviour caused by direct brain stimulation, rather than give the simple answer 'you applied a pulse of electricity to my brain', because that's not how it feels. [] is interesting on this exact topic.
