Science

Our Brains Don't Work Like Computers 737

Roland Piquepaille writes "We've been using computers for so long now that I suspect many of you think our brains work like clusters of computers. Like them, we can do several things 'simultaneously' with our 'processors.' But each of these processors, whether in our brain or in a cluster of computers, is supposed to act sequentially. Not so fast! According to a new study from Cornell University, this is not true: our mental processing is continuous. By tracking the mouse movements of students working at their computers, the researchers found that our learning process resembles that of other biological organisms: we don't learn through a series of 0's and 1's. Instead, our brain cascades through shades of grey."
  • by Space cowboy ( 13680 ) * on Wednesday June 29, 2005 @08:41PM (#12946769) Journal

    Does anyone *really* think that computers and the brain work in the same way ? Or even in a significantly similar fashion ?
    Like them, we can do several things 'simultaneously' with our 'processors.'

    Well, by 'processors', I assume you mean neurons. These are activated to perform a firing sequence on their output connections depending on their input connections and current state, heavily modified by chemistry, propagation time (it's an electrical flow through ion channels, not a copper wire), and (for lack of a better word) weights on the output connections. To compare the processing capacity of one of these to a CPU is ludicrous. On the other hand, the 'several' in the quote above is also ludicrous... "Several" does not generally correspond to circa 100 billion...

    No-one has a clear idea of how the brain really processes and stores information. We have models (neural networks), and they're piss-poor ones at that...
    • There's evidence that the noise level in the brain is critical - that either less noise or more noise would make it work worse - and that the brain uses superposition of signals in time (with constructive interference) as a messaging facility.
    • There's evidence that temporal behaviour is again critical, that the timing of pulses from neuron to neuron may be the information storage for short-term memory, and that the information is not 'stored' anywhere apart from in the pulse-train.
    • There's evidence that the transfer functions of neurons can radically change between a number of fixed states over short (millisecond) periods of time. And for other neurons, this doesn't happen. Not all neurons are equal or even close.
    • Neurons and their connections can enter resonant states, behaving more like waves than anything else - relatively long transmission lines can be set up between 2 neurons in the brain once, and then never again during the observation.

    The brain behaves less like a computer and more like a chaotic system of nodes the more you look at it, and yet there is enormous and significant order within the chaos. The book by Kauffman ("The origins of order", I've recommended it before, although it's very mathematical) posits evolution pushing any organism towards the boundary of order and chaos as the best place to be for survival, and the brain itself is the best example of these ideas that I can think of.

    Brain : computer is akin to Warp Drive : Internal combustion engine in that they both perform fundamentally the same job, but one is light years ahead of the other.

    Simon.
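    The kind of continuous, state-dependent behaviour described above can be sketched in a few lines. The following is a minimal leaky integrate-and-fire model with made-up, illustrative parameters (not taken from the article or from any source cited here): the membrane potential leaks continuously toward rest, accumulates noisy input, and only emits a discrete spike when it crosses a threshold.

      # Minimal leaky integrate-and-fire neuron (illustrative parameters only).
      # The membrane potential v decays toward v_rest, integrates the input
      # current, and fires a spike whenever it crosses v_threshold.
      import random

      def simulate_lif(inputs, dt=0.1, tau=10.0, v_rest=-70.0,
                       v_threshold=-55.0, v_reset=-75.0, noise=0.5):
          """inputs: one input current per time step of length dt (milliseconds)."""
          v = v_rest
          spike_times = []
          for step, current in enumerate(inputs):
              # Continuous leak toward rest plus input drive, integrated over dt,
              # with a small noise term (the parent notes noise may be functional).
              v += (-(v - v_rest) + current) / tau * dt + random.gauss(0.0, noise)
              if v >= v_threshold:
                  spike_times.append(step * dt)   # record spike time in ms
                  v = v_reset                     # sharp reset after firing
          return spike_times

      # Drive the cell for 30 ms in the middle of a 60 ms window.
      drive = [20.0 if 100 <= i < 400 else 0.0 for i in range(600)]
      print(simulate_lif(drive))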
  • by Sir Pallas ( 696783 ) on Wednesday June 29, 2005 @08:56PM (#12946889) Homepage
    Analog computers still exist in some places, but you list discrete values. An analog computer works with an essentially continuous range of charges instead of discrete values; and it works continuously in time, instead of in discrete steps. They're very good at integrating, which is the application I used them in.
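    To make "very good at integrating" concrete: an analog integrator (say, an op-amp with a capacitor in its feedback path) accumulates its input continuously, whereas a digital machine has to approximate the same job with many small discrete steps. A rough sketch of that discrete approximation, using made-up sample values:

      # Digital approximation of an analog integrator: output(t) = integral of input.
      # An analog machine does this continuously; here we take small Euler steps.
      def integrate(signal, dt):
          """Accumulate the input samples over time steps of size dt."""
          total = 0.0
          outputs = []
          for sample in signal:
              total += sample * dt
              outputs.append(total)
          return outputs

      # Integrating a constant input of 1.0 for one second gives approximately 1.0.
      print(integrate([1.0] * 1000, 0.001)[-1])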
  • The article seems to assume that the only type of computer is a _binary_ computer. This is simply not true! There are all sorts of models for computing based on quantum states, fluid-controlled logic systems and who knows what else. To confine computing to binary systems is like confining mathematics to the set of integers!

    I believe that the mind is (simply?) a quantum computer, and the article seems to support that idea. The human brain utilizes a sort of general interconnectedness of things to process thoughts as dynamic probabilities of state, with conclusions only being properly arrived at after a certain amount of calculation has occurred, but with all the probabilities existing well before the completion of the thought.

    Anyhow, I should probably stop rambling and go outside or something.

  • Schema Theory (Score:3, Interesting)

    by bedouin ( 248624 ) on Wednesday June 29, 2005 @09:26PM (#12947078)
    How is this different than a schema? [wikipedia.org] Haven't we known this since the 70's?
  • by Animats ( 122034 ) on Wednesday June 29, 2005 @10:09PM (#12947303) Homepage
    Where does he find this stuff?

    The path planner goes slower and generates paths that are initially ambiguous when faced with multiple alternatives. That's no surprise. I'm working on the steering control program for our DARPA Grand Challenge vehicle, and it does that, too. Doesn't mean it's not "digital".

  • Well DUH! (Score:2, Interesting)

    by JLF65 ( 888379 ) on Wednesday June 29, 2005 @10:29PM (#12947421)
    I'd have put this in the "water still wet" department. People have known for decades that the brain uses continuous, or analog, computing.
  • Re:Evolution (Score:3, Interesting)

    by __aaijsn7246 ( 86192 ) on Wednesday June 29, 2005 @11:45PM (#12947783)
    That's actually the first book I read on the subject too; I agree it's really great.
    The logic problem you refer to is modus tollens, "the mode that denies", and people do find it extremely difficult.

    Here is a rule: if there is an E on one side of a card, there is a 5 on the back. The fronts of the cards all have letters and the backs all have numbers. What is the minimum set of cards you must turn over to check whether the rule holds?

    Here are the cards as you see them on the table:

    N 5 9 E

    Modus tollens is also called proof by contrapositive which could be a hint for solving this problem.
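    For anyone who wants to check the answer mechanically, here is a small version of the reasoning in code (the cards and the rule come from the comment above; the helper names are invented): a card needs turning only if its hidden face could falsify the rule.

      # Wason selection task: rule is "if a card has E on its letter side,
      # it has 5 on its number side." A visible face must be turned over only
      # if what is hidden underneath could break that rule.
      LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

      def could_falsify(visible):
          """Return True if the hidden side of this card might break the rule."""
          if visible in LETTERS:
              # Hidden side is a number; only an E with a non-5 back breaks the rule.
              return visible == "E"
          # Hidden side is a letter; only a non-5 number hiding an E breaks it.
          return visible != "5"

      cards = ["N", "5", "9", "E"]
      print([card for card in cards if could_falsify(card)])   # -> ['9', 'E']

    The two cards it selects are exactly the modus ponens card (the E) and the modus tollens card (the 9); the 9 is the one people usually miss.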
  • Re:OH MY GOD (Score:1, Interesting)

    by Anonymous Coward on Thursday June 30, 2005 @01:24AM (#12948142)
    Gödel's Incompleteness Theorem shows that within any sufficiently powerful axiomatic system there are statements that are true but unprovable, and therefore that no consistent (i.e., noncontradictory) axiomatic system of that kind can be complete (or, alternatively stated, no complete system can be consistent). That is all; it shows the limitations of axiomatic systems, such as mathematics. There is no compelling reason, nor any proof, to show that a universal Turing machine could not formulate such a proof. It has yet to be proven that a universal Turing machine could not formulate a true but unprovable theorem, or something related (an Epimenides paradox of some kind). Lucas's argument was thoroughly countered by Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid (see GEB chapters XI, XII, XIV, XV, XVI, XVII). As for Penrose's arguments in "The Emperor's New Mind," those arguments have, by and large, been rebuffed in both the CS/AI and philosophy camps (check out the work of Daniel Dennett). This is not to say that our brains are essentially Turing machines (though I do believe they are), simply that the arguments using Gödel's incompleteness theorem to establish differences between AI and human brains are inconclusive. Some argue one way, some argue another, and right now there is not really much actual proof on either side. Therefore, any attempt to use Gödel's theorem as a large part of one's argument will be inconclusive.

    Penrose's argument, by the way, was that a Turing machine would be unable to formulate a Gödelian situation, and that the human mind is not a Turing machine because of quantum effects. Hofstadter's argument is a bit too involved to paraphrase here, but its essence is that human minds are Turing machines, and that Gödelian-like events are thrown out as errors (i.e., nonsense, like "This sentence is false"). Lucas's argument is that "However complicated a machine we construct, it will, if it is to remain a machine, correspond to a formal system, which will in turn be liable to a Gödel procedure for finding a formula unprovable-in-that-system. This formula the machine will be unable to produce as being true, although a mind can see it is true. And so the machine will not be an adequate model of the mind" (Lucas). The essential difference between Lucas's and Hofstadter's arguments is that while both agree that in some way a machine is vulnerable to a Gödelian operation, Hofstadter argues that human minds have the same limitation at some level, and Lucas argues they don't.

    This is an unsolved argument. It is currently unknown whether human minds are vulnerable to Gödelian operations, because by the very nature of Gödelian operations the system they operate on is unable to prove them. Only once it is shown that human minds are not isomorphic to a formal structure can Gödel's incompleteness theorem be used in arguments against machine intelligence. As of now it is an open question.
  • Re:Wrong (Score:3, Interesting)

    by autopr0n ( 534291 ) on Thursday June 30, 2005 @02:04AM (#12948269) Homepage Journal
    It is also true that there is no instantaneous jump from 0V to firing voltage. Different types of neurons require more or less neurotransmitter to reach the threshold voltage.

    Well, there is no instantaneous jump in digital computers either; however, once the threshold voltage is reached in a neuron, it fires very quickly and very sharply.

    In any event, the charges inside a neuron are quantized, always an integer multiple of the charge on an electron, which obviously can be stored in a computer program, as could the finite number of molecules of neurotransmitter around them.
  • Perfectly true (Score:3, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Thursday June 30, 2005 @03:15AM (#12948500) Homepage Journal
    If you allow for infinite precision. Let's take that dial scenario. Let's say that the dial is a potentiometer (variable resistor) as that is the normal way to build a dial.

    Now, the output cannot be any more stable than the input, so if you have a fluctuating input, you will have a fluctuating output. However, I'll assume that the input is stable to some high level of precision. (This requires a screened input and a screened device, but those are doable.)

    So we now focus on the device itself. Resistance varies with the exact composition of the material, the exact temperature of the material and the exact thickness of the material.

    Problem #1 - it is very hard to make a resistor with absolutely perfect, even consistency. So if you move the dial N% of the total length, you expect to get a resistance of (total resistance)*N/100. In practice, this will merely be the average value; there will be some variance. That variance dictates the absolute upper limit of how finely you can tune the dial, because at some point the level of uncertainty becomes comparable to the level of adjustment.

    The second problem is heat. All resistors generate heat, and heat increases resistance, so all resistors fluctuate in value. Remember, though, that the composition is not 100% even, so the temperature cannot be 100% even either. This means that the fluctuation in resistance will depend on where you are on the dial, increasing the uncertainty.

    The third problem is thickness. Resistance increases as the diameter of a wire decreases. Variable resistors involve two conductors in contact with each other, and therefore scraping. Unless the dial is 100% circular, you MUST pass over the midsection of the potentiometer more often than over one or both ends. This means that even if you DO somehow achieve a perfect variable resistor at the start, you won't have one after you start using it. You will vary the thickness along the length, and therefore vary the resistance of any segment.

    Normally, these variations are too small to notice, which is why these components are useful in the first place. BUT, as you increase the precision, you increase the importance of these variations. Eventually, the variations will swamp the signal. At that point, tuning the dial with even greater precision will be worse than useless, as the value is utterly non-deterministic.

    In reality, power fluctuations are of vital importance and are a big reason ADCs and DACs have not exceeded 26 bits of precision. Nobody has figured out how to get a power source stable enough, or a chip screened enough, to transform signals of one form to the other with greater precision than that.

    If the cleanest signals we can get from an analog system are 26 bits wide, then producing a simulation of an analog system that is 64 bits wide will be vastly superior to any actual system we know how to build.

    Now, it is entirely possible that the brain has developed a level of precision and signal cleanliness that exceeds 26 bits. I'm not disputing that. I am disputing that any physical system you can build can exceed 64 bits; it probably can't even get close to that. So, a 64-bit simulation of analog signals should be as good as the real thing.
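    The "noise swamps the signal" point can be put into numbers with the standard effective-number-of-bits relation for an ideal converter, SNR ≈ 6.02·N + 1.76 dB for a full-scale sine input. This little sketch is not from the comment; it just shows how a given noise floor translates into usable bits:

      # Effective number of bits (ENOB) from a signal-to-noise ratio, using the
      # standard relation SNR_dB ~= 6.02 * N + 1.76 for an ideal N-bit quantizer
      # driven by a full-scale sine wave.
      def effective_bits(snr_db):
          return (snr_db - 1.76) / 6.02

      # A system whose noise floor limits it to roughly 158 dB of SNR is already
      # at about the 26-bit figure mentioned above.
      for snr_db in (98.0, 122.0, 158.0):
          print(f"{snr_db:6.1f} dB SNR -> {effective_bits(snr_db):5.1f} effective bits")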

    But what of waveforms? Can you reproduce waves, using discrete multi-state logic? Sure. It's called a transform. The three best-known transforms are Z transforms, Laplace Transforms and Fourier Transforms. Using these, you can do a surprising amount. Transforms work by turning a domain you can't use into a different domain that you CAN use. They're very useful devices.

    Fourier Synthesis (the theory that any wave, of any complexity, can be reproduced with a sufficient number of overlapping sine waves) makes this clearer. We can represent a classic sine wave by denoting amplitude, start point and end point. We just need to be able to build a set of any number of these, and we
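    The comment is cut off, but the Fourier-synthesis point it is building toward can be shown in a few lines: summing enough sine waves reproduces a more complicated waveform to any desired accuracy. The square-wave series below is the textbook example, not anything taken from the comment itself:

      # Fourier synthesis sketch: approximate a square wave by summing its odd
      # sine harmonics, square(t) ~= (4/pi) * sum over odd k of sin(k*t)/k.
      import math

      def square_wave_approx(t, n_harmonics):
          total = 0.0
          for i in range(n_harmonics):
              k = 2 * i + 1                     # odd harmonics 1, 3, 5, ...
              total += math.sin(k * t) / k
          return 4.0 / math.pi * total

      # More harmonics -> closer to the ideal value of 1.0 on the positive half-cycle.
      for n in (1, 5, 50, 500):
          print(n, round(square_wave_approx(math.pi / 2, n), 4))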

  • Re:OH MY GOD (Score:2, Interesting)

    by teslar ( 706653 ) on Thursday June 30, 2005 @06:12AM (#12948906)
    Your brain is a turing machine.

    NO! The brain is NOT a Turing Machine.

    There is something called the 'Halting problem'. Basically, for any computation, a Turing Machine can:
    -halt with success
    -halt with failure
    -get caught in a loop

    The question is, if it hasn't halted yet, will it halt in the future or will it get caught in a loop? And you can prove that it is impossible to construct a Turing Machine that is able to answer that question. This is called the halting problem.

    It can be generalised to prove that you cannot construct a single Turing Machine to decide whether a given statement is true or false. This is where it ties back into Gödel's theorem, and it is this argument that some people use to relate Gödel's incompleteness theorem to the brain (which I find intriguing, though I'm not sure I agree with it). The important point here, however, is that you, as a human being, can solve the Halting Problem. It follows that you are NOT a Turing Machine.

    It can also be proven that Quantum Computers are not Turing Machines by the way, but even Quantum Computers are unable to solve the Halting Problem, so our brain is a step up even from Quantum Computers.

    Let me repeat the point here again: The brain is NOT a Turing Machine and as such the limitations of a Turing Machine do not apply.
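    For readers who haven't seen why no such machine can exist, the standard proof is a short diagonalization. The sketch below is just that argument written as code; the hypothetical halts() oracle cannot actually be implemented, and that impossibility is the whole point:

      # Sketch of the halting-problem diagonalization. The function halts() is
      # the hypothetical decider; the argument shows it cannot exist.
      def halts(program, argument):
          """Hypothetical oracle: return True iff program(argument) halts."""
          raise NotImplementedError("no total halting decider can exist")

      def trouble(program):
          # Do the opposite of whatever halts() predicts about program run on itself.
          if halts(program, program):
              while True:        # predicted to halt -> loop forever
                  pass
          else:
              return             # predicted to loop -> halt immediately

      # Feeding trouble to itself yields the contradiction: if halts(trouble, trouble)
      # returns True, then trouble(trouble) loops forever; if it returns False, then
      # trouble(trouble) halts. Either way halts() is wrong about some input, so no
      # Turing Machine (and no program on a conventional computer) can implement it.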
  • Re:comparisons (Score:3, Interesting)

    by mjspivey ( 896286 ) on Thursday June 30, 2005 @09:44AM (#12949673)
    At the theoretical level, my argument is that analog (or nearly analog) computing provides a much better simulation of the mind than digital computing theory. At the pragmatic level, my argument hinges on decades of research on Artificial Intelligence having been motivated by traditional tree-search algorithms, production systems, and other discrete serial processing systems. In speech recognition, however, hidden Markov models have more recently become the popular method for automated word recognition, and they do indeed perform in a way that is describable as representing multiple potential words at once. Therefore, probabilistic algorithms and neural networks (programmed on digital computers) are indeed useful and informative ways to build simulations of various human mental processes.
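    To make "representing multiple potential words at once" concrete: the forward pass of a hidden Markov model carries a probability for every candidate state at every time step instead of committing to a single hypothesis. The toy model below uses invented numbers and a made-up candle/candy vocabulary purely for illustration; it is not taken from Spivey's work:

      # Tiny HMM forward pass: at each observation the model keeps a probability
      # for every hidden state (here, every candidate word) simultaneously.
      states = ["candle", "candy"]
      initial = {"candle": 0.5, "candy": 0.5}
      transition = {s: {"candle": 0.5, "candy": 0.5} for s in states}
      # Probability of "hearing" each chunk of sound given the intended word.
      emission = {
          "candle": {"can": 0.5, "dle": 0.4, "dy": 0.1},
          "candy":  {"can": 0.5, "dle": 0.1, "dy": 0.4},
      }

      def forward(observations):
          belief = {s: initial[s] * emission[s][observations[0]] for s in states}
          history = [dict(belief)]
          for obs in observations[1:]:
              belief = {s: emission[s][obs] *
                           sum(belief[p] * transition[p][s] for p in states)
                        for s in states}
              history.append(dict(belief))
          return history

      # Both words stay "alive" after the ambiguous "can...", then diverge,
      # much like the gradually curving mouse trajectories in the study.
      for step in forward(["can", "dle"]):
          total = sum(step.values())
          print({word: round(p / total, 3) for word, p in step.items()})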
  • by lawpoop ( 604919 ) on Thursday June 30, 2005 @11:18AM (#12950465) Homepage Journal
    "(Or, as George Carlin put it: 'God can do anything' Well 'Can God make a rock so big that he himself can't lift it?')"

    One of God's properties is that He or It or Whatever is omnipotent, no? The _supreme_ being? Why would a supreme being need to obey logic? Your riddle supposes that logic is the supreme entity or force in the universe. I would expect an omnipotent, supreme-being type of God to be able to do nonsensical as well as sensible things.
