Our Brains Don't Work Like Computers
Roland Piquepaille writes "We've been using computers for so long now that I guess many of you think our brains work like clusters of computers. Like them, we can do several things 'simultaneously' with our 'processors.' But each of these processors, in our brain or in a cluster of computers, is supposed to act sequentially. Not so fast! According to a new study from Cornell University, this is not true: our mental processing is continuous. By tracking the mouse movements of students working at their computers, the researchers found that our learning process resembles that of other biological organisms: we don't learn through a series of 0's and 1's. Instead, our brain cascades through shades of grey."
The Network is the Computer (Score:5, Informative)
Newsflash (Score:5, Informative)
Who woulda thunk it.
ftp://ftp.sas.com/pub/neural/FAQ.html#A2 [sas.com]
'Most NNs have some sort of "training" rule whereby the weights of connections are adjusted on the basis of data.'
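That training rule can be sketched in a few lines. This is a minimal, hedged illustration of a perceptron-style weight update (not code from the FAQ itself; the learning rate and the OR-gate task are invented for the example):

```python
# Minimal perceptron-style training rule: each connection weight is
# adjusted on the basis of the data, nudged by the prediction error.
def train_step(weights, bias, inputs, target, lr=0.1):
    activation = bias + sum(w * x for w, x in zip(weights, inputs))
    output = 1 if activation > 0 else 0
    error = target - output
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return new_weights, bias + lr * error

# Learn logical OR from its truth table.
weights, bias = [0.0, 0.0], 0.0
for _ in range(10):
    for x, t in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]:
        weights, bias = train_step(weights, bias, x, t)
```

After a few passes the weights settle and the unit reproduces the OR table; nothing about the rule requires anything beyond arithmetic.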
Insert joke about the 1980's (or 60's/50's/40's) calling. Somehow I don't think Norbert Wiener would be the slightest bit surprised.
-Tupshin
huh? (Score:1, Informative)
How so? Last time I checked, a 'computer brain' (CPU) cannot do multiple operations at the same time, unless you have dual cores/CPUs.
A CPU just switches from one task to the other at breakneck speed (yes, I am ignoring pipelines and branch prediction - they are only used to streamline the operations).
The human brain works the same way - it may be able to take in multiple kinds of information (sight, touch, sound, smell) at the same time, but it has developed a "filtering" system for unimportant sensory input. Thus you cannot say the human brain performs parallel operations at the same time.
Re:huh? (Score:4, Informative)
Yes it can: many CPUs have several ALUs and FPUs, and more than one stage in their pipelines. The above hasn't been true since sometime in the nineties at the latest.
Re:The Network is the Computer (Score:5, Informative)
So, for example, for a simple if statement waiting on user input, part of the CPU would process the "true" result of the statement and part would process the "false" one. When the user made a decision, one would be used and one would be thrown out. In theory, computing these branches ahead of time was supposed to be faster than doing things linearly.
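The "compute both arms, keep one" idea can be mimicked in software as a toy model. This is only a sketch (the function names are invented; real eager execution happens inside the CPU, not in threads):

```python
from concurrent.futures import ThreadPoolExecutor

def eager_if(condition_fn, true_fn, false_fn):
    # Start computing both arms of the branch before the
    # condition is known...
    with ThreadPoolExecutor(max_workers=2) as pool:
        true_result = pool.submit(true_fn)
        false_result = pool.submit(false_fn)
        # ...then keep one result and throw the other away.
        return true_result.result() if condition_fn() else false_result.result()

# Both squarings run; only the "true" arm's result survives.
value = eager_if(lambda: 3 > 2, lambda: 3 * 3, lambda: 2 * 2)
```

The wasted arm is the price paid for never having to wait on the condition, which is exactly the trade-off described above.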
Again, though, I'm not sure Intel went through with this. They were the subject of the article.
also worthy of note (Score:5, Informative)
Figured it was worth mentioning given the subject matter of the thread... I liked it.. good read, if a bit dry at times...
Let's see the numbers (Score:5, Informative)
Fine, let's see the math. Let's see the trajectory calculations. How about the calculations defining the state space - the number of dimensions it has, and how fast that number changes over time?
40 years ago brain scientists realized that computer architecture made a good metaphor for how the brain works. (They did NOT assume there was no feedback, contrary to the article). It made a handy and productive way to look at things so they could figure out more about what was really going on.
10 years ago brain scientists realized that they could use the way cool chaos stuff to describe the way the brain works. Believe me, I know; I've been to the Santa Fe Institute twice. It worked particularly well for me because I'm essentially a signal analyst -- I HAVE to define a set of variables, estimate how well they work, and decide how many of my arbitrary variables to keep or throw out.
It's still only a metaphor. And unlike the specific processes described by cognitive science, the dynamic-systems stuff remains nebulous. It claims a mathematical legitimacy which it can really claim only in concept, because the actual math of the actual operations is beyond the abilities of anyone making the claims. The fact that it *can* be described this way is no less trivial than the fact that processes can be grouped according to the traditional cognitive science concepts.
Trajectories in phase space are soooooooo sexy. But if it's any good, it'll result in something more concrete than more people picking up this flag and waving it while shouting the new slogans and buzzwords. Until that happens I peg this with the study that "calculated" the "fractal dimension" of the cortex just because it has folds, and folds in the folds... so fsking what.
Re:Fascinating (Score:5, Informative)
It blocks stories submitted by Roland. Of course, you'd have to have installed greasemonkey. Which I forgot to do on re-install and hence saw this fucking stupid article.
predictive branching (Score:5, Informative)
Granted, if the processor is wrong, it has to clear the pipeline and start anew (which is costly), but the benefits outweigh the negatives.
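That cost/benefit trade can be shown with a toy pipeline model. This is a hedged sketch with invented numbers (a 1-bit "predict the last outcome" scheme and a made-up flush penalty), not a model of any real CPU:

```python
# Toy pipeline model: a correctly predicted branch costs 1 cycle;
# a misprediction flushes the pipeline and adds a penalty.
def run(branch_outcomes, flush_penalty=10):
    prediction = True  # 1-bit predictor: guess the last outcome
    cycles = 0
    for taken in branch_outcomes:
        cycles += 1 if prediction == taken else 1 + flush_penalty
        prediction = taken  # remember what actually happened
    return cycles

# A predictable loop branch: taken 99 times, then falls through.
loop_branch = [True] * 99 + [False]
```

For the predictable loop the model pays the flush penalty exactly once; feed it a strictly alternating branch and almost every guess is wrong, which is why the benefits depend on how predictable the code is.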
Re:We are computers, just not /binary/ computers (Score:1, Informative)
The article seems to assume that the only type of computer is a _binary_ computer.
No, it assumes that any computer is equivalent in computational ability to a binary computer. This is a paraphrase of the Church-Turing thesis, and it is widely accepted as being true.
Re:comparisons (Score:2, Informative)
I hate it when someone presents a string of numbers like that. The brain involuntarily goes into 100% utilization until the answer comes out. The sum of the differences between the first five numbers in the sequence, plus the fifth number, equals the sixth number: 4+7+1+7=19; 19+23=42.
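The arithmetic in that rule checks out mechanically. A quick sketch (the full sequence is reconstructed from the stated differences 4, 7, 1, 7 and the fifth term 23, so treat the first terms as inferred rather than quoted):

```python
# Rule from the comment: the sum of the differences between the
# first five numbers, plus the fifth number, gives the sixth.
seq = [4, 8, 15, 16, 23]  # inferred from differences (4, 7, 1, 7) ending at 23
diffs = [b - a for a, b in zip(seq, seq[1:])]
sixth = sum(diffs) + seq[-1]
```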
Re:The Network is the Computer (Score:1, Informative)
This approach of exploiting ILP (instruction-level parallelism) to turn sequential programs into parallel ones by throwing lots of hardware at the problem topped out at around eight pipelines in most modern CPU cores. Hence the move towards multiple CPU cores and explicit thread-level parallelism. Like any process, you can only push things so far at one level before you need to move up to the next level of meaning to extract more than marginal gains.
Re:Evolution (Score:2, Informative)
One fascinating nugget: humans find certain logic puzzles difficult, but if equivalent questions are phrased as tests for detecting other humans cheating, they solve them with ease.
Re:comparisons (Score:3, Informative)
No, they are not (Score:3, Informative)
Seriously, computers can work with things more complex than 'ones and zeros'. They can be programmed to deal with shades of grey just as easily (well, maybe not 'easily', but it definitely can be done).
The fundamental unit of the human brain is the neuron, and it's either firing or not - 1 or 0, just like a computer. What triggers it is a bit more complicated, but the process can be emulated by a computer, and eventually computers will be fast enough to do just that.
Re:OH MY GOD (Score:2, Informative)
Umm... when you say "theorem A shows B" it means that theorem A proves B. Not that it's "reasonable to conclude". It is "reasonable to conclude" just about anything from just about anything - because "reasonable" is a subjective term.
Re:comparisons (Score:5, Informative)
But a computer cannot demonstrate this truth. I don't claim to understand why not, but it clearly says in the Wikipedia article that it can't.
Short answer: you're incorrect.
Long answer: The reason you seem to think that you are correct is also, I believe, incorrect. Godel's proof basically involves forming the statement "this statement is unprovable" in a specialized language that allows you to do so without reference to pronouns--instead, he assigned each symbol a unique integer, and worked out ways of manipulating them both with and without regard to their "meaning". That part would be easy to do with a computer (e.g. ASCII/text editor/compiler).
Next, he posited a string of symbols where the meaning was related to the process for the manipulation of the meaningless symbols (this is also easy on a computer--sort of like using an editor to edit its own source code).
Using these, he constructed a relatively normal argument at the meaning level that corresponded to an argument at the symbol level--an argument that said "the argument represented by this long string of digits is unprovable"--but the kicker was that the long string of digits was the coded representation of the argument itself. If it were false, it would be provable, and the system would be proving a falsehood (since we are assuming here that it only proves things that are true). Therefore it must be true, but that means it can't be proven within the system. Tricky, but there was nothing magical about the logic--no quantum mechanical must-derive-this-step-from-the-spirit-world voodoo that would make it impossible for a computer to follow.
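The symbol-to-integer coding step really is mechanical enough for a machine. Here is a toy sketch of Godel-style numbering via prime-power exponents (the five-symbol table is invented for illustration, not Godel's actual alphabet):

```python
# Toy Goedel numbering: a formula c1 c2 c3 ... becomes the integer
# 2^c1 * 3^c2 * 5^c3 * ..., which decodes uniquely by factoring.
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5}  # invented table

def primes():
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(formula):
    number = 1
    for prime, ch in zip(primes(), formula):
        number *= prime ** SYMBOLS[ch]
    return number

def decode(number):
    inverse = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if number == 1:
            break
        exp = 0
        while number % p == 0:
            number //= p
            exp += 1
        out.append(inverse[exp])
    return ''.join(out)
```

The round trip is lossless, which is the whole point: manipulating the "meaningless" integers is the same as manipulating the formulas.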
--MarkusQ
P.S. A computer might not be able to understand the proof, but that shouldn't be held against it--after all, most of the people who discuss it don't understand it either.
Another gross misrepresentation of science (Score:2, Informative)
The claims that are made in the article do not contradict the idea of continuous attraction, but they do not prove it either. There is a much simpler explanation, which is hinted at near the end of the article: one or more processes that try to solve the problem using competition. As a matter of fact, this study simply provides a little bit more evidence for what has been in vogue for a long time.
This behaviour *can* be mimicked quite easily using digital computers, and is definitely not shown by all biological processes.
So, our minds don't work like digital computers in the sense that they cannot store and delete information in the same way. That's been known for a long time, and this experiment doesn't prove it.
Some of the basic cognitive processes can be modelled on a computer, though, but that's not surprising either, since computers are supposed to be able to compute "everything computable" and there is still no reason to assume that the workings of our brain cannot be approached by a computational model.
So, nothing to see, only of interest to psycholinguistic experts. Move on, please.
Re:comparisons (Score:2, Informative)
Re:comparisons (Score:3, Informative)
HOWEVER, it appears the parent poster is one of the three authors of the paper under discussion, so somebody ought to mod the parent post up as "Informative".
*(Just for a start, the article quotes the researcher as saying, But a computer can and routinely does represent multiple or partial states.
A multiple state representation:

    const int ONE_STATE = 0x1;
    const int ANOTHER_STATE = 0x2;

    int currentState = ONE_STATE | ANOTHER_STATE;
    while (dataSupportsEitherState())
        getAdditionalData();

A partial state, or a "value in between":

    double quantity = 0.5;
(A purist will point out that multiple or partial states are implemented as additional states; but it's the interpretation, not the implementation, that matters.))
Re:Misleading (Score:3, Informative)
And there is a minimum time between such pulses. For a higher response rate/precision, more cells are used.
A single neuron will take inputs from as many as 10,000 other neurons, with a threshold that has to be exceeded before it will fire itself. And each input can have the effect of increasing or decreasing the chances of firing.
There's some debate as to whether an individual neuron implements basic logic operations or whether it's a weighted sum calculation.
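The weighted-sum view is easy to sketch. A hedged toy model (the weights and threshold below are invented; real neurons integrate over time and are far messier):

```python
# Threshold unit: excitatory inputs get positive weights, inhibitory
# inputs get negative weights; the cell fires only if the weighted
# sum of its inputs exceeds its threshold.
def fires(inputs, weights, threshold):
    return sum(x * w for x, w in zip(inputs, weights)) > threshold

weights = [0.8, 0.5, -0.6]  # two excitatory inputs, one inhibitory
threshold = 1.0
```

With these made-up numbers, both excitatory inputs alone push the cell over threshold, while adding the inhibitory input silences it, which is the increase/decrease effect described above.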
Re:Yes they do (Score:2, Informative)