Human Brain seems to process image data serially
Tekmage writes "Ever wonder how the brain processes image/vision data? According to this research, it does so in a more serial manner than parallel." This has been one of the ongoing debates since the advent of machine vision in the 1960s, and this is the latest round in the battle between the two sides.
Re:This article is slightly garbled (Score:1)
This article isn't about image processing per se, it's about attention - the selective subcognitive process by which we focus additional processing power on specific elements of perception. It's a vital skill, and it works at a variety of levels and in a variety of situations.
Notice how you can always hear your name mentioned in a loud party, even though you otherwise can't make out a single conversation? Or how a tired mother can sleep through a storm, yet awaken to the quieter sound of her infant crying?
There's a sort of "thunk" procedure that works in these situations. What the studies you've cited show, as well as the one cited in the article, is just which tasks require this "thunk" and which do not.
Kettle calling the scalpel black (Score:1)
impact on computer vision systems? (Score:1)
The conclusion to draw from this experiment is that computer vision systems need to be adaptive and learning. While it's probably not necessary to explicitly program every word into a computer reading system, sitting there and grinding away with OCR on a character-by-character basis is probably a waste of time. The difficulty is in feeding the brain's (knowledge base's) identifier for a pattern, determined after a serial examination of a given input, back to the parallel recognition system for training (how do you remember what the word "word" looks like well enough that you don't read it letter by letter?).
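To make the idea concrete, here's a toy sketch in C (all the names are invented; this is nothing from any real OCR package): try a learned whole-word match first, and only grind letter by letter on a miss.

#include <stdio.h>
#include <string.h>

/* hypothetical stand-in for the trained whole-word pattern memory */
static const char *known_words[] = { "word", "the", "and" };
#define NUM_KNOWN 3

/* "parallel" path: one-shot match of a whole word image */
static const char *match_whole(const char *img)
{
    for (int i = 0; i < NUM_KNOWN; i++)
        if (strcmp(img, known_words[i]) == 0)
            return known_words[i];
    return NULL;   /* miss: caller falls back to serial OCR */
}

int main(void)
{
    const char *inputs[] = { "word", "xyzzy" };
    for (int i = 0; i < 2; i++) {
        const char *hit = match_whole(inputs[i]);
        if (hit)
            printf("one-shot match: %s\n", hit);
        else
            /* a real system would now grind letter by letter and
               then train the whole-word memory on the result */
            printf("no pattern; OCR letter by letter: %s\n", inputs[i]);
    }
    return 0;
}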
I think it's safe to dismiss the 1/10 of a second switch as specific to the situation. I can notice an interval a great deal smaller than that playing Half-Life, and so can you, I'd imagine.
-_Quinn
Re:Kettle calling the scalpel black (Score:1)
Did the question influence the results? (Score:1)
Non Seq. (Score:1)
nerve. Manipulating the processed data is done in the brain. By the time the signal arrives at the brain, it has already been processed into data representing all the objects and characters being viewed. All this experiment shows (besides limits in the eye's field of view) is that the brain evaluates objects serially, at least under some conditions. It does not identify objects serially.
Re:Hmm. (Score:1)
People have actually had their corpus callosum severed - so-called "split brain" patients. In general, experiments with these people show that the two sides of the brain are largely independent - for instance if people are shown an object in such a manner that only one side of the brain can see it, then the other side of the brain is not aware of the position of this object. If the patient is then asked to reach for the object with the hand governed by the other side of the brain, they will try, but not know where the object is.
Doesn't this tend to suggest at least 2 independent "chunks", with the CC normally governing communication between these chunks?
To me, if the brain is really a large parallel machine, there's no reason why separate threads of computation can't be going on in separate parts of the brain - each taking up a small physical region of resources.
These threads could even communicate with each other fairly readily. There's really only problems when the two threads want the same resources.
This is backed up by experiment, too - Richard Feynman did some very interesting experiments in keeping time. He found that without a watch he could keep time very consistently: when he counted to 60, he _ALWAYS_ got 72 +- 1 seconds (or something like that).
The interesting thing is what happened when he tried to do other things while counting. The majority of tasks had absolutely no effect on the counting. A few tasks slowed or sped up the counting. And some tasks precluded counting - he couldn't count at all while doing these tasks.
(He also did some experiments to make sure he wasn't basing his counting on some internal physical clock like the heart beating or breathing - he counted while running up and down stairs. No change in the rate of counting.)
Now I admit that a fast-switching serial model would work just as well in explaining these results, but considering that the brain is demonstrably a parallel architecture, I think that the parallel model is a lot more elegant.
-Shane Stephens
Re:Hmm. (Score:1)
It would make sense. (Score:1)
A fellow poster said that the brain is parallel, because he can take a shower and think about what to code after breakfast. This is true; however, the article was more about image processing, rather than thought patterns.
The point that the study was making is that the brain focuses on images one at a time (so fast that it's a blur to the conscious mind, but one at a time nonetheless). Think about it. You stare at a computer monitor, and a post-it note you have stuck next to your monitor falls down. Your eyes detect the movement and send the signal to the brain. The brain in turn sends an impulse back to the eye muscles to rotate and focus on the movement. In this split second, you forget about the monitor and your attention is on the post-it note. Then your brain receives the visual cue that it's "Just a Goddamn piece of paper" and clicks back over to the monitor.
But since thought patterns are processed in parallel, you can think of many things at once. While that post-it note falls, you could be singing along to music on the radio. (And chances are you won't miss a beat when the post-it note falls - you're still singing. It's not severe enough to command the brain's full attention. If a car smashed through your wall, however, I'd bet you'd stop singing.)
-- Give him Head? Be a Beacon?
Re:Seems like a questionable experiment (Score:2)
But is it really serial? (Score:1)
Now if only I could get around that road-block in the middle, but I guess the brain still has to break it up, digest it, and commit it to memory - which takes time.
As ever (and as with even the best computer architectures today), the problem appears to be the pipe between the processor and memory.
Recognition and processing is parallel.
Understanding is serial.
Simon
Re:Hmm. (Score:1)
I'd be very curious as to whether they'd get similar results with sound experiments. I suspect not. I recall experiments in which people were fed different sound sources in each ear. Even when paying attention to one voice stream for some task, subjects still responded to their own names in the other voice stream. This implies that some level of cognitive processing is still going on for the stream supposedly being ignored.
Influence of eye design (Score:1)
I mean, if we can only look at the objects serially (and discriminating whether one has a nick or not, I think, requires "looking at" rather than "scanning", as for one red block in a sea of greens), how do they expect us to process them in parallel?
Is there something missing? (Score:1)
The article was a good brief overview but had no links to the report itself.
BTW, did it seem odd that the experiments were performed in 1994 but the article was just published? I wish I had 5 years to evaluate my findings and report back.
I doubt this is brain function they're seeing. (Score:1)
Thus the eye will saccade (move rapidly) from one spot to the next to get the object under study projected onto the fovea. Of course the brain will process them serially; the fovea can only point at one at a time.
Reading is a more realistic problem, where several words can fit onto the fovea at once. The question of whether we process those words in parallel or serial is not resolved by this study.
Re:but we're massively parallel! (Score:1)
I already knew that when I read a book, I don't stare at the whole page until the text of the book sinks in. I read word after word. When I'm looking for a detail in a picture, I don't just stare at the picture until I find what I'm looking for. I scan small areas that look interesting until I can focus in on the area in question.
They needed an experiment for this? (Score:2)
Note that that's simply processing sensory data. The people who talk about spotting the red cube in a bunch of green ones are talking about something totally different: recognition. Even there, the brain picks the red cube out from the whole image; the reason the time to recognize the red cube doesn't depend on the number of total cubes is that you see the same size image, no matter how many cubes there are, and the red cube looks different enough from the green ones that it's easy to spot. It'd be like playing "Where's Waldo" in a situation where everyone else is wearing blue; no matter how many people there are you'll find Waldo in a second.
Still not convinced? Here's a simple experiment to try: play "Twinkle Twinkle Little Star" and Limp Bizkit's "Nookie" in your head simultaneously (those songs being chosen because they're totally different; feel free to substitute any other two songs that are sufficiently different), and try to concentrate on both at the same time (note: do this without actually saying the lyrics to either one; that's cheating). You can get pretty close, but I'll bet that you can't quite do it.
More than likely, the brain simply "multitasks" in a manner not unlike machines do today; it doesn't really run multiple processes at once but it can do a pretty convincing illusion. Since each area works somewhat independently of the others, you can get a bit of parallelism going. That's why you could sing the lyrics to one song while thinking of another; you've assigned a different area to each task. Put them in the same area (by not saying the lyrics to either one) and suddenly you can't do it.
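In C, the illusion is just time-slicing - a toy sketch, with made-up tasks, nothing brain-specific:

#include <stdio.h>

void hum_song_a(void) { printf("...twinkle twinkle...\n"); }
void hum_song_b(void) { printf("...nookie...\n"); }

int main(void)
{
    void (*tasks[])(void) = { hum_song_a, hum_song_b };
    /* one "processor": only one task runs at any instant, but
       switching every slice gives a convincing illusion of two */
    for (int slice = 0; slice < 6; slice++)
        tasks[slice % 2]();
    return 0;
}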
So, cheer up. At least on this planet we're still top dog in terms of intelligence (your average U.S. politician notwithstanding).
cool! (Score:2)
What bothers me about the article is that it takes the stance of "The debate has always been which architecture is best. Now, since the human brain processes data serially, the debate is settled."
Since when was it established that the human way of processing things is definitely the best??? Very poor logic on their part (IMHO).
Frames Per Second? (Score:1)
What framerate do our eyes capture? Or are they too analogue to measure? Obviously our brains can't take in an infinite amount of data, but is it a matter of skipping "frames", or just a loss of resolution?
For that matter, what is the framerate of reality? Personally I think it's infinite but I know there will be people who disagree with me. It partly depends on how you measure it.
Enquiring minds want to know.
Re:It would make sense. (Score:1)
-- Give him Head? Be a Beacon?
Re:Serial processing may be conditioned. (Score:1)
I'm not cool enough to be able to speed read in parallel, but I'm pretty sure I can edit that way. For the last couple of years I've been able to look at a page of print, and my eyes will suddenly focus on a typo - it takes a second or two for my conscious mind to recognize that what my eyes are focusing on is an error. Other people have mentioned the experiment where, given any number of green boxes and one red box, a person can find the red box in the same amount of time regardless of how many green boxes there are. I believe my high-speed editing parlor trick is similar to this - over time my mind has become trained to recognize patterns of text as naturally as patterns of color.
If the "nick on the box" test was carried out for several months, possibly several years, the results might vary. Meaning that over time the people would become more expert at noticing nicks on boxes, and the brain might process the information on a higher symbolic level. I doubt that anyone would want to check for nicks on boxes for that long, but there must surely be a job similar to the experiment in manufacturing or processing - some sort of quality control job where a person has to watch a line of goods go by and check them for defects. Testing whether a person who has done a task like that for several years (and is actually good at it!) has trained their neural net to perform the task in a parallel manner would be interesting, and would give a broader view of the nature of the cognitive process. You would only need to find one person who could process image data this way, and it would muddy the picture presented by the article.
Who knows, maybe people in Iowa are just more prone to seeing things serially than other humans.
Parallel Vision (Score:1)
While it may well be true that the highest level of vision is serial, this particular level of vision must be quite tightly defined, for, going back to the House of the Dead example, I always shoot for the head, which is by no means just a simple object recognition in such a game.
I suspect more research really needs to be done in the area, and more importantly, that conclusions need to be very accurately defined, rather than making such broad statements.
Re:Hmm. (Score:2)
I agree, but I wouldn't consider these "threads" to be cognitive in nature. A person's immediate attention is always focused only on one item at a time. Try examining one object while describing another. Your mind has to switch back and forth to be able to do both "simultaneously."
but we're massively parallel! (Score:2)
Think about trying to describe a thief you saw running down the street. You saw that he was tall and wearing a hat, someone else saw that he had a mustache, etc. Add more processors required to compensate for the uncertainty in the data from any single one, and you've got a system that doesn't look so serial anymore.
It's obvious that more than one human is required for an accurate description. They haven't proven anything in the serial vs. parallel debate!
Re:Hmm. (Score:1)
Serial reasoning, not serial vision? (Score:1)
It is extremely difficult to tell what the scientists found based on the article. For example, "...it processes information serially, even though the underlying neural hardware is operating in parallel."
It would seem from this statement that they are considering part of the brain 'hardware', while other parts are not. This seems like fragile reasoning, as last I checked the whole brain consists of neurons and glial cells (okay, throw in some blood and ions as well).
I am going to make an assumption that they are referring to a person's attention when talking about the 'other part of the brain' -- that is, the brain takes the entire scene in all at once (this we know happens), but can only attend to one particular part of the scene at a time.
This is not a new discovery. In fact it was pointed out quite a while ago by William James. He describes consciousness as the process of selecting what to pay attention to. That is, we can only really pay attention to one task at a time, but the brain takes in a whole lot. James is usually right.
Looking later in the article: "Luck and Woodman discovered that the brain turned its attention from one block to the next at intervals of about 1/10th second." Thus it would seem instead of describing how we view the world, they are rather describing the rate at which we attend to physically seen objects.
I would suspect that they could do a similar experiment with sound, taste, etc. I have not seen any mention of factors such as the rate at which the eye can move (as mentioned by a previous post), or even how far apart the objects were.
Finally, they measure a brainwave without giving a good reason to pay attention to that brain wave. It reminds me of a joke I read once: "A scientist wants to figure out what makes an insect conscious. He theorizes it must be the legs [okay, not the smartest person]. He takes one leg off. It appears that the insect cannot make decisions as well as before. He continues in this fashion until the insect can no longer walk, and thus can no longer make a conscious decision about where to move."
In reality it is very difficult to probe the brain. Taking EEGs only gets weak signals off the top of the brain, and cannot measure other important parts of the brain. Other measuring tools such as PET or CAT scans operate at large intervals, not giving an overall picture of the brain (from what I've been told they can only image something like every 5 minutes). Imagine a system that is totally chaotic, except that it normalizes over large amounts of time. Of course you will sometimes get images of it doing 'abnormal' behavior, but once an average is taken (as is done for all PET and CAT scans in studies), it will appear very predictable.
This article is slightly garbled (Score:4)
No doubt the research reported in this article is important for some reason, because I saw the technical paper it was based on in the most recent issue of Nature, which is a pretty major journal. Unfortunately I don't have it with me, so I can't read the paper and tell you why it is important. Certainly it's not just the fact that some kinds of visual perception are serial.
Re:Microsoft Press Release (Score:1)
Although, I think I've heard of similar stories before, and they are always followed by the obligatory post:
If you see a fellow user going blue because WinNT has crashed, poke both eyes, twist the nose, and grab ears and shake simultaneously to restart. <G>
Seriously, on
Hmm. (Score:3)
I start breakfast, and then take a shower while the water boils or whatnot. While taking a shower, I often think of what I'm going to code after breakfast. I would consider that to be "multi-tasking".
Now, here's another thing - how many times do you wake up in the morning with an answer to a complex coding problem? For me, it's *a lot*. I find the answers just float into my head from dimension X. That's parallel processing - part of my brain solved the problem while the other part handled something completely different, without either part being aware of what the other was doing.
I think the debate is rather moot - we can do both. If you want to argue over the semantics, you can do so. But when I think of the brain, I think of it as a complex signals processor.
What I mean is, when you see something, it's translated into a signal, which is run through a series of filters and comparisons to tell you what you're seeing. This is also why you don't have an exact copy of what you saw - your brain only stores the "most significant bits" necessary to duplicate the signal. Some brains are better than others about reconstructing the signal. If you don't have all of the signal, your brain fudges it with values from similar experiences (or your values/beliefs). And if you have no signal at all, you post as an Anonymous Coward.
So my point is - it can be both. In fact, look at how society is structured - into clusters of people (brains?) that work in parallel on a project until completion (teamwork). Minimal communication. Why wouldn't your own brain be wired in a similar fashion - with dozens, if not hundreds, of semi-autonomous agents working towards the same goal?
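To make the "most significant bits" idea above concrete, here's a toy C sketch (the threshold and the prior are my inventions, not anything from the article): keep only the strong features, and fudge the gaps from a prior on recall.

#include <stdio.h>
#include <math.h>

#define N 8

int main(void)
{
    double signal[N] = { 0.1, 3.2, -0.2, 2.8, 0.05, -3.0, 0.1, 0.0 };
    double stored[N], prior = 0.0;

    for (int i = 0; i < N; i++)      /* crude prior: the average "experience" */
        prior += signal[i] / N;

    for (int i = 0; i < N; i++)      /* store only the big features */
        stored[i] = (fabs(signal[i]) > 1.0) ? signal[i] : NAN;

    for (int i = 0; i < N; i++)      /* recall: fill the gaps with the prior */
        printf("%g ", isnan(stored[i]) ? prior : stored[i]);
    printf("\n");
    return 0;
}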
--
That's just a silly argument (Score:1)
I mean, really, do people think at all before they post? And to the moderator who thinks this drivel is insightful - please...
Shaun
Re:Nope (Score:2)
As far as point 3, here's the relevant portion of the article:
The remaining items aren't delved into in the least, but it would certainly be nice if they were true.
Re:Hmm. (Score:2)
The instant they hear that audio cue, however, their cognitive attention is turned *away* from the active conversation in order to concentrate on the source of the new sound.
Re:That's just a silly argument (Score:1)
You don't believe that more witnesses would result in a more accurate picture?
--
Re:Serial processing may be conditioned. (Score:1)
As far as I know, speed reading is done serially as well, skipping quite a bit of the text and mentally filling in the blanks. As a result, the actual comprehension of speed readers is usually lower than that of normal readers. Can anyone verify this?
Re:Nope (Score:1)
One thing that worries me about it though is the fact that the article says the red and green blocks were very far away from each other, on the extreme edges, so it would be very tempting for someone to direct their focus at the blocks, which could take the
So, I might look into the study's publication to find out more exactly how the procedure was done.
Re:Organizational Intelligence (Score:1)
Re:Maybe now they can answer the other question... (Score:1)
(Schoolchildren in Kansas, cover your eyes)
I would have to say the egg, since the ancestors of what we now know as a chicken would at some point not be chickens. However, said ancestors would have laid an egg containing a mutant offspring which we now know as a chicken. Therefore, the egg came first.
QED.
Countless ordinary neurons... (Score:1)
Yeah, I'm a Mac programmer. You got a problem with that?
Re:High level versus low level processes. (Score:1)
According to my understanding of the subject, this is not true. High-speed photography has demonstrated that vision occurs as discrete (serial) fixation episodes, separated by rapid eye movements called saccades. The choice of focal point is not based on higher-level processing in the visual cortex, but rather is controlled by the Superior Colliculus (sp?), which is not part of the cortex. In fact, from what I've been told by researchers in the area of neurobiology, a human subject's eyes will reliably focus on the same points in an image when it is presented at different times. Typically, edges and corners might be favored. Each such episode takes on the order of 50 or 100 msec, and input from a field of view around the focus point is fed into the visual cortex. Apparently, it is at the higher levels of processing that we turn these discrete, serial images into a smooth, fuzzy view of the world about us.
It's a hot topic in neurobiology and really quite neat to learn about.
Cheers. Sapphire.
Re:Hmm. (Score:1)
The model that wins is what we view as our consciousness - the models continually compete, so what we consciously think of changes in response to new inputs.
I'm not sure if this supports your argument or mine!!!
-Shane Stephens
Re:But is it really serial? (Score:1)
Re:That's just a silly argument (Score:1)
Actually, it would be interesting to study how artists look at items when they're drawing. Most good artists don't look at features in isolation, but at how they relate to other features. You don't just draw a nose, an ear, or a mouth; you draw them a little at a time, a line here, a line there, some shading here, some shading there, in relation to each other to build the face. Now, would that not characterize working in parallel?
Re:I don't get it... (Score:2)
It seems that by placing the blocks on opposite sides of the board (left and right), looking at the left block would elicit a higher amount of activity in the right side of the brain while examining the right block would fire up the left side. I believe these differences were what they were looking for. If the subject were able to examine both blocks in parallel, the two halves of the brain would work simultaneously. The experiment showed a 1/10th second or so difference that was always right -> left, indicating that they focused their attention on the left block followed by the right.
The article didn't really explain this, though, so this is just my educated guess.
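If that guess is right, the analysis amounts to averaging the left-right difference over many trials - something like this toy C sketch (every number here is invented; real ERP analysis is obviously more involved):

#include <stdio.h>
#include <stdlib.h>

#define TRIALS 1000

int main(void)
{
    double sum_diff = 0.0;
    srand(42);
    for (int t = 0; t < TRIALS; t++) {
        /* per-trial EEG is mostly noise... */
        double left  = (rand() / (double)RAND_MAX - 0.5) * 10.0
                       + 0.5;   /* ...plus a tiny attention-driven offset */
        double right = (rand() / (double)RAND_MAX - 0.5) * 10.0;
        sum_diff += left - right;
    }
    /* the small lateralized component survives the averaging,
       even though no single trial would show it */
    printf("mean L-R difference: %f\n", sum_diff / TRIALS);
    return 0;
}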
Re:Hmm. (Score:1)
For example, about 10 seconds ago I was thinking about what I just wrote, thinking about what I was going to write, kneading a ball of Sculpey in my fingers (for no real reason) and thinking about what it felt like, and noticing the sound my computer's fan is making. That's 4 right there.
--
heh. (Score:1)
For example (assuming a guy audience), can you talk with someone while you're watching the TV? I can't. Most women can.
I know, and it drives me bloody insane!
Berlin-- http://www.berlin-consortium.org [berlin-consortium.org]
Re:Amen! (Score:1)
btw, just to be pedantic, all our brains are equally evolved... some just work better than others.
In conclusion, people's efficacy at "multi-tasking" may well be based on the environment they grew up in.
Re: You're right, it's fishy... (Score:1)
Re:Organizational Intelligence (Score:1)
IMHO the only way to stop corporations from behaving immorally is to structure them in such a way that the individual moral decisions of the employees are not stifled as they are in a traditional corporate structure. What structure would work best, I don't know, but the Internet doesn't seem to be any better (see recent slashdot story on computer ethics).
Re:Hmm. (Score:2)
I always just consider my "subconscious" to be that which is handling and analyzing everything that I'm not consciously thinking about. I don't think it's much of a cognitive process, but mainly abstract pattern recognition. If an interesting pattern is discovered, you'll "notice" it.
Re:neurons may be too slow for serial vision (Score:1)
Re:cool! (Score:1)
Re:Suboptimal (Score:1)
Maybe the brain does a lot of serial processing of data from the optic nerve, but the optic nerve and retina also do a lot of signal processing in and of themselves.
I would rather think that we have lots of parallel/simultaneous subprocesses that are pipelined serially...
Student: "Is it a wave or a particle?"
Physics Buddha: "Yes."
Organizational Intelligence (Score:5)
We humans have developed organizational intelligence. Groups of human brains, hooked up with the appropriate networking, can themselves become an alien intelligence, as different from human intelligence as human behavior is from cellular behavior.
For a long time, this has been mostly the province of corporations and governments. Ever wonder why such entities often lack common sense? It's because they are made up of humans, but aren't human. Congress is a group of over 400 humans; it doesn't act as a human, but can be modeled as an intelligent, alien being.
Today, we have the Internet. On a smaller scale, we have Slashdot-style phenomena. These are virtually those "Beowulf clusters of human brains". It is just another alien intelligence.
The big difference between the Internet and government/corporate organizations is in the interhuman connectivity. In governments and corporations, the governing layers are codified into a bureaucracy. This causes specific people to act as chokepoints, and that in turn limits the number of people that can interact effectively. On the Internet, the governing layers are a lot less codified. This requires a lot more data filtering at the various nodes (humans)--spam and similar phenomena travel better across the Internet than through your office--and a lot more bandwidth. But the Internet is all about bandwidth.
Bureaucracies are alien intelligences made of humans. Internet communities are alien intelligences made of humans. They are different species of alien, and they are fighting each other.
Why are bureaucracies afraid of internet communities, and vice versa? The answer is easy to see if you stop thinking in terms of humans. The bureaucracies are seeing a brand new type of intelligence. The "Linux community" is a perfect example. Over the course of eight years, this thing has gotten Microsoft, one of the Lords of Bureaucracy, frightened. A race war of organizational intelligences is brewing, if not already being fought.
Is this the end of humanity and the beginning of organizational intelligence? Hardly. We have been living with bureaucracies since the Pharaohs, possibly before. But just the knowledge that there are inhuman intelligences out there helps you to better understand them, and to better interact with them.
Yeah, OK, but... (Score:1)
Da Vinci - there are exceptions (Score:1)
It is said that Leonardo da Vinci was able to write with both hands at the same time, and in a different language with each hand. This may be somewhat of a myth and is far from provable, but if it is true, then I would think that the brain has to be parallel.
I agree with one poster who said something about the brain being able to change from parallel to serial. Sometimes I find myself barely able to concentrate on any one thought, and other times I am able to think of several things at the same time (as in reading a newspaper or doing schoolwork while listening to the radio or carrying on a conversation).
Re:flawed logic? (Score:1)
Though if they are in a pattern, you don't, which is interesting in and of itself. We don't have to count the dots every time a die comes up six.
Re:Hmm. (Score:2)
I don't believe it's possible to, in a parallel fashion, divide your attention between more than one thing. It may *seem* like it (driving and shaving, for example), but you're just switching back and forth between each task and probably don't notice it.
Perhaps our definitions of "cognitive thread of thought" differ, but the only way I can imagine a person being able to truly think about each of the things you mention above at the same moment (in a parallel fashion, and not just "task-switching") is if their brain were somehow divided into four independent chunks, and even then, each chunk probably wouldn't know about the other 3 trains of thought. I think we're just defining "cognitive thread" differently.
Decieving? (Score:1)
I imagine that this is a serial experiment in itself because of how the mind works: trial and error (or the scientific method).
or:
for (i = 0; i < num_blocks; i++) {
    if (block[i].color == RED || block[i].color == GREEN) {
        if (block[i].nicked) {
            pick(&block[i]);
        }
    }
}
I don't believe this proves that the brain processes images serially - just experimentation data.
Re:flawed logic? (Score:2)
Selective Visual Attention != Image Processing (Score:1)
consists of two functionally independent, hierarchical stages: An early, pre-attentive stage that operates without capacity limitation and in parallel across the entire visual field, followed by a later, attentive limited-capacity stage that can deal with only one item (or at best a few items) at a time. When items pass from the first to the second stage of processing, these items are considered to be selected. (Theeuwes 1993, p. 97f, original italics)
Now whether this researcher is referring to the first or the second stage is not clear from the article. As the research had the subjects looking for a red or green block with a nick in it, I assume he is not making a claim about the first stage. This stage has always been considered parallel, and he would have to prove it is not with a single-feature task, not a multiple-feature one like he used. However, from the tone of the article and the quote, it seems that he IS making this claim.
If the author is making the claim that single-feature detection is serial, I feel that his experiment will be soundly ripped apart by most psychological researchers, as we have a large, convincing body of evidence that this stage is parallel. If he is not making this claim, then he really isn't adding anything new to the scientific body, because we already KNEW that the second stage was serial.
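In code terms, the two-stage model looks something like this toy C sketch (the field data is invented, and the "parallel" stage is only simulated by a cheap sweep):

#include <stdio.h>

#define N 12

typedef struct { char color; int nicked; } Block;   /* 'r' or 'g' */

int main(void)
{
    Block field[N] = { {'g',0},{'g',0},{'r',0},{'g',0},{'g',0},{'r',1},
                       {'g',0},{'g',0},{'g',0},{'r',0},{'g',0},{'g',0} };
    int candidates[N], n_cand = 0;

    /* stage 1: pre-attentive, capacity-free feature map (color pop-out);
       a brain does this everywhere at once */
    for (int i = 0; i < N; i++)
        if (field[i].color == 'r')
            candidates[n_cand++] = i;

    /* stage 2: attentive, limited-capacity inspection, one item at a time */
    for (int c = 0; c < n_cand; c++) {
        int i = candidates[c];
        printf("attending to item %d... %s\n", i,
               field[i].nicked ? "nicked!" : "fine");
    }
    return 0;
}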
Click here for more info [www.diku.dk] JT
Threading (Score:1)
Just because our input(s) may be serial, and just because we only have one CPU, doesn't mean we can't have many processes going on at once.
Most of the evidence people in here have presented to argue against the serial processing theory sounds a lot more like multitasking, or perhaps even closer to threading. Though you have two threads going at once, say each focused on one item, you still can't actually run more than one of them at a time. Then occasionally you can put both threads in a wait state while you start another thread to process the results.
Also keep in mind that a lot of what we might perceive to indicate parallel processing is actually being done by recognized behavior analysis, which was burnt into us during our very early years.
We're also great at filtering, so that we can store one image or sound, but only focus on certain aspects of it. Later we may recall other aspects that we weren't paying attention to.
Can you tell me what flavors make up the flavor of Coca-Cola (without looking it up the same place I did)? Can you perceive a taste in parallel and pick out each part? Maybe you can take in a sample of data, filter it for one taste, take another sample, filter it a different way... but that's about it.
Same with musical chords - this is a purely serial observation which we need to filter in order to pull out different bits, and only by filtering out other sounds which we recognize. Often the best we can do is pattern-match one chord with the sound of the same chord we have heard before. Can even a trained ear recognize each note of a chord it hasn't heard before?
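Here's a toy of that chord pattern-matching in C - match the whole heard profile against stored templates instead of resolving individual notes (the 12-bin pitch-class vectors are invented for illustration):

#include <stdio.h>

#define BINS 12

/* dot product of the heard pitch profile against a stored template */
int score(const int *heard, const int *tmpl)
{
    int s = 0;
    for (int i = 0; i < BINS; i++)
        s += heard[i] * tmpl[i];
    return s;
}

int main(void)
{
    int c_major[BINS] = {1,0,0,0,1,0,0,1,0,0,0,0};  /* C E G */
    int a_minor[BINS] = {1,0,0,0,1,0,0,0,0,1,0,0};  /* A C E */
    int heard[BINS]   = {1,0,0,0,1,0,0,1,0,0,0,0};

    printf(score(heard, c_major) >= score(heard, a_minor)
           ? "sounds like C major\n" : "sounds like A minor\n");
    return 0;
}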
Okay, I don't have a degree or a research paper to back this up (I wish I did), but neither do most of you.
With a one-track mind,
Amen! (Score:1)
I've had to explain to her several times that my brain is not as evolved as hers. Therefore, if the TV is on, no talking. If you want to talk, turn the TV off. If you forget, you have no right to yell at me for watching the news while you're trying to get my opinion on whether or not we should have dinner with the Andersons next Friday.
(Moderator -- I know what you're thinking. You're thinking, "Is he off topic?" In all the excitement, I kind of forgot myself. But since the main thread IS about mental multitasking, and since this post IS anecdotal information about the topic, and since I am an Anonymous Coward who already has 0 points, and since you only have so many points to use, you have to ask yourself a question -- "Do I want to waste my points on this guy?" Well, do ya?)
Good experiment? (Score:1)
Culture is so influential. It's so obvious to me that one would scan one object at a time when searching an array of items for a tiny detail.
That's the way you're told to read, for example, albeit differently among different cultures.
In languages using phonetic alphabets, one is told to scan letter by letter and wait for a space, then put the letters in a string and check it against one's linguistic database for matches.
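As a toy C sketch of that reading loop (the lexicon is obviously invented):

#include <stdio.h>
#include <string.h>
#include <ctype.h>

static const char *lexicon[] = { "the", "brain", "reads", "serially" };

static int known(const char *w)
{
    for (int i = 0; i < 4; i++)
        if (strcmp(w, lexicon[i]) == 0) return 1;
    return 0;
}

int main(void)
{
    const char *text = "the brain reads serially zzz";
    char word[32]; int n = 0;
    for (const char *p = text; ; p++) {
        if (*p && !isspace((unsigned char)*p)) {
            word[n++] = *p;            /* scan letter by letter */
        } else {
            word[n] = '\0'; n = 0;     /* hit a space: look the string up */
            printf("%s -> %s\n", word, known(word) ? "match" : "no match");
            if (!*p) break;
        }
    }
    return 0;
}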
If you're playing pool, however, and you're watching your ball go, you're paying attention to a lot more 'events' at a time. You follow the ball's path, estimate its direction before it bounces on other balls (and often I get to picture its course and 'draw' it on my current 'view'). But you'll find yourself also keeping record of what color the first ball you hit was, which balls are possibly heading straight into the holes and which are not, and so forth.
Again, play tennis and your eyes/brain will be analyzing ball speed and course, estimating the bounce and checking if the ball lands out... it often happens that you're aware the ball's out but move and hit it anyway. That's because orders to your muscles have already been sent, but it also means that your brain has both ruled the ball out and estimated its path. One of the two has probably occurred before the other, but that might simply be because the two processes were independent yet not equally difficult to 'compute'.
Bottom line, I'd say that the 'attention area' capable of being processed is small, so you're naturally prone to shift from one point to another because of the limited 'screen' you have. Yet, if details aren't too tiny, and don't require great resolution, like balls moving, a broader scope is enough to let you observe them all and analyze them more or less in a parallel fashion...
Re:Serial recognition of data processed in paralle (Score:1)
Ever wonder why chickens and pigeons do that "head thing" when they walk? Part of it is due to the latency in how fast their eyes focus (which is probably related to how fast their brains absorb detail). They keep their head steady until they have to move it, to give their eyes a chance to focus...
Re:Frames Per Second? -- Half an answer (Score:1)
might help.
Film (you know, that old-fashioned stuff that you used to hear about projecting movies) goes at 24 frames per second, because any slower and the human eye sees the flicker. Why isn't 24 frames good enough for video games? Because the monitor is also flickering. When you have a flicker on top of a flicker, you get problems that you've probably seen.
Of course, that's a real half-assed answer for you. Subliminal images are much shorter than 1/24th of a second, and we're pretty sure that some part of our visual system picks them up. Furthermore, the whole system is influenced by all sorts of strange things. Ever get in a crash or a fight, and remember seeing things in slow motion? That was adrenaline at work, overclocking your whole body including your brain. Even
And of course, in the end, I don't think that the way human vision works could really be described in terms of frames/second. There's even things like compression going on.
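To put rough numbers on the flicker-on-flicker problem, here's a trivial C calculation (the 60 Hz monitor rate is my assumption, not a universal fact):

#include <stdio.h>

int main(void)
{
    double film_fps = 24.0, monitor_hz = 60.0;
    printf("a film frame lasts %.1f ms\n", 1000.0 / film_fps);
    /* 60/24 = 2.5 refreshes per frame: frames must alternate between
       2 and 3 refreshes (3:2-style pulldown), one flicker on top of
       another */
    printf("monitor refreshes per film frame: %.2f\n",
           monitor_hz / film_fps);
    return 0;
}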
I hope someone posts a real answer...
shows the power of the brain (Score:1)
High level versus low level processes. (Score:2)
But to me, those higher processes have less to do with vision and more to do with reasoning. I might experience one thought after another concerning some object, but I still see all the objects in front of me.
This doesn't seem new (Score:1)
Also, it's not the image processing that is serial. We've known for some time that that is parallel - large parts of the visual cortex recognise lines at different angles, changes in color, etc., using what are quite close to standard image processing algorithms.
Seems more like behavior to me (Score:1)
Just because I look at each block in order doesn't mean that's how I think about those blocks.
Please explain to me how, (Score:1)
processor.
beyond this study (Score:1)
Serial recognition of data processed in parallel (Score:2)
This really shouldn't be that big of a surprise. Try watching two or more moving objects simultaneously, and pay attention to how you do it. Your attention ends up being focused on one item at a time, albeit relatively quickly (depending on how fast you think and how much caffeine you've had).
Though I basically agree with their findings, I'm not too thrilled about how this experiment was set up. They basically *forced* the participants to think serially by placing both of the suspect blocks on opposite ends of the board (yes, I know that's really the only way they could reliably determine which item was being focused on and when). The eyeball itself isn't capable of doing a detailed analysis of imagery except in the very small area at the direct center of its field of view. It's only logical for the participant to immediately identify the different colors peripherally (and perhaps even in parallel - the experiment never delved into this part) and then concentrate a detailed glance first on one block, then on the other. Biologically, it had to happen that way. Their eyes couldn't have efficiently made the same analysis in a parallel fashion.
flawed logic? (Score:2)
Now my question is: might this not have to do with the human eye focusing on one object at a time, switching between multiple images quickly to try to bring them into focus as simultaneously and seamlessly as possible?
"It was important that we knew the order in which they paid attention to the colored objects, because the N2PC works by correlating the brain waves coming from each side of the brain over many statistical trials, so we had to always have them search in the same order"
He acknowledges that the brain is paying attention to certain objects based on color in a certain order, but attributes this to the brain and not to the input device. I'm going to make a crude analogy which will probably get shot down, but if you can think of a better one, please post it. It's like a mono VCR hooked up to mono speakers vs. a mono VCR hooked up to surround-sound speakers. You know the latter is able to process the info better, but it can't, because of the input device's shortcomings.
Serial I/O? (Score:2)
We can only focus on one thing at one time, therefore we can only handle one visual input. I'd venture the guess that all our I/O is serial - with quite a bit of DMA capability thrown in.
We can tune in on a single conversation in a room full of people, and switch focus from one to another, but it's real hard to keep track of more than that. We remember music sequentially, but unless we're well trained in music, we cannot correctly conceptualize chord structures.
We become completely oblivious to the goings-on when we watch (and listen to) TV. We have a difficult time separating olfactory inputs - so we process those serially as well. "What is that? Lemon?"
The only sense that seems parallel to me is the tactile. Though, since tactile input is the summation of very many single (bit) neurons, the parallelism we experience is probably the result of a lot of preprocessing of stimuli in the sensory nervous system and the spinal cord.
The neat thing is when we tune all the senses into the same stream of data. Remember last Christmas? The scent of the cooking goose, the sound of Grandma Got Run Over By A Reindeer, the blinking of those damned lights, and the itchy wool sweater...
With all of the senses delivering a variety of data that shares the same conceptual context, the imprint of the event is more powerful than if the serial stimuli from the different senses were reporting on events that we know are not related. This is probably why we remember better those times when all our senses are firing in parallel on the same concepts.
I'd venture the guess that as this research progresses, we will learn that we manage some pseudo-parallelism in our input processing through a mechanism similar to the one we rely on for memory. Chunking, was it?
For example, if shown a group of objects, we can visually process them based on similarity (i.e. they're all red, square, whatever), so we notice more than if they were all distinctly different. Otherwise we get lost in the volume of data that we have to take in.
As with the chunking that takes place when trying to remember more than the 7 (avg) simple items, finding commonality among the items we try to process sensually makes it possible for us to move more data through our inputs. Sort of a lossy compression, really.
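A toy version of that chunking in C (the items and attribute are invented): 12 raw items overflow a ~7-slot buffer, but grouped by a shared attribute they collapse into a few chunks.

#include <stdio.h>
#include <string.h>

#define N 12

int main(void)
{
    const char *color[N] = { "red","red","red","red","green","green",
                             "green","blue","blue","blue","blue","blue" };
    const char *chunks[N]; int n_chunks = 0;

    for (int i = 0; i < N; i++) {        /* merge items sharing a color */
        int seen = 0;
        for (int c = 0; c < n_chunks; c++)
            if (strcmp(chunks[c], color[i]) == 0) seen = 1;
        if (!seen) chunks[n_chunks++] = color[i];
    }
    printf("%d items -> %d chunks (fits the ~7 slots)\n", N, n_chunks);
    return 0;
}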
Seems like a questionable experiment (Score:1)
On the other hand, does this experiment actually indicate that the brain is _interpreting_ a scene serially? (That's a tree, that's grass, that's an anvil dropping on my head) Or just processing a task serially? (Where is the oak tree?)
I guess the article didn't really give enough information; perhaps the experiment was more than indicated.
But then again, what do I know.
Re:Hmm. (Score:2)
The way I see it, that analysis is being performed in a massively parallel fashion (like everything else in the brain), but is only being focused on one particular item or object in our field of view at a time, which makes it parallel up close, but still basically serial.
Differences among the sexes? (Score:1)
Whuzzup with that?
BTW, can you watch the telly while talking on the phone? (Not me!)
I don't get it... (Score:1)
How does that let them draw their conclusion (that object recognition is serial)? And while I'm asking questions: how did they manage to know which brain activity was the stuff they were interested in, rather than some housekeeping-type function (breathing, heart rate, etc.)?
-- Baffled
We Are Borg (Score:1)
Re:cool! (Score:1)
SNAFU!
All serial? (Score:1)
What I really want to know is whether it uses a monolithic or microkernel architecture...
Re:That's just a silly argument (Score:1)
In both cases, they're trying to teach a student how to overcome the limitations of being locked into one "mode" or another.
As far as what "most good artists" do, I don't think that's something you want to generalize; you're really talking about what "most good draftsmen" do. In terms of "recording an image", as the data appears on the recording medium (pencil marks on paper), it's going to be serialized, because the artist is typically holding one pencil. How the brain "solves" the entire image may not be as methodical as how an inkjet printer prints an image (one line at a time), but that's probably because a whole "rough" view of the subject has to be worked out first, to preserve perspective and proportion; otherwise, I think, the human brain has a tendency to focus in on details, breaking the image down into small sections without tying them together.
I think the data-processing equivalent would be creating an overall shape for the subject (cylinder, cube, sphere, some primitive), then determining its orientation and proportions, and "shaping" it down to reflect details. However, I don't think machines would have the same limitations as the human brain, because the sensing device, say a CCD, rasterizes the image, and therefore you always have a frame of reference to stick to, to judge proportions.
I think it was Albrecht Dürer (not sure) who devised a device for viewing objects up against a wire-mesh grid, so that if you held your head steady, you could accurately work on small sections of your paper (or in his case, I think it was a silver engraving plate) and not have to worry about the whole view. (I don't think he actually used it much, but it shows what he was thinking about.) Using this kind of technique, a machine could break down a scene into sections (sort of like how JPEG works), and then parallel threads could be assigned to work out processing the details; the spatial relationships between the sections will always work out because of the rasterization.
However, this addresses rendering a 3d image onto 2d. We know how raytracers work.
The question is, what sort of input would machine vision use to process 3d images as 3d information? Stereoscopic CCDs? Lasers? Radar? I would think that compiling stereoscopic 2d images into a 3d representation in the computer's memory would be computationally intense. Visual cues for depth information are notoriously ambiguous, and with machine input you would think that using some kind of range-specific system, radar, etc., would be best... bottom line, I think how a machine would process vision would probably depend most on the input mechanism.
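For what it's worth, the tiling idea is easy to sketch in C (dimensions invented; a real system would hand each tile to its own worker thread):

#include <stdio.h>

#define W 32
#define H 16
#define TILE 8

int main(void)
{
    /* carve the frame into fixed 8x8 tiles; the raster grid itself
       preserves the spatial relationships between sections, so each
       tile could be processed independently */
    for (int ty = 0; ty < H / TILE; ty++)
        for (int tx = 0; tx < W / TILE; tx++)
            printf("tile (%d,%d) covers pixels [%d..%d]x[%d..%d]\n",
                   tx, ty, tx*TILE, tx*TILE + TILE - 1,
                   ty*TILE, ty*TILE + TILE - 1);
    return 0;
}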
(Art school dropout, now playing with computers)
"The number of suckers born each minute doubles every 18 months."
Maybe now they can answer the other question... (Score:1)
Which came first, the chicken or the egg? ;)
Seriously, though, this does sound right, considering the fact that people tend to have difficulty focusing on more than one thing at a time. The old adage about chewing gum and walking for some folk. :)
But I wonder, is this serial processing due to the need to comprehend in a temporal fashion?
If we processed visually in parallel, then our concept of time would be blurred, would it not? Or am I just not getting enough sleep, and is swapping my attention between this screen and the second screen in an attempt to get work done hampering my ability to understand?
Who knows. :p
- Wing
- Reap the fires of the soul.
- Harvest the passion of life.
Microsoft Press Release (Score:2)
Microsoft today announced Windows for Neurones, the brand new Microsoft operating system for life critical operations. No release date has been set yet, but Microsoft hope to have a release version on the shelves by the fall of 2000.
It is thought that Microsoft have been working on this product for several years, early alpha versions of which can still apparently be seen in institutions around the US. "We had problems with the initial cooperative multitasking that we tried. Processes would sometimes end up in a loop, and not release the processor for other tasks." an insider said. "The results of these early tests can be seen as high up as ex-president Ronald Reagan. He was an early alpha tester, but developed problems. Unfortunately, the uninstall wasn't available then."
Microsoft cite several advantages to using the OS:
1. Your brain is no longer dependent on old proprietary systems, some of them as old as several million years! We've learnt a lot in all those years. Windows for Neurones (sometimes referred to as Windows Neurones Technology, or just WinNT) uses such modern features as pre-emptive multitasking and virtual memory.
2. Your brain can now use cheap, off-the-shelf productivity software. Studies have shown that a lot of people have to have productivity tools (calendars, address books etc.) as external programs or peripherals. WinNT has all this built in. It is also easy to use; "it's as if it knows what you are thinking," an insider said.
Some people have expressed concerns over the scalability of the new WinNT. While older systems (such as AT&T Metabolism Control and HP Coordination) have exploited the natural parallelism in the typical brain, WinNT's new visual system appears to process data in a serial fashion, limiting the ability to exploit the brain's parallel capabilities.
"Rubbish," said an MS insider. "It has been shown in independent studies that our approach is up to 300% faster in processing visual data, for example," he said, quoting a recent study by Mindcraft Inc., a service-oriented, independent test lab. The visual aspects of the OS - what the person sees - have been controversial in recent discussions.
Existing OS providers in this critical industry also slam WinNT's reliability, based on test observations. "We have systems with a mean time before critical failure of 100+ years. I don't understand why anyone would want to upgrade. While brains running our OS consume ~20% of the body's metabolic rate, we estimate WinNT brains to use up to 30% of the body's metabolic rate, as it has no power-saving facilities. Existing systems have the ability to sleep, saving power, but I've heard WinNT can keep you up all night. This can cause real problems," said a rival brain OS provider. Even if people think the visual aspects are better, which is debatable, a nice visual interface is a waste of time if your heart stops beating! Some things are simply more important than good visuals.
Microsoft refused to release licensing details, but it is said not to be following the recent trend of open source software and open APIs and protocols. It is said to include a new licensing agent, called 'Paranoia', which prevents third parties from getting too close and examining its workings, or 'reverse engineering' as it is known.
Created in our own images (Score:1)
Strange setup (Score:1)
In a human eye, all data collected on the left side of each eye is sent to the left part of the brain, and vice versa. Thus, due to the mirroring of the image done by the lens, the right half of the brain processes the left half of what is placed before you.
Now... IF the subject has to keep his eyes straight ahead, isn't it likely that it takes some concentration and effort to discern details (nicked block or not) in an area removed from the focal point?
Would this not provoke serial behaviour even if the decoding itself was done in parallel?
Depends on what you're looking at. (Score:1)
monitor, you would be viewing it serially. If you are staring at your computer monitor at savory JPEGs, it would be digitally, with your digit in your hand.
Re:Serial recognition of data processed in paralle (Score:1)
In any case, more experiments should be done before making statements such as the article is making. IMHO anyways...
Ribo
Suboptimal (Score:1)
I can't wait for a 1.1.x human body ;)
Re:cool! (Score:1)
neurons may be too slow for serial vision (Score:3)
Given this, the article will have to do better than just state `vision is serial' without specifying how that is possible when using slow neurons.
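The back-of-the-envelope version, in trivial C (the per-step latencies are textbook ballpark figures, my numbers rather than the article's):

#include <stdio.h>

int main(void)
{
    double per_item_ms = 100.0;          /* ~1/10th second per item, per the article */
    double step_ms[] = { 5.0, 10.0 };    /* rough cost of one synaptic step */
    for (int i = 0; i < 2; i++)
        printf("at %.0f ms/step: only %.0f serial steps per attended item\n",
               step_ms[i], per_item_ms / step_ms[i]);
    return 0;
}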
Joe
Re:flawed logic? (Score:2)
Bingo. The eye isn't capable of really examining something unless it's in the direct center of your field of view, which makes it only logical that a detailed glance be performed in a serial fashion. In this way I think the experiment was biased towards a serial method of examining the blocks. I bet when they first saw the blocks, though, they were able to find the red and the green block almost instantly, likely in more of a parallel fashion (since their eyes really didn't need to move).
Though on the flip side of the coin, without using anything but your peripheral vision, try to count the number (or even color) of major items on the desk in front of you. You still end up doing it serially, concentrating on each item individually (though, it seems to me, a lot faster than moving your eyes around and focusing on each item).
Center of attention (Score:1)
Most of the imaging rods and cones are concentrated in the center, allowing greater detail, so it would make sense that we are used to looking at one thing at a time and changing focus between different items. I see much greater detail when looking at something directly, so I might say I process things serially - one thing at a time. This does not seem true when driving for long periods, though, when tunnel vision sets in and eye movement seems almost comatose.
Re:Hmm. (Score:2)
--
Re:Hmm. (Score:1)
Re:Hmm. (Score:2)
I wonder what it would feel like to have two cognitive threads running at once inside your brain... Two lines of thought... weird.
Re:Hmm. (Score:2)
Does this cast doubt? (Score:2)
"Luck was able to use N2PC to identify whether a person was processing visual signals one at a time or simultaneously. "
Bummer of a name for a probability doctor, eh?
(ahem) well, there's this (Score:2)
Let's see, out of how many billions of species over a few billion years have we come to dominate so totally (unless the Ants have nukes we don't know about)? My guess is it has something to do with our brains, and how they work. Hands are pretty cool (read my thoughts), but I have to assume (unless you live in Kansas) that your brain helped them along to their current level of dexterity at some point (perhaps your parents chose white collar jobs?).
If machines can and do someday become intelligent, and do indeed surpass human intelligence, it'll be 'cause we want them to. I'll leave it at that.
Read my sig, and you'll see where I fall on the debate.
Re:neurons may be too slow for serial vision (Score:2)