Science

Human Brain Seems to Process Image Data Serially

Tekmage writes "Ever wonder how the brain processes image/vision data? According to this research, it does so in a manner more serial than parallel." This serial-versus-parallel question has been debated since the advent of machine vision in the 1960s, and this study is the latest round in the battle between the two sides.
This discussion has been archived. No new comments can be posted.


  • You are completely correct.

    This article isn't about image processing per se; it's about attention - the selective subcognitive process by which we focus additional processing power on specific elements of perception. It's a vital skill, and it works at a variety of levels and in a variety of situations.

    Notice how you can always hear your name mentioned in a loud party, even though you otherwise can't make out a single conversation? Or how a tired mother can sleep through a storm, yet awaken to the quieter sound of her infant crying?

    There's a sort of "thunk" procedure that works in these situations. What the studies you've cited show, as well as the one cited in the article, is just which tasks require this "thunk" and which do not.
  • The blood vessels in front of the photoreceptors supply blood (products) to the neurons in front of the photoreceptors, and are, in fact, very sparse. The photoreceptors themselves are fed by the much more blood-rich tissue BEHIND the receptors.
  • Will be slim to none, imho. "A serial process operating on parallel hardware." It seems likely that the comparison was a serial process because a line of cubes, one not the same, is not something you see regularly enough to recognize in parallel -- "at a glance", like you do, for instance, faces. I'd bet that after enough repetitions, the mind's pattern matching would kick into action and operate vision in parallel. After all, how many of you still read letter-by-letter?

    The conclusion to gain from this experiment is that computer vision systems need to be adaptive and able to learn. While it's probably not necessary to explicitly program every word into a computer reading system, sitting there and grinding away with OCR on a character-by-character basis is probably a waste of time. The difficulty is in feeding back the brain's (knowledge base's) identifier for a pattern (how do you remember what the word "word" looks like well enough that you don't read it letter-by-letter?), determined after a serial examination of a given input, back to the parallel recognition system for training (a rough sketch of what I mean follows at the end of this comment).

    I think it's safe to dismiss the 1/10 of a second switch as specific to the situation. I can notice an interval a great deal smaller than that playing Half-Life, and so can you, I'd imagine.

    -_Quinn
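
    (A rough Python sketch of the feedback loop described above; this is purely illustrative, and names like recognize_char are made up for the example, not taken from any real OCR system. The first time a pattern comes in it gets ground through serially, and the result is fed back into a whole-pattern lookup so the next encounter happens "at a glance".)

    known_patterns = {}          # stands in for the trained "parallel" recognizer

    def recognize_char(ch):
        """Placeholder for the slow, serial per-character grind (real OCR would go here)."""
        return ch.upper()

    def recognize(word_image):
        if word_image in known_patterns:                        # whole-pattern match: "at a glance"
            return known_patterns[word_image], "at a glance"
        result = "".join(recognize_char(c) for c in word_image)  # serial, letter by letter
        known_patterns[word_image] = result                     # feed the identifier back for training
        return result, "letter by letter"

    print(recognize("word"))   # first encounter: letter by letter
    print(recognize("word"))   # second encounter: at a glance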
  • Oops... I stand corrected :)
  • Whilst I am not an expert in any of the science involved here, doesn't the instruction "probably red, but could be green" immediately make people search the red block first in detail and then the green one? Would this not influence the results?
  • Visual processing is done in the retina and optic nerve. Manipulating the processed data is done in the brain. By the time the signal arrives at the brain it has already been processed into data representing all the objects and characters being viewed. All this experiment shows (besides limits in the eye's field of view) is that the brain evaluates objects serially, at least under some conditions. It does not identify objects serially.

  • Our brains are divided into at least 2 independent chunks - the left and the right side. These two sides are only connected by a fairly tenuous collection of neurons known as the Corpus Callosum (there are a couple of other very small connections as well...).

    People have actually had their corpus callosum severed - so-called "split brain" patients. In general, experiments with these people show that the two sides of the brain are largely independent - for instance if people are shown an object in such a manner that only one side of the brain can see it, then the other side of the brain is not aware of the position of this object. If the patient is then asked to reach for the object with the hand governed by the other side of the brain, they will try, but not know where the object is.

    Doesn't this tend to suggest at least 2 independent "chunks", with the CC normally governing communication between these chunks?

    To me, if the brain is really a large parallel machine, there's no reason why separate threads of computation can't be going on in separate parts of the brain - each taking up a small physical region of resources.

    These threads could even communicate with each other fairly readily. Problems really only arise when the two threads want the same resources.

    This is backed up by experiment, too - Richard Feynman did some very interesting experiments in keeping time. He found that without a watch, he could keep time very well... when he counted to 60, he _ALWAYS_ got 72 +- 1 seconds (or something like that).

    The interesting thing is what happened when he tried to do other things while counting. The majority of tasks had absolutely no effect on the counting. A few tasks slowed or sped up the counting. And some tasks precluded counting - he couldn't count at all while doing these tasks.

    (He also did some experiments to make sure he wasn't basing his counting on some internal physical clock like the heart beating or breathing - he counted while running up and down stairs. No change in the rate of counting.)

    Now I admit that a fast-switching serial model would work just as well in explaining these results, but considering that the brain is demonstrably a parallel architecture, I think that the parallel model is a lot more elegant.

    -Shane Stephens
  • I agree, but in the article, they sought to prove their serial theory by using examples of visual images. So they were justifying their serial theory by using serial input. See my comment/posting on the mono VCR.
  • Reading this article makes a lot of sense if you stop and think about it.

    A fellow poster had said that the brain is parallel, because he can take a shower and think about what to code after breakfast. This is true; however, the article was more about image processing than thought patterns.

    The point that the study was making is that the brain focuses on images one at a time (so fast that it's a blur to the conscious mind, but one at a time nonetheless). Think about it. You stare at a computer monitor, and a post-it-note you have stuck next to your monitor falls down. Your eyes detect the movement and send the signal to the brain. The brain in turn sends an impulse back to the eye muscles to rotate and focus on the movement. In this split second, you forget about the monitor and your attention is on the post-it-note. Then your brain receives the visual cue that it's "Just a Goddamn piece of paper" and clicks back over to the monitor.

    But, since thought patterns are processed in parallel, you can think of many things at once. While that post-it-note falls, you could be singing along to music that you are listening to on the radio. (AND, chances are you won't miss a beat when the post-it-note falls, and you're still singing. It's not severe enough to command the brain's full attention. If a car smashed through your wall, however, I'd bet you'd stop singing.)

    -- Give him Head? Be a Beacon?

  • Agreed. If they simply had an array of black blocks with one white block and said, "Find the white block," or, say, "Does this array contain two green blocks?", you'd be able to do that *instantly*. As I understand it, you can find the patterns in the scene and spot the anomalies (the white block, for example) in much more of a parallel fashion, which makes it nearly instantaneous. Examining a green and a red block for a small "nick", on the other hand, requires a detailed examination, and thus movement of the eyeball itself.
  • .. then how come I'm 99% sure that I can read words instantaneously? That doesn't include understanding them; you seem to have to pump them through some kind of auditory circuit first and internally vocalize them, but I'm pretty damn sure that I read the words themselves nigh-on immediately.

    Now if only I could get around that road-block in the middle, but I guess the brain still has to break it up, digest it, and commit it to memory - which takes time.

    As ever (and as with even the best computer architectures today), the problem appears to be the pipe between the processor and memory :)

    Recognition and processing is parallel.
    Understanding is serial.

    Simon
  • Much of this may have to do with the fact that human vision is not like a computer's vision. We don't get a nice rectangle of even pixels. We see best in a small area where our vision is focused. Move a few degrees out and vision (peripheral vision) rapidly degrades. This alone means that it is next to impossible to examine two things well at the same time.

    I'd be very curious as to if they'd get similar results with sound experiments. I suspect not. I recall experiments in which people were fed different sound sources in each ear. Even when paying attention to one voice stream for some task, subjects still responded to their own names in the other voice stream. This implies that some level of cognitive processing is still going on for the stream supposedly being ignored.
  • I wonder how much influence the fact that we can only focus on one object at a time has on these findings.

    I mean, if we can only look at the objects serially (and discriminating whether a block has a nick or not requires, I think, "looking at" it rather than "scanning" the way you would for one red block in a sea of greens), how do they expect us to process them in parallel?
  • I didn't see any data. How many people did they test? How did they determine if the thoughts were parallel or serial by watching N2PC?

    The article was a good brief overview but had no links to the report itself.

    BTW. Did it seem odd that the experiments were performed in 1994 but the article was just published? I wish I had 5 years to evaluate my findings and report back.
  • This is eye mechanics. The eye has higher resolution in the fovea, which is the only place likely to be able to find a nick on a block. (Also, the color receptors necessary to differentiate between red and green are concentrated in the fovea.)

    Thus the eye will saccade (move rapidly) from one spot to the next to get the object under study projected onto the fovea. Of course the brain will process them serially; the fovea can only point at one at a time.

    Reading is a more realistic problem, where several words can fit onto the fovea at once. The question of whether we process those words in parallel or serial is not resolved by this study.
  • I agree. We're massively parallel. All they proved is that there is a limit to our parallelism. Duh! I can only focus on so many details at once. I knew that without any brain-wave processing.

    I already knew that when I read a book, I don't stare at the whole page until the text of the book sinks in. I read word after word. When I'm looking for a detail in a picture, I don't just stare at the picture until I find what I'm looking for. I scan small areas that look interesting until I can focus in on the area in question.
  • This is simply common sense. Wave your hand in front of a lamp (monitors don't work very well for this experiment). How do you see it? As one image after another; you don't see your hand in all possible positions at once.

    Note that that's simply processing sensory data. The people who talk about spotting the red cube in a bunch of green ones are talking about something totally different: recognition. Even there, the brain picks the red cube out from the whole image; the reason the time to recognize the red cube doesn't depend on the number of total cubes is that you see the same size image, no matter how many cubes there are, and the red cube looks different enough from the green ones that it's easy to spot. It'd be like playing "Where's Waldo" in a situation where everyone else is wearing blue; no matter how many people there are you'll find Waldo in a second.

    Still not convinced? Here's a simple experiment to try: play "Twinkle Twinkle Little Star" and Limp Bizkit's "Nookie" in your head simultaneously (those songs being chosen because they're totally different; feel free to substitute any other two songs that are sufficiently different), and try to concentrate on both at the same time (note: do this without actually saying the lyrics to either one; that's cheating). You can get pretty close, but I'll bet that you can't quite do it.

    More than likely, the brain simply "multitasks" in a manner not unlike machines do today; it doesn't really run multiple processes at once but it can do a pretty convincing illusion. Since each area works somewhat independently of the others, you can get a bit of parallelism going. That's why you could sing the lyrics to one song while thinking of another; you've assigned a different area to each task. Put them in the same area (by not saying the lyrics to either one) and suddenly you can't do it.

    So, cheer up. At least on this planet we're still top dog in terms of intelligence (your average U.S. politician notwithstanding).
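
    (Purely as a toy illustration of the multitasking comparison above -- my own Python sketch, nothing from the article: one attention "slot" time-slicing between two songs looks almost parallel from the outside, but only one task ever holds the slot at a given instant.)

    from itertools import zip_longest

    song_a = ["Twinkle", "twinkle", "little", "star"]
    song_b = ["Nookie", "come", "on", "yeah"]

    def time_slice(task_a, task_b):
        """Run two tasks through a single attention 'slot', one step at a time."""
        trace = []
        for a, b in zip_longest(task_a, task_b):
            if a is not None:
                trace.append(("song A", a))   # the slot is on task A for this instant
            if b is not None:
                trace.append(("song B", b))   # ...then on task B, never both at once
        return trace

    for owner, word in time_slice(song_a, song_b):
        print(owner, word)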
  • by Suydam ( 881 )
    So does this rule out Beowulf clusters of human brains?

    What bothers me about the article is that it takes the stance of "The debate has always been which architecture is best. Now, since the human brain processes data serially, the debate is settled."

    Since when was it established that the human way of processing things is definitely the best? Very poor logic on their part (IMHO).

  • This leads me to a question...

    What framerate do our eyes capture? Or are they too analogue to measure? Obviously our brains can't take in an infinite amount of data, but is it a matter of skipping "frames", or just a loss of resolution?

    For that matter, what is the framerate of reality? Personally I think it's infinite but I know there will be people who disagree with me. It partly depends on how you measure it.

    Enquiring minds want to know.
  • Well, actually, your brain already KNOWS that a tree is made up of leaf after leaf after leaf etc., so as a mental image, all the small details are combined to form what your mind knows to be "Tree."

    -- Give him Head? Be a Beacon?

  • I had a (somewhat) similar thought when I read the article.

    I'm not cool enough to be able to speed read in parallel, but I'm pretty sure I can edit that way. For the last couple of years I've been able to look at a page of print, and my eyes will suddenly focus on a typo -- it takes a second or two for my conscious mind to recognize that what my eyes are focusing on is an error. Other people have mentioned the experiment showing that, given any number of green boxes and one red box, a person can find the red box in the same amount of time regardless of how many green boxes there are. I believe that my high-speed editing parlor trick is similar to this problem -- over time my mind has become trained to recognize patterns of text as naturally as patterns of color.
    If the "nick on the box" test were carried out for several months, possibly several years, the results might vary. Meaning that over time the people would become more expert at noticing nicks on boxes and the brain might process the information on a higher symbolic level. I doubt that anyone would want to check for nicks on boxes for that long, but there must surely be a job similar to the experiment in manufacturing or processing. Some sort of quality control job where a person has to watch a line of goods go by that they check for defects. Testing whether a person who has done a task like that for several years (and is actually good at it!) has trained their neural net to perform the task in a parallel manner would be interesting, and would give a broader view of the nature of the cognitive process. You would only need to find one person that could process image data this way and it would muddy the picture presented by the article.

    Who knows, maybe people in Iowa are just more prone to seeing things serially than other humans ;-)
  • It seems like a remarkable conclusion. I don't believe that they have really put enough detail into the article to judge it correctly. I make the claim that vision cannot be entirely serial: I play an arcade game called House of the Dead in two-player mode by myself, and having played this for some time now, I have little or no difficulty maintaining accurate aim from both guns on varying targets on the screen.

    While it may well be true that the highest level of vision is serial, this particular level of vision must be quite tightly defined, for, going back to the House of the Dead example, I always shoot for the head, which is by no means just simple object recognition in such a game.

    I suspect more research really needs to be done in the area, and more importantly, that conclusions need to be very accurately defined, rather than making such broad statements.
  • To me, if the brain is really a large parallel machine, there's no reason why separate threads of computation can't be going on in separate parts of the brain - each taking up a small physical region of resources.

    I agree, but I wouldn't consider these "threads" to be cognitive in nature. A person's immediate attention is always focused only on one item at a time. Try examining one object while describing another. Your mind has to switch back and forth to be able to do both "simultaneously."
  • In response to the "so this makes serial processors better" line of thought, it should be pointed out that it takes many parallel human brain processors to get an accurate image description.

    Think about trying to describe a thief you saw running down the street. You saw that he was tall and wearing a hat, someone else saw that he had a mustache, etc. Add in the extra processors required to compensate for the uncertainty in the data from any single one, and you've got a system that doesn't look so serial anymore.

    It's obvious that more than one human is required for an accurate description. They haven't proven anything in the serial vs. parallel debate!

  • But obviously something is doing enough processing to tell the difference between your name and some other random word.
  • It is extremely difficult to tell what the scientists found based on the article. For example, "..it processes information serially, even though the underlying neural hardware is operating in parallel."

    It would seem from this statement that they are considering part of the brain 'hardware', while other parts are not. This seems like fragile reasoning since, last I checked, the brain consists of neurons and glial cells (okay, throw in some blood and ions as well).

    I am going to make an assumption that they are referring to a person's attention when talking about the 'other part of the brain' -- that is, the brain takes the entire scene in all at once (this we know happens), but can only attend to one particular part of the scene at a time.

    This is not a new discovery. In fact it was pointed out quite a while ago by William James. He describes consciousness as the process of selecting what to pay attention to. That is, we can only really pay attention to one task at a time, but the brain takes in a whole lot. James is usually right.

    Looking later in the article: "Luck and Woodman discovered that the brain turned its attention from one block to the next at intervals of about 1/10th second." Thus it would seem instead of describing how we view the world, they are rather describing the rate at which we attend to physically seen objects.

    I would suspect that they could do a similar experiment with sound, taste, etc. I have not seen any mention of factors such as the rate at which the eye can move (as mentioned by a previous post), or even how far apart the objects were.

    Finally, they measure a brainwave without giving a good reason to pay attention to that brain wave. It reminds me of a joke I read once: "A scientist wants to figure out what makes an insect conscious. He theorizes it must be the legs [okay, not the smartest person]. He takes one leg off. It appears that the insect cannot make decisions as well as before. He continues in this fashion until the insect can no longer walk, and thus can no longer make a conscious decision where to move."

    In reality it is very difficult to probe the brain. Taking EEGs only gets weak signals off the top of the brain, and cannot measure other important parts of the brain. Other measuring tools such as PET or CAT scans operate at large intervals, not giving an overall picture of the brain (from what I've been told they can only image something like every 5 minutes). Imagine a system that is totally chaotic, except that it normalizes over large amounts of time. Of course you will sometimes get images of it doing 'abnormal' behavior, but then an average is taken (as is done for all PET and CAT scans in studies), and it will appear as though it is very predictable.

  • by Airdevronsix Icefall ( 33280 ) on Wednesday September 08, 1999 @08:04AM (#1694996)
    People have known for years that some visual processes occur in parallel, because they take constant time regardless of the amount of input. For example, if I ask you to pick one red square out of a scattering of many green squares, the time required does not depend on the number of squares. Other tasks require time proportional to the number of objects. For example, finding one red square in a scattering of red circles, green circles, and green squares is a task requiring time proportional to the number of items you have to sort through. Everybody assumes that this is a serial process. All this has been known for years -- the description of the tasks that can be done in parallel, and hence the properties of the hardware that computes them, was pretty much settled in the late '80s. (A rough sketch of the two cases follows at the end of this comment.)

    No doubt the research reported in this article is important for some reason, because I saw the technical paper it was based on in the most recent issue of Nature, which is a pretty major journal. Unfortunately I don't have it with me, so I can't read the paper and tell you why it is important. Certainly it's not just the fact that some kinds of visual perception are serial.
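
    (To make the two cases concrete, here is a toy Python sketch. This is my own illustration, not anything from the Nature paper; the display format and step counting are invented for the example. The pop-out search needs only one attended item no matter how large the display is, while the conjunction search attends to items one at a time, so its cost grows with the display size.)

    import random

    DISTRACTORS = [("red", "circle"), ("green", "circle"), ("green", "square")]
    TARGET = ("red", "square")

    def make_display(n_distractors, distractors):
        """A list of (color, shape) items with one target hidden among the distractors."""
        items = [random.choice(distractors) for _ in range(n_distractors)]
        items.insert(random.randrange(len(items) + 1), TARGET)
        return items

    def popout_search(items):
        # Pre-attentive stage: in the brain the color map is computed in parallel across
        # the whole display; the list comprehension just stands in for that hardware.
        color_map = [color == "red" for color, _shape in items]
        return color_map.index(True), 1           # only one location ever needs attention

    def conjunction_search(items):
        # No single feature identifies the target, so items are attended one by one.
        for steps, item in enumerate(items, start=1):
            if item == TARGET:
                return steps - 1, steps
        return None, len(items)

    for n in (5, 20, 80):
        _, serial_steps = conjunction_search(make_display(n, DISTRACTORS))
        _, popout_steps = popout_search(make_display(n, [("green", "square")]))
        print(n, "distractors: pop-out attends", popout_steps, "item; conjunction attends", serial_steps)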
  • LOL!

    Although, I think I've heard of similar stories before, and they are always followed by the obligatory post:

    If you see a fellow user going blue because WinNT has crashed, poke both eyes, twist the nose, and grab ears and shake simultaneously to restart. >G

    Seriously, on /. a while back, there was an article saying the human brain has *built-in multitasking hardware*... Thus letting you chew gum and walk down the street (or in my case, write this, watch TV, and type).
  • by Signal 11 ( 7608 ) on Wednesday September 08, 1999 @08:05AM (#1694998)
    I have to agree, and disagree, at the same time. The human brain can keep track of several different things at once. My fiendishly simple example is what I do in the morning:

    I start breakfast, and then take a shower while the water boils or whatnot. While taking a shower, I often think of what I'm going to code after breakfast. I would consider that to be "multi-tasking".

    Now, here's another thing - how many times do you wake up in the morning with an answer to a complex coding problem? For me - it's *a lot*. I find the answers just float in from dimension X into my head. That's parallel processing - part of my brain solved the problem while the other part handled something completely different without either part being aware of what the other was doing.

    I think the debate is rather moot - we can do both. If you want to argue over the semantics, you can do so. But when I think of the brain, I think of it as a complex signals processor.

    What I mean is, when you see something, it's translated into a signal, which is run through a series of filters and comparisons to tell you what you're seeing. This is also why you don't have an exact copy of what you saw - your brain only stores the "most significant bits" necessary to duplicate the signal. Some brains are better than others about reconstructing the signal. If you don't have all of the signal, your brain fudges it with values from similar experiences (or your values/beliefs). And if you have no signal at all, you post as an Anonymous Coward.

    So my point is - it can be both. In fact, look at how society is structured - into clusters of people (brains?) that work in parallel on a project until completion (teamwork). Minimal communication. Why wouldn't your own brain be wired in a similar fashion - with dozens, if not hundreds, of semi-autonomous agents working towards the same goal?

    --

  • If you get a decent look at someone, a competent sketch artist can get a very accurate picture out of you.

    I mean, really, do people think at all before they post? And to the moderator who thinks this drivel is insightful - please...

    Shaun
  • You should read the article.

    As far as point 3, here's the relevant portion of the article:

    This experiment identified a pattern in brain waves known as N2PC, which stands for the second negative peak (N2) of the posterior contralateral (PC). The N2PC identifies the location of brain waves as emerging from either the right or left side of the brain.
    The remaining items aren't delved into in the least, but it would certainly be nice if they were true.
  • It's not necessarily cognitive, though. People learn the sound their name makes very well. People in the middle of a conversation just as easily get distracted when they hear a fire alarm in the distance, or a glass breaking. These sounds don't need to be loud; they're just automatically recognized.

    The instant they hear that audio cue, however, their cognitive attention is turned *away* from the active conversation in order to concentrate on the source of the new sound.
  • No need to get so flamey about it. It's true that a sketch artist can make a picture based on what someone can remember, but that's because they're taking the limited data from the witness and putting it together with their own knowledge of what people look like.
    You don't believe that more witnesses would result in a more accurate picture?
    --
  • But there are many speed readers who are able to assimilate entire pages at a time. This seems to be a type of parallel processing.

    As far as I know, speed reading is done serially as well, skipping quite a bit of the text and mentally filling in the blanks. As a result, the actual comprehension of speed readers is usually lower than normal readers. Can anyone verify this?
  • by def ( 87618 )
    I'd have to read the article to be sure, but it would be surprising if they missed (1) as causing the brain to focus on different items serially.

    One thing that worries me about it, though, is the fact that the article says the red and green blocks were very far away from each other, on the extreme edges, so it would be very tempting for someone to direct their focus at the blocks, which could take the .1 seconds that the article says is the difference in timing.

    So, I might look into the study's publication to find out more exactly how the procedure was done.
  • Interesting view, but isn't the use of 'intelligence' here a bit overrated? What is your definition of intelligence? What do these 'entities' 'live' for, and how? And what about animals? They are non-human intelligences too :)
  • "Which came first, the chicken or the egg? ;) "

    (Schoolchildren in Kansas, cover your eyes)

    I would have to say the egg, since the ancestors of what we now know as a chicken would at some point not be chickens. However, said ancestors would have laid an egg containing a mutant offspring which we now know as a chicken. Therefore, the egg came first.

    QED.
  • Call me new-agey, but I'm concerned that such experiments do not include 'enlightened masters' as subjects or controls. Surely the fact that an 'enlightened' mind is capable of processing sensory data without recourse to symbols would have some bearing on the outcome of this kind of experiment. The theory being that such a 'natural mind' has no need to compile its data in a common area for sorting and comparison, instead relying on more parallel processes to get the job done.
    Yeah, I'm a Mac programmer. You got a problem with that?
  • I suspect that the article is talking about very high level cognitive processes. But it's clear that a heck of a lot of parallel preprocessing has to happen upfront! Before you can shift your attention from one object to another, you have to recognize that object regardless of how it is oriented, what the lighting conditions are, what the background is and so on. They aren't saying that this is all serial.

    According to my understanding of the subject, this is not true. High speed photography has demonstrated that vision occurs as discrete (serial) episodes called saccades. The choice of focal point is not based on higher level processing in the visual cortex, but rather is controlled by the Superior Colliculus (sp?) which is not part of the cortex. In fact, from what I've been told by researchers in the area of neurobiology, a human subject's eyes will repeatably focus on the same points in an image when presented at different times. Typically, edges and corners might be favored. Each such episode takes on the order of 50 or 100 msec, and input from a field of view about the focus point is fed into the visual cortex. Apparently, it is at the higher levels of processing that we turn these discrete, serial, images into a smooth, fuzzy view of the world about us.

    It's a hot topic in neurobiology and really quite neat to learn about.

    Cheers. Sapphire.

  • There is actually a theory of consciousness that suggests that our subconscious models a large number of things in parallel at any one time, and these compete for consciousness status.

    The model that wins is what we view as our consciousness - the models continually compete, so what we consciously think of changes in response to new inputs.

    I'm not sure if this supports your argument or mine!!!

    -Shane Stephens
  • Research tends to indicate that rapid readers do recognize entire words (possibly even groups of words) instantaneously -- but that doesn't necessarily mean that you're processing the sight of each letter individually, in parallel. Instead you're recognizing the shape of the entire word.
  • Being an artist myself, I know how wrong you are in this assumption. Ever watched an artist's eyes while they're drawing from life? Do you think they have a far-away look, or do you think they're analyzing details point by point? Yes, artists take a step back to get the "whole picture", but they are continuously getting up close and personal to get the details right and to make sure things add up.

    Actually, it would be interesting to study how artists look at items when they're drawing. Most good artists don't look at features, but look at features in how they relate to other features. You don't just draw a nose, an ear, or a mouth, but you draw them a little at a time, a line here, a line there, some shading here, some shading there, in relation to each other to build the face. Now, would that not characterize working in parallel?
  • This bit from the article might explain it:

    The N2PC identifies the location of brain waves as emerging from either the right or left side of the brain. By arranging the experimental situation, Luck was able to use N2PC to identify whether a person was processing visual signals one at a time or simultaneously.
    It seems that by placing the blocks on opposite sides of the board (left and right), looking at the left block would elicit a higher amount of activity in the right side of the brain while examining the right block would fire up the left side. I believe these differences were what they were looking for. If the subject were able to examine both blocks in parallel, the two halves of the brain would work simultaneously. The experiment showed a 1/10th second or so difference that was always right -> left, indicating that they focused their attention on the left block followed by the right.

    The article didn't really explain this, though, so this is just my educated guess.
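
    (Here's a back-of-the-envelope Python sketch of the logic I'm guessing at. This is only my educated guess made concrete, not how the real N2PC analysis works, and all the numbers are invented: if attention visits the left block and then the right one, the two hemispheres' responses peak roughly 100 ms apart, whereas parallel processing would make them peak together.)

    def peak_time(trace):
        """Index (in ms) of the largest value in a simulated response trace."""
        return max(range(len(trace)), key=trace.__getitem__)

    def hemisphere_lag(serial, length_ms=400):
        right_hemi = [0.0] * length_ms   # responds to the LEFT visual field
        left_hemi = [0.0] * length_ms    # responds to the RIGHT visual field
        right_hemi[150] = 1.0                      # attend the left block at ~150 ms
        left_hemi[250 if serial else 150] = 1.0    # right block 100 ms later only if serial
        return peak_time(left_hemi) - peak_time(right_hemi)

    print("serial attention lag:  ", hemisphere_lag(serial=True), "ms")   # about 100 ms
    print("parallel attention lag:", hemisphere_lag(serial=False), "ms")  # about 0 ms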
  • It happens all the time. In fact, the average person can have about seven cognitive threads at once.
    For example, about 10 seconds ago I was thinking about what I just wrote, thinking about what I was going to write, kneading a ball of Sculpey in my fingers (for no real reason) and thinking about what it felt like, and noticing the sound my computer's fan is making. That's 4 right there.

    --
  • For example (assuming a guy audience), can you talk with someone while you're watching the TV? I can't. Most women can.

    I know, and it drives me bloody insane!


    Berlin-- http://www.berlin-consortium.org [berlin-consortium.org]
  • This is probably down to learning. I went to an "open plan" junior school. There was always lots of commotion. Later in life (senior school, university, etc.) it was always possible for me to study with all sorts of commotion going on around me, whereas my peers had problems with the slightest distraction.

    BTW, just to be pedantic, all our brains are equally evolved... some just work better than others ;)

    In conclusion, people's efficacy at "multi-tasking" may well be based on the environment they grew up in.
  • I worked at an Air Force lab in the Human Resources department where they were doing cognitive and experimental psychological testing (in order to develop better ways of training pilots, or whatever). I would sometimes have discussions with my "mentor" (I was an intern, programming though) about human visual processing. One stream of thought was that one had to first /mentally construct/ an object before it could be recognized/identified/perceived. The other stream of thought said it was at a much lower level, just past the point of stimulation in the eye's cones and rods, where the image was constructed before the brain ever "thought" about it. I forget the names of the two streams of thought.

    Anyway, it seems to me that this article is at a much higher cognitive level. If you are /asking/ the subject to do something, you have given them a goal, and a reason to premeditate a process to solve the goal. All the humans (in this case) chose to move their attention serially from cube to cube. This is far from saying that they could only "perceive" or "recognize" the cubes on a serial basis. They probably sat down and saw a whole bunch of cubes (parallel), and then /decided/ to examine them serially. It seems very fishy to me to conclude that the brain therefore processes the images serially.

    What if the test material wasn't graphical? What if it was just a multiple choice problem? Of course you'd examine the possible solutions /serially/... you wouldn't be able to examine them /all/ at once, and if you did, you'd do a pretty bad job of evaluating them.
  • A commercial entity (a corporation) 'lives' for profit, for example. This explains why a corporation comprised of basically decent, moral human beings can routinely commit immoral acts. The whole is more than the sum of its parts.

    IMHO the only way to stop corporations from behaving immorally is to structure them in such a way that the individual moral decisions of the employees are not stifled as they are in a traditional corporate structure. What structure would work best, I don't know, but the Internet doesn't seem to be any better (see recent slashdot story on computer ethics).
  • Perhaps both..

    I always just consider my "subconscious" to be that which is handling and analyzing everything that I'm not consciously thinking about. I don't think it's much of a cognitive process, but mainly abstract pattern recognition. If an interesting pattern is discovered, you'll "notice" it.
  • I'm no neurophysiologist, but it seems that regarding a neuron as a mere switch may be understating its function. Perhaps massively understating it.
  • Humans dominant? Insects and bacteria rule the world...we conveniently get to live in it, and in our hubris, self-proclaim that we're "Kings of the world!"
  • Did anyone catch some little blurb, probably from Science News, that showed that the distribution of the different color-sensing cones is essentially random, yet when we look at, say a red wall, we see a uniform red wall...

    Maybe the brain does a lot of serial processing of data from the optic nerve, but the optic nerve and retina also do a lot of signal processing in and of themselves.

    I would rather think that we have lots of parallel/simultaneous subprocesses that are pipelined serially...

    Student: "Is it a wave or a particle?"
    Physics Buddha: "Yes."
  • by remande ( 31154 ) <remande.bigfoot@com> on Wednesday September 08, 1999 @08:54AM (#1695037) Homepage
    I don't think that "artificial" intelligence exists, I'm not convinced either way for extra-terrestrial intelligence, but I know that non-human intelligence is here on Earth today.

    We humans have developed organizational intelligence. Groups of human brains, hooked up with the appropriate networking, can themselves become an alien intelligence, as different from human intelligence as human behavior is from cellular behavior.

    For a long time, this has been mostly the province of corporations and governments. Ever wonder why such entities often lack common sense? It's because they are made up of humans, but aren't human. Congress is a group of over 400 humans; it doesn't act as a human, but can be modeled as an intelligent, alien being.

    Today, we have the Internet. On a smaller scale, we have Slashdot-style phenomena. These are virtually those "Beowulf clusters of human brains". It is just another alien intelligence.

    The big difference between the Internet and government/corporate organizations is in the interhuman connectivity. In governments and corporations, the governing layers are codified into a bureaucracy. This causes specific people to act as chokepoints, and that in turn limits the number of people that can interact effectively. On the Internet, the governing layers are a lot less codified. This requires a lot more data filtering at the various nodes (humans)--spam and similar phenomena travel better across the Internet than through your office--and a lot more bandwidth. But the Internet is all about bandwidth.

    Bureaucracies are alien intelligences made of humans. Internet communities are alien intelligences made of humans. They are different species of alien, and they are fighting each other.

    Why are bureaucracies afraid of internet communities, and vice versa? The answer is easy to see if you stop thinking in terms of humans. The bureaucracies are seeing a brand new type of intelligence. The "Linux community" is a perfect example. Over the course of eight years, this thing has gotten Microsoft, one of the Lords of Bureaucracy, frightened. A race war of organizational intelligences is brewing, if not already being fought.

    Is this the end of humanity and the beginning of organizational intelligence? Hardly. We have been living with bureaucracies since the Pharaohs, possibly before. But just the knowledge that there are inhuman intelligences out there helps you to better understand them, and to better interact with them.

  • Why follow nature's mistake? Just because our meatware is too limited to process stuff in parallel doesn't mean that a computer couldn't. The question should be are we going to make the machines see like us, or see BETTER than us. I'm all for having the machines do things better than us whenever possible.
  • There have to be exceptions to this theory that the brain is serial. I can think of one example, though it is not an everyday one:
    It is said that Leonardo Da Vinci was able to write with both hands at the same time and in a different language with each hand. This may be somewhat of a myth and is far from provable, but if it is true then I would think that the brain has to be parallel.

    I agree with one poster who said something about the brain being able to change from parallel to serial. Sometimes I find myself barely able to concentrate on any one thought, and other times I am able to think of several things at the same time (As in read a newspaper or do schoolwork while listening to the radio or carrying on a conversation).
  • Though on the flip side of the coin, without using anything but your peripheral vision, try to count the number (or even color) of major items on the desk in front of you.

    Though if they are in a pattern, you don't, which is interesting in and of itself. We don't have to count the dots every time a die comes up six.
  • I thought the "7 things" theory was dealing more with the number of *tasks* or items in your short-term memory.

    I don't believe it's possible to, in a parallel fashion, divide your attention between more than one thing. It may *seem* like it (driving and shaving, for example), but you're just switching back and forth between each task and probably don't notice it.

    Perhaps our definitions of "cognitive thread of thought" differ, but the only way I can imagine a person being able to truly think about each of the things you mention above at the same moment (in a parallel fashion, and not just "task-switching") is if their brain were somehow divided into four independent chunks, and even then, each chunk probably wouldn't know about the other 3 trains of thought. I think we're just defining "cognitive thread" differently.
  • I am beginning to wonder if this article is a bit deceiving in its nature. It talks of the experiment of looking for a nick (scratch, notch) in either a red or a green block - and a variety of blocks was presented.

    I imagine that this is a serial experiment in itself because of how the mind works: trial and error (or the scientific method).

    • It works by checking the first block it comes to for color - is it red or green?
    • Now check if it has a nick in it
    • go on

    or, in rough C-like pseudocode:

    for (i = 0; i < blocks; i++) {
        if ((block[i] == red) || (block[i] == green)) {   /* color check first */
            if (block[i] == nicked) {                     /* then look for the nick */
                item.pick(block[i]);
            }
        }
    }

    I don't believe this proves that the brain processes images serially - just experimentation data.

  • Right -- but that's just pattern recognition (something that is done in parallel), and not a cognitive analysis. You eventually just "know" that that pattern of 6 dots is, well, 6 dots.
  • Although these researchers may have been the first to actually conclusively prove that Selective Visual Attention is a serial process, most of the recent evidence was pointing in this direction. Most research agrees that selective attention

    consists of two functionally independent, hierarchical stages: An early, pre-attentive stage that operates without capacity limitation and in parallel across the entire visual field, followed by a later, attentive limited-capacity stage that can deal with only one item (or at best a few items) at a time. When items pass from the first to the second stage of processing, these items are considered to be selected. (Theeuwes 1993, p. 97f, original italics)

    Now whether or not this researcher is referring to the first or second stage is not clear from the article. As the research had the subjects looking for a red or green block with a nick in it, I assume he is not making a claim about the first stage. This stage has always been considered parallel, and he would have to prove it is not with a single-feature task, not a multiple one like he used. However, from the tone of the article and the quote, it seems that he IS making this claim.

    If the author is making the claim that single-feature detection is serial, I feel that his experiment will be soundly ripped apart by most psychological researchers, as we have a large, convincing body of evidence that this stage is parallel. If he is not making this claim, then he really wasn't adding anything new to the scientific body, because we already KNEW that the second stage was serial.

    Click here for more info [www.diku.dk] JT
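
    (In code terms, my reading of that two-stage model looks something like the Python sketch below. This is my own illustration of the description quoted above, not anyone's actual model; the scene layout is made up to match the red/green nicked-block task.)

    def preattentive_stage(scene, wanted_colors):
        """Stage 1: capacity-unlimited, 'parallel' across the field; just flags candidate items."""
        return [i for i, item in enumerate(scene) if item["color"] in wanted_colors]

    def attentive_stage(scene, candidates):
        """Stage 2: limited capacity; inspects the flagged items one at a time."""
        for idx in candidates:
            if scene[idx]["nicked"]:          # the fine-grained check that needs attention
                return idx
        return None

    scene = [
        {"color": "blue",  "nicked": False},
        {"color": "red",   "nicked": False},
        {"color": "green", "nicked": True},
        {"color": "blue",  "nicked": False},
    ]
    candidates = preattentive_stage(scene, {"red", "green"})
    print("selected items:", candidates, "-> nicked block at index", attentive_stage(scene, candidates))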

  • I realize this is barely worth a 1, but...

    Just because our input(s) may be serial, and just because we only have one CPU, doesn't mean we can't have many processes going on at once.

    Most of the evidence people in here have presented to argue against the serial processing theory sounds a lot more like multitasking, or perhaps even closer to threading. Though you may have two threads going at once, say each focused on one item, you can't actually advance more than one of them at a time. Then occasionally you can put both threads in a wait state while you start another thread to process the results.

    Also keep in mind that a lot of what we might perceive to indicate parallel processing is actually being done by recognized behavior analysis, which was burnt into us during our very early years.

    We're also great at filtering, so that we can store one image or sound, but only focus on certain aspects of it. Later we may recall other aspects that we weren't paying attention to.

    Can you tell me what flavors make up the flavor of Coca-Cola (without looking it up the same place I did)? Can you perceive a taste in parallel and pick out each part? Maybe you can take in a sample of data, filter it for one taste, take another sample, filter it a different way... but thats about it.

    Same with musical chords -- this is purely a serial observation which we need to filter in order to pull out different bits, and only by filtering out other sounds which we recognize. Often the best we can do is pattern-match one chord with the sound of the same chord we have heard before. Can even a trained ear recognize each note of a chord it hasn't heard before?

    Okay, I don't have a degree or a research paper to back this up (I wish I did), but neither do most of you.

    With a one-track mind,
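
    (For what it's worth, here's a little Python sketch of that "two threads, one focus" idea. It's my own toy, not from any paper: two worker threads run on genuinely parallel hardware, but a single lock, the one-track attention, means only one of them does the detailed work at any instant.)

    import threading, time

    attention = threading.Lock()   # the single-track bottleneck
    log = []

    def watch(item, samples=3):
        for _ in range(samples):
            with attention:                      # only one item gets detailed processing at a time
                log.append(f"attending to {item}")
                time.sleep(0.01)                 # the slow, detailed, serial part
            time.sleep(0.001)                    # let the other thread grab the lock

    threads = [threading.Thread(target=watch, args=(name,)) for name in ("left block", "right block")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("\n".join(log))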
  • by Anonymous Coward
    My wife handles electronic background noise MUCH better than I can. She can talk on the phone with a TV playing at normal volume in the same room. She can even sleep with the TV on. I can't handle ANY of that. I'm either watching TV or the darn thing is OFF because I'm doing something else.

    I've had to explain to her several times that my brain is not as evolved as hers. Therefore, if the TV is on, no talking. If you want to talk, turn the TV off. If you forget, you have no right to yell at me for watching the news while you're trying to get my opinion on whether or not we should have dinner with the Andersons next Friday.

    (Moderator -- I know what you're thinking. You're thinking, "Is he off topic?" In all the excitement, I kind of forgot myself. But since the main thread IS about mental multitasking, and since this post IS anecdotal information about the topic, and since I am an Anonymous Coward who already has 0 points, and since you only have so many points to use, you have to ask yourself a question -- "Do I want to waste my points on this guy?" Well, do ya?)
  • Mmmh, apparently the guys made it into Nature, so bow to my masters. Yet, I never trust people who try to equate the human brain with a machine, at least not with any machine that we yet know of.

    Culture is so influential. It's so obvious to me that one would scan one object at a time when searching an array of items for a tiny detail.

    That's the way you're told to read, for example, albeit differently among different cultures.

    In languages using phonetic alphabets, one's told to scan letter by letter and wait for a space, then put the letter in a string and possibly check with one's linguistic database for matches.

    If you're playing pool, however, and you're watching your ball go, you're paying attention to a lot more 'events' at a time. You follow the ball's path, estimate its direction before it bounces on other balls (and often I get to picture its course and 'draw' it on my current 'view'). But you'll find yourself also keeping record of what color the first ball you hit was, which balls are possibly heading straight into the holes and which are not, and so forth.

    Again, play tennis and your eyes/brain will be analyzing ball speed and course, estimating the bounce and checking if the ball lands out... it happens often that you're aware that the ball's out but move and hit it anyway. That's because orders to your muscles have already been sent, but it also means that your brain has both ruled the ball out and estimated its path. One of the two has probably occurred before the other, but that is probably because the two processes were independent yet not equally difficult to 'compute'.

    Bottom line, I'd say that the 'attention area' capable of being processed is small, so you're naturally prone to shift from one point to another because of the limited 'screen' you have. Yet, if details aren't too tiny, and don't require great resolution, like balls moving, a broader scope is enough to let you observe them all and analyze them more or less in a parallel fashion...
  • Hmm... how much of the serial processing debate is affected by the fact that most humans can focus our attention on only one thing at a time, really, for details? Sure, there are peripheral triggers around the "attention space", but since we don't have eyes like chameleons, we're kind of stuck with some sort of parallel processing of serially-gathered detail information.

    Ever wonder why chickens and pigeons do that "head thing" when they walk? Part of it is due to the latency in how fast their eyes focus (which is probably related to how fast their brains absorb detail). They keep their head steady until they have to move it, to give their eyes a chance to focus...
  • Interesting question. Ask someone else, but this might help.

    Film (you know, that old-fashioned stuff they used to project movies with) goes at 24 frames per second, because any slower and the human eye sees the flicker. Why isn't 24 frames good enough for video games? Because the monitor is also flickering. When you have a flicker on top of a flicker, you get problems that you've probably seen.

    Of course, that's a real half-assed answer for you. Subliminal images are much shorter than 1/24th of a second, and we're pretty sure that some part of our visual system picks them up.

    Furthermore, the whole system is influenced by all sorts of strange things. Ever get in a crash or a fight, and remember seeing things in slow motion? That was adrenaline at work, overclocking your whole body including your brain.

    And of course, in the end, I don't think that the way human vision works could really be described in terms of frames/second. There are even things like compression going on.

    I hope someone posts a real answer...
  • Wow, I wonder how the brain can shift so rapidly from one object to another. It is awe-inspiring. And the fact that each one of us has one in our heads is nice :)
  • I suspect that the article is talking about very high level cognitive processes. But it's clear that a heck of a lot of parallel preprocessing has to happen upfront! Before you can shift your attention from one object to another, you have to recognize that object regardless of how it is oriented, what the lighting conditions are, what the background is and so on. They aren't saying that this is all serial.

    But to me, those higher processes have less to do with vision and more to do with reasoning. I might experience one thought after another concerning some object, but I still see all the objects in front of me.
  • Surely this is just further confirmation of what gestalt psychologists determined in the 19th century: that we only pay attention to a small part of our visual field at any given time. The only new aspect seems to be the neurological confirmation of what was already known through psychological experiment.

    Also, it's not the image processing that is serial. We've known for some time that that is parallel - large parts of the visual cortex recognise lines at different angles, changes in color, etc., using what are quite close to standard image processing algorithms.
  • The test seems more like it studies the method we use to examine things, which I would consider a behavioral trait, and not an indication of how the brain is working.

    Just because I look at each block in order doesn't mean that's how I think about those blocks.
  • Something can be processed in parallel with only one processor.
  • As the article notes ... this serial processing happens faster than the conscious mind can interpret and as such the difference between serial and parallel visual processing causes little difference in perception. It would be very interesting to see the processes involved in translating these serial images to memory and the crossover to parallel neural pathways.
  • The raw image data is of course handled in an extremely parallel fashion, but the cognitive process involved, identifying patterns and discriminating between one object and another, is serial.

    This really shouldn't be that big of a surprise. Try watching two or more moving objects simultaneously, and pay attention to how you do it. Your attention ends up being focused on one item at a time, albeit relatively quickly (depending on how fast you think and how much caffeine you've had).

    Though I basically agree with their findings, I'm not too thrilled about how this experiment was set up. They basically *forced* the participants to think serially by placing both of the suspect blocks on opposite ends of the board (yes, I know that's really the only way they could reliably determine which item was being focused on and when). The eyeball itself isn't capable of doing a detailed analysis of imagery except in the very small area in the direct center of its field of view. It's only logical for the participant to immediately identify the different colors peripherally (and perhaps even in parallel -- the experiment never delved into this part) and then concentrate a detailed glance first on one block, then on the other. Biologically, it had to happen that way. Their eyes couldn't have efficiently made the same analysis in a parallel fashion.
  • "We are the first research group to show definitively that the human brain processes images serially-paying attention to only one object at a time and shifting rapidly from object to object"

    Now my question is: might this not have to do with the human eye focusing on one object at a time and switching between multiple images quickly, to try to bring them into focus as simultaneously and seamlessly as possible?

    "It was important that we knew the order in which they paid attention to the colored objects, because the N2PC works by correlating the brain waves coming from each side of the brain over many statistical trials, so we had to always have them search in the same order"

    He acknowledges that the brain is paying attention to certain objects based on color in a certain order, but attributes this to the brain and not to the input device. I'm going to make a crude analogy which will probably get shot down, but if you can think of a better one, please post it. It's like taking a mono VCR hooked up to mono speakers vs. a mono VCR hooked up to surround-sound speakers. You know the surround system is able to process the info better, but it can't, because of the input device's shortcomings.
  • It seems totally intuitive. The only news here is that they've got documented data to back up intuition.

    We can only focus on one thing at one time, therefore we can only handle one visual input. I'd venture the guess that all our I/O is serial - with quite a bit of DMA capability thrown in.

    We can tune in on a single conversation in a room full of people, and switch focus from one to another, but it's real hard to keep track of more than that. We remember music sequentially, but unless we're well trained in music, we cannot correctly conceptualize chord structures.

    We become completely oblivious to the goings-on when we watch (and listen to) TV. We have a difficult time separating olfactory inputs - so we process those serially as well. "What is that? Lemon? ... And sage, and rosemary... "

    The only sense that seems parallel to me is the tactile. Though, since tactile input is the summation of very many single (bit) neurons, the parallelism we experience is probably the result of a lot of preprocessing of stimuli in the sensory nervous system and the spinal cord.

    The neat thing is when we tune all the senses into the same stream of data. Remember last Christmas? The scent of the cooking goose, the sound of Grandma Got Run Over By A Reindeer, the blinking of those damned lights and the itchy wool sweater...

    With all of the senses delivering a variety of data that shares the same conceptual context, the imprint of the event is more powerful than if the serial stimuli from the different senses were reporting on events that we know are not related. This is probably why we remember better those times when all our senses are firing in parallel on the same concepts.

    I'd venture the guess that as this research progresses, we will learn that we manage some pseudo-parallelism in our input processing through a similar mechanism to the one we rely on for memory. Chunking, was it?

    For example, if shown a group of objects, we can visually process them based on similarity (i.e. they're all red, square, whatever), so we notice more than if they were all distinctly different; otherwise we get lost in the volume of data that we have to take in.

    As with the chunking that takes place when trying to remember more than the 7 (avg.) simple items, finding commonality among the items we try to process through the senses makes it possible for us to move more data through our inputs. Sort of a lossy compression really. :)
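    Here's a toy sketch of what I mean by chunking (my own illustration, not from the study): group the items by a shared feature so the number of things you actually have to track stays near that magic 7.

      from collections import defaultdict

      def chunk_by_feature(items, feature):
          """Group items by a shared feature so we track chunks, not individuals."""
          chunks = defaultdict(list)
          for item in items:
              chunks[feature(item)].append(item)
          return dict(chunks)

      # Twelve individual (colour, shape) items is too many to hold at once...
      scene = [("red", "square"), ("red", "circle"), ("green", "square"),
               ("red", "square"), ("green", "circle"), ("blue", "square"),
               ("red", "circle"), ("green", "square"), ("blue", "circle"),
               ("red", "square"), ("green", "circle"), ("blue", "square")]

      # ...but only three colour chunks, which fits comfortably.
      by_colour = chunk_by_feature(scene, feature=lambda obj: obj[0])
      print({colour: len(group) for colour, group in by_colour.items()})
      # {'red': 5, 'green': 4, 'blue': 3}

    Lossy, like I said: you keep "five red things" and throw away which five.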
  • Although my background isn't in neuro or such, this seems to be a questionable experiment. If I'm looking for a detail in an image (a nick on a block) I will examine each block sequentially, and look for the feature. If it's a big nick, it may not take long, but if it's small, I can certainly tell that I'm examining each block.

    On the other hand, does this experiment actually indicate that the brain is _interpreting_ a scene serially? (That's a tree, that's grass, that's an anvil dropping on my head) Or just processing a task serially? (Where is the oak tree?)

    I guess the article didn't really give enough information; perhaps the experiment was more than indicated.


    But then again, what do I know.
  • The experiment wasn't trying to determine whether the brain can think or task in parallel, but how we analyze the data we see.

    The way I see it, that analysis is being performed in a massively parallel fashion (like everything else in the brain), but is only being focused on one particular item or object in our field of view at a time, which makes it parallel up close, but still basically serial.
  • I have always believed/heard (anyone have some scientific info on this?) that men and women interpret data differently. For example (assuming a guy audience), can you talk with someone while you're watching the TV? I can't. Most women can.

    Whuzzup with that?

    BTW, can you watch the telly while talking on the phone? (Not me!)
  • How does this experiment they mentioned (N2PC?) help them know serial vs parallel? All that does (according to the article) is let them know which side of the brain is doing the work.

    How does that let them draw their conclusion (that object recognition is serial)? And while I'm asking questions: how did they manage to know which brain activity was the stuff they were interested in, rather than some housekeeping-type function (breathing, heart rate, etc).

    -- Baffled
  • Hmm... So, I wonder what would happen if we somehow tapped into our brains, formed a collective, and created some sort of Beowulf cluster... Isn't this the internet? Take all those "Serial" minds and make 'em useful. -Duranos
  • Look at PCs. They dominate, and they're not the best way of doing things...

    SNAFU!
  • This only proves (if it proves anything) that the brain is time-shared in the initial stage of analyzing visual information. The article says that the brain switches focus about every 0.1 second. But remember that the consciousness has a 0.5 second "lag" that gets masked by some reality-defying neural algorithms to get the "real-time" effect. And there is reason to believe that parts of the brain are far too complicated to be described as "parallel" or "serial" (e.g. holographic theories).

    What I really want to know is whether it uses a monolithic or microkernel architecture...
  • That's actually one of the common pitfalls of beginning art students, the fact that they tend to focus on details rather than the object as a whole. The exercise of "contour drawing" is designed to combat that, by maddeningly forcing you to work on what amounts to details - while "gesture drawing" focuses on gross generalizations of shape. Gesture drawing tends to capture the overall form of the subject, while omitting the details, and contour drawing tends to be accurate in terms of details, but you end up with badly proportioned overall figures.

    In both cases, they're trying to teach a student how to overcome the limitations of being locked into one "mode" or another.

    As far as what "most good artists" do, I don't think that's something you want to generalize, you're really talking about what "most good draftsmen" do. In terms of "recording an image", as the data appears on the recording media (pencil marks on paper), it's going to be serialized, because the artist is typically holding one pencil. How the brain "solves" the entire image may not be as methodical as how an inkjet printer prints an image (one line at a time), but that's probably because a whole "rough" view of the subject has to be worked out first, to preserve perspecive and proportion, otherwise, I think the human brain has a tendency to focus in on details, breaking the image down into small sections, without tying them together.

    I think the data-processing equivalent would be creating an overall shape for the subject (cylinder, cube, sphere, some primitive), then determining its orientation and proportions, and "shaping" it down to reflect details. However, I don't think machines would have the same limitations as the human brain, because the sensing device, say a CCD, rasterizes the image, and therefore you always have a frame of reference to stick to, to judge proportions.

    I think it was Albrecht Durer (not sure) who devised a device for viewing objects up against a wire-mesh grid, so that if you held your head steady, you could accurately work on small sections on your paper (or in his case, I think it was a silver engraving plate), and not have to worry about the whole view - (I don't think he actually used it much, but it shows what he was thinking about). But using this kind of technique, a machine could break down a scene into sections (sort of like how JPEG works), and then parallel threads could be assigned to work out processing the details; the spatial relationships between the sections will always work out because of the rasterization.
    However, this addresses rendering a 3d image onto 2d. We know how raytracers work.

    The question is, what sort of input would machine vision use to process 3d images as 3d information? Stereoscopic CCDs? Lasers? Radar? I would think that compiling stereoscopic 2d images into a 3d representation in the computer's memory would be computationally intense. Visual cues for depth information are notoriously ambiguous, and with machine input, you would think that using some kind of range-specific system, radar, etc. would be best. Bottom line, I think how a machine would process vision would probably depend most on the input mechanism.
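    Something like this toy sketch is what I have in mind (the names are mine, not any real vision library): rasterize the scene into tiles, hand the tiles to parallel workers for a cheap per-tile statistic, and let the fixed grid keep the spatial relationships honest.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def tile_detail(args):
          """Cheap per-tile 'detail' measure: mean horizontal contrast."""
          (row, col), tile = args
          contrast = float(np.abs(np.diff(tile, axis=1)).mean()) if tile.shape[1] > 1 else 0.0
          return (row, col), contrast

      def analyse_scene(image, tile=32, workers=4):
          """Split a 2-D image into tiles and analyse them in parallel."""
          h, w = image.shape
          tiles = [((r // tile, c // tile), image[r:r + tile, c:c + tile])
                   for r in range(0, h, tile)
                   for c in range(0, w, tile)]
          with ThreadPoolExecutor(max_workers=workers) as pool:
              return dict(pool.map(tile_detail, tiles))

      # Usage: find the tile with the most going on in a synthetic 128x256 scene.
      scene = np.random.rand(128, 256)
      per_tile = analyse_scene(scene)
      busiest = max(per_tile, key=per_tile.get)

    Because the tiles sit on a fixed raster, the workers never have to negotiate proportions with each other, which is exactly the luxury a draftsman doesn't have.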

    (Art school dropout, now playing with computers)

    "The number of suckers born each minute doubles every 18 months."
  • Which came first, the chicken or the egg? ;)

    Seriously, though, this does sound right, considering the fact that people tend to have difficulty focusing on more than one thing at a time. The old adage about chewing gum and walking for some folk. :)

    But I wonder, is this serial processing due to the need to comprehend in a temporal fashion though?

    If we processed visually in parallel, then our concept of time would be blurred, would it not? Or am I just not getting enough sleep and swapping my attention between this screen and the second screen in an attempt to get work done hampering my ability to understand?

    Who knows. :p


    - Wing
    - Reap the fires of the soul.
    - Harvest the passion of life.
  • Redmond, Wash. Wed. September 8 1999.

    Microsoft today announced Windows for Neurones, the brand new Microsoft operating system for life critical operations. No release date has been set yet, but Microsoft hope to have a release version on the shelves by the fall of 2000.

    It is thought that Microsoft have been working on this product for several years, early alpha versions of which can still apparently be seen in institutions around the US. "We had problems with the initial cooperative multitasking that we tried. Processes would sometimes end up in a loop, and not release the processor for other tasks." an insider said. "The results of these early tests can be seen as high up as ex-president Ronald Reagan. He was an early alpha tester, but developed problems. Unfortunately, the uninstall wasn't available then."

    Microsoft cite several advantages to using the OS:
    1. Your brain is no longer dependent on old proprietary systems, some of them as old as several million years! We've learnt a lot in all those years. Windows for Neurones (sometimes referred to as Windows Neurones Technology, or just WinNT) uses such modern features as pre-emptive multitasking, and virtual memory.
    2. Your brain can now use cheap, off-the-shelf productivity software. Studies have shown that a lot of people have to have productivity tools (calendars, address books etc.) as external programs or peripherals. WinNT has all this built in. It is also easy to use, "it's as if it knows what you are thinking," an insider said.

    Some people have expressed concerns over the scalability of the new WinNT. While older systems (such as AT&T Metabolism Control and HP Coordination) have exploited the natural parallelism in the typical brain, WinNT's new visual system appears to process data in a serial fashion, limiting the ability to exploit the brain's parallel capabilities.

    "Rubbish," said a MS insider, "It has been shown in independent studies that our approach is upto 300% faster in processing visual data, for example." he said, quoting a recent study by Mindcraft Inc., a service-oriented, independent test lab. The visual aspects of the OS, what the person sees, has been controversial in recent discussions.

    Existing OS providers in this critical industry also slam WinNT's reliability, based on test observations. "We have systems with a mean time before critical failure of 100+ years. I don't understand why anyone would want to upgrade. While brains running our OS consume ~20% of the body's metabolic rate, we estimate WinNT brains to use up to 30% of the body's metabolic rate, as it has no power saving facilities. Existing systems have the ability to sleep, saving power, but I've heard WinNT can keep you up all night. This can cause real problems," said a rival brain OS provider. Even if people think the visual aspects are better, which is debatable, a nice visual interface is a waste of time if your heart stops beating! Some things are simply more important than good visuals.

    Microsoft refused to release licensing details, but it is said not to be following the recent trend of open source software, and open APIs and protocols. It is said to include a new licensing agent, called 'Paranoia', which prevents third parties from getting too close and examining its workings, or 'reverse engineering' as it is known.
  • It's often been said and proven that people try to create things in their own image... whether it be physical, mechanical or ideological. Perhaps we're on the right track to having a technological replicant of a human brain.
  • As I understand it, they used the detection of a brainwave in one half of the brain or the other as an indicator of image processing done in that half. The details of the setup are a little unclear, but for this to work the way they wish (detect a block to the left -> brainwave goes 'ping' in the right side of the brain) the subject needs to keep his/her eyes directed straight ahead at the center of the image.

    In a human eye all data collected on the left side of each eye is sent to the left part of the brain and vice versa. Thus, due to the mirroring of the image done in the lens, the right half of the brain processes the left half of what is placed before you.

    Now... IF the subject has to keep his eyes straight ahead, isn't it likely that it takes some concentration and effort to discern details (nicked block or not) in an area removed from the focal point?
    Would this not provoke serial behaviour even if the decoding itself was done in parallel?
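    My loose understanding of how the averaging pulls that off, as a toy sketch with simulated numbers (not the real N2PC paradigm, and the sizes are invented): a small lateralized deflection is invisible in any single noisy trial, but survives averaging across a couple of hundred trials.

      import numpy as np

      rng = np.random.default_rng(0)
      n_trials, n_samples = 200, 300           # 300 samples ~ a short post-stimulus window
      signal = np.zeros(n_samples)
      signal[180:230] = 0.5                    # weak "contralateral" deflection, buried in noise

      # Every simulated trial is the same weak signal plus much larger random noise.
      contra = signal + rng.normal(0, 5, size=(n_trials, n_samples))
      ipsi = rng.normal(0, 5, size=(n_trials, n_samples))

      # Single trials look like pure noise; the across-trial average does not.
      difference = contra.mean(axis=0) - ipsi.mean(axis=0)
      print(round(difference[180:230].mean(), 2))   # ~0.5: the deflection emerges
      print(round(difference[:100].mean(), 2))      # ~0.0: the baseline stays flat

    So the averaging tells you which hemisphere carries the attention-related activity, trial after trial; whether pinning the eyes straight ahead is what forced it to go one side at a time is, as I said, a separate question.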

  • If you are staring blankly at, say, your computer monitor, you would be viewing it serially.
    If you are staring at your computer monitor at savory JPEGs, it would be digitally, with your digit in your hand.
  • I agree, the experiment seemed rather contrived. I would think that some other form of experiment would need to be done to give more insight into this process. Say, you see the same picture of blocks but you have to select which block is "red" for instance (a random red block). I would imagine (I'm not a cognitive scientist by any means) that that sort of information would be processed in parallel. When looking for a particular color, I wouldn't think a person would look at each block individually. I seem to remember something about the way the brain processes color, especially the color red... hmm...
    In any case, more experiments should be done before making statements such as the article is making. IMHO anyways... :)
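    A toy model of that hunch (made-up timings, nothing measured): if the odd colour "pops out", the time to find it shouldn't depend on how many blocks are on the board; if each block has to be inspected in turn, it should grow with the set size.

      import random

      def popout_rt(n_items, base=250):
          """Parallel 'pop-out' search: roughly constant time regardless of set size (ms)."""
          return base + random.gauss(0, 15)

      def serial_rt(n_items, base=250, per_item=50):
          """Serial search: on average, half the items get inspected before the target."""
          return base + per_item * (n_items / 2) + random.gauss(0, 15)

      for n in (4, 8, 16):
          pop = sum(popout_rt(n) for _ in range(100)) / 100
          ser = sum(serial_rt(n) for _ in range(100)) / 100
          print(f"{n:2d} items: pop-out ~{pop:.0f} ms, serial ~{ser:.0f} ms")

    If the find-the-red-block version came out flat like the first curve, that would tell us something the nicked-block task can't.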

    Ribo
  • One could improve human vision greatly by placing the neurons that carry the signals from the sensors to the brain behind the retina, rather than in front where they obscure the light. While we're at it, why don't we turn the sensors about so that the sensitive part is facing outwards?

    I can't wait for a 1.1.x human body ;)

  • I agree. Simply because the human has arisen to become the dominant form of life on the planet, what does this have to bear on the overall scheme of intelligence? If machines can and do someday become intelligent, and do indeed surpass human intelligence, it is very unlikely that they will have all the same architectures in place, or be smarter just because they are faster. Forget machines; who knows what other forms of life are out there, possibly more intelligent than our own. Who knows how their brains process information?
  • by unAnonymous unCoward ( 63952 ) on Wednesday September 08, 1999 @08:15AM (#1695103)
    I seem to remember from long ago that the switching time for neurons is the same as that for mechanical relays... on the order of milliseconds. Such a slow switching time makes it impossible for vision to be operating in anything other than a massively parallel manner.

    Given this, the article will have to do better than just state `vision is serial' without specifying how that is possible when using slow neurons.
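    Back-of-the-envelope version of that argument (textbook ballpark figures, not numbers from the article):

      switch_time_ms = 1.0        # rough time for one neural "step"
      recognition_ms = 150.0      # typical time to recognise a complex image

      max_serial_steps = recognition_ms / switch_time_ms
      print(max_serial_steps)     # ~150 sequential steps, at most

    A serial computer burns millions of operations on anything resembling recognition, so whatever the brain does in ~150 steps has to be spreading the rest of the work across an enormous number of neurons at once.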

    Joe
  • now my question is, might this not have to do with the human's eyes and focusing on one object at a time and switching between multiple images quickly to try to bring them into focus as simultaneously and seamlessly as possible?

    Bingo. The eye isn't capable of really examining something unless it's in the direct center of your field of view, which makes it only logical that a detailed glance be performed in a serial fashion. In this way I think the experiment was biased towards a serial method of examining the blocks. I bet when they first saw the blocks, though, they were able to find the red and the green block almost instantly, likely in more of a parallel fashion (since their eyes really didn't need to move).

    Though on the flip side of the coin, without using anything but your peripheral vision, try to count the number (or even color) of major items on the desk in front of you. You still end up doing it serially, concentrating on each item individually (though, it seems to me, a lot faster than moving your eyes around and focusing on each item).
  • I might have thought of it differently:

    Most of the imaging rods and cones are concentrated in the center, allowing greater detail, so it would make sense that we are used to looking at one thing at a time and changing focus between items. I see much greater detail when looking at something directly, so I might say I process things serially. One thing at a time. This does not seem true when driving for long periods of time, when tunnel vision sets in and eye movement seems almost comatose.
  • I read it as an article trying to say that "the brain is serial!", rather than saying that visual input is serial. I say that the brain can be dynamically reconfigured to be serial or parallel, depending on the problem it's facing.

    --
  • Well, then it's actually parallel, for the simple reason that while we're processing the information that our eyes are bringing in, we're simultaneously telling our eyes where to get the next bit of information from. If we were serial, then we'd bring the info in, process it, then tell the eye where to go next, process it, ad infinitum. If that's the case, then our reality would be staggered instead of fluid, because our brain would have to get input, process input, execute the move-eye function, get input, etc.
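    Quick numbers sketch of why that pipelining makes things feel fluid (the stage times are invented): planning the next eye movement while still processing the current fixation hides most of the gap.

      fixations, process_ms, plan_ms = 10, 100, 60

      # Strictly serial: look, process, and only then plan the next move.
      strictly_serial = fixations * (process_ms + plan_ms)        # 1600 ms

      # Pipelined: plan the next fixation while the current one is still being processed.
      pipelined = plan_ms + fixations * max(process_ms, plan_ms)  # 1060 ms

      print(strictly_serial, pipelined)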
  • While I think we're pretty much certain the low-level aspects of the brain are handled entirely in parallel, it's certainly possible (and even likely) that almost all cognitive tasks (those requiring our attentive thought) are done in a serial fashion.

    I wonder what it would feel like to have two cognitive threads running at once inside your brain... Two lines of thought... weird.
  • Again, these are all issues that are handled by other areas of the brain, *in parallel*. The article only really discussed the COGNITIVE PROCESSING of the imagery. Tracking of the eyeball is something handled by multiple areas of the brain.
  • A straight quote from the article:

    "Luck was able to use N2PC to identify whether a person was processing visual signals one at a time or simultaneously. "

    Bummer of a name for a probability doctor, eh?



  • I agree. Simply because the human has arisen to become the dominant form of life on the planet, what does this have to bear on the overall scheme of intelligence?

    Let's see, out of how many billions of species over a few billion years have we come to dominate so totally (unless the Ants have nukes we don't know about)? My guess it has something to do with our brains, and how they work. Hands are pretty cool (read my thoughts) but I have to assume (unless you live in Kansas) that your brain helped them along to their current level of dexterity at some point (perhaps your parents chose white collar jobs?).

    If machines can and do someday become intelligent, and do indeed surpass human intelligence,

    it'll be 'cause we want them to. I'll leave it at that.

    Read my sig, and you'll see where I fall on the debate.
  • Right, but it's entirely possible that this massive parallelization is able to pull out some abstract shapes, hues and the like and pass it up to be handled by higher areas of the brain, which examine the discrete "object" in a serial fashion.
