Science

Linux on the Brain (64 comments)

LinuxNews.com writes "Linux is surfing the curls of the human brain. In a project designed to study cerebral cortex topography as part of the Human Brain Project, Linux is being used to assimilate massive amounts of data--and bring the project in under budget."
This discussion has been archived. No new comments can be posted.

  • After reading that article, a question popped into my head: what would we use said data for? My PERSONAL fantasy is that, eventually, we will get to a point where you will simply have to THINK a command, and your mind-reading-hat (or goggles, you get the idea) interprets the neurological impulses, and gives your computer the appropriate commands.

    A curious idea, to be sure, but does anybody else out there think this will EVER happen?

    ,-----.----...---..--..-....-
    ' CitizenC
    ' WebMaster, PlanetQ3F [q3f.net]
    `-----.----...---..--..-....-
  • by kdgarris ( 91435 ) on Friday April 14, 2000 @01:03PM (#1131656) Journal
    I wonder how many Bogomips Linux running on a brain would get?

    ;-)

    -Karl
  • I haven't messed with NFS support; over here we use AFS instead.

    Transarc's AFS Clients are decent, and I recently tried using arla instead. (so I could upgrade kernels, and also try out the open source solution...) At first, it was *really* slow, but I traced that back to an ethernet card problem. Now it runs even faster!

    I know there's also the Coda project, which sounded really cool, but I guess that isn't so far along?
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • We don't seem to be that far off now, anyway. A hundred years ago mail took weeks to deliver, and now we transmit mail at near the speed of thought. I do believe mental controls will replace keyboard, mouse, joystick, steering wheel, etc...
  • I suppose they may save some money on the OS, but does anyone have an idea of how much hardware this project needs and what it costs to get up and running? Does anyone know the resolution of those brain scanners?
  • Could someone clue me into the Natalie Portman references?
    --
  • Second furious debugging night :I When the machine cannot do it, you have to simulate it.

    Besides, it'd be more interesting to have the machines decode, transfer and encode mind movements between people than just obey - they will do both, eventually.

    "^X^S - oops, where am I?"
  • By the time your mind-reading-hat, as you say, is reading your thoughts, I doubt you'll have anything in front of you that you'd identify as "a computer" like you do now. By then, the hat will probably be feeding your desires to your nanite-laden bloodstream.
  • "programmers to simply install and go, whereas purchasing software for proprietary machines usually involves budget issues and delays while purchase orders are processed."

    I've noticed the same thing at my company. When I show the CEO a list of GPL software my department will be using, and each one says "price = free", and the highest cost items on the budget list are a few printed manuals at $49.95, it is amazing how quickly things can be accomplished.

  • by pezchik ( 114064 ) on Friday April 14, 2000 @01:15PM (#1131664) Homepage
    I don't know, I would feel very strange if my computer could read my mind. She's already mad at me, and if she actually knew how often I mentally cursed her, she might just lock up Opera even more often. And I just know she'd blue screen every time I looked at her funny.

    I'm already a little worried about what she's going to do after I post this. I knew I should have gone to a computer cluster...

  • This is your brain

    This is your brain on Linux

    Any questions?

  • by stevens ( 84346 ) on Friday April 14, 2000 @01:18PM (#1131666) Homepage
    My PERSONAL fantasy is that, eventually, we will get to a point where you will simply have to THINK a command, and your mind-reading-hat (or goggles, you get the idea) interprets the neurological impulses, and gives your computer the appropriate commands.

    I'm typing this [Sandra Bullock's ass] using this new neurological interface. [Salma Hayek's chest]

    Everything I think [hot, steamy, pr0n] is going into the computer exactly as it comes to mind [Hemos in a bikini]. Errr....ummm... There are still a few bugs, methinks.

  • by Anonymous Coward
    you must be new here...

    they actually originated (to the best of my knowledge) at Segfault. Natalie Portman is Queen Amidala in Star Wars, Episode 1, right? OK. Well, at first it was "Natalie Portman topless". That wasn't good enough, so they added the naked. Also, the petrified reference is about her being stoned so she'll have sex with the lusers.

  • They probably don't hook up to PC hardware very well :) Instead I urge you to read the article before posting; they quite clearly explain there that the scanning is done separately. They talk about another department that "run[s] processes on brain data (which can take up to six hours) that register, inflate, or flatten images and combine them with functional data."

    On the analysis and viewing side, the price/result ratio of commodity hardware is what is pushing the use of free software; what's keeping them back is weak points such as NFS. But go read the article; there's no use repeating it here.

    BTW, I hold the opinion that Linux NFS really is pretty poor at times. Especially when you unhook a mounted laptop :)
  • by ccoakley ( 128878 ) on Friday April 14, 2000 @01:28PM (#1131669) Homepage
    Forward disclaimer: I am no expert in AI research.

    One of the problems with neural networks (both in biological and artificial systems) is that slightly different setups can yield surprisingly different results. There have been several studies on twins (genetically equivalent), and they exhibit very different brain activity patterns (not that surprising, look how many identical twins also differ in build, personality, etc).

    You can try a similar thing with simulated artificial neural networks: set up a simple back-propagation neural net [generation5.org] with different initial conditions and give them the same training data. Now watch for subtle differences in the output. Bigger neural networks give even less subtle (i.e., often rather large) differences. So the machine would have to be trained for an individual. (A toy demonstration of this seed-sensitivity follows at the end of this comment.)

    Just as your neural network can learn to process its own data uniquely, so should a simulated artificial neural network (see link for a good intro). Our detectors of today (NMRI) would only need to be improved by an order of magnitude (I've used an NMR machine in a lab that was very similar to Purcell's original experiment and have seen the improvements in commercially available systems for hospitals). Our processing power would have to be improved by several orders of magnitude (I am basing this on papers I've read on large simulated artificial neural networks, ~1,000,000 nodes, which are still dwarfed in complexity by the human brain).

    So, given enough training, a computer/NMR device ought to be able to learn your commands and distinguish between subconscious and conscious desires.

    On the other hand, some philosophers thought that, given enough training, men would eventually understand women...
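
    (A minimal numpy sketch of the seed-sensitivity mentioned above - not from the article, and every size, rate, and dataset here is an arbitrary choice for illustration: the same back-propagation net is trained twice on XOR, varying nothing but the random seed, and the learned weights come out noticeably different even though both runs learn the task.)

        import numpy as np

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train(seed, epochs=5000, lr=1.0):
            rng = np.random.default_rng(seed)
            W1 = rng.normal(size=(2, 4))    # input-to-hidden weights
            W2 = rng.normal(size=(4, 1))    # hidden-to-output weights
            for _ in range(epochs):
                h = sigmoid(X @ W1)                     # forward pass
                out = sigmoid(h @ W2)
                d_out = (out - y) * out * (1 - out)     # output-layer delta
                d_h = (d_out @ W2.T) * h * (1 - h)      # back-propagated delta
                W2 -= lr * h.T @ d_out                  # gradient steps
                W1 -= lr * X.T @ d_h
            return W1, out

        for seed in (0, 1):
            W1, out = train(seed)
            print(f"seed {seed}: outputs {out.ravel().round(2)}, |W1| = {abs(W1).sum():.2f}")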

  • Does that mean I've got a copyright? Better yet, does that mean I've got a retroactive first-post? :) Anyway, I posted this [slashdot.org] on March 31 at night, under the "Apache ported to PalmOS" story:

    FOR IMMEDIATE RELEASE ------ (01 Apr 2000) Rafael Kaufmann of the Institute for Pure and Applied Mathematics (IMPA), at Rio de Janeiro, Brazil, is proud to announce the completion of his latest and boldest porting effort so far. Kaufmann, who calls himself "a transhuman optimist-slash-Evil Genius", has ported the Debian distribution of the popular open-source operating system Linux, in its entirety (6 CDs), to a custom-built biochip residing in his skull, and accessible only through an Ethernet port in Kaufmann's left temple. The biochip is able to interact with Kaufmann via voluntary electrical impulses in certain neurons of his brain, providing what he calls a "Direct Neural Interface".

    The Linux box in Kaufmann's head was operating normally until an unidentified hacker found a previously unknown security hole in the networking code, which enabled him to acquire root privileges to the poor cyborg's brain. This resulted in Kaufmann traveling to Hollywood, invading the home of teen movie actress Natalie Portman, stripping her clothes off and using obscure techniques to petrify her, all the while pouring hot grits down his pants. When arrested and interrogated by the Los Angeles Police Department, Kaufmann replied only with repeated shouts of "I 0wn j00, K@ufm@nn!!! F1RST P0ST!!!!! L1nuz SUX!!!!!!!!!" The brutality of this occurrence makes evident the danger of trusting such a necessary network component as your nervous system to an unsafe, open-source, easily hackable operating system like Linux.



  • Well, well! That was a nasty shock to get so late in the evening! :) I used to work there as a research associate. And if you look at the bottom of some of the pages at that link, I'm hall@nmr.mgh.harvard.edu. Hello world. (Granted, I can't claim to have done any of that development outside setting up the web pages.)

  • A couple weeks ago I was watching the Learning Channel and I saw a special about a guy who had a stroke and was paralyzed from the neck down. Apparently some scientists hooked electrodes up to the areas of his brain that became active when he moved his legs (back when he still could, obviously). Anyway, they somehow wired that into a computer's pointing device, and after several months the guy was actually able to move the cursor by thinking about it. Granted, he wasn't able to do it very fast or with much precision, but maybe someday, who knows.

    P.S. I really wish I had a link to validate this story; I swear it's not bullshit, though.

  • Yep, real people at The Raging Bull [ragingbull.com], although only one of them, PeteBarnum, is actually losing money. The other, DefDog, like myself, is pulling in the green by shorting this hideous stock. I haven't posted over there to gloat or anything, though, because I can empathize with the people losing their shirts. It's getting pretty miserable over there -- just today, someone's idea of "positive news" was an InfoWorld article by Linux-lackey Nick Petreley. Now, that's objective! LOL. Toodles! :)

  • Actually, Linux has more hardware support than most other cheap and/or good OSes, so it's a helluva lot easier to do it on Linux. That said, I think choosing Linux over IRIX is a big mistake in this case. Some of the machines we use for CT stuff are IRIX boxen with 64 CPUs each, and IRIX scales really well: at a load average of 20 the machine feels like a Linux box with a load average of 0.0, and that with 80 people logged in, all number crunching.
  • GNU/Brain. Hasn't Stallman claimed credit for that too?
  • I fear the day when we all use thought transmitted computers and the advertising agencies secretly scan our brains to customize advertising for us.
  • Imagine a Beowulf cluster of human brains...

    Oh, that's been done. It's The Matrix...no, wait, The Matrix is a sink of computer power, so where does the computer power to image The Matrix come from?

  • The obvious answer:

    More on mine than yours! :)

  • by Anonymous Coward
    poor SMP scalability, among other things, would make this impractical
  • It seems to me Linux being free might actually be a problem for a lot of university departments. I know a lot of people who work in or head up university departments, and they have often talked about the funding problems you have if you're under budget. In the eyes of the people who are funding you, if you didn't spend all the money they gave you, you didn't need it, so they'll give you less next year (and this starts a continuing cycle). So university departments try very, very hard to spend every last dime, because if they don't, they'll have a lot fewer dimes to spend next year.

    Just a thought (somewhat OT I guess)

  • by Otter ( 3800 )
    And this helps us...how?...We still don't know how to...we still have no idea exactly how it works...we don't know EXACTLY what those brain cells control...

    And the way you figure those things out is precisely by doing this sort of research. How do you think we reached the point of understanding how the liver or heart works? By saying, "We don't know how it works - why bother studying it?"
  • by Anonymous Coward
    See Subject.
  • Once I dreamed that I was reprogramming my brain [quadium.net].
  • Okay, troll, you can stop now. The stock shot up from all the sheep who decided that LNUX was going to suddenly get tons of money from nowhere, then fell as they realized how wrong they were. That has nothing to do with Linux's success as an operating system.

    And I don't think it takes that many funds to run Slashdot.
    --
    No more e-mail address game - see my user info. Time for revenge.
  • Well, if you've got nanites in your bloodstream, then you won't have a hat. You'll have nanites wiring your brain directly, but then the next step is to just make a copy of your brain [slashdot.org] map...and then you have to decide if you'll upload yourself [globalideasbank.org].
  • I'm not surprised that identical twins have distinct brain patterns. And that they differ in personality.

    But it seems like you're forgetting about a none-too-surprising social phenomenon. People like to consider themselves unique. So identical twins will differentiate themselves.

    Studies on separated twins have discovered spooky similarities between them, from job to house to significant other. The assumption is that separated twins don't need to differentiate, and so take the path they would naturally have taken, had they been singles.
    So university departments try very, very hard to spend every last dime, because if they don't, they'll have a lot fewer dimes to spend next year.

    That, I doubt not, is true. But using free software may not be so bad: since the department isn't having to spend all that money buying software, it can use the savings to buy more equipment, upgrade existing equipment, buy more printed materials, expand, etc.

    In other words, they can catch up on all of the things that they haven't been able to catch up on due to having to dish out money for software.

    Heck, they could send me the money and I'll find something to do with it. ;-)
  • That 5% thing is a myth. It's based on the worst kind of bad science:
    "Well, we've lopped off this part, and the monkey seems fine. Let's lop off this part."
    We use much more than 5% of the brain. But the brain can adapt to damage, so perhaps only 5% of it is crucial.

    OT: I always heard 10%.
  • What kind of "News" site just copies exactly what another news web site posts, the same wording and all?

    A bad one, that's what. :(

    - Jeremy Fuller

  • Yes, I think /. is a good example of the dangers inherent in a human brain Beowulf cluster. :-0
    --
  • Well, just like Caldera and Andover.net, VA Linux's stock is now below its IPO price (Another 11 points, and RedHat will join them in this dubious honor). So it obviously wasn't just the day-trading sheep, but also the companies themselves which failed to realize the utter lack of value in Linux.

    Oh, and how many funds does it take to run Slashdot? You don't think they went out and bought some shiny new equipment and more bandwidth that they'll soon be struggling to afford? I can't wait for the firesale! :)

  • One word: Segfault. Also, the machine would get a huge case of depression and low self esteem, kinda like Marvin.
  • by cdlu ( 65838 ) on Friday April 14, 2000 @03:43PM (#1131693) Homepage
    This morning my computer booted up, had some breakfast, read every newspaper, and 20 milliseconds later forked its thoughts for the day.

    At lunch, it took 4315 milliseconds off to recharge its capacitors, and went back to crunching data. In the early afternoon, process ID 1, alias "init", sent SIGSTOP to all its thoughts and demanded some biscuits and tea from the console operator before it would continue.

    The computer also has an insane need for 2 litres of water every day, which is causing us some worry, lest the computer short out.
  • My computer's been attending Dataholics Anonymous for over 2 years. Every day it writes down hundreds of pages of other data addicts' problems in /var/log/addictlog. I'm going crazy. Someone help me!

    :P
  • I don't know about anyone else, but it'd be overclocked and unrecognizable. :P

    "WOW! Your brain is going 1.1 GHz!? Cool! How do I get mine that fast?"

  • I've worked on Deep Brain Stimulation protocols for Parkinson's disease and Dystonia at Mount Sinai Hospital in New York City. I was the computer geek in the Neurology Department, designing databases and managing imaging systems.

    As a result, I'm no expert in neurology. However, I can tell you that the mechanisms involved in Deep Brain Stimulation are much better understood than you might imagine. They are largely based on earlier lesioning procedures, such as pallidotomy and thalamotomy (removal of parts of the basal ganglia - deep structures in the brain's motor system). Accurate neuroimaging is crucial in procedures like these.

    If you would like to know more about the science behind the experiments mentioned in the story, or others, I don't recommend you spend your time looking at the web pages. Very often, they are written by guys like me for the benefit of patients and benefactors (or just to show off cool technology), and have no intention of explaining the hard science. You are better off reading the academic journals at your local medical library or visiting the appropriate department at your local university or teaching hospital.
  • I can validate Enzondio's account. It was on about four weeks ago at 10-11 PM EST. I think it was after a program on emergency medical techs, but I was primarily watching the Cartoon Network and flipping during commercials.
  • by technos ( 73414 ) on Friday April 14, 2000 @04:18PM (#1131698) Homepage Journal
    If only for myself and Mr. sorehands, I hope so.

    Unfortunately, as a programmer, I feel the quality of my code will fall drastically. I think three times faster than I type, and that 'delay' allows me to finalize the stream of cerebral crap into usable data. I can see it now:

    copen(//slashdot refresh//"/usr/local/drsfile","NT sucks.. Why am I 0", nodenum/2, fucking trolls 71227) = data Where did I put that #2 Phillips ;
  • I do agree that the article was very light on the actual details of either how exactly this will advance neuroscience, or how, specifically, Linux is being used for things like high-end graphics, which would be interesting to hear about. Actually, the whole article came off as a kind of nice-but-fluffy open-source equivalent of a marketing spiel.
  • by NorseThunder ( 98590 ) on Friday April 14, 2000 @05:28PM (#1131701)
    I work for Dr. Eric Halgren, one of the 'head' ;) researchers doing this work, and I was totally surprised to see an article about it on /.!

    I didn't have anything to do with the programming side of the analysis, but I was one of the lucky ones who worked on getting the programs to run on our Linux boxes. The boxes that I did alpha testing with were two Dual Pentium-II boxes running a fairly standard install of RedHat. When I started working there I was a complete Linux newbie, and struggled with the setup, but this work got me interested in Linux and now I love using it! I believe that they have a Beowulf of Linux machines to do the crunching now (no, this isn't a 'Beowulf' troll, honest! :).

    The simplified version of what they're doing is creating 3D representations of the human cerebral cortex from structural MRIs of research subjects, inflating the surface of the brain so you can see inside the folds, and then mapping MEG data of the research subjects to the generated 3D surfaces using a process called "anatomically constrained magnetoencephalography" (aMEG). This brain activation data is mapped to the surface at about 5ms intervals, and is represented graphically on the generated surfaces to make "brain movies," on the SGIs (I think the movies now work, or soon will, on Linux). Dr. Halgren and others then analyze these movies and publish their findings. (A toy sketch of the frame-mapping idea appears at the end of this comment.)

    They also do some analysis of severely epileptic brains, in the hope of better understanding them before and after brain surgery, so that scientists can better localize problematic areas of the brain and less brain tissue can be removed in future surgeries. It's really a quite fascinating combination of different technologies and medicine, in my opinion!

    Rather than trying to further explain the research with my limited understanding of it all (I'm certainly not a neurophysiologist!), for more details about the research being done with this technology, you can read the websites (two of which I designed :) for the project. These sites are:

    http://www.cortechs.net
    http://www.cogneuro.med.utah.edu
    http://www.nmr.mgh.harvard.edu

    There are a few of these "brain movies" and thorough explanations and publications about the work available there.

    I work in Salt Lake City, and Dr. Halgren has recently moved to Boston to work for Harvard at NMR MGH, so I am a little out of the loop on the current state of the research, but I hope that this answers some questions about it.

    May my boss forgive me if I've said something totally inaccurate, but I wanted to give you an overview of what's going on.

    --Wes Wariner (wwarinerATxmission.com)
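
    (None of the following is the lab's actual code - just a toy numpy sketch, with invented sizes and random data, of the general shape of the pipeline described above: activation values sampled at ~5 ms intervals are scaled vertex-by-vertex and would be painted onto the 3D surface frame-by-frame to make a "brain movie".)

        import numpy as np

        n_vertices, n_frames, dt_ms = 10000, 200, 5   # hypothetical sizes
        activation = np.random.default_rng(0).normal(size=(n_frames, n_vertices))

        def to_intensity(frame):
            """Scale one time-slice of activation to a 0..255 value per vertex."""
            lo, hi = frame.min(), frame.max()
            return ((frame - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

        for i, frame in enumerate(activation):
            colors = to_intensity(frame)   # one intensity per surface vertex
            # a real pipeline would render `colors` onto the inflated surface here
            if i % 50 == 0:
                print(f"frame {i}: t = {i * dt_ms} ms, mean intensity {colors.mean():.1f}")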
  • "Several orders of magnitude." Well, that's no big deal. Each order of magnitude is, what, 5 years (log 2 10 * 1.5)? So we're talking about 10-15 years until we have the technology?

    The big issue is probably not the hardware; it's the training. You can build a huge, kick-ass neural network, but it'll still take a lot of training to do anything useful. Remember that our brain has connections that were made over decades of life and millennia of evolution. A neural network connected up from scratch won't have that until you train it.

    A capacity of 100GB is cheap. Gathering 100GB of data is not.

    It's similar to the big problem with traditional (non-connectionist) AI. Who cares if a computer is as powerful as a human brain if it doesn't have any useful knowledge? Projects like Cyc have spent years trying to gather enough "common knowledge" information to make a useful AI, but they're nowhere near done.
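
    (The quick check promised above, assuming the stated doubling period of 1.5 years:)

        import math

        doubling_period_years = 1.5         # assumed Moore's-law doubling time
        doublings_per_10x = math.log2(10)   # ~3.32 doublings per order of magnitude
        print(doublings_per_10x * doubling_period_years)   # ~4.98 years per order
        # so "several" (2-3) orders of magnitude is indeed roughly 10-15 years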
  • On a bit of a related note, I'd just like to point out another genetics study [ouhsc.edu] using Linux for its main number crunching. At the Oklahoma Medical Research Foundation [ouhsc.edu], we have a dual Xeon box running through numbers to find linkage for the gene(s) causing the autoimmune disease lupus. Since it's been deployed, our average analysis time has been cut to a fraction, and our uptime beats the heck out of our NT domain master... The only problems we've had (aside from porting everything from OpenVMS) have to do with poor NT integration (likely a non-RFC-compliant dhcp server) and a lack of personnel to run it! Just something that I thought I'd mention.
  • It doesn't matter if the most prestigious scientists in the world say they are running a Linux cluster run off of a 286, a Sega Master System, a toaster and a Sony Walkman, and that they expect to find a cure for cancer, for 12,000 dollars, in two years.

    No matter how established and mainstream Linux becomes among people who actually know what they are doing, the future of computing belongs in the hands of a few big stupid corporations run by marketing hacks.

    Yeah, they are way, way stupider than us.

    Yeah, they are in charge anyway.

    Get used to it.

  • Why did you use frames on the homepage (and _only_ on the homepage, I might add)?

    I didn't use frames and don't really care for them. There have been some changes to those pages since I was there. (And in response to the other individual, I use nedit).
  • Be scared; I heard DoubleClick is already onto it.
    --
  • Yeh, but everybody will ignore Stallman and use the more wacky north-European 'Brain'.
    --
  • In the late eighties there was a product advertised as a "mental joystick".

    It simply sensed electrical activity originating from the muscular movement of your eyebrows, and translated it into a 4-bit pattern (up, down, left, right). (A purely hypothetical sketch of such an encoding follows at the end of this comment.)

    They claimed you would be able to use it "after a little training". It never took off, because there were simpler means, and because few ever think about possible applications for impaired people.
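
    (The sketch promised above - the post doesn't describe the product's actual protocol, so the bit layout, channel order, and threshold here are all invented: one bit per direction, driven by thresholded eyebrow-muscle (EMG) levels.)

        # Hypothetical 4-bit direction pattern; names and threshold are invented.
        UP, DOWN, LEFT, RIGHT = 0b0001, 0b0010, 0b0100, 0b1000

        def encode(emg_levels, threshold=0.5):
            """Turn four EMG channel levels into a 4-bit direction pattern."""
            pattern = 0
            for bit, level in zip((UP, DOWN, LEFT, RIGHT), emg_levels):
                if level > threshold:
                    pattern |= bit
            return pattern

        print(bin(encode([0.9, 0.1, 0.0, 0.7])))   # 0b1001 -> up + right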

  • My PERSONAL fantasy is that, eventually, we will get to a point where you will simply have to THINK a command, and your mind-reading-hat (or goggles, you get the idea) interprets the neurological impulses, and gives your computer the appropriate commands.

    It's already here! Open up an xterm and enter the command "mr" (for mind read) and then think of what you want the computer to do, something like "mail John". mr will then attempt to read your mind and execute the appropriate commands. Unfortunately the human mind has not developed to a stage where it is capable of broadcasting thought, so mr usually replies:

        $ mr
        mr: command not found

    oh well
  • Ack! Did you ever see "The Matrix"? Creepy stuff. All the people in the world, hooked up to a computer and they don't even realize it... *shiver*

  • by wazza ( 16772 ) on Saturday April 15, 2000 @03:51AM (#1131711) Homepage
    OK guys... before I start, since I'm gonna do a bit of explaining, I should give some credentials. I'm a 2nd year PhD student, studying cognitive neuroscience. I've also spent a year in Akita, Japan, doing neuroscience experiments at a research hospital there. It was, as it happens, the best year I've ever had in my life. :>

    The hospital I was at - Akita Noukekkan Kenritsu Kenkyuu Sentaa (Research Centre for Brain and Blood Vessels) is one of the three research sites involved in the Human Brain Project. The other sites are in Massachusetts and Copenhagen, Denmark. I helped set up the stimulus generating computer (an old Macintosh Quadra 850) for the HBP project, but I left the country after the first two experiments had been done.

    >These people are making "cool 3d views of a brain" and telling you absolutely frigging nothing about what they're going to do. What the hell can you do with information like that?

    OK, if you read the article closely (and do a bit of background research), they're performing *cortical inflation*, and not just funky 3d mapping. Cortical inflation is sort of like sticking a bicycle pump into your brain and pumping up your cortex - eventually you push out all the deep fissures and gyri that make up the convoluted surface, and the detail that's in those areas of brain that are normally hidden from view is displayed on a nice sphere. (A deliberately crude sketch of the smoothing idea appears at the end of this comment.)

    How is this helpful? Well, the organization of brain areas makes a lot more sense when you inflate the cortex. It's a very recent fad in neuroscience - people are seeing, for the first time, why certain areas seem to be near each other. And the cortically inflated image of the brain is much easier to figure out.

    >We still don't know how to a) communicate with the brain (subconscious) directly, or b) understand why it is we only use 5% of its capacity.

    An AC further down posted the fallacy of the 5% deal. And believe it or not, communicating with the subconscious brain directly is not the be-all-and-end-all of brain science. There are other things that are just as important to know. Hell, we don't even know *what* the subconscious is exactly, let alone where.

    >We know the general area that controls movement, and if they shock it, it makes the shaking less violent. But we don't know EXACTLY what those brain cells control, or where we should shock them on a cell-by-cell basis. We just shock it and it fixes it. That's about as scientific as it gets, no?
    >This project the article centers on will provide a nice road map. But what good is a road map if you don't know where you're going?

    A very valid point. The time when we know exactly what each individual cell in the brain does is a very, very, very long way off. Why? Because _everybody's brain is different_. A cell in a particular location of my brain that deals with directed attention, for example, is _not_ going to be in the same place in your brain. So it's a bloody hard job to figure out what goes where.

    I'll counter your last comment with a similar analogy. A road map doesn't just tell you how to get to one place - it tells you _all_ the places you can go. A map - specific to one person though it may be - is a very useful tool. It shows you areas in which you might want to explore more, and more deeply. It's definitely not a waste of time and money.
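
    (The crude sketch promised above: "inflation" boiled down to iterative neighbor-averaging of mesh vertices, which irons the folds out of a surface. Real cortical inflation also controls for metric distortion, which this toy ignores, and the tiny zig-zag "mesh" here is invented purely for illustration.)

        import numpy as np

        def inflate(vertices, neighbors, steps=10, alpha=0.5):
            """Repeatedly move each vertex toward the mean of its neighbors;
            folds get pushed out toward a smooth surface."""
            v = vertices.copy()
            for _ in range(steps):
                means = np.array([v[nb].mean(axis=0) for nb in neighbors])
                v += alpha * (means - v)   # one Laplacian smoothing step
            return v

        # four vertices in a folded zig-zag, each chained to its neighbors
        verts = np.array([[0., 0., 0.], [1., 1., 0.], [2., 0., 0.], [3., 1., 0.]])
        nbrs = [[1], [0, 2], [1, 3], [2]]
        print(inflate(verts, nbrs))   # the zig-zag flattens out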
  • moderators mark this up ... it is not only interesting but informative too!
  • When will someone port Linux to the human brain?

    The one and (thankfully) only,

    LafinJack
  • Unless you are reading a popular science magazine like Scientific American, reading journals is not the most efficient way to learn about a particular topic in science. Journal articles are used for efficient communication between experts and are generally neither informative nor interesting unless you have an extensive background in the relevant field.

    You'll avoid a lot of confusion and frustration by reading the appropriate section of an undergraduate textbook first, and then reading journal articles only if you need more details.

    A widely used, and not very heavy, neuroscience text is
    Essentials of Neural Science and Behavior by Kandel, Schwartz and Jessell

  • Divide by cucumber error. Please re-install universe and reboot!


    -RickHunter
  • Doubtless. I'm holding out for the day when we have embedded chips which read impulses and then communicate back with direct sensory data.
  • One of the problems with neural networks (both in biological and artificial systems)

    One of the problems with neural networks is that people think there's anything "neural" about them at all. Backprop nets are just matrix multiplication with a nonlinear transfer function stuck in the middle (plus a method of changing the values in the matrices in response to training); see the sketch at the end of this comment. This is the functional equivalent of neglecting all active membrane currents (except the ones that directly generate the action potential) and approximating the complex morphology of a neuron as a point. To be fair to researchers that use them, they have merit as a computationally tractable method for doing certain abstract nervous-system modeling at the systems level (e.g. looking at receptive field sizes in visual cortex or something), but no one in neuroscience pretends that a unit in a backprop net actually behaves like a real neuron.

    As far as the identical twin thing goes, we have known for a long time that the organization of the brain is not directly genetically determined. The brain is many orders of magnitude more complex than the (relatively) small size of the genome could possibly account for. Genes only tell the cells how to act in a local sense that allows the larger organization to arise (sort of like saying that the complex organization of an ant colony is not directly stored in the genome of a single ant).

    Despite those small points, I very much agree that any kind of worthwhile higher-level neural interface hardware (i.e. Neuromancer, not cochlear implants) would have to contain trainable networks and do a large part of the processing for you. However, I don't think that MRI is anywhere near good enough for this (certainly not one order of magnitude away). Note that laboratory NMR scanners are already orders of magnitude better than clinical MRI scanners, because you can use a 1cm sample instead of a person's head. MRI works by measuring the "chemical shift" of protons, meaning how much electron density is around them. You can NOT directly measure neural activity by doing this. What you CAN measure is changes in blood flow. If a certain piece of cortex is highly active, it will (about four seconds later) swell with blood. Recently they have been able to measure the decrease in blood oxygenation that happens immediately after increased neural activity, but this is still indirect, and it still requires you to average over several trials, in both a baseline and an excited state, then do complex statistics to see the trends. Even if MRI scanners weren't expensive, liquid nitrogen-cooled, the size of automobiles, and residing in heavily RF-shielded rooms, the technology would have a long way to go.

    (I've already babbled on long enough, but magnetoencephalography (MEG) is a more promising technology for that type of thing, as is sodium MRI, but these have their own problems...)
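
    (To make the "just matrix multiplication" point above concrete, a minimal sketch - forward pass only, with arbitrary sizes and random weights:)

        import numpy as np

        def forward(x, W1, W2):
            """A backprop net's forward pass: a matrix multiply, a pointwise
            nonlinear transfer function, and another matrix multiply."""
            hidden = np.tanh(x @ W1)   # the nonlinearity stuck in the middle
            return hidden @ W2         # linear readout

        rng = np.random.default_rng(42)
        x = rng.normal(size=(1, 8))        # one input vector
        W1 = rng.normal(size=(8, 16))      # input-to-hidden matrix
        W2 = rng.normal(size=(16, 4))      # hidden-to-output matrix
        print(forward(x, W1, W2).shape)    # (1, 4)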
  • If this works, how long will it take me to whip up something real interesting... I can see it now... but you'll can't, half of yas is underage... *bad cowboy grammar intentional* This is what happens when I try to think with too much blood in my caffeine stream.
