AI Science Technology

Spaun: a Large-Scale Functional Brain Model

New submitter dj_tla writes "A team of Canadian researchers has created a state-of-the-art brain model that can see, remember, think about, and write numbers. The model has just been discussed in a Science article entitled 'A Large-Scale Model of the Functioning Brain.' There have been several popular press articles, and there are videos of the model in action. Nature quotes Eugene Izhikevich, chairman of Brain Corporation, as saying, 'Until now, the race was who could get a human-sized brain simulation running, regardless of what behaviors and functions such simulation exhibits. From now on, the race is more [about] who can get the most biological functions and animal-like behaviors. So far, Spaun is the winner.' (Full disclosure: I am a member of the team that created Spaun.)"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday November 30, 2012 @12:31PM (#42143839)

    .... something the size of a human skull?

    Just imagine how many people this could help!

    • .... something the size of a human skull?

      Just imagine how many people this could help!

      Too bad you posted that as an Anonymous Coward - Too funny! You just made my day :)

  • Close... (Score:5, Funny)

    by tool462 ( 677306 ) on Friday November 30, 2012 @12:32PM (#42143851)

    It's not purely functional unless it's written in Haskell.

    • It's not purely functional unless it's written in Haskell.

      By way of logic, would programming it in pure Haskell eliminate filthy thoughts?

      • By way of logic, would programming it in pure Haskell eliminate filthy thoughts?

        Why, are we worried about AIs committing venial sins or something?

        • by sp332 ( 781207 )

          If I make a robot that kills people, am I responsible for the deaths? If I make an AI that lusts after my neighbor's wife, am I responsible?

    • by Anonymous Coward

      So you are saying Scheme was ruled out because if it learns to talk it will have a Lisp?

  • My God... (Score:5, Interesting)

    by Anonymous Coward on Friday November 30, 2012 @12:32PM (#42143857)
    The golden age of humanity will start soon. The last gasp of the fossil fuel powered consumer society is now. We will create a new model of society, with longer living people and an understanding of how life works at the cellular, and most importantly, mathematical level. The future is not space, it's synthetic biology.
    • IN SPACE! Come on man, this stuff would be perfect in space. Also once we modify ourselves to meld with the artificial, we'll be able to move beyond this pale blue dot.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      When the moon is in the Seventh House
      And Jupiter aligns with Mars
      Then peace will guide the planets
      And love will steer the stars

      This is the dawning of the Age of Aquarius
      Age of Aquarius
      Aquarius! Aquaaaaaarrrrriusssssss!

    • The golden age of humanity will start soon. [..] The future is not space, it's synthetic biology.

      The power of synthetic biology unleashed upon a society that is run on the administrative and economic protocols that apply now, will be a nightmare, not a golden age.

      You think you are going to be doing synthetic biology in your garage? For one, you will have to do exactly what the supersoldiers tell you to. How do you reckon this amazing technology will reach you? Even if there comes to be such a thing as a technologically developed Elixir of Life, the Universe, and Everything, you think that people on top

    • by Anonymous Coward

      Or we revert to a dark age. My money is on dark age.

    • by wurp ( 51446 )

      WTF does this comment have to do with this story? Why is it +4 interesting?

  • by Anonymous Coward on Friday November 30, 2012 @12:34PM (#42143901)

    This simulation takes an hour to simulate one second of neural activity, but the researchers want to speed it up to real time. Why stop there? This brain would be much more interesting if it could simulate an hour of neural activity in one second.

    Just a caution, though: make sure the physical arm it's connected to isn't within reach of any nuclear footballs.

    • by dj_tla ( 1048764 ) *

      We'd be happy with faster than real-time too! Baby steps.

    • At that speed I would be more concerned if the arm was within reach of other appendages.

    • Re: (Score:3, Funny)

      by Tablizer ( 95088 )

      I have a few co-workers like that

    • re: Runs at 1/3600 real time

      No, it appears that running the Spaun model on Nengo in a Java Virtual Machine on a quad-core 2.5 GHz CPU takes 3 hours to simulate 1 second:

      http://models.nengo.ca/spaun [nengo.ca]
      Notes:
      --------
      - This model requires a machine with at least 24GB of RAM to run the full implementation.
        Estimated run times for a quad-core 2.5GHz are 3 hours per 1 second of simulation time.
      - See the run_spaun.py file in the spaun directory for experiment options.
    • Ah, yes, "weak superhumanity".

      What we really need to simulate in faster-than-realtime are the brain processes for optimizing this model. Once we can do that, things should click along nicely.

  • "I still have the highest confidence in the mission, Dave."

  • What to teach Spaun: that it was intelligently designed, or evolved from its predecessors?

    • Re: (Score:2, Funny)

      by CanHasDIY ( 1672858 )

      What to teach Spaun: that it was intelligently designed, or evolved from its predecessors?

      Why not both?

    • Unlike living organisms, it is not a derivation of its predecessors put into place by selective forces acting on semi-random changes, but rather was designed by people.

      I'd say intelligently designed.

    • What to teach Spaun: that it was intelligently designed, or evolved from its predecessors?

      Intelligently designed, and that it had better fucking behave itself or watch out!

  • by kris_lang ( 466170 ) on Friday November 30, 2012 @12:39PM (#42143963)

    Could you possibly post a link to a version that is not hidden behind a paywall? Perhaps a pre-print on your own research site; perhaps an HTML web page summary of your work?

    http://nengo.ca/ [nengo.ca]

    It looks like they use Python scripting in their NENGO simulator: http://www.frontiersin.org/neuroinformatics/10.3389/neuro.11/007.2009/abstract [frontiersin.org]
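
    For anyone curious what that Python scripting layer looks like, below is a rough sketch of an NEF-style model: two spiking ensembles where the second decodes the square of the first's value. It assumes the current standalone nengo Python package rather than the Java simulator's Jython scripting interface described in the linked paper, so treat the exact names as illustrative only:

        # Minimal NEF-style sketch, assuming the standalone `nengo` Python package.
        # The Java-based Nengo scripting API differs; names here are illustrative.
        import numpy as np
        import nengo

        with nengo.Network(label="square") as model:
            stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # time-varying input signal
            a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking LIF population encoding the input
            b = nengo.Ensemble(n_neurons=100, dimensions=1)     # population representing the squared value
            nengo.Connection(stim, a)
            nengo.Connection(a, b, function=lambda x: x ** 2)   # decoders are solved to approximate x^2
            probe = nengo.Probe(b, synapse=0.01)                # low-pass filtered readout of b

        with nengo.Simulator(model) as sim:                     # reference NumPy backend
            sim.run(1.0)                                        # simulate one second of model time
        print(sim.data[probe][-5:])                             # last few decoded samples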

  • This type of tech is a central part of Robert J. Sawyer's sci-fi novel "The Terminal Experiment". Very good read, if a bit dated now.

    http://en.wikipedia.org/wiki/The_Terminal_Experiment [wikipedia.org]
    http://www.sfwriter.com/exte.htm [sfwriter.com]

  • by xtal ( 49134 ) on Friday November 30, 2012 @01:02PM (#42144389)

    The fact that this responds in similar ways is astonishing... not because of what this model has accomplished, but because it's a great big flashing light pointing to this being the right way to machine intelligence. "HEY OVER HERE!"

    I'll pre-order his book, and wait patiently for an open source version of this research / model to appear for people to hack on.

    It's slow now, but at 1/3600 speed it's within reach of the next generation of computers to do in real time - and that's without optimization.

    Interesting times indeed.

    • This model is already open source! I have been very adamant about keeping this the case in the Eliasmith lab. The model is here [nengo.ca], and the software running it is here [nengo.ca] (and on github [github.com]).

      • by Anonymous Coward

        I knew I should have opted for 24GB DDR3 instead of an 8GB kit.

      • by Anonymous Coward

        Are you guys looking for [remote] volunteers? Is there some component of this research that can be crowdsourced? A mechanism for providing donations to help fund this effort? Basically, is there a way for engineers or scientists that are interested in this field of study to help out short of dropping their lives and becoming research assistants in Canada?

        • I mean, we would be ecstatic to have people contribute in any way! That could even just mean learning the framework and the software and using it in your own research. We have lots of tutorials, and we'd be happy to help if you want to make your own models. The software itself is pretty good, but it's academic software, and certainly we'd welcome anyone's contributions to the software! We're pretty responsive, either at any of our emails [uwaterloo.ca], or by making a github issue [github.com] if you need any assistance.

          Unfortunately, I don't know if we have a lot of "low hanging fruit", things that we need done but are just too lazy to do ourselves. Though I'm sure we could come up with some of those tasks if desired, as we're certainly lazy.

      • by xtal ( 49134 )

        I am intrigued by your ideas and will most certainly subscribe to your newsletter.

        While I am by no means formally trained in neuroscience, I am an EE who has followed the field of neural modelling with much interest, and I have waited for decades for someone to publish results like this. Best of luck with your research and I look forward to going through the code!

    • by Anonymous Coward

      Ughh... this is interesting, but it is exactly the sort of useless attention-seeking research that's ruining neuroscience and turning it into a snake-oil show.

      Sorry, that sounds harsh, and I don't mean to be offensive, but there are lots of people who have been able to model these tasks better than these individuals.

      The problem is that the outcomes they're evaluating--performance on these sorts of pre-defined tasks--don't mean there's an accurate model of what's actually happening in the brain as a whole. The o

      • by perceptual.cyclotron ( 2561509 ) on Friday November 30, 2012 @07:40PM (#42150475)

        I was actually about to upmod because in general I agree (and for the record, I have a PhD in cog neuro), and based on the summary and the nature write-up, I was underwhelmed. But skimming the Science paper, these guys have legitimately done something that really hasn't been done before. The model gets a picture that tells it what task it's supposed to do, preserves that context while getting the task-relevant input, gets the answer to the problem (and the problems are, computationally, pretty wide-ranging), and writes the answer (i.e., it's not being read out from the state of a surface layer and transcoded to a human-readable result). All of this is being done with reasonably-realistic spiking neurons (with Eliasmith, these are probably single-compartment LIF), configured in a gross-scale topology commensurate with what we know about neuroanatomy and connectivity.

        Is this going to unleash a new revolution in AI and cybernetics? Nope. But it's definitely both impressive and progressive for the field. Both Eliasmith and Izhikevich are the real deal. And while we certainly don't understand the brain well enough to make truly general-intelligence models, this kind of work is precisely the sort of step we need to be taking – scaling down the numbers but trying to reproduce the known connectivity is a lot more useful than building 10^10 randomly connected McCulloch-Pitts neurons...
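
        For readers unfamiliar with the neuron model mentioned above, here is a minimal, illustrative single-compartment leaky integrate-and-fire (LIF) simulation. The parameters are generic textbook-style values, not the ones used in Spaun:

          # Minimal single-compartment LIF neuron: a sketch of the kind of neuron
          # model the comment above refers to. Parameters are generic, not Spaun's.
          tau_rc = 0.02    # membrane time constant (s)
          tau_ref = 0.002  # refractory period (s)
          v_th = 1.0       # firing threshold (normalized)
          dt, t_end = 0.001, 1.0
          J = 1.5          # constant driving current (arbitrary units)

          v, refractory, spikes = 0.0, 0.0, []
          for step in range(int(t_end / dt)):
              if refractory > 0:          # hold the neuron during its refractory period
                  refractory -= dt
                  continue
              v += dt * (J - v) / tau_rc  # leaky integration toward the input current
              if v >= v_th:               # threshold crossing: emit a spike and reset
                  spikes.append(step * dt)
                  v, refractory = 0.0, tau_ref

          rate = len(spikes) / t_end
          print(f"{len(spikes)} spikes in {t_end:.1f} s (~{rate:.0f} Hz)")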

  • Hmmm...that seems to be more capable than a large segment of the human population. Impressive.

  • Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5

    What the article doesn't tell you is that the "5" was followed by "@r@H c0Nn0r?"...

  • Not sure, but if computers can get intelligent, isn't there then a special ethics to it? Perhaps let information be oriented on by computers, but with a way to back up its intelligence so that it can delete parts of its development for as long as necessary? (Meaning it can have pains and decide to continue with pains. "A mad machine"? (berserking?))
  • This model requires a machine with at least 24GB of RAM to run the full implementation. Estimated run times for a quad-core 2.5GHz are 3 hours per 1 second of simulation time.

    :>)

    So running the full Spaun model means running it inside a Java Virtual Machine, and it appears to require a machine with 24 gigabytes of RAM to give the JVM enough space to do its thing.
    And the simulation runs at 3 hours of wall-clock time (I assume) per 1 second of simulated time, ~10800:1.

    Perhaps
    • Hey, we're definitely thinking about this! In fact, the Java version can run on a GPU. And we're in the process of making a fast Python version based on theano [deeplearning.net]. Unfortunately, even with all of these speedups, we're still talking about lots of neurons and lots of computation.

      However, there are plenty of smaller scale models that you can run in Nengo to get a sense of what's going on in the larger Spaun model! The tutorials [nengo.ca] are a good place to start.
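
      To give a sense of why Theano helps here, below is a small illustrative sketch (not taken from Nengo; the sizes, names, and nonlinearity are made up) of compiling one encode/decode step for a neural population into a single optimized function that Theano can then target at a GPU:

        # Illustrative Theano sketch: compile a dense encode/decode step for a
        # population of neurons into one optimized (optionally GPU-backed) function.
        # Sizes, names, and the nonlinearity are invented; this is not Nengo code.
        import numpy as np
        import theano
        import theano.tensor as T

        n_neurons, dims = 1000, 10
        encoders = theano.shared(np.random.randn(n_neurons, dims).astype("float32"))
        decoders = theano.shared(np.random.randn(dims, n_neurons).astype("float32") / n_neurons)

        x = T.fvector("x")                  # the value the population represents
        currents = T.dot(encoders, x)       # encode: per-neuron input currents
        rates = T.nnet.sigmoid(currents)    # stand-in rate nonlinearity (not a real LIF curve)
        x_hat = T.dot(decoders, rates)      # decode: estimate of the represented value

        step = theano.function([x], x_hat)  # Theano optimizes and compiles the whole graph
        print(step(np.zeros(dims, dtype="float32")))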

      • by Suiggy ( 1544213 )

        What platform are you using for GPU computing? CUDA? OpenCL?

        • by dj_tla ( 1048764 ) *

          We're using CUDA, so you'll need a recent NVidia card to run models on the GPU.

          • by Suiggy ( 1544213 )

            Thanks, also thanks for making the source code and the rest of your work available. Very promising research you've done.

      • Are you using the rootbeer Java->GPU compiler, by any chance? If not, how are you doing it? (I'm interested in running JVM languages on a GPU myself.)
        • by dj_tla ( 1048764 ) *

          Whoa, I had no idea rootbeer existed! That sounds quite useful, thanks for the link!

          We're doing this very explicitly, by writing CUDA code in C and interacting with it in our Java program with JNI.

    • You imagine Python is faster than Java at common scientific computational tasks? It's about 20 to 50 times slower.

      • by dj_tla ( 1048764 ) *

        Because Python's such a nice glue language, it can talk to super fast Fortran and C libraries to be at least comparable in speed with Java. See NumPy [scipy.org]. This is the kind of stuff scientists use for number crunching.
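
        To make that concrete, here's a tiny illustrative example (array sizes invented) of the glue-language point: the single NumPy call below dispatches to compiled BLAS routines written in C/Fortran, so the heavy arithmetic never runs in the Python interpreter:

          # Tiny illustration of NumPy as "glue": the product below is executed
          # by compiled BLAS (C/Fortran) code, not by the Python interpreter.
          import numpy as np

          weights = np.random.randn(2000, 2000)   # e.g. a connection-weight matrix
          activity = np.random.randn(2000)        # e.g. a vector of neural activities

          # One vectorized matrix-vector product instead of millions of Python-level
          # loop iterations; the arithmetic runs in optimized native code.
          output = np.dot(weights, activity)
          print(output.shape, float(output[:3].sum()))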

          I agree Python is a wonderful language, and of course ForTran (the most optimizable language for number crunching) libraries can be faster than Java's in some cases.

          Being old, I spelled the name meaning FormulaTranslation the old way for amusement.

    • by Jartan ( 219704 )

      Actually, this may be a good example of something that could heavily benefit from JIT compiling. Since they're unsure what their own program needs to do, it's going to be hard to optimize it manually.

      • by dj_tla ( 1048764 ) *

        Hmm... that's an excellent point! We have a few hot loops that run over and over and over again. But, because of that we know what needs to be optimized and we've tried to do so. It's true though that some optimizations we've tried haven't worked out well, so the JVM's JIT compiler might be what's saving us most of the time. Personally I'd like to write it all in RPython and have it make a JIT for us ;)

    • by Suiggy ( 1544213 )

      Yeah, seriously. When it comes to maximizing performance/watt, C and OpenCL would be the way to go. I took a cognitive vision system using SURF + cluster analysis, originally written in Java and running at non-interactive rates on a quad-core desktop system requiring gigs of RAM, rewrote it in C using SIMD intrinsics and various other optimizations to improve cache efficiency, and had it running at interactive rates on a single-core 800MHz ARM Cortex A8 with 512MB of RAM and a PowerVR SGX 535 (a common mobile ph

      • by dj_tla ( 1048764 ) *

        Yeah, I wouldn't call our Java implementation "naive" per se, but we are definitely looking into implementing other simulation back-ends and decoupling the Nengo GUI from the simulation core. I'll look more seriously into C and OpenCL -- does it run on both ATI and NVidia cards? We have mostly NVidia cards here.

        • by Suiggy ( 1544213 )

          You'll probably be best off sticking with CUDA on nVidia hardware for now. nVidia's OpenCL implementation isn't quite as polished as it should be, but it's getting there. There are some idiosyncrasies between vendors' implementations, namely to do with auto-vectorization. AMD's compiler doesn't auto-vectorize, and nVidia's Fermi/Kepler hardware is scalar, so you end up having to write multiple code-paths for each architecture anyway to get best performance. All of the companies use highly-patched versi

          • by dj_tla ( 1048764 ) *

            That's interesting... we are looking to interface with FPGA hardware too for some things, so a heterogeneous implementation would be great. I'll keep checking in on OpenCL as time goes on. Thanks!

  • Destination: Void is a great Frank Herbert book along these lines; always thought it would make a great play. Admittedly, the last line is a bit too cheesy, though.
  • by gordona ( 121157 ) on Friday November 30, 2012 @03:57PM (#42147261) Homepage
    If the brain were simple enough to be understood, it would be too simple to understand itself. (anonymous author).
    • by ceoyoyo ( 59147 )

      That stupid quote has always bothered me. "Understanding" doesn't mean knowledge of every feature to an arbitrary level of detail. I understand the basic functioning of my couch even though it contains more particles than my brain, so my brain cannot possibly contain arbitrarily detailed knowledge of it.

    • Neither correctly recalled nor attributed, but that's not an issue these days, so don't worry too much about it.

      "If the brain were so simple we could understand it, we would be so simple we couldn't."
      Lyall Watson

      I started typing and Google showed me the way.

      On a somewhat related note, although Bing is improving, I hope it never becomes a verb...

    • If the brain were simple enough to be understood, it would be too simple to understand itself. (anonymous author).

      Which is why we just build bigger brains. o_O

  • Think how much the spammers and data miners would pay for such a simulated brain. Typical anti-spammer challenges on the web involve presentation of a picture of a sequence of characters and digits, which you must identify and repeat back as ASCII text. This simulated brain could easily accomplish that task. What challenge system will we switch to next...
    • by leonem ( 700464 )

      Greg Egan describes a world in http://en.wikipedia.org/wiki/Permutation_City [wikipedia.org] where the spam arms race has led to intelligent spam bots, and spam filters which are a simplified simulation of your own brain, which reads the spam to decide if you would want to see it yourself.

      (N.B. I wouldn't want to give the impression that's all there is to the book; it's just a very, very small part of it. It's a good read if you think you'd like a novel which explores some of the implications of simulating human brains. Th
