Spaun: a Large-Scale Functional Brain Model

New submitter dj_tla writes "A team of Canadian researchers has created a state-of-the-art brain model that can see, remember, think about, and write numbers. The model has just been discussed in a Science article entitled 'A Large-Scale Model of the Functioning Brain.' There have been several popular press articles, and there are videos of the model in action. Nature quotes Eugene Izhikevich, chairman of Brain Corporation, as saying, 'Until now, the race was who could get a human-sized brain simulation running, regardless of what behaviors and functions such simulation exhibits. From now on, the race is more [about] who can get the most biological functions and animal-like behaviors. So far, Spaun is the winner.' (Full disclosure: I am a member of the team that created Spaun.)"

  • by perceptual.cyclotron ( 2561509 ) on Friday November 30, 2012 @08:40PM (#42150475)

    I was actually about to upmod because in general I agree (and for the record, I have a PhD in cog neuro); based on the summary and the Nature write-up, I was underwhelmed. But skimming the Science paper, these guys have legitimately done something that really hasn't been done before. The model gets a picture that tells it which task it's supposed to do, preserves that context while receiving the task-relevant input, works out the answer to the problem (and the problems are, computationally, pretty wide-ranging), and writes the answer (i.e., it's not being read out from the state of a surface layer and transcoded into a human-readable result). All of this is done with reasonably realistic spiking neurons (with Eliasmith, these are probably single-compartment LIF; see the sketch below), configured in a gross-scale topology commensurate with what we know about neuroanatomy and connectivity.

    Is this going to unleash a new revolution in AI and cybernetics? Nope. But it's definitely both impressive and progressive for the field. Both Eliasmith and Izhikevich are the real deal. And while we certainly don't understand the brain well enough to make truly general-intelligence models, this kind of work is precisely the sort of step we need to be taking – scaling down the numbers but trying to reproduce the known connectivity is a lot more useful than building 10^10 randomly connected McCulloch-Pitts neurons (a McCulloch-Pitts unit is sketched below for contrast)...
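
    To be concrete about what "single-compartment LIF" means: the membrane voltage leaks toward rest, integrates its input current, and emits a spike whenever it crosses a threshold, then resets and sits out a short refractory period. Here's a minimal Python sketch; the time constants and threshold are generic textbook values, not parameters from the Spaun paper.

        import numpy as np

        # Illustrative single-compartment leaky integrate-and-fire (LIF) neuron.
        # All constants are generic defaults, not taken from the Spaun model.
        dt = 0.001        # simulation timestep (s)
        tau_rc = 0.02     # membrane time constant (s)
        tau_ref = 0.002   # refractory period (s)
        v_th = 1.0        # spike threshold (normalized membrane voltage)

        def simulate_lif(input_current):
            """Return a 0/1 spike train for an array of input currents."""
            v, refractory = 0.0, 0.0
            spikes = np.zeros_like(input_current)
            for i, J in enumerate(input_current):
                if refractory > 0.0:          # neuron is silent right after a spike
                    refractory -= dt
                    continue
                v += dt * (J - v) / tau_rc    # leaky integration: dv/dt = (J - v) / tau_rc
                if v >= v_th:                 # threshold crossing: spike, reset, go refractory
                    spikes[i] = 1.0
                    v = 0.0
                    refractory = tau_ref
            return spikes

        # A constant suprathreshold current produces a regular spike train.
        print(int(simulate_lif(np.full(1000, 1.5)).sum()), "spikes in 1 s")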
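
    And for contrast with those 10^10 randomly connected McCulloch-Pitts neurons: a McCulloch-Pitts unit has no dynamics at all, just a weighted sum pushed through a hard threshold. The weights and threshold below are arbitrary illustrative values, wired up as an AND gate.

        import numpy as np

        # A McCulloch-Pitts unit: outputs 1 iff the weighted input sum reaches the threshold.
        # No membrane dynamics, no spike timing; weights/threshold here are arbitrary.
        def mcculloch_pitts(inputs, weights, threshold):
            return int(np.dot(inputs, weights) >= threshold)

        # Wired as a two-input AND gate (threshold only reached when both inputs are 1).
        for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
            print(x, "->", mcculloch_pitts(np.array(x), weights=np.array([1, 1]), threshold=2))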

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...