Ask Slashdot: DIY Computational Neuroscience?

An anonymous reader writes "Over the last couple of years, I have taught myself the basic concepts behind computational neuroscience, mainly from the book by Dayan and Abbott. I am not currently affiliated with any academic neuroscience program, but I would like to take a DIY approach and work on some real-world problems in computational neuroscience. My questions: (1) What are some interesting computational neuroscience simulation problems that an individual with a workstation-class PC can work on? (2) Is it easy for a non-academic to get the required data? (3) I am familiar with (but have not used extensively) simulators like NEURON, GENESIS, etc. Other than these and MATLAB, what other software should I get? (4) Where, online or offline, can I network with other DIY computational neuroscience enthusiasts? My own interest is in simulating epileptogenic neural networks and music cognition networks, and, perhaps more ambitiously, in creating a simulation on which the various models of consciousness can be comparatively tested."
  • Check out NEST (Score:4, Informative)

    by somepunk ( 720296 ) on Saturday November 30, 2013 @12:37PM (#45561571) Homepage
    It's open source, and integrates with Python and the whole SciPy suite. I'm not a neuroscientist, but I work in one's lab. I haven't used the software extensively, but it's installed on a Linux VM waiting for some love while we work on other things. http://www.nest-initiative.org/ [nest-initiative.org]
  • Some answers (Score:4, Informative)

    by Okian Warrior ( 537106 ) on Saturday November 30, 2013 @02:10PM (#45562211) Homepage Journal

    I research hard AI. In my view thinking through and tackling example problems is the best way to explore a topic. If you require your system to mirror our current understanding of neuroscience, then you're essentially researching the algorithms of the brain.

    If you're specifically looking into epilepsy and related topics, consider checking out William Calvin's [williamcalvin.com] website. He's an experimental neuroscientist [wikipedia.org] from the University of Washington who has written many books explaining the neurological foundations of the brain in readable form and with good detail.

    (1) What are some interesting computational neuroscience simulation problems

    Pretty much anything in AI falls under that category. Go over to Kaggle.com [slashdot.org] and check out their competitions, including past ones. Check out the Google AI lab [google.com] to see what they're doing, and browse recent publications [arxiv.org] to see what problems people are trying to solve. Ask yourself: are humans still better at the task than the computer, and could it be done better?

    Here's a video [youtube.com] of a system that uses neuron simulation (of a sort) to recognize hand-written digits. A hand-written digits dataset is in the UCI archive below.
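    To make the flavor of that concrete, here is a toy sketch (emphatically not the system in the video): a single perceptron learning to tell apart two 3x3 binary "digit" glyphs. The glyphs, the learning rate, and the names are all made up for illustration.

```python
# Toy perceptron distinguishing two 3x3 "digit" glyphs.
# 3x3 glyphs flattened to 9 pixels: a rough "1" and a rough "0".
ONE  = [0,1,0, 0,1,0, 0,1,0]
ZERO = [1,1,1, 1,0,1, 1,1,1]

def predict(w, b, x):
    # Weighted sum plus bias, thresholded at zero.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0   # 1 -> "one", 0 -> "zero"

def train(samples, epochs=20, lr=0.1):
    # Classic perceptron rule: nudge weights by the prediction error.
    w, b = [0.0] * 9, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train([(ONE, 1), (ZERO, 0)])
print(predict(w, b, ONE), predict(w, b, ZERO))  # expect: 1 0
```

    Two linearly separable patterns, so the perceptron rule converges in a couple of epochs; real handwritten digits obviously need far more than this, but the training loop is the same shape.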

    (2) Is it easy for a non-academic to get the required data?

    Generally, yes. UCI has a repository [uci.edu] of machine-learning datasets. The researchers supporting Kaggle [slashdot.org] competitions frequently release their data.

    I've found that researchers are generally approachable, and will give away copies of their data (I have 4 datasets from researchers). As a personal anecdote, last week a researcher from this very forum sent me his dataset of Mars altitude images [wikipedia.org] - I'm trying to come up with an algorithm to recognize craters.

    (3) I am familiar with (but not used extensively) simulators like Neuron, Genesis etc. Other than these and Matlab, what other software should I get?

    In my view, pick a computer language with a wide support network of libraries, and code things from scratch. Something like Perl or R. At some point you will want to break open the box and see what's actually happening inside, and familiarity with the system (having constructed it yourself) is key. You will want to insert trace statements, print out intermediate results, and so on. Most of the pre-built systems don't have what you will ultimately want, and building simulation objects isn't terribly hard.
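    To illustrate the "code it from scratch" advice, here is a minimal leaky integrate-and-fire neuron in plain Python (Python only for brevity; the same few dozen lines port directly to Perl or R). All parameter values are illustrative, and the commented-out print shows where a trace statement would go.

```python
# A leaky integrate-and-fire neuron coded from scratch (forward
# Euler integration). Because we own every line, we can drop a
# trace print anywhere to watch the membrane potential evolve.

def simulate_lif(i_ext=2.0, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        t = step * dt
        dv = (-(v - v_rest) + i_ext) / tau   # leak plus input current
        v += dv * dt
        if v >= v_thresh:                    # threshold crossed:
            spikes.append(t)                 # record spike time and
            v = v_reset                      # reset the membrane
            # print(f"trace: spike at t={t:.1f} ms")  # trace hook
    return spikes

spikes = simulate_lif()
print(len(spikes), "spikes in 100 ms")
```

    With a constant suprathreshold current the neuron fires regularly; halve `i_ext` and it never reaches threshold, which is exactly the kind of behavior a trace statement makes obvious.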

    (4) Where online or offline, can I network with other DIY Computational Neuroscience enthusiasts?

    Please let me know if you find any (by posting a response).

    I've found that most AI enthusiasts are really "big data" enthusiasts, and most of them are all about business rather than AI. The IRC AI chatrooms [irc] are all but dead, and most of what is there are students asking for help with their homework. (Although to be fair, the lurkers there know everything about AI and can answer questions and make suggestions if you're stuck.)

    The NEAI meetup [meetup.com] in Cambridge is mostly spectators - people who want to find out about AI or how to use AI ("how can I use AI to improve the performance of my financial company?"). I hear there's an AI meetup out on the West coast that's pretty good.

    See if there's a meetup [meetup.com] in your area for something related, or start one and see if anyone shows up.

  • by upontheturtlesback ( 2605689 ) on Saturday November 30, 2013 @02:13PM (#45562225)
    I have a recent PhD in neural computation, though from a functional cognitive and language-modeling perspective rather than a neuroanatomical modeling perspective -- so it may be a different area than you're interested in. From a high-level perspective, neural computation has moved a lot in terms of scale in the past two decades (simulations can have millions of nodes), and it has moved a lot in terms of modeling the processes of individual neurons and neurochemistry. Very high-level functional mapping work has also moved a good deal, with fMRI, EEG, and MEG becoming relatively inexpensive and very common techniques in cognitive experiments.

    One area that, in my opinion, has moved very little in the past 20 years is the ability of neural networks to learn non-trivial, domain-general representations and processes, and to generalize from those representations and processes to novel (untrained) instances. In the late '80s, after connectionism had made a return with Rumelhart and McClelland's popularization of the backpropagation algorithm and demonstration of its utility in a number of tasks earlier in the decade, a good deal of the literature demonstrated very basic limitations and failures of these systems to generalize to untrained instances, or to move away from toy problems. Fodor and Pylyshyn's "Connectionism and Cognitive Architecture" is a classic paper from that era, and Pinker wrote a number of language-specific criticisms as well. Stefan Frank has the most recent long-standing research program in this area that I'm aware of, and his earlier papers have good literature reviews that can further guide one's background reading. There have been some limited demonstrations of systematicity with different architectures (like echo state networks), and comparatively little work on storing representations and processes simultaneously in a network, but so far these are long-standing and fundamental issues that need revitalization.
    When convincing demonstrations do arise, they likely won't need more than a desktop to run, since they will be demonstrations of learning algorithms and architectures, not scale. For non-neural folks: classical neural network architectures are essentially very good at pattern matching and classification (e.g. being trained on handwriting and classifying each letter as one of a set of known letters, A-Z, that the network has seen many hundreds of instances of before), or at things that involve a large set of specific rules (if X then Y).

    They're much less good at things that involve domain-general computation -- learning both representations and processes and storing them in the same system (e.g. reading a paragraph and summarizing it, answering a question about it, or writing a sentence describing a simple scene). That's not to say you couldn't make a neural system that did this -- you could sit down and hard-code an architecture that looked something like a von Neumann CPU and program it to play chess or be a word processor, if you really wanted -- but the idea is to develop a learning algorithm that, by virtue of exposure to the world, crafts such an architecture on its own. The idea is that, after years of exposure, the world will progressively "program" the computational/representational substrate that is the brain to recognize objects, concepts, and words, put them together into simple event representations, and do simple reasoning with them, much like an infant.

    I hope that helps. Of course, all of this is written by someone interested in developmental knowledge representation and language processing, so it may be a completely different question than the one you wanted answered. Best wishes.
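    As a concrete illustration of the pattern-matching strength described above, here is a tiny Hopfield-style network (a sketch, with an arbitrary made-up pattern): it stores one 8-bit pattern with the Hebbian rule and then recovers it from a corrupted cue.

```python
# Tiny Hopfield-style associative memory: store one pattern,
# then complete it from a noisy version.
N = 8
stored = [1, -1, 1, 1, -1, -1, 1, -1]          # pattern in {-1, +1}

# Hebbian weights: W[i][j] = x_i * x_j, with zero diagonal.
W = [[0 if i == j else stored[i] * stored[j] for j in range(N)]
     for i in range(N)]

def recall(state, sweeps=5):
    # Asynchronous in-place updates: each unit takes the sign of
    # its weighted input from the current state.
    s = list(state)
    for _ in range(sweeps):
        for i in range(N):
            h = sum(W[i][j] * s[j] for j in range(N))
            s[i] = 1 if h >= 0 else -1
    return s

noisy = list(stored)
noisy[0], noisy[3] = -noisy[0], -noisy[3]      # flip two bits
print(recall(noisy) == stored)                 # pattern completed?
```

    This is exactly the "classification of a known, well-trained pattern" regime; the systematicity problems discussed above start where a network must compose stored pieces into something it was never trained on.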
