
The New AI: Where Neuroscience and Artificial Intelligence Meet 209

An anonymous reader writes "We're seeing a new revolution in artificial intelligence known as deep learning: algorithms modeled after the brain have made amazing strides and have been consistently winning both industrial and academic data competitions with minimal effort. 'Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like. In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own — a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.' Are we ready to blur the line between hardware and wetware?"
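As a minimal sketch of the layered idea in the summary (in Python with NumPy; the weights are random stand-ins rather than trained values, so this illustrates only the wiring, not the learning):

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, weights, biases):
        # One dense layer: weighted sum plus bias, then a nonlinearity.
        return np.maximum(0.0, weights @ x + biases)

    image = rng.random(64)                            # a flattened 8x8 "image"
    w1, b1 = rng.normal(size=(16, 64)), np.zeros(16)  # edge-like detectors
    w2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # shape-like detectors

    edges = layer(image, w1, b1)    # first layer reacts to simple patterns
    shapes = layer(edges, w2, b2)   # second layer composes them
    print(shapes)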
  • Geoffrey Hinton (Score:5, Informative)

    by IntentionalStance ( 1197099 ) on Tuesday May 07, 2013 @08:15PM (#43660281)
    Is referenced in the article as the father of neural networks.

    He has a course on them at coursera that is pretty good.

    https://www.coursera.org/course/neuralnets [coursera.org]

  • by Dorianny ( 1847922 ) on Tuesday May 07, 2013 @08:20PM (#43660329) Journal
    Drosophila melanogaster is commonly known as the fruit fly. Its brain has about 100,000 neurons. The human brain averages 85,000,000,000.
  • Re:Geoffrey Hinton (Score:5, Informative)

    by Baldrson ( 78598 ) * on Tuesday May 07, 2013 @08:50PM (#43660535) Homepage Journal
    I've had "Talking Nets: An Oral History of Neural Networks" for several weeks on interlibrary loan. It interviews 17 "fathers of neural nets" (including Hinton) and it isn't even a complete set of said "fathers".

    Look, this stuff goes back a long way and has had some hiccups along the way, like the twenty-year period when the scientific establishment treated it with little more respect than it has shown cold fusion over the last twenty years. There are plenty of heroics to go around.

    I can recommend the book highly.

  • by Baldrson ( 78598 ) * on Tuesday May 07, 2013 @08:55PM (#43660559) Homepage Journal
    The claim of "winning both industrial and academic data competitions with minimal effort" might be more impressive if those competitions included the only provably rigorous test of general intelligence:

    The Hutter Prize for Lossless Compression of Human Knowledge [hutter1.net]

    The last time anyone improved on that benchmark was 2009.
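    To make the benchmark concrete, here is a toy Python illustration of the compression-as-prediction idea behind the prize: the better a model predicts text, the fewer bits it needs to encode that text losslessly. zlib is only a weak stand-in for the specialized compressors the prize actually ranks, and the sample text is made up.

        import zlib

        text = b"the quick brown fox jumps over the lazy dog " * 100
        compressed = zlib.compress(text, 9)  # level 9 = maximum compression
        print(f"raw: {len(text)} bytes, compressed: {len(compressed)} bytes")
        print(f"bits per character: {8 * len(compressed) / len(text):.3f}")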

  • by Slicker ( 102588 ) on Tuesday May 07, 2013 @09:00PM (#43660601)

    Neural nets were traditionally based on the old Hodgkin and Huxley models and then twisted for direct application to specific objectives, such as stock market prediction. In the process they veered from an only very vague notion of real neurons to something increasingly fictitious.

    Hopefully, the AI world is on the edge of moving away from continuously beating its head against the same brick walls in the same ways while giving itself pats on the head. Hopefully, we will realize that human-like intelligence is not a logic engine, and that conventional neural nets are not biologically valid and possess numerous fundamental flaws.

    Rather--a neuron draws new correlating axons to itself when it cannot reach threshold (-55 mV from a resting potential of -70 mV) and weakens or destroys them when over threshold. In living systems, neural potential is almost always very close to threshold--it bounces a tiny bit over and under. Furthermore, inhibitory connections are drawn in from non-correlating axons: if two neural pathways each excite only when the other does not, then each will come to inhibit the other. This enables contexts to shut off irrelevant possible perceptions. For example, if you are in the house, you are not going to get rained on; more likely, somebody is squirting you with a squirt gun.
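    A rough Python sketch of that growth rule as I read it (this is the parent's proposed model rather than a standard algorithm, and all the numbers are illustrative):

        REST_MV, THRESHOLD_MV = -70.0, -55.0

        class GrowingNeuron:
            def __init__(self):
                self.weights = {}  # input id -> synaptic drive in mV

            def step(self, active_inputs, candidate_inputs):
                drive = sum(self.weights.get(i, 0.0) for i in active_inputs)
                potential = REST_MV + drive
                if potential < THRESHOLD_MV:
                    # Below threshold: draw in a new correlating axon.
                    for i in candidate_inputs:
                        if i in active_inputs and i not in self.weights:
                            self.weights[i] = 5.0
                            break
                else:
                    # Over threshold: weaken connections, destroy dead ones.
                    for i in list(self.weights):
                        self.weights[i] -= 1.0
                        if self.weights[i] <= 0.0:
                            del self.weights[i]
                return potential >= THRESHOLD_MV

        n = GrowingNeuron()
        for _ in range(10):
            fired = n.step({"a", "b", "c"}, {"a", "b", "c"})
        # Recruits inputs until it can fire, then pruning pulls it back
        # just under threshold.
        print(n.weights, fired)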

    Also--a neuron perpetually excited for too long shuts itself off for a while. We love a good song, but hearing it too often makes us sick of it, at least for a while... like Michael Jackson in the late 1980s.
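    In the same toy style, that shut-off effect might look like this (the fatigue and recovery durations are arbitrary illustration values, not biological constants):

        class HabituatingNeuron:
            def __init__(self, fatigue_limit=5, recovery=10):
                self.excited_for = 0   # consecutive excited steps
                self.off_for = 0       # steps left in the silent period
                self.fatigue_limit = fatigue_limit
                self.recovery = recovery

            def fire(self, excited):
                if self.off_for > 0:   # still sick of the song
                    self.off_for -= 1
                    return False
                self.excited_for = self.excited_for + 1 if excited else 0
                if self.excited_for > self.fatigue_limit:
                    self.off_for = self.recovery  # shut off for a while
                    self.excited_for = 0
                    return False
                return excited

        hn = HabituatingNeuron()
        print([hn.fire(True) for _ in range(12)])  # five True, then False while it recovers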

    And very importantly--signal streams that disappear but recur after increasing time lapses stay potentiated longer; their potentiation dissipates more slowly. After 5 pulses with a pause between them, a new receptor is brought in from the same axon as an existing one, which slows dissipation. This happens again after each further run of 5 pulses, except that the time lapse between runs must keep increasing. It falls in line with the scale found on the Wikipedia page for Graduated Interval Recall--time lapses increasing exponentially, by a factor of five each time. Take a look at it and do the math: it matches what is seen in biology, even though that scale was developed in the 1920's.
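    The "do the math" part is easy to check in a few lines of Python. Assuming "exponentially increasing" means each gap is five times the last (my reading of the parent, to be compared against the scale on the Wikipedia page):

        def recall_gaps(first_gap=5.0, steps=6, factor=5.0):
            """Gaps between reviews, in seconds, each five times the last."""
            gaps = []
            gap = first_gap
            for _ in range(steps):
                gaps.append(gap)
                gap *= factor
            return gaps

        # 5 s, 25 s, ~2 min, ~10 min, ~52 min, ~4.3 h ...
        print(recall_gaps())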

    I have a C++ neural model that does this. I am also mostly done with a JavaScript model (employing techniques for vastly better performance) running on Node.js.

  • by Hentes ( 2461350 ) on Tuesday May 07, 2013 @09:15PM (#43660695)

    Neural networks are certainly not new, or groundbreaking. We already know their strengths and weaknesses, and they aren't a universal solution to every AI problem.
    First of all, while they have been inspired by the brain, they don't "mimic" it. Neural networks are based on some neurons having negative weights, reversing the polarity of the signal, which doesn't happen in the brain. They are also linear, which bears similarities to some simple parts of the brain, but are very far from modeling its complex nonlinear processing. Neural networks are useful AI tools, but aren't brain models.
    Second, neural networks are only good at tasks where they have to react immediately to an input. Originally, neural networks didn't have memory, and while it's possible to add it, it doesn't fit naturally into the system and is hard to work with. While neural networks make good reflex machines, even simple stateful tasks like a linear or cyclic multi-step motion are nontrivial to implement in them. That is why they are most effective in combination with other methods, rather than being declared a universal solution.
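    Both points are easy to see in a minimal Python unit of the kind being described here: a weight may be negative, flipping an input's influence, and the output depends only on the current input, so nothing is remembered between calls.

        import numpy as np

        def unit(x, w):
            # Weighted sum of inputs; a negative weight inverts that
            # input's influence on the output.
            return float(np.dot(w, x))

        w = np.array([0.8, -1.5, 0.3])
        x = np.array([1.0, 1.0, 0.0])
        print(unit(x, w))  # same x always gives the same output: no state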

  • by Muad'Dave ( 255648 ) on Wednesday May 08, 2013 @08:32AM (#43663907) Homepage

    Aka 'Purple Drank' and 'Sizzurp'. That stuff killed a lot of its early proponents [wikipedia.org]. The original recipe contained codeine and promethazine, not DXM.
