
Why Not Every New "Like the Brain" System Will Prove Important

An anonymous reader writes "There is certainly no shortage of stories about AI systems described as being 'like the brain'. This article takes a critical look at those claims and at just what 'like the brain' means. The conclusion: while not a lie, the catch-phrase isn't very informative and may not mean much given our limited understanding of how the brain works. From the article: 'Surely these claims can't all be true? After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so. We may have a lot of outstanding questions about how it works, but work a certain way it must. But here's the thing: this "like the brain" label usually isn't a lie — it's just not very informative. There are many ways a system can be like the brain, but only a fraction of these will prove important. We know so much that is true about the brain, but the defining issue in theoretical neuroscience today is, simply put, we don't know what matters when it comes to understanding how the brain computes. The debate is wide open, with plausible guesses about the fundamental unit, ranging from quantum phenomena all the way to regions spanning millimeters of brain tissue.'"
  • by AK Marc ( 707885 ) on Thursday May 22, 2014 @06:53PM (#47070963)
    The brain runs without compiling, and rewrites its own source code and hardware while in use. It never crashes. Nothing has ever come close, and the systems that pretend to emulate it have always failed miserably.
  • by msobkow ( 48369 ) on Thursday May 22, 2014 @07:02PM (#47071031) Homepage Journal

    "Like the brain" is a fundamentally wrong-headed approach in my opinion. Biological systems are notoriously inefficient in many ways. Rather than modelling AI systems after the way "the brain" works, I think they should be spending a lot more time talking to philosophers and meditation specialists about how we *think* about things.

    To me it makes no sense to structure a memory system as inefficiently as the brain's, for example, with all its tendencies toward forgetfulness, omission, and random irrelevant "correlations". It makes far more sense to structure purely synthetic "memories" using database technologies of various kinds.

    Sure, biological systems employ some interesting shortcuts in their processing, but always at a sacrifice in accuracy. We should be striving for systems that are *better* than the biological, not just similar ones in silicon.
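
    The contrast the comment draws can be sketched in a few lines of Python. This is a hypothetical illustration (not from the article or comment): an exact dictionary stands in for "database-style" synthetic memory, while a similarity-based lookup stands in for lossy, brain-like associative recall that can surface loosely related matches.

    ```python
    # Hypothetical sketch: exact "database" recall vs. lossy associative recall.
    # The store, keys, and similarity measure are all illustrative assumptions.

    exact_memory = {"capital_of_france": "Paris", "boiling_point_c": 100}

    def exact_recall(key):
        """Database-style lookup: a key retrieves its value perfectly or not at all."""
        return exact_memory.get(key)

    def associative_recall(cue):
        """Brain-like lookup: returns the stored key most similar to the cue.
        Tolerant of imperfect cues, but can also surface irrelevant matches."""
        def overlap(a, b):
            # Jaccard similarity over character sets -- a crude stand-in for
            # whatever similarity an associative memory actually computes.
            return len(set(a) & set(b)) / len(set(a) | set(b))
        return max(exact_memory, key=lambda k: overlap(cue, k))

    print(exact_recall("capital_of_france"))     # exact hit
    print(exact_recall("capital_france"))        # exact miss: returns None
    print(associative_recall("capital_france"))  # fuzzy match despite the typo
    ```

    The trade-off in the comment is visible here: the exact store never confuses two keys but fails completely on an imperfect cue, while the associative version degrades gracefully at the cost of sometimes returning the wrong memory.
    
    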

"Remember, extremism in the nondefense of moderation is not a virtue." -- Peter Neumann, about usenet