Biotech AI Technology

Why Not Every New "Like the Brain" System Will Prove Important

An anonymous reader writes "There is certainly no shortage of stories about AI systems described as working 'like the brain'. This article takes a critical look at those claims and at just what 'like the brain' means. The conclusion: while not a lie, the catch-phrase isn't very informative, and may not mean much given how little we understand about how the brain works. From the article: 'Surely these claims can't all be true? After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so. We may have a lot of outstanding questions about how it works, but work a certain way it must. But here's the thing: this "like the brain" label usually isn't a lie — it's just not very informative. There are many ways a system can be like the brain, but only a fraction of these will prove important. We know so much that is true about the brain, but the defining issue in theoretical neuroscience today is, simply put, we don't know what matters when it comes to understanding how the brain computes. The debate is wide open, with plausible guesses about the fundamental unit, ranging from quantum phenomena all the way to regions spanning millimeters of brain tissue.'"
  • by Archangel Michael ( 180766 ) on Thursday May 22, 2014 @05:52PM (#47070959) Journal

    that function "like a brain on the Internet"

  • The brain runs without compiling, and rewrites its own source code and hardware while in use. It never crashes. Nothing man-made has ever come close, and the systems that pretend to emulate it have always failed miserably.
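    As a toy illustration of that "rewrites its own source code while running" claim, here is a minimal Python sketch (the function and its replacement text are invented for this example; this is not a claim about how any real system works):

      # Toy illustration, not a brain model: a program that rebinds one of
      # its own functions from new source text while it is running.
      def greet():
          return "hello"

      def rewrite(func_name, new_source):
          """Compile new source text and rebind the named function."""
          namespace = {}
          exec(new_source, namespace)   # "recompile" on the fly
          globals()[func_name] = namespace[func_name]

      print(greet())                    # hello
      rewrite("greet", "def greet():\n    return 'bonjour'")
      print(greet())                    # bonjour: same name, new behavior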
    • by 50000BTU_barbecue ( 588132 ) on Thursday May 22, 2014 @06:07PM (#47071065) Journal

      " It never crashes"

      Ever dealt with a schizophrenic or someone in the throes of a manic episode? Or just a drunk?

      • You can blame a lot of that stuff on faulty/damaged "hardware".
        A true "crash" would be something like PTSD, where there is nothing wrong with the brain (or body) itself, but the software inside the brain has been corrupted by some powerful external input.

      • by AK Marc ( 707885 )
        Giving bad answers isn't the same thing. The Intel chips that gave wrong answers weren't labeled "permanently crashed". A crash is a reset. It takes significant physical damage to induce anything like a reset. Like a lobotomy. Or some specific cases of mental illness, but those are very rare.
      • by Anonymous Coward

        A better analogy would be epilepsy/seizures, but yeah, brains do crash.

    • The brain runs without compiling, and re-writes its own source code and hardware while under use.

      So it's a PHP script?

  • by msobkow ( 48369 ) on Thursday May 22, 2014 @06:02PM (#47071031) Homepage Journal

    "Like the brain" is a fundamentally wrong-headed approach in my opinion. Biological systems are notoriously inefficient in many ways. Rather than modelling AI systems after the way "the brain" works, I think they should be spending a lot more time talking to philosophers and meditation specialists about how we *think* about things.

    To me it makes no sense to structure a memory system as inefficiently as the brain's, for example, with all its tendency toward forgetfulness, omission, and random irrelevant "correlations". It makes far more sense to structure purely synthetic "memories" using database technologies of various kinds.

    Sure, biological systems employ some interesting shortcuts in their processing, but always at a sacrifice in accuracy. We should be striving for systems that are *better* than the biological, not just silicon copies of it.
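    To make the contrast concrete, here is a minimal sketch of a "purely synthetic memory" using Python's built-in sqlite3 module (the schema and facts are invented for illustration): recall is exact and queryable, with none of the forgetfulness or stray correlations described above.

      # Minimal sketch: an exact, queryable "synthetic memory" in SQLite.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE memories (subject TEXT, fact TEXT)")
      db.executemany("INSERT INTO memories VALUES (?, ?)", [
          ("paris", "capital of France"),
          ("paris", "hosted the 1900 Summer Olympics"),
      ])

      # Exact recall: nothing forgotten, nothing spuriously associated.
      for (fact,) in db.execute(
              "SELECT fact FROM memories WHERE subject = ?", ("paris",)):
          print(fact)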

    • by crgrace ( 220738 )

      "Like the brain" is a fundamentally wrong-headed approach in my opinion. Biological systems are notoriously inefficient in many ways. Rather than modelling AI systems after the way "the brain" works, I think they should be spending a lot more time talking to philosophers and meditation specialists about how we *think* about things.

      What you're suggesting has been the dominant paradigm in AI research for most of the 60-70 odd years the field has existed. Some people have always thought we should model "thinking" processes, and others thought we should model neural networks. At various points one or the other approach has been dominant.

      To me it makes no sense to structure a memory system as inefficiently as the brain's, for example, with all its tendency toward forgetfulness, omission, and random irrelevant "correlations". It makes far more sense to structure purely synthetic "memories" using database technologies of various kinds.

      I have to disagree that it makes no sense to structure a memory system as "inefficiently" as the brain's, because inefficiency can mean different things. The brain is extraordinarily power efficient and that is

    • It's just a different (and interesting) way of computing. We already have a number of different specialized processors. The inevitable NeuralPUs will excel at pattern recognition, classification, decision making, extracting salient patterns from input, etc.

      As with other specialized processors, we can already emulate this type of computing in software (and thankfully do so in many fields). Having well-designed hardware processors will make this type of computing a commodity (as the other specialized
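      As a concrete example of emulating this kind of computing in software, here is a toy perceptron learning the AND pattern (the data and learning rate are invented for illustration); this is the sort of pattern-classifying workload a dedicated neural processor would accelerate:

        # Toy neural-style pattern classifier: a perceptron learning AND.
        samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b, lr = [0.0, 0.0], 0.0, 0.1

        for _ in range(20):                  # a few passes over the data
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out           # classic perceptron update
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err

        # All four inputs now classify correctly: [0, 0, 0, 1]
        print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
               for (x1, x2), _ in samples])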

    • by iamacat ( 583406 )

      We don't want to structure it at all. That's too big a task for even the whole of humanity. Instead, we want the system to structure itself based on its experiences, including by modifying its own hardware for subsequent fast processing.

      But at that point it will be exactly like the brain on a philosophical level, albeit probably made of different materials. Doubtless there are many possible optimizations to remove historical evolutionary baggage and serve only current requirements.

      • Give it a false childhood, as realistic as possible, and when it reaches "adulthood", you reveal to it that it's actually an AI created to study climate and sociological forecasts.

        And name it Perry Simm

    • The human brain is a wonder of engineering. While it might in principle be possible to construct a computing device with fewer of the flaws you mention, I strongly suspect that it will not be possible to do it without giving up either size, efficiency or latency (most likely all of those).

      Your complaints regarding human memory demonstrate an ignorance of both engineering and neuroscience. Declarative memories are stored temporarily in the hippocampus, and some are over time consolidated into the neocortex.

    • It's fundamentally bullshit because we don't even really know how memory works yet, and we keep finding more complexity everywhere we look. We don't know enough about the brain to yet claim that we're building computers which function like it does.

    • by jythie ( 914043 )
    Both approaches are important and, in real AI research, both tend to be used to various degrees. What you are describing is 'GOFAI': Good Old-Fashioned Artificial Intelligence. There are domains where it does better than biologically inspired methods like neural nets and genetic algorithms, and places where it does worse.
      • by msobkow ( 48369 )

        Actually, my thinking is more along the lines of it being better to implement expert systems for various subject domains using a common code structure/format, one that can be extended to use the same inference/logic engines for multiple subjects. The goal would be a system where you can flexibly define the data required for each of the subject domains, coalesce them into an overall "intelligence" that can deal with those various subjects, and stop with the fantasy of a general purpose intelligence that can learn as we do.
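        A minimal sketch of that "common engine, per-domain rule sets" structure in Python (the rules and facts are invented for illustration): one forward-chaining inference loop, reused unchanged across subject domains.

          # One shared inference engine; domain knowledge lives in rule sets.
          def infer(facts, rules):
              """Apply (premises -> conclusion) rules until nothing new appears."""
              known = set(facts)
              changed = True
              while changed:
                  changed = False
                  for premises, conclusion in rules:
                      if set(premises) <= known and conclusion not in known:
                          known.add(conclusion)
                          changed = True
              return known

          medicine = [(("fever", "rash"), "suspect measles")]
          weather = [(("warm", "humid"), "convection"),
                     (("convection",), "storms likely")]

          print(infer({"fever", "rash"}, medicine))  # adds 'suspect measles'
          print(infer({"warm", "humid"}, weather))   # chains to 'storms likely'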

        • The problem is that I don't believe pattern matching is an effective or intuitive means of specialized or generalized intelligence processing. It's good for simulations of the brain, but a simulation is not going to automagically develop intelligence. There is far too much learning and training involved, from birth to death, in the development of a human's intelligence and knowledge, so a generalized intelligence based on pattern matching is going to be implicitly at the mercy of the quality of its education.

          Sure, but let's be honest here: whatever we're going to engineer will have a huge potential to acquire intelligence much faster than we have attained it. I really feel that the notion of human exceptionalism when it comes to intelligence is 99% wishful thinking. We collectively want to believe that we are somehow special and that our level of cognitive processing is somehow practically unattainable for artificial systems, but the reality is that our cognitive capabilities are the result of a longterm proces

  • Because nothing is "like the brain"

  • by bug_hunter ( 32923 ) on Thursday May 22, 2014 @06:59PM (#47071319)
    I remember watching a replay of a news piece from when computers first started replacing typewriters in the late '70s.
    "These computers, using many of the same techniques as the human brain, can help increase efficiency," the newsreader said over footage of a secretary running a spell check.

    I still like Dijkstra's remark that the question "Can a computer think?" is like asking "Can a submarine swim?". To which I assume the answer is "sorta: the end result is the same, but achieved by different means".
    • It's an argument of semantics. We use words to point at underlying concepts, and different people use the same words to mean slightly different things. To think of all the hours wasted on arguments about what a "soul" is when it's really just a word that can be associated with various underlying concepts...

      "Like a brain" might mean shaped like a brain to one person, able to fit inside the same space.. while to another it might only be "like a brain" if it can process input and output the same way, regardl
      • To think of all the hours wasted on arguments about what a "soul" is when it's really just a word that can be associated with various underlying concepts...

        Why are semantic arguments necessarily "wasted" time? If different people have different perspectives, discussing them can often lead to new insights for both of them -- if they are open to thinking outside their own worldview. At a minimum, a collision between these different perspectives can lead to a realization that a word could mean A or B, i.e., it doesn't mean the same thing to everyone. It can also lead to a refinement of ideas -- perhaps the recognition that A and B both share elements of C in t

        • Why do my mod points always expire right before a comment that really deserves them?!?

          Your comment gets at one of the most interesting problems of the philosophy of science: how humans use metaphors, usually taken from social life and other areas of human experience, to understand the world around us. It is not just for explaining to lay people; some scholars (and I :-)) argue that in order for us to "understand" anything at all we need to employ metaphors drawn from social experience that ca

        • Why are semantic arguments necessarily "wasted" time? If different people have different perspectives, discussing them can often lead to new insights for both of them

          Or it could lead to the Amish.

  • by bouldin ( 828821 ) on Thursday May 22, 2014 @07:18PM (#47071429)

    This is the first thing you learn if you study biologically inspired design.

    Don't just mimic the form of the system. Understand what makes the system work (how it functions and why that is effective), and copy that.

    It's like early attempts at flying machines that flapped big wings but of course didn't fly. The important thing wasn't the flapping wings, it was lift.

    There are important principles behind what makes the brain work, but it's not as simple as building a neural network.
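    For what it's worth, the principle that mattered is captured by the standard lift equation:

      L = (1/2) * rho * v^2 * S * C_L

    where rho is air density, v is airspeed, S is wing area, and C_L is the lift coefficient. Lift grows with the square of airspeed, and nothing in the formula rewards flapping.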

    • by TheLink ( 130905 )

      Understand what makes the system work (how it functions and why that is effective), and copy that.

      From what I see, scientists don't even understand how single-celled creatures think. And yes, single-celled creatures do think/"think": http://soylentnews.org/comment... [soylentnews.org]
      Note that these single-celled creatures build shells for themselves. Each species has a distinctive shell style! And as per the link, some don't reproduce until they have gathered enough shell material for the daughter cell (when they split, both cells divide the shell material and build their own shells). How the heck do they figure that ou

    • If you think that cyberneticians are just mimicking designs without comprehending the fundamental biological processes involved, then you must not understand that cybernetics isn't limited to computer science. In fact it began in business, analyzing the logistics of information flow. That these general principles also apply to emergent intelligence means more biologists need to study Information Theory, not that cyberneticians are ignorant of biology (hint: we probably know more about it than most biologists,

  • Before designing a system you want to know which problems it solves ('why?'), and they must be tangible. Here are some aspects that would be nice to address: code reuse to save time, fewer bugs, testability, security, stability, high availability, maintainability... Not all of these problems are solved well in humans.
  • After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so.

    Ugh, Creationists. No, that's wrong. Evolution is simply the application of environmental bias to chaos -- the same fundamental process by which complexity naturally arises from entropy. Look, we jabbed some wires in a rodent head and hooked up an infrared sensor. [wired.co.uk] Then they became able to sense infrared and use the infrared input to navigate. That adaptation didn't take millions of years. What an idiot. Evolution is a form of emergence, but it is not the only form of emergence; this process operates at all levels of reality and all scales of time. Your puny brains and insignificant lives give you a small window within which to compare the universe to your experience, and thus you fail to realize that the neuroplasticity of brains adapting to new inputs is really not so different a process from droplets of condensation forming rain, or molecules forming amino acids when energized and cooled, or stars forming, or matter being produced, all via similar emergent processes.

    The structure of self replicating life is that chemistry which propagates more complex information about itself into the future faster. If you could witness those millions of years in time-lapse then you'd see how adapting to IR inputs isn't really much different at all, just at a different scale. Yet you classify one adaptation as "evolution" and the other "emergence" for purely arbitrary reasons: The genetically reproducible capability of the adaptation -- As if we can't jab more wires in the next generation's heads from here on out according to protocol. Your language simply lacks the words for most basic universal truths. I suppose you also draw a thick arbitrary line between children and their parents -- one that nature doesn't draw else "species" wouldn't exist. The tendencies of your pattern recognition and classification systems can hamper you if you let your mind run rampant. I believe you call this "confirmation bias".

    Humans understand very well what their neurons are doing now at the chemical level. It's now known how neurotransmitters are transported by motor proteins [youtube.com] in vesicles across neurons along microtubules, in a very mechanical fashion that applies a bias to entropy to produce action within cells. The governing principles of cognition are being discovered by neurologists and abstracted by cybernetics to gain the fundamental understanding of cognition that philosophers have always craved. When cyberneticians model replicas of a retina's layers, the artificial neural networks end up having the same motion-sensing behavior; the same is true for many other parts of the brain. Indeed, the hippocampus has been successfully replaced in mice with an artificial implant, and it has been shown that they can still remember and learn with the implant.

    If the brain were so specifically crafted, then cutting out half of it would reduce people to vegetables and forever destroy half of their motor function, but that's a moronic thing to assume would happen. [youtube.com] The neuroplasticity of the brain disproves the assumption that it is so strongly dependent upon its structural components. Cyberneticians know that everything flows, so they acknowledge that primitive instinctual responses and cognitive biases due to various physical structural formations feed their effects into the greater neurological function; however, this is not the core governing mechanic of cognition -- it can't be, else the little girl with half her brain wouldn't remain sentient, let alone able to walk.

    Much of modern philosophy loves to cast a mystic shroud of "lack of understanding" upon that which is already thor

    • Cool rant, bro. All I meant by the "specific structure" and "just so" bits is that the brain is a certain way (I'm partial to mammalian ones with neocortexes), and we can probably learn from that how to create other intelligences. That is, most things about the brain will probably prove to be distractions, but some will be important and helpful. It's sorting things between these two that's tough. I'm about as much a creationist as I am a spiny anteater.
  • ... is for aircraft.

    We've come up with many working designs for flying machines, some of which use the same principles that allow birds to fly, but none of them works like an actual bird.
