
The New AI: Where Neuroscience and Artificial Intelligence Meet

Posted by Soulskill
from the on-an-internet-dating-site-like-everyone-else dept.
An anonymous reader writes "We're seeing a new revolution in artificial intelligence known as deep learning: algorithms modeled after the brain have made amazing strides and have been consistently winning both industrial and academic data competitions with minimal effort. 'Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like. In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own — a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.' Are we ready to blur the line between hardware and wetware?"
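The edges-to-shapes-to-objects pipeline the summary describes can be sketched as a stack of simple layers. This is an illustrative toy only, not the systems from the article: the weights below are random, whereas real deep networks learn them from data, so only the architecture is meaningful here.

```python
import numpy as np

# Toy sketch of the layered scheme in the summary (my own illustration):
# each layer is a linear map followed by a nonlinearity. Layer 1 plays the
# role of simple "edge" detectors, layer 2 combines edges into "shapes",
# layer 3 combines shapes into object-level features.
rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.maximum(0.0, w @ x + b)  # ReLU activation

x = rng.random(64)                                    # a flattened 8x8 "image"
w1, b1 = rng.standard_normal((32, 64)), np.zeros(32)  # "edge" detectors
w2, b2 = rng.standard_normal((16, 32)), np.zeros(16)  # "shape" detectors
w3, b3 = rng.standard_normal((4, 16)), np.zeros(4)    # "object" units

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
out = layer(h2, w3, b3)   # 4 object-level activations
```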


  • no (Score:5, Insightful)

    by Anonymous Coward on Tuesday May 07, 2013 @07:06PM (#43660197)

    Are we ready to blur the line between hardware and wetware?

    No. You can't ask that every time you find a slightly better algorithm. Ask it when you think you understand how the mind works.

  • fly brains (Score:4, Interesting)

    by femtobyte (710429) on Tuesday May 07, 2013 @07:10PM (#43660227)

    Are we ready to blur the line between hardware and wetware?

    We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster. Whether we're *ready* for this is another question; as is whether this is what folks have in mind by "AI."

    • by Dorianny (1847922) on Tuesday May 07, 2013 @07:20PM (#43660329) Journal
      Drosophila melanogaster is commonly known as the fruit fly. Its brain has about 100,000 neurons; the human brain averages around 85,000,000,000.
    • by Tailhook (98486)

      Whether we're *ready* for this is another question; as is whether this is what folks have in mind by "AI."

      Since what folks have in mind by "AI" changes to exclude anything within the capability of machines, we're implicitly ready for whatever emerges.

      • by femtobyte (710429)

        AI certainly is a moving target --- I remember when "play chess at an advanced human level" was considered an (unachievable) goalpost for "real AI." On the other hand, I'm not certain we're ready by default for the capabilities of machines, intelligent or no.

    • Re:fly brains (Score:5, Insightful)

      by AthanasiusKircher (1333179) on Tuesday May 07, 2013 @08:19PM (#43660735)

      I say all of the following as a big fan of AI research. I just think we need to drop the rhetoric that we're somehow recreating brains -- why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

      Anyhow...

      We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster.

      Interesting wording. Let's take this apart:

      • now: the present
      • almost convincingly: not really "convincingly" then, right? Since "convincingly" isn't really a partial thing -- evidence usually either "convinces" you or it doesn't. If I say study data "almost convinced me," I usually mean it had argument and fluff that made it appear good, but it turned out to be crap in the end
      • partially recreate: yeah, it's pretty "partial," and you have to read "recreate" as something more like "make a very inexact blackbox model that probably doesn't work at all the same but maybe outputs a few things in a similar fashion"
      • functions: this word is chosen wisely, since the "neural net" models are really just algorithms, i.e., functions, which probably don't act anything like real "neurons" in the real world at all

      In sum, we have a few algorithms that seem to take input and produce some usable output in a manner very vaguely like a few things that we've observed in the brains of fruit flies. Claiming that this at all "recreates" the "wetware" implies that we understand a lot more about brain function and that our algorithms ("artificial neurons"? hardly) are a lot more advanced and subtle than they are.

      • Re:fly brains (Score:4, Interesting)

        by femtobyte (710429) on Tuesday May 07, 2013 @08:34PM (#43660835)

        Yes, I intended my very weasel-worded phrase to convey that even our present ability to "understand" Drosophila melanogaster is rather shallow and shaky --- your analysis of my words covers what I meant to include pretty well.

        why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

        I don't think we do. In fact, machines acting in utterly un-brainlike manners are extremely useful to me *today* --- when I want human-style brain functions, I've already got one of those installed in my head; computers are great for doing all the other tasks. However, making machines that work like brains might be the only way to understand how our own brains work --- a separate but also interesting task from making machines more useful at doing "intelligent" work in un-brainlike manners.

        • why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

          I don't think we do.

          I absolutely understand what you mean here. I don't think most AI researchers actually think they are "recreating wetware" explicitly or that the "artificial neurons" in "neural nets" are really anything like real neurons.

          On the other hand, a lot of the nomenclature of AI seems to deliberately try to make analogies -- "deep learning," "neural nets," "blur the line between hardware and wetware," etc. -- to human or animal brain functions.

          Hence my rhetorical question about why we feel the need to claim t

          • Re:fly brains (Score:4, Interesting)

            by femtobyte (710429) on Tuesday May 07, 2013 @09:17PM (#43661163)

            I suppose some of the urge to "anthropomorphize" AIs comes from the lack of precedent, and even understanding of what is possible, outside of the two established categories of "calculating machine" and "biological brain." Some tasks and approaches are "obviously computery": if you need to sort a list of a trillion numbers, that's clearly a job for an old-fashioned computer with a sort algorithm. On the other hand, other tasks seem very "human": say, having a discussion about art and religion. There is some range of "animal" tasks in-between, like image recognition and navigating through complex 3D environments. But we have no analogous mental category to non-biological "intelligent" systems --- so we think of them in terms of replicating and even being biological brains, without appropriate language for other possibilities.

    • Re:fly brains (Score:4, Interesting)

      by White Flame (1074973) on Tuesday May 07, 2013 @08:45PM (#43660927)

      Biological neurons are far more complex than ANN neurons. At this point it's unknown if we can make up for that lack of dynamic state by using a larger ANN, or by increasing the per-neuron complexity to try to match the biological counterpart. I do have my doubts about the former, but that doubt is merely intuition, not science. We simply don't know yet.

      as is whether this is what folks have in mind by "AI."

      In my own studies, this isn't the branch of AI I'm particularly interested in. I don't care about artificial life, or structures based around the limitations of the biological brain. I'd love to have a system with perfect recall, could converse about its knowledge and thought processes such that conversational feedback would have immediate application without lengthy retraining, and could tirelessly and meticulously follow instructions given in natural language.

      I don't see modeling biological brains as being a workable approach to that view of AI, except maybe the "tirelessly" part. I'm more interested in cognitive meta-models of intelligence itself than the substrate on which currently known knowledge happens to reside.

      • by foobsr (693224)
        I'd love to have a system with perfect recall, could converse about its knowledge and thought processes such that conversational feedback would have immediate application without lengthy retraining, and could tirelessly and meticulously follow instructions given in natural language.

        I see a combinatorial explosion at the horizon.

        CC.

        • Right, that's why dealing with such things requires intelligence, and a system that could do such a thing would generally be considered intelligent. It's also why symbolic AI has failed to produce any general intelligence: it simply cannot scale. In order for a system to exhibit such behavior, it needs to adaptively and "intelligently" prioritize what it's doing and what it's working on, as well as predictively index and preprocess information, in order to even begin to achieve any sense of tractabilit

          • by foobsr (693224)
            So, more detail.
            perfect recall

            Conflicts with prioritizing if you have provisions for priority zero (forgetting, irrelevant if the link goes away or the information is erased).
            could converse about its knowledge and thought processes
            Telling more than we can know (Nisbett & Wilson, 1977, Psychological Review, 84, 231–259), protocol analysis, expert interviews: evidence that this is at least not always possible. My hypothesis is that too much metaprocessing would lead to a deadlock.
            conversationa

            • I'm in a hurry, but I'll dump some disconnected thoughts at you. I appreciate having to take these still-vague thoughts and make them more specific in discussion.

              (perfect recall)
              Conflicts with prioritizing if you have provisions for priority zero (forgetting, irrelevant if the link goes away or the information is erased).

              Recall of perceptions in particular is a very useful measure, considering "Oh, that's what you meant" style moments as new information is gained in the future, allowing reinterpretation of past input. This actual storage & recall is not a large technical challenge, especially when no continuous real-world sensors are involved (just d

      • Biological neurons are far more complex than ANN neurons. At this point it's unknown if we can make up for that lack of dynamic state by using a larger ANN, or by increasing the per-neuron complexity to try to match the biological counterpart.

        Why? I mean, I vaguely remember reading about a decade ago about a discrete 'neuron' IC that mimics the electrical properties of a biological neuron with perfect accuracy. If it can be done in hardware, it can be done in software. There's no requirement for ANNs to be matrices of coefficients; arbitrary degrees of complexity are possible, both in terms of ANN size and the complexity of neurons. Basically, you're saying that we may not be able to increase per-neuron complexity without bound. Why?

    • by c0lo (1497653)

      Are we ready to blur the line between hardware and wetware?

      We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster. Whether we're *ready* for this is another question; as is whether this is what folks have in mind by "AI."

      Wake me up when AI is just as complex as my guts [wikipedia.org] (10^8 neurons, the same magnitude as the cortex of a cat [wikipedia.org]), and then I'll ask them whether they feel ready for the AI.

  • Geoffrey Hinton (Score:5, Informative)

    by IntentionalStance (1197099) on Tuesday May 07, 2013 @07:15PM (#43660281)
    Is referenced in the article as the father of neural networks.

    He has a course on them at coursera that is pretty good.

    https://www.coursera.org/course/neuralnets [coursera.org]

    • Re:Geoffrey Hinton (Score:5, Informative)

      by Baldrson (78598) * on Tuesday May 07, 2013 @07:50PM (#43660535) Homepage Journal
      I've had "Talking Nets: An Oral History of Neural Networks" for several weeks on interlibrary loan. It interviews 17 "fathers of neural nets" (including Hinton) and it isn't even a complete set of said "fathers".

      Look, this stuff goes back a long way and has had some hiccups along the way, like the twenty-year period when the scientific establishment treated it with little more respect than it has treated cold fusion for the last twenty years. There are plenty of heroics to go around.

      I can recommend the book highly.

    • by citizenr (871508)

      If you really want to learn about _working_ AI and not "when I was a boy we did it in the snow, both ways, uphill" then do
      https://class.coursera.org/ml/class [coursera.org]
      Machine Learning by Andrew Ng.
      After that you can do
      http://work.caltech.edu/telecourse.html [caltech.edu]
      Learning from data by Yaser Abu-Mostafa

      Half of Hinton's course was about history and what didn't work in AI. It's great to know those things if you have an interest in the field, but it's not something you should start with (snorefest).

    • There's also a great tutorial by Andrew Ng's group at:

      http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial [stanford.edu]

      There are two types of deep learning currently by the way:
      - restricted Boltzmann machines (RBM)
      - sparse auto-encoders

      Google / Andrew Ng use sparse auto-encoders. Hinton uses (created) deep RBM networks. They both work in a similar way: each layer learns to reconstruct the input using a low-dimensional representation. In this way, lower layers build up, for example, line detectors, and higher

  • For such a blatant, transparent, promotional, hyperbolic "story", I wish soulskill would at least throw in a sarcastic jab or two to balance out the stench a bit.
    • by ebno-10db (1459097) on Tuesday May 07, 2013 @08:00PM (#43660609)

      For such a blatant, transparent, promotional, hyperbolic "story", I wish soulskill would at least throw in a sarcastic jab or two to balance out the stench a bit.

      Agreed. This story smells of the usual Google hype.

      I think it's great that there is more research in this area, but "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI" suggests that Google is at the forefront of this stuff. They're not. Look at the Successes in Pattern Recognition Contests since 2009 [wikipedia.org]. None of Ng, Stanford, Google or Silicon Valley are even mentioned. Google's greatest ability is in generating hype. It seems to be the stock-in-trade of much of Silicon Valley. Don't take it too seriously.

      Generating this type of hype for your company is an art. I used to work for a small company run by a guy who was a whiz at it. What you have to understand is that reporters are always looking for stories, and this sort of spoon-fed stuff is easy to write. Forget about "Wired" -- the guy I knew could usually get an article in the NYT or WSJ in a day or two.

  • by Baldrson (78598) * on Tuesday May 07, 2013 @07:55PM (#43660559) Homepage Journal
    The claim of "winning both industrial and academic data competitions with minimal effort" might be more impressive if those competitions included the only provably rigorous test of general intelligence:

    The Hutter Prize for Lossless Compression of Human Knowledge [hutter1.net]

    The last time anyone improved on that benchmark was 2009.

    • by wierd_w (1375923)

      I may be misinterpreting or missing the intent here, but humans are demonstrably NOT lossless storage mediums.

      HUGE amounts of data are lost, simply between your eyeballs and your visual cortex. That's kinda the point that the summary makes about neural net based vision systems. They take a raw flood of data, and pick it apart into contextually useful elements of artificial origin, which then get strung together to build a high level experience.

      Lossless storage of human knowledge is, strictly speaking, a com

      • Human intelligence is clearly a particular kind of intelligence, but when I said "general intelligence" I was referring to something more general, sometimes called "universal artificial intelligence [amazon.com]".

        If the goal is to pass the Turing Test, that is one thing. But clearly they are trying for something more general in some of their contests. I'm just informing them (assuming they are watching) that better tests are available.

      • You're missing the point. Lossless text compression basically comes down to probability estimates - a direct analog for artificial intelligence. Your estimates don't have to be perfect, but the closer they get the better your compression ratio will be.
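        A tiny sketch of that equivalence (my own toy numbers, nothing to do with the Hutter Prize data): an ideal code spends -log2(p) bits per symbol, so a model whose probability estimates better match the text compresses it into fewer bits.

```python
import math

def code_length_bits(text, model):
    # Ideal (arithmetic-coding) cost: -log2(p) bits for a symbol the model
    # assigns probability p.
    return sum(-math.log2(model[ch]) for ch in text)

text = "aababa"  # made-up sample: 4 a's, 2 b's
uniform = {"a": 0.5, "b": 0.5}      # model that knows nothing about the text
tuned = {"a": 4 / 6, "b": 2 / 6}    # model matching the actual frequencies

bits_uniform = code_length_bits(text, uniform)  # 6.0 bits
bits_tuned = code_length_bits(text, tuned)      # about 5.51 bits
```

The better the estimates, the shorter the code, which is the parent's point about compression as a proxy for intelligence.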

        • by wierd_w (1375923)

          It's been my experience that (at least in the implementation running on my meaty hardware) human intelligence actually makes more use of data deduplication and semantic cross-linking to cull as much information as possible, while leaving metadata cues to reconstruct the data on the fly.

          This is why people misremember things, and why odd and spurious sensory stimuli can cause a memory to surface.

          Humans don't store data losslessly; they make lossy heuristic decisions instead.

          the kind of intelligence you are referri

    • From the task description:

      "Restrictions: Must run in 10 hours on a 2GHz P4 with 1GB RAM and 10GB free HD"

      So, even if you could write an algorithm that fits in a couple of meg, and magically generates awesome feature extraction capabilities, which is kind of what deep learning can do, you'd be excluded from using it in the Hutter prize competition.

      For comparison, the Google / Andrew Ng experiment where they got a computer to learn to recognize cats all by itself used a cluster of 16,000 cores (1000 nodes * 1

  • by Okian Warrior (537106) on Tuesday May 07, 2013 @07:58PM (#43660585) Homepage Journal

    Andrew Ng is a brilliant teacher who I respect, but I have questions:

    1) What is the constructive definition of intelligence? As in, "it's composed of these pieces connected this way" such that the pieces themselves can be further described. Sort of like describing a car as "wheels, body, frame, motor", each of which can be further described. (The Turing Test doesn't count, as it's not constructive.)

    2) There are over 180 different types of artificial neurons. Which are you using, and what reasoning implies that your choice is correct and all the others are not?

    3) Neural nets in the brain have more back-propagation connections than forward. Do your neural nets have this feature? If not, why not?

    4) Neural nets typically have input-layers, hidden-layers, output layers - and indeed, the image in the article implies this architecture. What line of reasoning indicates the correct number of layers to use, and the correct number of nodes to use in each layer? Does this method of reasoning eliminate other choices?

    5) Your neural nets have an implicit ordering of input => hidden => output, while the brain has both input and output on one side (ie - both the afferent and efferent neuron enter the brain at the same level, and are both processed in a tree-like fashion). How do you account for this discrepancy? What was the logical argument that led you to depart from the brain's chosen architecture?

    Artificial intelligence is 50 years away, and it's been that way for the last 50 years. No one can do proper research or development until there is a constructive definition of what intelligence actually is. Start there, and the rest will fall into place.

    • by White Flame (1074973) on Tuesday May 07, 2013 @09:00PM (#43661027)

      I'd mod you up if I could, but I think I can help out with a few points instead:

      1) There is no concrete constructive definition of intelligence yet, and I think anybody at a higher level in the field knows that. Establishing that definition is a recognized part of AI research. Intelligence is still recognized comparatively, usually related to something like the capability to resolve difficult or ambiguous problems with similar or greater effect than humans, or can learn and react to dynamic environmental situations to similar effect as other living things. Once we've created something that works and that we can tangibly study, we can begin to come up with real workable definitions of intelligence that represent both the technological and biological instances of recognized intelligence.

      4) Modern ANN research sometimes includes altering the morphology of the network as part of training, not just altering the coefficients. I would hope something like that is in effect here.

      • Intelligence is still recognized comparatively, usually related to something like the capability to resolve difficult or ambiguous problems with similar or greater effect than humans, or can learn and react to dynamic environmental situations to similar effect as other living things.

        The second part is an important milestone for me. Take Big Dog -- a marvel of robotics in itself. If it interacted with the environment and its operator at the level at which real dogs interact with the environment and their masters, we would have a real breakthrough. An essential thing would be to make it learn new tricks, the way a dog learns, instead of programming them in.

        • If you look again it's really the first part (conceptualization & learning (mammalian-level)) that you wish to add to the second part (dynamic responses to environment (insectoid-level)) to complete the whole picture.

          Getting just the former to work yields brains in a jar, like the Star Trek ship computer. Getting just the latter to work gets you semi-autonomous "dumb but capable" things like Big Dog. These are the two main facets of what we view as intelligence, and to get "real" AI or artificial life does

    • by ChronoFish (948067) on Tuesday May 07, 2013 @10:13PM (#43661575) Journal
      "..No one can do proper research or development until there is a constructive definition of what intelligence actually is..."

      That's a fool's errand. The goal of the developer should be to build a system that accomplishes tasks and is able to auto-improve the speed of accomplishing repetitive tasks with minimal (no) human intervention.

      The goal of the philosopher is to lay out what intelligence "is". These tracks should be run in parallel and the progress of one should have little-to-no impact on the progress of the other.

      -CF
      • That's a fool's errand. The goal of the developer should be to build a system that accomplishes tasks and is able to auto-improve the speed of accomplishing repetitive tasks with minimal (no) human intervention.

        The goal of the philosopher is to lay out what intelligence "is". These tracks should be run in parallel and the progress of one should have little-to-no impact on the progress of the other.

        Do you consider proper definitions necessary for the advancement of mathematics?

        Take, for example, the [mathematics] definition of "group". It's a constructive definition, composed of parts which can be further described by *their* parts. Knowing the definition of a group, I can test if something is a group, I can construct a group from basic elements, and I can modify a non-group so that it becomes a group. I can use a group as a basis to construct objects of more general interest.
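        The "test if something is a group" part really is mechanical for finite sets. A quick sketch (my own illustration, not from the thread) checking the four axioms against an operation:

```python
from itertools import product

# Check the group axioms -- closure, associativity, identity, inverses --
# for a finite set of elements under a binary operation `op`.
def is_group(elements, op):
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False  # not closed
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False  # not associative
    identities = [e for e in elements
                  if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not identities:
        return False  # no identity element
    e = identities[0]
    # every element needs an inverse
    return all(any(op(a, b) == e for b in elements) for a in elements)

z4 = {0, 1, 2, 3}
add_mod4 = lambda a, b: (a + b) % 4   # a group (Z/4Z under addition)
mul_mod4 = lambda a, b: (a * b) % 4   # not a group: 0 and 2 have no inverse
```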

        Are you suggesting that m

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          Do you consider proper definitions necessary for the advancement of mathematics?

          Take, for example, the [mathematics] definition of "group". It's a constructive definition, composed of parts which can be further described by *their* parts. Knowing the definition of a group, I can test if something is a group, I can construct a group from basic elements, and I can modify a non-group so that it becomes a group. I can use a group as a basis to construct objects of more general interest.

          Are you suggesting that mathematics should proceed and be developed... without proper definitions?

          That a science - any science - can proceed without such a firm basis is an interesting position. Should other areas of science be developed without proper definitions? How about psychology (no proper definition of clinical ailments)? Medicine? Physics?

          I'd be interested to hear your views on other sciences. Or if not, why then is AI is different from other sciences?

          The view of mathematics as proceeding from clear-cut definitions and axioms is really an artifact of the way we teach it. Over time theorems can become definitions, and we may choose definitions so as to make certain theorems that ought to be true, true.

          If you want an example, look at how much real analysis was going on before we had a proper definition of continuity.

          An obsession with rigorous definitions right at the start of a field serves only to force our intuitions to be more specific than they are, wi

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          Mathematics is not a science; it's just used by science. Science is about studying phenomena in reality. Mathematics is not part of reality -- mathematics is entirely a priori. You can't prove anything in mathematics by studying reality. Mathematics is made entirely of definitions and symbol manipulation, so of course you shouldn't be doing mathematics without definitions. Science isn't like that. Proper definitions can often help in science, that's true, but they're not a prerequisite. The suggestion is not th

  • by Slicker (102588) on Tuesday May 07, 2013 @08:00PM (#43660601)

    Neural nets were traditionally based on the old Hodgkin and Huxley models and then twisted for direct application to specific objectives, such as stock market prediction. In the process they veered from an only very vague notion of real neurons to something increasingly fictitious.

    Hopefully, the AI world is on the edge of moving away from continuously beating its head against the same brick walls in the same ways while giving itself pats on the head. Hopefully, we will realize that human-like intelligence is not a logic engine and that conventional neural nets are not biologically valid and possess numerous fundamental flaws.

    Rather--a neuron draws new correlating axons to itself when it cannot reach threshold (-55mV, from a resting state of -70mV) and weakens and destroys them when over threshold. In living systems, neural potential is almost always very close to threshold--it bounces a tiny bit over and under. Furthermore, inhibitory connections are also drawn in from non-correlating axons. For example, if two neural pathways each excite only when the other does not, then each will come to inhibit the other. This enables contexts to shut off irrelevant possible perceptions, e.g. if you are in the house, you are not going to get rained on; more likely, somebody is squirting you with a squirt gun.
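    A toy rendering of that threshold-seeking behavior (my own sketch, not the poster's C++ model): a unit strengthens its input while it sits below the -55mV threshold and weakens it while above, so the potential ends up hovering right around threshold.

```python
# Crude homeostasis sketch: the "weight" stands in for how much correlating
# input the neuron has drawn in; it grows under threshold and shrinks over it.
REST_MV = -70.0       # resting potential
THRESHOLD_MV = -55.0  # firing threshold

def settle(drive, weight=0.0, steps=200, rate=0.1):
    for _ in range(steps):
        potential = REST_MV + weight * drive
        if potential < THRESHOLD_MV:
            weight += rate   # "draw in" more correlated input
        else:
            weight -= rate   # "weaken/destroy" connections
    return REST_MV + weight * drive

potential = settle(drive=2.0)  # ends up bouncing just around -55 mV
```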

    Also--a neuron perpetually excited for too long shuts itself off for a while. We love a good song, but hearing it too often makes us sick of it, at least for a while... like Michael Jackson in the late 1980s.

    And very importantly--signal streams that disappear but recur after increasing time lapses stay potentiated longer; their potentiation dissipates more slowly. After 5 pulses with a pause between them, a new receptor is brought in from the same axon as an existing one. This causes slower dissipation. It will happen again after another 5 pulses, repeatedly, except that the time lapse between them must increase. It falls in line with the scale found on the Wikipedia page for Graduated Interval Recall--exponentially increasing time lapses, 5 times each... take a look at it. Do the math. It matches what is seen in biology, even though this scale was developed in the 1920s.

    I have a C++ neural model that does this. I am also mostly done with a JavaScript model (employing techniques for vastly better performance), using Node.js.

    • Good points (Score:3, Interesting)

      by Okian Warrior (537106)

      You highlight important points, of which AI researchers should take note.

      We don't know what intelligence actually is, but we have an example of something that is unarguably intelligent: the mammalian brain. Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain. Most AI research fails this test.

      I personally think in-depth modeling of individual neurons is too deep of a level - it's like trying to make a CPU by modeling transistors. We might be better off using

      • by Trepidity (597)

        Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain.

        This doesn't make any sense unless you have a purely tautological definition in mind.

      • We don't know what intelligence actually is, but we have an example of something that is unarguably intelligent: the mammalian brain.

        To further this point, a brain is also not necessarily just a pile of neurons. There is specialization in the brain, and while other portions of the brain can make up for damaged parts, that only goes so far.

        Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain. Most AI research fails this test.

        I would offer some more subjectivity to that. Mammalian brains tire, need sleep, do not have perfect recall, run things out of time order and convince themselves otherwise, take a long time to train, and have strong emotional needs.

        With a view of "intelligence" that would include tools and helpers, and no

        • by foobsr (693224)
          Mammalian brains tire, need sleep, do not have perfect recall, run things out of time order and convince themselves otherwise, take a long time to train, and have strong emotional needs.
          Who has ruled out that these are preconditions?

          CC.

      • Modern neural science and biologically realistic neural simulations (such as some of the best Deep Learning systems) use the neuron as their most fundamental primitive. One neuron does a lot, actually. It draws in new correlating axons (those firing when its other receptors are firing) when its total potentiation is insufficient to excite. It weakens and destroys them in the inverse case. It also draws in non-correlating axons as inhibitory receptors. And long term potentiation (widely viewed as the basi

      • "Asking whether a computer can be intelligent is like asking whether a submarine can swim".

        An airplane doesn't flap its wings, but flies faster than birds can.

        Submarines don't swim, but they move through the water faster than dolphins.

        Not everything has to copy nature exactly in order to be effective.

    • by naroom (1560139)

      I'm still mystified by the desire to make computational neural nets more like biological ones. Biological neurons are *bad* in many ways -- for one, they are composed of a large number of high signal-to-noise sensors (ion channels). This random behavior is necessary to conserve energy and space in a brain. But computers have random-access memory and energy isn't really a limiting factor; why impose these flaws?

      Sure, there may be things that can be discovered by playing with network models more inspired by b

      • by naroom (1560139)
        Typo: That should be *low* signal to noise. Someday we'll get an edit button.
      • by wierd_w (1375923) on Tuesday May 07, 2013 @09:22PM (#43661197)

        I could give a number of clearly unsubstantiated, but seemingly reasonable answers here.

        1) the assertion that because living neurons have deficits compared against an arbitrary and artificial standard of efficiency (it takes a whole 500ms for a neuron to cycle?! My diamond-based crystal oscillator can drive 3 orders of magnitude faster!, et al.) that they are "faulted" is not substantiated: as pointed out earlier in the thread, no high level intelligence built using said "superior" crystal oscillators exists. Thus the "superior" offering is actually the inferior offering when researching an emergent phenomenon.

        2) artificially excluding these principles (signal crosstalk, propagation delays, potentiation thresholds of organic systems, et al) completely *IGNORES* scientifically verified features of complex cognitive behaviors, like the role of myelin, and the mechanisms behind dendrite migration/culling.

        In other words, asserting something foolish like "organic neurons are bulky, slow, and have a host of computationally costly habits" with the intent that "this makes them undesirable as a model for emergent high level intelligence" ignores a lot of verified information in biology that shows that these "bad" behaviors directly contribute to intelligent behaviors.

        Did you know that signal DELAY is essential in organic brains? That whole hosts of disorders with debilitating effects come from signals arriving too early? Did you stop to consider that these faults may actually be features that are essential?

        If you don't accurately model the biological reference sample, how can you rigorously identify which is which?

        We have a sample implementation, with features we find dubious. Only by building a faithful simulation that works, then experimentally removing the modeled faults, do we really systematically break down the real requirements for self-directed intelligences.

        That is why modeling accurate neurons that faithfully simulate organic behavior is called for, and desirable. At least for now.

        • by GauteL (29207)

          "Did you know that signal DELAY is essential in organic brains? That whole hosts of disorders with debilitating effects come from signals arriving too early? Did you stop to consider that these faults may actually be features that are essential?"

          Are you saying that our maker created a system with severe race condition problems? I guess that is another issue to add to the existing; inclusion of obsolete and potentially dangerous features (the appendix), only poor and limited third party replacement parts avai

    • This is really interesting. How well does your code perform?

  • by Hentes (2461350) on Tuesday May 07, 2013 @08:15PM (#43660695)

    Neural networks are certainly not new, or groundbreaking. We already know their strengths and weaknesses, and they aren't a universal solution to every AI problem.
    First of all, while they have been inspired by the brain, they don't "mimic" it. Neural networks are based on some neurons having negative weights, reversing the polarity of the signal, which doesn't happen in the brain. They are also linear, which bears similarities to some simple parts of the brain, but are very far from modeling its complex nonlinear processing. Neural networks are useful AI tools, but aren't brain models.
    Second, neural networks are only good at things when they have to immediately react to an input. Originally, neural networks didn't have memory, and while it's possible to add it, it doesn't fit right into the system and is hard to work with. While neural networks make good reflex machines, even simple stateful tasks like a linear or cyclic multi-step motion are nontrivial to implement in them. Which is why they are most effective in combination with other methods, instead of being declared a universal solution.

    • First of all, while they have been inspired by the brain, they don't "mimic" it.

      That is true.

      Neural networks are based on some neurons having negative weights, reversing the polarity of the signal, which doesn't happen in the brain.

      There are in fact inhibitory connections in the brain.

      They are also linear

      That is false. The only way an ANN could be linear is if each "neuron" used a squashing function f(x) = x. Then they'd just be doing linear algebra, namely change-of-basis computations. But no one uses that. Even the super-simple Heaviside squash used in the 1950s perceptrons made them do nonlinear computations.
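      The linear-collapse point is easy to demonstrate numerically: with an identity squashing function, any stack of layers reduces to a single matrix. A small NumPy sketch (the weights are arbitrary random values for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two "layers" with the identity squashing function f(x) = x ...
W1 = rng.normal(size=(3, 3))
W2 = rng.normal(size=(3, 3))
x = rng.normal(size=3)
deep = W2 @ (W1 @ x)

# ... collapse into one change-of-basis matrix: depth buys nothing.
collapsed = (W2 @ W1) @ x

print(np.allclose(deep, collapsed))  # True
```

      Any nonlinear squash between the layers breaks this collapse, which is why even the Heaviside perceptrons were doing nonlinear computation.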

      Second neural networks are only good at things when they have to immediately react to an input. Originally, neural networks didn't have memory, and while it's possible to add it, it doesn't fit right into the system and is hard to work with. While neural networks make good reflex machines, even simple stateful tasks like a linear or cyclic multi-step motion are nontrivial to implement in them.

      That is also false. I suspect there are limits to what kind of stateful computations you can do with an ANN, but you can certai

    • I'm curious why you characterized NNs as linear? They are universal function approximators, and their power comes from approximating non-linear functions.
  • by ebno-10db (1459097) on Tuesday May 07, 2013 @08:16PM (#43660715)

    What's actually new in the neural net business? That's a real question - not a sarcastic or rhetorical one.

    Artificial neural nets were suggested and tried for AI at least 50 years ago. They were bashed by the old Minsky/McCarthy AI crowd, who didn't like the competition's idea (always better to write another million lines of Lisp). They wrote a paper that showed neural nets couldn't implement an XOR. That's true - for a 2 layer net. A 3 layer net does it just fine. Nevertheless M&M had enough clout to bury NN research for years. Then in the 80's(?) they became a hot new thing again. One of the few good things about getting older is that you can remember hearing the same hype before.
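    For the record, the XOR point is easy to verify with a tiny hand-weighted net (the weights here are picked by hand for illustration, not trained): one hidden layer computing OR and AND, and an output that fires for OR-but-not-AND.

```python
import numpy as np

def step(z):
    # Heaviside threshold: fire (1) when the input drive is positive
    return (z > 0).astype(int)

# Hand-picked weights: hidden unit 0 computes OR, hidden unit 1
# computes AND; the output fires for OR minus AND, i.e. XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])   # OR fires above 0.5, AND above 1.5
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(x):
    h = step(x @ W1 + b1)     # hidden layer
    return int(step(h @ W2 + b2))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(np.array([a, b])))
```

    The single-layer perceptron fails because XOR isn't linearly separable; the hidden layer carves the plane twice, which is all it takes.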

    However, I'm not saying there hasn't been progress. Sometimes a field needs to go through decades of incremental improvement before you can get decent non-trivial applications. It's not all giant breakthroughs. Sometimes just having faster hardware can make a dramatic difference. Loads of things that weren't practical became practical with better hardware. So what's really improved w/ neural nets these days?

    • The improvement has been using multiple ANNs together as communicating units that can both communicate information and dynamically train each other, instead of trying to make a single large ANN and using external training sets. Of course, this isn't that new, as these guys [imagination-engines.com] have been working on such models since at least the early 90s.

    • What's actually new in the neural net business? That's a real question - not a sarcastic or rhetorical one.

      What's reportedly new is the ability to train feed-forward networks with many layers. They have never trained well with backpropagation because the backpropagated error estimate becomes "diluted" the further back it goes, and as a result most of the training happens to the weights closest to the output end.

      The notion that the first hidden layer is a low-level feature detector and each successive layer is a higher-level feature layer is ancient lore in the ANN research community. The claims of the Deep Lea
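      The "diluted" error is easy to see numerically. This sketch (layer count, widths, and weight scale are arbitrary illustrative choices) pushes a unit error signal backward through a stack of sigmoid layers and watches its norm shrink, since each layer multiplies the signal by the sigmoid derivative, which is at most 0.25:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Arbitrary illustrative sizes: 12 layers of 32 units, modest weights.
n, depth = 32, 12
Ws = [rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n) for _ in range(depth)]

# Forward pass, keeping pre-activations for the backward pass.
a = rng.normal(0.0, 1.0, n)
zs = []
for W in Ws:
    z = W @ a
    zs.append(z)
    a = sigmoid(z)

# Push a unit error signal backward; the sigmoid derivative
# s * (1 - s) <= 0.25 attenuates it at every layer.
delta = np.ones(n)
norms = []
for W, z in zip(reversed(Ws), reversed(zs)):
    s = sigmoid(z)
    delta = (W.T @ delta) * s * (1.0 - s)
    norms.append(np.linalg.norm(delta))

print(f"near output: {norms[0]:.4f}, near input: {norms[-1]:.2e}")
```

      By the time the signal reaches the earliest layers there is almost nothing left to train on, which is exactly why the weights near the output end soak up most of the learning.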

    • by Milo77 (534025)
      You should go watch Jeff Hawkins' TED talk on HTMs (hierarchical temporal memory). It's old-ish (over 5 years), but he's referenced in the article and he founded the Redwood Neuroscience Institute. You should be able to also find a white paper or two on HTMs. Jeff's theoretical model of the brain may have changed some in the last 5 years (I don't know, I haven't been paying attention), but HTMs were basically a hierarchical structure of nodes, with one layer feeding up to the layer above it. The nodes weren
    • by foobsr (693224)
      One of the few good things about getting older is that you can remember hearing the same hype before.

      At least, sometimes, the hype spirals in a promising direction :)

      CC.

  • by ChronoFish (948067) on Tuesday May 07, 2013 @10:02PM (#43661511) Journal
    "I like beaver. Can you tell me where to get some tail?"

    "I like cats. Where can I pick one up?"

    Let me know when AI can understand the difference between the preceding sentences.

    -CF
    • by tftp (111690)

      A child will fail this test. A person who is not familiar with slang will fail this test. But they both are intelligent. That's the problem with the TT - it's testing for a characteristic that we cannot define, much like the US judge [wikipedia.org] who proclaimed that "hard-core pornography" was hard to define, but that "I know it when I see it."

      Similarly, a TT cannot be conducted if the parties don't speak the same language, or don't share the same culture, or just are of different genders. How would you think a

  • by S3D (745318) on Wednesday May 08, 2013 @01:53AM (#43662663)
    Deep learning systems are not quite simulations of biological neural nets. The breakthrough in DL happened when researchers stopped trying to emulate neurons and instead applied a statistical (energy-function) approach to a simple, refined model. Modern "ANNs" used in deep learning are in fact mathematical optimization procedures, gradient descent on some hierarchy of convolutional operators, with more in common with numerical analysis than with biological networks.
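    Stripped of the hierarchy of operators, the core machinery is just gradient descent on a loss. A toy illustration (the quadratic loss and learning rate here are stand-ins, not taken from any actual DL system):

```python
# Plain gradient descent on a one-parameter quadratic "energy";
# the loss and learning rate are illustrative stand-ins.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w)   # step downhill along the gradient

print(round(w, 6))  # settles at the minimum, w = 3
```

    Training a deep net is this same update, scaled up to millions of weights and a loss defined over the whole training set, which is numerical analysis far more than it is biology.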
  • I wonder when our new AI overlords will create AI themselves because they are too bored and tired of doing actual work themselves.

  • When I was at Stanford. On a much smaller scale then, due to weak computers.

    I don't think the problems of "brittleness" (unpredictable results if new inputs are presented) and "opaqueness" (weights are not interpretable) have changed.

