
The New AI: Where Neuroscience and Artificial Intelligence Meet

An anonymous reader writes "We're seeing a new revolution in artificial intelligence known as deep learning: algorithms modeled after the brain have made amazing strides and have been consistently winning both industrial and academic data competitions with minimal effort. 'Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like. In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own — a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.' Are we ready to blur the line between hardware and wetware?"
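
To make the layered picture concrete, here is a minimal sketch of the idea - each layer's output feeds the next, so later layers can represent compositions of earlier features. It uses plain NumPy with random, untrained weights and arbitrary layer sizes, so it illustrates the data flow only, not a working vision system:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    # Arbitrary sizes: a 28x28 grayscale image flattened to 784 inputs,
    # two feature layers, and 10 outputs (e.g., object classes).
    layer_sizes = [784, 128, 64, 10]
    weights = [rng.normal(0, 0.1, (m, n))
               for m, n in zip(layer_sizes, layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        # Each activation is a set of features built from the layer below:
        # roughly, edges -> shapes -> objects in the vision analogy.
        activations = [x]
        for w, b in zip(weights, biases):
            activations.append(relu(activations[-1] @ w + b))
        return activations

    acts = forward(rng.random(784))   # stand-in for a real image
    for i, a in enumerate(acts[1:], 1):
        print(f"layer {i}: {a.shape[0]} features")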
  • no (Score:5, Insightful)

    by Anonymous Coward on Tuesday May 07, 2013 @08:06PM (#43660197)

    Are we ready to blur the line between hardware and wetware?

    No. You can't ask that every time you find a slightly better algorithm. Ask it when you think you understand how the mind works.

  • by Okian Warrior ( 537106 ) on Tuesday May 07, 2013 @08:58PM (#43660585) Homepage Journal

    Andrew Ng is a brilliant teacher whom I respect, but I have questions:

    1) What is the constructive definition of intelligence? As in, "it's composed of these pieces connected this way" such that the pieces themselves can be further described. Sort of like describing a car as "wheels, body, frame, motor", each of which can be further described. (The Turing Test doesn't count, as it's not constructive.)

    2) There are over 180 different types of artificial neurons. Which are you using, and what reasoning implies that your choice is correct and all the others are not?

    3) Neural circuits in the brain have more feedback connections than feedforward ones. Do your neural nets have this feature? If not, why not?

    4) Neural nets typically have input layers, hidden layers, and output layers - and indeed, the image in the article implies this architecture. What line of reasoning indicates the correct number of layers to use, and the correct number of nodes in each layer? Does this method of reasoning eliminate other choices? (A sketch of what I mean appears after these questions.)

    5) Your neural nets have an implicit ordering of input => hidden => output, while the brain has both input and output on one side (i.e., both the afferent and efferent neurons enter the brain at the same level, and both are processed in a tree-like fashion). How do you account for this discrepancy? What was the logical argument that led you to depart from the brain's chosen architecture?
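
    To make question 4 concrete: in current practice the layer and node counts appear to be picked by empirical search rather than derived from principle. A rough sketch of that selection loop (the scoring function below is a hypothetical stand-in for training a network with the given hidden layers and returning its validation accuracy):

        import random

        # Hypothetical stand-in: in real use this would train a network
        # with the given hidden-layer widths and return its accuracy
        # on a held-out validation set.
        def validation_score(hidden_layers):
            random.seed(hash(hidden_layers))
            return random.random()

        # Candidate architectures (depth x width), chosen by convention,
        # not derived from first principles.
        candidates = [tuple([width] * depth)
                      for depth in (1, 2, 3)
                      for width in (32, 64, 128)]

        best = max(candidates, key=validation_score)
        print("selected hidden layers:", best)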

    Artificial intelligence is 50 years away, and it's been that way for the last 50 years. No one can do proper research or development until there is a constructive definition of what intelligence actually is. Start there, and the rest will fall into place.

  • Re:fly brains (Score:5, Insightful)

    by AthanasiusKircher ( 1333179 ) on Tuesday May 07, 2013 @09:19PM (#43660735)

    I say all of the following as a big fan of AI research. I just think we need to drop the rhetoric that we're somehow recreating brains -- why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

    Anyhow...

    We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster.

    Interesting wording. Let's take this apart:

    • now: the present
    • almost convincingly: not really "convincingly" then, right? "Convincingly" isn't really a partial thing -- evidence either convinces you or it doesn't. If I say study data "almost convinced me," I usually mean it had argument and fluff that made it appear to be good, but it turned out to be crap in the end
    • partially recreate: yeah, it's pretty "partial," and you have to read "recreate" as something more like "make a very inexact blackbox model that probably doesn't work at all the same but maybe outputs a few things in a similar fashion"
    • functions: this word is chosen wisely, since the "neural net" models are really just algorithms, i.e., functions, which probably don't act anything like real "neurons" in the real world at all

    In sum, we have a few algorithms that seem to take input and produce some usable output in a manner very vaguely like a few things that we've observed in the brains of fruit flies. Claiming that this at all "recreates" the "wetware" implies that we understand a lot more about brain function and that our algorithms ("artificial neurons"? hardly) are a lot more advanced and subtle than they are.

  • by wierd_w ( 1375923 ) on Tuesday May 07, 2013 @10:22PM (#43661197)

    I could give a number of clearly unsubstantiated but seemingly reasonable answers here.

    1) The assertion that living neurons are "faulted" because they have deficits compared against an arbitrary, artificial standard of efficiency ("it takes a whole 500ms for a neuron to cycle?! My diamond-based crystal oscillator can drive three orders of magnitude faster!", et al.) is not substantiated: as pointed out earlier in the thread, no high-level intelligence built using said "superior" crystal oscillators exists. Thus the "superior" offering is actually the inferior offering when researching an emergent phenomenon.

    2) Artificially excluding these principles (signal crosstalk, propagation delays, the potentiation thresholds of organic systems, et al.) completely *IGNORES* scientifically verified features of complex cognitive behaviors, like the role of myelin and the mechanisms behind dendrite migration/culling.

    In other words, asserting something foolish like "organic neurons are bulky, slow, and have a host of computationally costly habits" with the intent that "this makes them undesirable as a model for emergent high-level intelligence" ignores a lot of verified information in biology showing that these "bad" behaviors directly contribute to intelligent behavior.

    Did you know that signal DELAY is essential in organic brains? That whole hosts of disorders with debilitating effects come from signals arriving too early? Did you stop to consider that these "faults" may actually be essential features?

    If you don't accurately model the biological reference sample, how can you rigorously identify which is which?

    We have a sample implementation with features we find dubious. Only by building a faithful simulation that works, then experimentally removing the modeled faults, can we systematically break down the real requirements for self-directed intelligence.

    That is why modeling accurate neurons that faithfully simulate organic behavior is called for, and desirable. At least for now.
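
    As a toy illustration of the kind of timing-aware modeling argued for above, here is a leaky integrate-and-fire unit with an explicit conduction delay. All parameters are illustrative, not fitted to any real neuron:

        # Leaky integrate-and-fire neuron with an explicit conduction delay:
        # the delay is part of the model, not an implementation nuisance.
        dt = 0.1            # ms per simulation step
        tau = 10.0          # membrane time constant (ms)
        v_thresh = 1.0      # firing threshold
        delay_steps = 50    # 5 ms axonal conduction delay

        v = 0.0
        input_current = 0.15
        delay_line = [0] * delay_steps   # spikes in transit down the "axon"

        for step in range(400):
            # Leaky integration: the potential decays toward rest while
            # the input charges it up.
            v += dt * (-v / tau + input_current)
            spike = 0
            if v >= v_thresh:
                spike = 1
                v = 0.0                  # reset after firing
            # A spike reaches downstream units only after the delay.
            delay_line.append(spike)
            if delay_line.pop(0):
                print(f"spike delivered at t = {step * dt:.1f} ms")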

  • by ChronoFish ( 948067 ) on Tuesday May 07, 2013 @11:13PM (#43661575) Journal
    "..No one can do proper research or development until there is a constructive definition of what intelligence actually is..."

    That's a fool's errand. The goal of the developer should be to build a system that accomplishes tasks and is able to auto-improve the speed of accomplishing repetitive tasks with minimal (no) human intervention.

    The goal of the philosopher is to lay out what intelligence "is". These tracks should be run in parallel and the progress of one should have little-to-no impact on the progress of the other.

    -CF
  • by __aaltlg1547 ( 2541114 ) on Tuesday May 07, 2013 @11:47PM (#43661805)

    That presumes that the approach you take is going to be using the same kind of models you have now and just running them on bigger, faster hardware. If our models lead us to *understanding* of how brains work, we could get there a good deal faster and find that present day computers are plenty complex to handle cognition on a human-equivalent level.

    Take Google's self-driving cars, for example. Driving a car is definitely an AI task, and it can be handled by present-day computers. It's a subset of the tasks humans can learn. Google didn't do it by modeling the part of your brain that drives a car. Hell, we don't even know what subset of our brain is sufficient to drive a car. They did it by understanding how to drive a car.

    What I'm proposing is that human-level AI won't be created first by modeling a whole brain. It will more likely be created by scientists who, through studying the brain, come to understand the big-picture behavior of brain subsystems, and who then model those subsystems at a behavioral level rather than at a neural-network level.

  • by TapeCutter ( 624760 ) on Wednesday May 08, 2013 @12:01AM (#43661879) Journal

    Forget the long-standing problems that make this approach a non-starter.

    Did you actually watch IBM's "Watson" beat the snot out of the best Jeopardy champions humanity could muster? I can't believe that anyone who knows anything about computers and AI is not blown away by Watson's demonstration; I know I was. My significant other, who has a PhD in marketing, just shrugged and said, "It's looking up the answers on the internet, so what?" In other words, if you're not impressed by Watson's performance, it's because you have no idea how difficult the problem is.

  • by narcc ( 412956 ) on Wednesday May 08, 2013 @12:45AM (#43662121) Journal

    I'm saying that it's unsolved (er, well, I thought that would go without saying!) and that, at present, it and similar problems strongly suggest that this type of approach is fundamentally flawed.

    My main point was that it's unreasonable to believe that those problems will be solved by magic and wishful thinking. This cargo-cult approach to AI purports to do just that. (If we just ignore the problems hard enough, technology will deliver us!)

  • by nebosuke ( 1012041 ) on Wednesday May 08, 2013 @03:30AM (#43662769)

    Your assertion that a 'cargo cult' approach cannot achieve a given effect contains the assumption that it is necessary to first develop an accurate understanding of why and how a potential mechanism works before it can be implemented.

    All crop development prior to Mendel or Darwin, for example, was essentially cargo cult directed evolution--and yet it resulted in incredible development (e.g., corn from teosinte).

    More generally, achievement of an effect isn't just possible without understanding, it's possible without intent. Predators culling prey populations such that frequency of undesirable alleles within the prey population is minimized is an entirely unintentional effect. "Cargo Cult" solutions are simply scenarios where you have intent but lack understanding (which again does not mean that the solution will necessarily be ineffective).

    With respect to the neuron modeling approach, it actually builds on lots of earlier successful work in computer science with respect to emergent properties of systems of finite automata. Essentially the approach follows the sequence:

    1) Observe a complex phenomenon that you do not understand and do not know how to analyze in its entirety.
    2) Identify discrete components of the phenomenon that you can analyze (e.g., neurons).
    3) Model those components as finite automata, and tweak the number of components, the configuration of the interaction topology, and the properties of individual automata until you recreate the original phenomenon, or other unexpected but interesting phenomena (e.g., play with simulated neural nets).
    4) Use the resulting working model to identify and analyze attributes of the system and their effect on the emergent property of interest, which leads to further understanding of the phenomenon (this has already happened in fields like image recognition).

    Note that in the above approach you not only recreate something before you understand how it works--you do so specifically to gain that understanding. This is certainly a realistic scenario of how strong AI could be developed via a "cargo cult" methodology. It is entirely possible that creating synthetic intelligence will be a step towards understanding intelligence, rather than an outcome of that understanding.
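
    As a toy version of steps 2 and 3, here is a deliberately crude experiment: model "neurons" as threshold automata and randomly tweak the weights until the network reproduces a target behavior (XOR stands in for the phenomenon of interest):

        import random

        random.seed(1)

        def net(w, x):
            # Two threshold "hidden" automata feeding one output automaton.
            h1 = int(w[0]*x[0] + w[1]*x[1] + w[2] > 0)
            h2 = int(w[3]*x[0] + w[4]*x[1] + w[5] > 0)
            return int(w[6]*h1 + w[7]*h2 + w[8] > 0)

        target = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # XOR

        for trial in range(100000):
            w = [random.uniform(-1, 1) for _ in range(9)]
            if all(net(w, x) == y for x, y in target.items()):
                print(f"working configuration after {trial + 1} trials:",
                      [round(v, 2) for v in w])
                break
        else:
            print("no configuration found in this run")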

  • by Anonymous Coward on Wednesday May 08, 2013 @03:47AM (#43662827)

    Do you consider proper definitions necessary for the advancement of mathematics?

    Take, for example, the [mathematics] definition of "group". It's a constructive definition, composed of parts which can be further described by *their* parts. Knowing the definition of a group, I can test if something is a group, I can construct a group from basic elements, and I can modify a non-group so that it becomes a group. I can use a group as a basis to construct objects of more general interest.
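
    For instance, the axioms can be checked mechanically on a finite set (a toy sketch; the two operation tables below are just examples):

        from itertools import product

        def is_group(elements, op):
            # Closure: a*b stays in the set.
            if any(op[a][b] not in elements
                   for a, b in product(elements, repeat=2)):
                return False
            # Associativity: (a*b)*c == a*(b*c).
            if any(op[op[a][b]][c] != op[a][op[b][c]]
                   for a, b, c in product(elements, repeat=3)):
                return False
            # Identity element.
            ids = [e for e in elements
                   if all(op[e][a] == a and op[a][e] == a for a in elements)]
            if not ids:
                return False
            e = ids[0]
            # Inverses: every a has a b with a*b == b*a == e.
            return all(any(op[a][b] == e and op[b][a] == e for b in elements)
                       for a in elements)

        z3 = {0, 1, 2}
        add_mod3 = {a: {b: (a + b) % 3 for b in z3} for a in z3}   # a group
        min_op   = {a: {b: min(a, b)   for b in z3} for a in z3}   # not a group
        print(is_group(z3, add_mod3))   # True
        print(is_group(z3, min_op))     # False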

    Are you suggesting that mathematics should proceed and be developed... without proper definitions?

    That a science - any science - can proceed without such a firm basis is an interesting position. Should other areas of science be developed without proper definitions? How about psychology (no proper definition of clinical ailments)? Medicine? Physics?

    I'd be interested to hear your views on other sciences. Or, if not, why is AI different from other sciences?

    The view of mathematics as proceeding from clear-cut definitions and axioms is really an artifact of the way we teach it. Over time theorems can become definitions, and we may choose definitions so as to make certain theorems that ought to be true, true.

    If you want an example, look at how much real analysis was going on before we had a proper definition of continuity.

    An obsession with rigorous definitions right at the start of a field serves only to force our intuitions to be more specific than they are, with no understanding of the consequences.

  • by Anonymous Coward on Wednesday May 08, 2013 @04:31AM (#43662999)

    Mathematics is not a science; it's merely used by science. Science is about studying phenomena in reality. Mathematics is not part of reality - mathematics is entirely a priori. You can't prove anything in mathematics by studying reality. Mathematics is made entirely of definitions and symbol manipulation, so of course you shouldn't do mathematics without definitions. Science isn't like that. Proper definitions can often help in science, that's true, but they're not a prerequisite. The suggestion is not that proper definitions are bad; the suggestion is that intelligence is too slippery to define properly now - possibly too slippery to ever define properly. As long as there are objective tests to compare attempts at intelligence, a proper definition of what it is isn't absolutely necessary, so there is no reason to halt all research until one arrives.

  • by Vintermann ( 400722 ) on Wednesday May 08, 2013 @05:00AM (#43663107) Homepage

    All crop development prior to Mendel or Darwin, for example, was essentially cargo cult

    No, that's not cargo cult. Cargo cult is when you imitate the actions of someone for whom those actions have meaning, without understanding their meaning yourself (or totally misunderstanding their meaning). Crop development was haphazardly experimental, not cargo cult.

"If it ain't broke, don't fix it." - Bert Lantz

Working...