AI Programming Software Science Technology

The New AI: Where Neuroscience and Artificial Intelligence Meet

An anonymous reader writes "We're seeing a new revolution in artificial intelligence known as deep learning: algorithms modeled after the brain have made amazing strides and have been consistently winning both industrial and academic data competitions with minimal effort. 'Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like. In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own — a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.' Are we ready to blur the line between hardware and wetware?"
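
To see the layered idea from the summary in miniature, here is a toy sketch (not the system described in the article): a tiny two-layer network trained with backpropagation on XOR. The hidden layer learns its own intermediate features, with no hand-engineered inputs, which is the point the summary emphasizes.

```python
import numpy as np

# Toy two-layer neural network trained with backpropagation on XOR.
# Illustrative only; real deep-learning systems stack many more layers.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)       # layer 1: learns intermediate features
    out = sigmoid(h @ W2 + b2)     # layer 2: combines them into a decision
    d_out = (out - y) * out * (1 - out)        # backpropagated error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(0)

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```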
This discussion has been archived. No new comments can be posted.


  • fly brains (Score:4, Interesting)

    by femtobyte ( 710429 ) on Tuesday May 07, 2013 @08:10PM (#43660227)

    Are we ready to blur the line between hardware and wetware?

    We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster. Whether we're *ready* for this is another question; as is whether this is what folks have in mind by "AI."

  • by ebno-10db ( 1459097 ) on Tuesday May 07, 2013 @09:00PM (#43660609)

    For such a blatant, transparent, promotional, hyperbolic "story", I wish soulskill would at least throw in a sarcastic jab or two to balance out the stench a bit.

    Agreed. This story smells of the usual Google hype.

    I think it's great that there is more research in this area, but "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI" suggests that Google is at the forefront of this stuff. They're not. Look at the Successes in Pattern Recognition Contests since 2009 [wikipedia.org]. None of Ng, Stanford, Google or Silicon Valley are even mentioned. Google's greatest ability is in generating hype. It seems to be the stock-in-trade of much of Silicon Valley. Don't take it too seriously.

Generating this type of hype for your company is an art. I used to work for a small company run by a guy who was a whiz at it. What you have to understand is that reporters are always looking for stories, and this sort of spoon-fed stuff is easy to write. Forget about "Wired". The guy I knew could usually get an article in the NYT or WSJ in a day or two.

  • by femtobyte ( 710429 ) on Tuesday May 07, 2013 @09:17PM (#43660717)

Do you really think AI will not follow a Moore-type law? It will probably be even more aggressive.

    I personally expect Moore's Law to set a lower bound on the time needed for advancement. Doubling every 18-24 months means 20-30 years to get human-sized big ol' clusters of neurons. However, there's also so much work to do on understanding the specifics of how to get particular results (e.g. language and "symbolic thought") instead of just gigantic twitching masses of incoherent craziness.

In order to try out ideas and test hypotheses, you really need to be able to run a whole bunch of human-brain-scale simulators at far higher speed than the human brain (learning a language takes a couple years for a developing human brain, and you're very unlikely to get this "right" with only one or two tries). I think once we have 10^3 - 10^6 times more "raw neuron simulation" processing power than a single human brain (so another 10 to 40 years after the 20-30 years for single-brain neuron simulations), then we'll be able to crank out simulations of the "hard stuff" fast enough to make rapid progress on the high-level issues. Of course, this means that once you do have a couple of "breakthroughs" in generating self-aware, learning, human-language-understanding machines, you're very suddenly dropped into having far-exceeding-human artificial intelligences, without so much as a slow progression through "retarded chimpanzee" stages first.
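
For what it's worth, the arithmetic behind those ranges is just repeated doubling. A quick sketch (the scaling factors and doubling periods are the parent post's assumptions, not measurements):

```python
import math

# Years needed to gain a given factor of computing power,
# assuming Moore's-Law-style doubling (parent's assumptions).
def years_to_scale(factor, months_per_doubling):
    return math.log2(factor) * months_per_doubling / 12.0

for factor in (1e3, 1e6):
    for months in (18, 24):
        print(f"{factor:.0e}x at {months} mo/doubling: "
              f"{years_to_scale(factor, months):.0f} years")
# ~15-20 years for 10^3x and ~30-40 years for 10^6x,
# matching the parent's "another 10 to 40 years" range.
```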

  • Good points (Score:3, Interesting)

    by Okian Warrior ( 537106 ) on Tuesday May 07, 2013 @09:17PM (#43660725) Homepage Journal

    You highlight important points, of which AI researchers should take note.

    We don't know what intelligence actually is, but we have an example of something that is unarguably intelligent: the mammalian brain. Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain. Most AI research fails this test.

I personally think in-depth modeling of individual neurons is too deep a level - it's like trying to make a CPU by modeling transistors. We might be better off using the fundamental function of a neuron as a basis - sort of like simulating a CPU using logic gates instead of transistors.
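To make the "logic gates instead of transistors" analogy concrete, one common middle-ground abstraction is the leaky integrate-and-fire neuron: it keeps a neuron's basic function (integrate inputs, fire past a threshold, leak otherwise) while discarding the ion-channel chemistry. A sketch with purely illustrative parameters:

```python
import numpy as np

# Leaky integrate-and-fire neuron: integrate input current, leak
# toward rest, emit a spike and reset when the threshold is crossed.
def lif_neuron(input_current, dt=1.0, tau=20.0, threshold=1.0):
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)   # leak + integrate
        if v >= threshold:         # fire and reset
            spikes.append(True)
            v = 0.0
        else:
            spikes.append(False)
    return spikes

rng = np.random.default_rng(1)
train = lif_neuron(rng.uniform(0, 0.1, size=200))
print(sum(train), "spikes in 200 time steps")
```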

But your point is well taken. Lots of research gets done under the catch-all phrase "AI" simply because the researchers do not constrain themselves in any way. What they build doesn't have to pass any criterion of realism, or even reasonableness.

  • Re:fly brains (Score:4, Interesting)

    by femtobyte ( 710429 ) on Tuesday May 07, 2013 @09:34PM (#43660835)

    Yes, I intended my very weasel-worded phrase to convey that even our present ability to "understand" Drosophila melanogaster is rather shallow and shaky --- your analysis of my words covers what I meant to include pretty well.

    why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

    I don't think we do. In fact, machines acting in utterly un-brainlike manners are extremely useful to me *today* --- when I want human-style brain functions, I've already got one of those installed in my head; computers are great for doing all the other tasks. However, making machines that work like brains might be the only way to understand how our own brains work --- a separate but also interesting task from making machines more useful at doing "intelligent" work in un-brainlike manners.

  • Re:fly brains (Score:4, Interesting)

    by White Flame ( 1074973 ) on Tuesday May 07, 2013 @09:45PM (#43660927)

Biological neurons are far more complex than ANN neurons. At this point it's unknown whether we can make up for that missing per-neuron complexity and dynamic state by using a larger ANN, or whether we must increase the per-neuron complexity to match the biological counterpart. I do have my doubts about the former, but that doubt is merely intuition, not science. We simply don't know yet.

    as is whether this is what folks have in mind by "AI."

In my own studies, this isn't the branch of AI I'm particularly interested in. I don't care about artificial life, or structures based around the limitations of the biological brain. I'd love to have a system that has perfect recall, can converse about its knowledge and thought processes so that conversational feedback takes immediate effect without lengthy retraining, and can tirelessly and meticulously follow instructions given in natural language.

    I don't see modeling biological brains as being a workable approach to that view of AI, except maybe the "tirelessly" part. I'm more interested in cognitive meta-models of intelligence itself than the substrate on which currently known knowledge happens to reside.

  • by White Flame ( 1074973 ) on Tuesday May 07, 2013 @10:00PM (#43661027)

    I'd mod you up if I could, but I think I can help out with a few points instead:

1) There is no concrete constructive definition of intelligence yet, and I think anybody at a higher level in the field knows that. Establishing that definition is a recognized part of AI research. Intelligence is still recognized comparatively, usually in terms of something like the capability to resolve difficult or ambiguous problems with similar or greater effect than humans, or to learn and react to dynamic environments with similar effect to other living things. Once we've created something that works and that we can tangibly study, we can begin to come up with real, workable definitions of intelligence that cover both the technological and biological instances of recognized intelligence.

    4) Modern ANN research sometimes includes altering the morphology of the network as part of training, not just altering the coefficients. I would hope something like that is in effect here.
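
For the curious, "altering the morphology" can be as simple as mutation operators that add nodes and connections during training, in the spirit of neuroevolution methods such as NEAT. A toy sketch (the dict-of-edges network representation here is invented for illustration):

```python
import random

# Topology mutation in the spirit of neuroevolution (e.g. NEAT):
# the network is a dict mapping (src, dst) node pairs to weights.
random.seed(7)

def mutate_add_connection(net, nodes):
    a, b = random.sample(nodes, 2)
    net.setdefault((a, b), random.gauss(0.0, 1.0))  # new random-weight edge

def mutate_add_node(net, nodes):
    # Split an existing connection a->b into a->new->b.
    (a, b), w = random.choice(list(net.items()))
    new = max(nodes) + 1
    nodes.append(new)
    del net[(a, b)]
    net[(a, new)] = 1.0   # roughly preserve the old behavior
    net[(new, b)] = w

nodes = [0, 1, 2]
net = {(0, 2): 0.5, (1, 2): -0.3}
mutate_add_node(net, nodes)
mutate_add_connection(net, nodes)
print(nodes, net)
```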

  • Re:fly brains (Score:4, Interesting)

    by femtobyte ( 710429 ) on Tuesday May 07, 2013 @10:17PM (#43661163)

I suppose some of the urge to "anthropomorphize" AIs comes from the lack of precedent, and even of understanding of what is possible, outside the two established categories of "calculating machine" and "biological brain." Some tasks and approaches are "obviously computery": if you need to sort a list of a trillion numbers, that's clearly a job for an old-fashioned computer with a sort algorithm. On the other hand, other tasks seem very "human": say, having a discussion about art and religion. There is some range of "animal" tasks in between, like image recognition and navigating through complex 3D environments. But we have no analogous mental category for non-biological "intelligent" systems --- so we think of them in terms of replicating, or even being, biological brains, without appropriate language for other possibilities.

  • by Black Parrot ( 19622 ) on Tuesday May 07, 2013 @10:37PM (#43661333)

    What precisely are those long-standing problems?

    I ask because I actually know people who are starting to demonstrate the rudiments of intelligence using simulations of ~100,000 neurons.

    Per upthread, that's a long way from a brain, and in fact we don't even know how all of the brain is wired, let alone how it works. But you might want to consider this [engineerin...lenges.org] and this [wikipedia.org] and this [nature.com].

    If they're attempting the impossible, you should let them know not to waste their money.

  • by Okian Warrior ( 537106 ) on Tuesday May 07, 2013 @11:50PM (#43661819) Homepage Journal

    That's a fool's errand. The goal of the developer should be to build a system that accomplishes tasks and is able to auto-improve the speed of accomplishing repetitive tasks with minimal (no) human intervention.

    The goal of the philosopher is to lay out what intelligence "is". These tracks should be run in parallel and the progress of one should have little-to-no impact on the progress of the other.

    Do you consider proper definitions necessary for the advancement of mathematics?

    Take, for example, the [mathematics] definition of "group". It's a constructive definition, composed of parts which can be further described by *their* parts. Knowing the definition of a group, I can test if something is a group, I can construct a group from basic elements, and I can modify a non-group so that it becomes a group. I can use a group as a basis to construct objects of more general interest.
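
That constructive, testable quality can even be run as code. A small illustrative checker for finite sets (the example operations are mine, chosen for brevity):

```python
from itertools import product

def is_group(elements, op):
    """Check the group axioms for a finite set under binary operation op."""
    elements = list(elements)
    # Closure
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity
    e = next((x for x in elements
              if all(op(x, a) == a and op(a, x) == a for a in elements)), None)
    if e is None:
        return False
    # Inverses
    return all(any(op(a, b) == e for b in elements) for a in elements)

# Integers mod 5 form a group under addition, but not under
# multiplication (0 has no inverse).
print(is_group(range(5), lambda a, b: (a + b) % 5))  # True
print(is_group(range(5), lambda a, b: (a * b) % 5))  # False
```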

    Are you suggesting that mathematics should proceed and be developed... without proper definitions?

    That a science - any science - can proceed without such a firm basis is an interesting position. Should other areas of science be developed without proper definitions? How about psychology (no proper definition of clinical ailments)? Medicine? Physics?

I'd be interested to hear your views on other sciences. Or if not, why then is AI different from other sciences?

  • by femtobyte ( 710429 ) on Wednesday May 08, 2013 @12:54AM (#43662157)

    As I agree in another branch of this thread, we probably will find "non-brainlike" methods to generate all sorts of "intelligent" behavior, continuing the same type of progress (not particularly worrying about biologically accurate brain models) that gives us self-driving cars. On the other hand, it's a separate worthwhile field of study to learn how *our* brains work, through models that capture key features of biological brains.

    If our models lead us to *understanding* of how brains work, we could get there a good deal faster and find that present day computers are plenty complex to handle cognition on a human-equivalent level.

Maybe; maybe not. Our understanding might well *not* allow much brain function (above the Drosophila level, which is about right for a moderate-sized supercomputer today) to be vastly simplified for lesser computing resources --- maybe you do *need* zillions of complexly interlinked neurons to see more interesting higher-level behaviors (in a brain-like manner, not by creating non-brainlike intelligences like the self-driving car that have similar "skills"). The brain may not neatly "factor" into simple-to-computationally-model "subsystems". If you look at, e.g., chemical pathway maps for how a cell functions, everything is tangled together with everything else --- biological systems often evolve "spaghetti code" solutions to problems, without the neatly defined boundaries and modularity that a "top down" systems designer would impose.

"If it ain't broke, don't fix it." - Bert Lantz

Working...