A.I. Advances Through Deep Learning

An anonymous reader sends this excerpt from the NY Times: "Advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking. ... But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just 'neural nets' for their resemblance to the neural connections in the brain. 'There has been a number of stunning new results with deep-learning methods,' said Yann LeCun, a computer scientist at New York University who did pioneering research in handwriting recognition at Bell Laboratories. 'The kind of jump we are seeing in the accuracy of these systems is very rare indeed.' Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic. ... But recent achievements have impressed a wide spectrum of computer experts. In October, for example, a team of graduate students studying with the University of Toronto computer scientist Geoffrey E. Hinton won the top prize in a contest sponsored by Merck to design software to help find molecules that might lead to new drugs. From a data set describing the chemical structure of 15 different molecules, they used deep-learning software to determine which molecule was most likely to be an effective drug agent."

  • by drooling-dog ( 189103 ) on Sunday November 25, 2012 @12:44AM (#42085161)

    I wonder how much of these improvements in accuracy are due to fundamental advances, vs. the capacity of available hardware to implement larger models and (especially?) the availability of vastly larger and better training sets...

  • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Sunday November 25, 2012 @01:06AM (#42085237) Homepage

    I wonder how much of these improvements in accuracy are due to fundamental advances

    I was wondering the same thing, and just now found this interview [kaggle.com] on Google. Perhaps someone can fill in the details.

    But basically, machine learning is at its heart hill-climbing on a multi-dimensional landscape, with various tricks thrown in to avoid local maxima (a minimal sketch of the idea follows at the end of this comment). Usually, humans determine the dimensions to search on -- these are called the "features". Well, philosophically, everything is ultimately created by humans because humans built the computers, but the holy grail is to minimize human involvement -- "unsupervised learning". According to the interview, this particular team (the one mentioned at the end of the Slashdot summary) essentially rode the bicycle with no hands: to demonstrate how strong their neural network was at determining its own features, they did not guide it, even though that meant the also-excellent conventional machine learning at the end of their pipeline would be handicapped.

    The last time I looked at neural networks was circa 1990, so perhaps someone writing to an audience more technically literate than the New York Times general audience could fill in the details for us on how a neural network can create features.
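
    For concreteness, here is a minimal sketch of that hill-climbing idea in Python. The bumpy objective function, step size, and restart count are made up purely for illustration; the coordinates of the point being climbed stand in for the hand-chosen "features".

        import math
        import random

        def hill_climb(objective, start, step=0.1, iterations=1000):
            """Greedy local search: move whenever a random nearby point scores higher."""
            current, best = start, objective(start)
            for _ in range(iterations):
                candidate = [x + random.uniform(-step, step) for x in current]
                score = objective(candidate)
                if score > best:
                    current, best = candidate, score
            return current, best

        def objective(point):
            # A bumpy 2-D landscape with many local maxima.
            x, y = point
            return math.sin(3 * x) * math.cos(3 * y) - 0.1 * (x * x + y * y)

        # Random restarts are one of the "tricks thrown in to avoid local maxima":
        # run several independent climbs from random starting points, keep the best.
        climbs = [hill_climb(objective, [random.uniform(-5, 5), random.uniform(-5, 5)])
                  for _ in range(10)]
        print(max(climbs, key=lambda result: result[1]))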

  • by Anonymous Coward on Sunday November 25, 2012 @01:06AM (#42085239)

    Don't forget that it's not impossible to build a specially designed processor for a particular task, such as the Digital Orrery. A device built to do nothing but neural-net simulation would be more efficient than a general-purpose computer; it would be linked to one to provide a convenient interface, but would do most of the heavy lifting itself.

  • Re:Deep learning? (Score:4, Insightful)

    by AthanasiusKircher ( 1333179 ) on Sunday November 25, 2012 @01:27AM (#42085299)

    A lot of vague marketing-speak in this article. "Deep learning"?

    Agreed. Why do we need the adjective "deep"? Perhaps it's because a lot of AI jargon uses "learning" when it really just means "adaptive" (as in, "programmed to respond to novel stimuli in anticipated ways"), whereas normal human "learning" is much more fluid.

    The article basically talks about neural networks

    Yet another victory for marketing. These things have been around for at least 25-30 years, and the connection to what little we have actually deciphered about how the brain encodes, decodes, and processes information has always been incredibly tenuous. There always seem to be these AI strands of "cognitive science" or "neural modeling," which are often nothing more than somebody's pet algorithm or black box dressed up with words that make it sound like it has some scientific basis in actual neurophysiology or something.

    Don't get me wrong -- I'm sure some of the examples in TFA have made great advances, partly due to speed and hardware unthinkable 25-30 years ago. And some of the functionality of the "neural nets" might give significantly better results than previous models.

    But I really wish people would lay off the pretend connections to humanity. Why can't we just accept that a machine might function better with a better program or algorithm or whatever, rather than saying that "our research in cognitive science [i.e., BS philosophy of the mind] has resulted in neural networks [i.e., a mathematical model instantiated into programming constructs] that exhibit deep learning [i.e., work better than the previous crap]."

    (Please note: I mean no insult to anyone who works in neuroscience or AI or whatever. But I do question the jargon that seems to make unfounded connections and assumptions that the brain works anything like many algorithmic "models." We may succeed in creating artificial intelligence by developing our own algorithms or we might succeed by imitating the brain, but I don't think we're making progress by pretending that we're imitating the brain when we're really just using marketing jargon for our pet mathematical algorithm.)

  • by iggymanz ( 596061 ) on Sunday November 25, 2012 @01:27AM (#42085301)

    No, that first sentence pretty much summed up digital neural nets over two decades ago. So the gains are more likely due to the two-plus orders of magnitude improvement in processing power per chip since then, along with addressable memory over three orders of magnitude larger...

  • by Daniel Dvorkin ( 106857 ) on Sunday November 25, 2012 @02:01AM (#42085395) Homepage Journal

    the holy grail is to minimize human involvement -- "unsupervised learning"

    Unsupervised learning is valuable, but calling it a "holy grail" is going a little too far. Supervised, unsupervised, and semi-supervised learning are all active areas of research.

  • by PlusFiveTroll ( 754249 ) on Sunday November 25, 2012 @02:15AM (#42085415) Homepage

    The article didn't say, but if I had to make a guess, this is where I would start.

    http://www.neurdon.com/2010/10/27/biologically-realistic-neural-models-on-gpu/ [neurdon.com]
    "The maximal speedup of GPU implementation over dual CPU implementation was 41-fold for the network size of 15000 neurons."

    This was done on cards that are 7 years old now. The massive increase in GPU power in the past few years, along with more features and better programming languages for them, means the performance increase could now be many hundreds of times larger. An entire cluster of servers gets crunched down into one card, you can put multiple cards in one server, and if you build a cluster of those you can quickly see that the amount of computing power available to neural networks is much, much larger now. I'm not even sure how to compare the GeForce 6800 to a modern GTX 680 because of their huge differences, but the 6800 did about 54 GFLOPS and the 680 does 3090.4 -- a 57x increase. How far back do we have to go to find CPUs that are 57 times slower? If everything scales the same way as in the paper's calculations, it would mean over a 2000x performance increase on a single computer with one GPU. In 7 years.
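
    A back-of-the-envelope version of that arithmetic, treating the quoted figures as given and assuming (purely as an assumption) that the simulation scales linearly with raw GFLOPS:

        # Rough arithmetic behind the estimate above; linear scaling is assumed for illustration.
        paper_gpu_speedup = 41      # GPU vs. dual-CPU speedup reported for 15,000 neurons
        gflops_6800 = 54.0          # GeForce 6800, approximate peak single-precision GFLOPS
        gflops_680 = 3090.4         # GeForce GTX 680, approximate peak single-precision GFLOPS

        generation_ratio = gflops_680 / gflops_6800          # roughly 57x
        implied_speedup = paper_gpu_speedup * generation_ratio

        print(f"GPU-to-GPU ratio: {generation_ratio:.0f}x")
        print(f"Implied speedup over the original dual-CPU baseline: {implied_speedup:.0f}x")
        # Prints roughly 57x and a bit over 2300x -- which is where "over a 2000x increase" comes from.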

  • by timeOday ( 582209 ) on Sunday November 25, 2012 @03:26AM (#42085575)

    You can see from the numbers in the article the results are about what you'd expect from improved hardware (as opposed to actually solving the problem)

    "As opposed to actually solving the problem"? You brain has about 86 billion neurons and around 100 trillion synapses. It accounts for 2% of body weight and 20% of energy consumed. Do you think these numbers would be large if they didn't need do be?

    I think computer science's near-exclusive focus on polynomial-time algorithms has really stunted it. Maybe most of the essential tasks for staying alive and reproducing don't happen to have efficient solutions, but the constants of proportionality are small enough to brute-force with a hundred trillion synapses.

  • by smallfries ( 601545 ) on Sunday November 25, 2012 @06:02AM (#42085901) Homepage

    The problem comes when you try larger inputs. Regardless of constant factors, if you are playing with O(2^n) algorithms then n will not increase above about 30. If you start looking at really weird stuff (optimal circuit design and layout), the core algorithms are O(2^2^n), and then if you are really lucky n will reach 5. Back in the 80s it only went to 4, but that's Moore's law for you.
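
    To put rough numbers on those limits (the problem sizes are the ones mentioned above; n=31 and n=6 are added just to show the next doubling):

        # Why n tops out around 30 for O(2^n) and around 5 for O(2^(2^n)).
        for n in (30, 31):
            print(f"2^{n} = {2**n:,}")            # low billions of steps: feasible
        for n in (4, 5, 6):
            print(f"2^(2^{n}) = {2**(2**n):,}")   # 65,536 / ~4.3 billion / ~1.8e19 steps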
