AI Science

A.I. Advances Through Deep Learning

An anonymous reader sends this excerpt from the NY Times: "Advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking. ... But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just 'neural nets' for their resemblance to the neural connections in the brain. 'There has been a number of stunning new results with deep-learning methods,' said Yann LeCun, a computer scientist at New York University who did pioneering research in handwriting recognition at Bell Laboratories. 'The kind of jump we are seeing in the accuracy of these systems is very rare indeed.' Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic. ... But recent achievements have impressed a wide spectrum of computer experts. In October, for example, a team of graduate students studying with the University of Toronto computer scientist Geoffrey E. Hinton won the top prize in a contest sponsored by Merck to design software to help find molecules that might lead to new drugs. From a data set describing the chemical structure of 15 different molecules, they used deep-learning software to determine which molecule was most likely to be an effective drug agent."

  • by PlusFiveTroll ( 754249 ) on Sunday November 25, 2012 @12:49AM (#42085175) Homepage

    from TFA

    " Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be “trained” by repeated exposures to recognize patterns like images or sounds.

    These techniques, aided by the growing speed and power of modern computers, have led to rapid improvements in speech recognition, drug discovery and computer vision. "

    Sounds like both.
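
    A rough NumPy sketch of the structure that quote describes: inputs feeding hidden layers feeding outputs, "trained" by repeated exposure to patterns. The task, layer sizes and learning rate below are made up purely for illustration:

      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      # Toy "pattern recognition" task: learn XOR from repeated exposures.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
      y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

      # One hidden layer sitting between the inputs and the output.
      W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
      W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

      for step in range(20000):                # repeated exposure to the patterns
          h = sigmoid(X @ W1 + b1)             # hidden-layer activations
          out = sigmoid(h @ W2 + b2)           # network outputs

          # Backpropagate the squared error and take a small gradient step.
          d_out = (out - y) * out * (1 - out)
          d_h = (d_out @ W2.T) * h * (1 - h)
          W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
          W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

      print(out.round(2).ravel())              # should approach [0, 1, 1, 0]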

  • Deep Belief Networks (Score:5, Informative)

    by Guppy ( 12314 ) on Sunday November 25, 2012 @01:04AM (#42085223)

    A lot of vague marketing-speak in this article. "Deep learning"? The article basically talks about neural networks, just one of the techniques in machine learning.

    It's hard to tell from the article, but they are probably referring to Deep Belief Networks [scholarpedia.org], a more recent and advanced type of neural network that incorporates many layers:

    Deep belief nets are probabilistic generative models that are composed of multiple layers of stochastic, latent variables. The latent variables typically have binary values and are often called hidden units or feature detectors. The top two layers have undirected, symmetric connections between them and form an associative memory. The lower layers receive top-down, directed connections from the layer above. The states of the units in the lowest layer represent a data vector.
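
    For the curious, here is a toy NumPy sketch of a single Restricted Boltzmann Machine layer trained with one step of contrastive divergence (CD-1), the building block that gets stacked to form a deep belief net. The sizes, learning rate and stand-in data are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(1)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      n_visible, n_hidden = 784, 256   # e.g. pixel inputs and binary feature detectors
      W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
      b_vis, b_hid = np.zeros(n_visible), np.zeros(n_hidden)

      data = (rng.random((100, n_visible)) < 0.1).astype(float)  # stand-in binary data
      lr = 0.05

      for epoch in range(10):
          v0 = data
          # Upward pass: probabilities, then stochastic binary states, of the hidden units.
          p_h0 = sigmoid(v0 @ W + b_hid)
          h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
          # Top-down pass: reconstruct a data vector, then re-infer the hidden units.
          p_v1 = sigmoid(h0 @ W.T + b_vis)
          p_h1 = sigmoid(p_v1 @ W + b_hid)
          # CD-1 update: move toward the data statistics, away from the reconstruction's.
          W     += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
          b_vis += lr * (v0 - p_v1).mean(axis=0)
          b_hid += lr * (p_h0 - p_h1).mean(axis=0)

      # A deep belief net stacks several such layers: the hidden activations of one
      # trained RBM become the "data" for the next, with the top two layers acting
      # as the undirected associative memory described above.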

  • by Anonymous Coward on Sunday November 25, 2012 @01:49AM (#42085365)

    Glad they were able to make it work so quickly, but drug discovery has been done like this for over a decade. I worked at an "Infomesa" startup that was doing this in Santa Fe in 2000.

  • by Prof.Phreak ( 584152 ) on Sunday November 25, 2012 @01:57AM (#42085385) Homepage

    The "new" (i.e., last decade or so) advances are in training the hidden layers of neural networks. Kinda like peeling an onion, each layer getting a progressively coarser representation of the problem. E.g., if you have 1,000,000 inputs and after a few layers only have 100 hidden nodes, those 100 nodes in essence represent all the "important" (by some benchmark you choose) information in those 1,000,000 inputs.
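
    A rough NumPy sketch of that onion-peeling idea; the layer sizes are arbitrary stand-ins, scaled down from the 1,000,000-to-100 example:

      import numpy as np

      rng = np.random.default_rng(2)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      # Progressively narrower layers: each keeps a coarser summary of the layer below.
      layer_sizes = [10_000, 1_000, 300, 100]
      weights = [rng.normal(scale=0.01, size=(m, n))
                 for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

      x = rng.random(10_000)            # one raw input vector
      h = x
      for W in weights:                 # peel the onion, layer by layer
          h = sigmoid(h @ W)

      print(h.shape)                    # (100,) -- the compact summary of the input
      # In practice each layer's weights are learned (e.g. by training the layer to
      # reproduce its own input) rather than left random as they are here.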

  • by Black Parrot ( 19622 ) on Sunday November 25, 2012 @02:30AM (#42085439)

    I wonder how much of these improvements in accuracy are due to fundamental advances, vs. the capacity of available hardware to implement larger models and (especially?) the availability of vastly larger and better training sets...

    I'm sure all of that helped, but the key ingredient is training mechanisms. Traditionally networks with multiple layers did not train very well, because the standard training mechanism "backpropagates" an error estimate, and it gets very diffuse as it goes backwards. So most of the training happened in the last layer or two.

    This changed in 2006 with Hinton's invention of the Restricted Boltzmann Machine, and someone else's insight that you can train one layer at a time using auto-associative methods.

    "Deep Learning" / "Deep Architectures" has been around since then, so this article doesn't seem like much news. (However, it may be that someone is just now getting the kind of results that they've been expecting for years. Haven't read up on it very much.)

    These methods may be giving ANNs a third lease on life. Minsky & Papert almost killed them off with their book on perceptrons in 1969[*], then Support Vector Machines nearly killed them again in the 1990s.

    They keep coming back from the grave, presumably because of their phenomenal computational power and function-approximation capabilities.[**]

    [*] FWIW, M&P's book shouldn't have had that effect, since it was already known that networks of perceptrons don't have the limitations of a single perceptron.

    [**] Siegelmann and Sontag put out a couple of papers, in the 1990s I think, showing that (a) you can construct a Turing Machine with an ANN that uses rational numbers for the weights, and (b) using real numbers (real, not floating-point) would give a trans-Turing capability.
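
    A back-of-the-envelope NumPy sketch of the "error gets diffuse" problem described above: with sigmoid units the backpropagated error is multiplied at every layer by a derivative of at most 0.25 (times a weight), so in a deep stack the signal reaching the early layers shrinks geometrically. Depth, width and initialization here are arbitrary:

      import numpy as np

      rng = np.random.default_rng(3)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      depth, width = 12, 50
      # Random sigmoid layers with a conventional small initialization.
      weights = [rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
                 for _ in range(depth)]

      # Forward pass, remembering each layer's activations.
      h, activations = rng.random(width), []
      for W in weights:
          h = sigmoid(h @ W)
          activations.append(h)

      # Backward pass: push a unit error back through the stack and watch it fade.
      delta = np.ones(width)
      for W, a in zip(reversed(weights), reversed(activations)):
          delta = W @ (delta * a * (1 - a))     # one layer of backpropagation
          print(f"error norm after this layer: {np.linalg.norm(delta):.2e}")

      # The norms fall by orders of magnitude, which is why mostly the last layer
      # or two learned anything before layer-wise pre-training came along.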

  • by Anonymous Coward on Sunday November 25, 2012 @02:32AM (#42085443)

    I'm doing Prof. Hinton's course on Neural Networks on Coursera this semester. It covers the old-school stuff plus the latest and greatest. From what I gather from the lectures, training neural networks with lots of layers hasn't been practical in the past and was plagued with numerical and computational difficulties. Nowadays we have better algorithms and much faster hardware. As a result we now have the ability to use more complex networks for modelling data. However, they need a lot of computational power thrown at them to learn compared to other machine learning algorithms (e.g., random forests). The lectures quote training taking days on an Nvidia GTX 295 GPU to learn the MNIST handwritten-digit dataset. Despite this, the big names are already using this technology for applications like speech recognition (Microsoft, Siri) and object recognition (Google's cat video, okay, that's not a real application yet).

  • Re:Deep learning? (Score:4, Informative)

    by Black Parrot ( 19622 ) on Sunday November 25, 2012 @02:35AM (#42085453)

    Why do we need the adjective "deep"?

    Because the "deep learning" technologies use artificial neural networks with many more layers than traditionally, making them "deep architectures".

    It's widely accepted that the first hidden layer of an ANN serves as a feature detector (possibly sub-symbolic features that you can't put a name to), and each successive layer serves as a detector for higher-order features. Thus the deep architectures can be expected to have some utility for any problem that depends on feature analysis.
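
    A loose NumPy sketch of that reading of the layers, treating each hidden layer as a re-description of its input and the top layer as a feature vector for some later stage; the weights and data here are placeholders, not a trained network:

      import numpy as np

      rng = np.random.default_rng(4)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      # Pretend these weights came from an already-trained deep network.
      sizes = [784, 512, 256, 128]
      weights = [rng.normal(scale=0.05, size=(m, n))
                 for m, n in zip(sizes[:-1], sizes[1:])]

      def deep_features(x):
          """Layer 1: simple, possibly sub-symbolic feature detectors.
          Each later layer: detectors for higher-order combinations of those."""
          h = x
          for W in weights:
              h = sigmoid(h @ W)
          return h

      # Any feature-analysis problem can then work from the learned description,
      # e.g. by fitting a plain linear classifier on it instead of on raw pixels.
      X_raw = rng.random((200, 784))                        # stand-in data
      X_feat = np.array([deep_features(x) for x in X_raw])  # 200 x 128 feature vectors
      print(X_feat.shape)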

  • by ShanghaiBill ( 739463 ) on Sunday November 25, 2012 @03:03AM (#42085513)

    Why build a special processor when ATI and Nvidia already do? Probably at a much lower cost per calculation than a custom machine.

    A GPU can run a neural net much more efficiently than a general purpose CPU, but specialized hardware designed just for NNs could be another order of magnitude more efficient. Of course GPUs are more cost effective because they are mass market items, but if NN applications take off it is likely that everyone will want one running on their cellphone, and then customized NN hardware will be mass market too.

  • by Tagged_84 ( 1144281 ) on Sunday November 25, 2012 @03:44AM (#42085623)
    IBM recently announced success in simulating 2 billion of their custom designed synaptic cores, 1 trillion synapses apparently. Here's the pdf report [modha.org]
  • by maxwell demon ( 590494 ) on Sunday November 25, 2012 @07:58AM (#42086223) Journal

    Given that almost every real number encodes an uncountable number of bits of information, I guess this isn't especially surprising in retrospect. The result though should make us suspicious of the assumption that the physical constants and properties in our physical theories can indeed take any real number value.

    The number of bits needed to represent an arbitrary real number exactly is infinite, but not uncountable.
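
    (A small worked version of that point, just to make the countability explicit: writing a real x in [0, 1) in binary as

        x = \sum_{n=1}^{\infty} b_n 2^{-n},   b_n \in \{0, 1\},

    the bits b_1, b_2, b_3, ... are indexed by the natural numbers, so there are infinitely many of them but only countably many; an integer part adds only finitely more.)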

  • by Anonymous Coward on Sunday November 25, 2012 @09:34AM (#42086451)

    A garden snail has about 20,000 neurons, a cat has 1 billion neurons, a human has 86 billion neurons.

    http://www.guardian.co.uk/science/blog/2012/feb/28/how-many-neurons-human-brain [guardian.co.uk]

  • by Fnord666 ( 889225 ) on Sunday November 25, 2012 @11:38AM (#42087057) Journal
    Here [youtube.com] is a good video of a talk given by Dr. Hinton about Restricted Boltzmann Machines. It is a very promising technique for deep learning strategies.
