AI Programming Science Technology

The Believers: Behind the Rise of Neural Nets

An anonymous reader writes: Deep learning is dominating the news these days, but it's quite possible the field could have died if not for a mysterious call that Geoff Hinton, now at Google, got one night in the 1980s: "You don't know me, but I know you," the mystery man said. "I work for the System Development Corporation. We want to fund long-range speculative research. We're particularly interested in research that either won't work or, if it does work, won't work for a long time. And I've been reading some of your papers." The Chronicle of Higher Ed has a readable profile of the minds behind neural nets, from Rosenblatt to Hassabis, told primarily through Hinton's career.

Comments Filter:
  • NSA (Score:5, Funny)

    by thhamm ( 764787 ) on Thursday February 26, 2015 @04:34AM (#49135239)
    "You don't know me, but I know you," the mystery man said.

    We call them "NSA" now.
    • Or RTFA... (Score:2, Informative)

      by Anonymous Coward

      "You don’t know me, but I know you," Smith told him. "I work for the System Development Corporation. We want to fund long-range speculative research. We’re particularly interested in research that either won’t work or, if it does work, won’t work for a long time. And I’ve been reading some of your papers."

      Hinton won $350,000 from this mysterious group. He later learned its origins: It was a subsidiary of the nonprofit RAND Corporation that had ended up making millions in profit

    • My guess is Rusty Shackleford.

    • by ceoyoyo ( 59147 )

      If you read the article, it turned out to be someone from the RAND Corporation. So you're modded as funny, but...

    • by PPH ( 736903 )

      Funny, but the System Development Corporation (aka RAND) is primarily a supplier to the US military and other three-letter intelligence agencies. There was probably plenty of good research in various fields that was intercepted by the likes of them, stamped 'Top Secret', and lost from public view for decades.

      I used to work for an outfit with some serious machine learning, natural language recognition applications. When 9/11 hit, they saw the handwriting on the wall. With the Patriot Act, Homeland Security and the N

  • by narcc ( 412956 ) on Thursday February 26, 2015 @04:38AM (#49135247) Journal

    We're particularly interested in research that either won't work or, if it does work, won't work for a long time. And I've been reading some of your papers.

    Sounds like a pretty damning indictment.

    • by Anonymous Coward

      Makes perfect sense. You research what won't work, then you tell your enemies that it will work, and watch them waste their time trying to make it work.

    • We're particularly interested in research that either won't work or, if it does work, won't work for a long time. And I've been reading some of your papers.

      Sounds like a pretty damning indictment.

      It actually has at least a couple of advantages. The sooner you learn what doesn't work, the quicker you stop sinking vast sums of money into trying to make it work. And if an adversary is working on it, you can be assured it is a waste of time and money they could be spending on something that might actually work. Of course, sometimes people are wrong about what won't work because they give up too soon, which is the downside of asking what won't work.

  • Ha (Score:2, Insightful)

    by Anonymous Coward

    No mystery caller was responsible for neural nets taking off. Computers exist to compute as extensions of ourselves, and a neural net is a way to extend more of ourselves into the computational system. Saying "neural nets wouldn't exist if x didn't call y in the middle of the night" is a bit like saying "the if statement wouldn't exist if the original person to think of the word 'if' didn't exist". It filled a role, so it was a natural advancement, and the stranger thing would be it not existing.

    • Re:Ha (Score:4, Interesting)

      by TapeCutter ( 624760 ) on Thursday February 26, 2015 @07:38AM (#49135845) Journal
      Skimmed the article; conspiratorial themes aside, it seems like a good general history of neural nets.

      To answer what I see as the main question in TFA, here's the difference "this time around".

      I've been interested in AI and automata since the early 80's, sporadically following closely over the years. Life distracted me from this interest for most of the noughties. The first time I watched IBM's Jeopardy stunt with Watson I was blown away, the missus shrugged and said "It's impressive but what's the big deal, it's just looking up the answers, like google with talking, right?" I tried to explain why my jaw was on the floor, but all I got was a blank look and a change of subject.

      Far from being overhyped, I think the general public simply don't comprehend the significance of these developments. They see it as 'hype' because, like my missus, they simply don't comprehend the problem and tend to grossly underestimate the difficulty of solving it. IMO the Watson stunt is one of the most significant technological feats I've witnessed since the moon landings, and possibly the start of a new Apollo-style arms race based on the same old fears. That doesn't mean I think all the problems in AI have been solved, but machines like Watson are very strong evidence that we have recently cleared a significant hurdle (that few in the general public have even noticed).

      To me, this period in AI is very reminiscent of where digital comms were in the early 90's. Most of the bits for the comms revolution existed but rarely talked to each other: pagers, email, mobile phones, computers, printers, fax, GPS, fibre optics, etc. Just a few years later everyone was talking about "convergence", and "as foretold" pretty much all of those things and more have now converged into the ubiquitous smart phone. In 1990, virtually nobody on the planet saw the internet coming (including me). I was at Uni as a mature-age CS/Math student, '88-'91, perfectly placed in space and time to see it born, but I didn't notice it.

      I first heard about HTML and Mosaic at Uni; one of our CS lecturers was very impressed and went on a tangential rant about it one day in a networking lecture. Still, nobody in his hijacked audience I talked to afterwards could figure out why he was so impressed. "What's wrong with zmodem?" was a typical comment that I would have agreed with then.

      I think we are more or less at that "1990" point where everyone will soon start talking more and more about "convergence" in AI. The Watson that won Jeopardy in 2011(?) required 20 tons of air-conditioning alone, today an instance of Watson fits on a "pizza box" server and you can try out your own Watson instance for free with a web based developer's API (google it). Their goal is to squeeze Watson into a smart phone.

      A couple of things that a Watson-style AI may "converge" with, aside from phones, are "Big Dog", which has pretty much solved the autonomous movement/balance problem, and face recognition software, which has also made big strides in the past few years. What the end result will be when it all converges and evolves, or even when it will converge, I have no idea, but a dystopian SkyNet-style future is no longer purely fiction. From a less pessimistic POV, AI could serve as a "check and balance" in a democracy full of bullshitters: a tool to fact-check the waffle and make evidence-based, transparent recommendations on public policy free from partisan politics, in other words to "speak truth to power", like the public service in a democracy is supposed to be doing now.

      Disclaimer: The "missus" is far from dumb; she has a PhD in Business and Marketing and lectures to several hundred students at a time. I sometimes fail to see why she is interested/impressed by some obscure event in the business news and politely change the subject :)
      • Yea, I'm with you on people not getting it. I wanted to show people a picture of a shed that I had taken a couple of months back. It was buried under hundreds of photos, so it was hard to find. I just punched "shed" into the Google Photos search for my photo collection and, lo and behold, dozens of pictures of different types of sheds from different angles all showed up in my search results. Typing in "brown shed" filtered it down to brown, and then "light brown shed" gave me just the light brown shed, whi
  • by Pinky's Brain ( 1158667 ) on Thursday February 26, 2015 @07:11AM (#49135721)

    Last time I looked there was no application of ANNs which couldn't be solved more efficiently by other algorithms ... and the best ANNs used spiking neurons with Hebbian learning which are not amenable to efficient digital implementation.
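    For anyone who hasn't met the jargon: below is a bare-bones Python/NumPy sketch of the Hebbian rule the parent mentions ("neurons that fire together wire together"). It's purely illustrative, not from TFA or any particular paper:

    import numpy as np

    def hebbian_update(w, x, lr=0.1):
        # Hebb's rule: strengthen each connection in proportion to the
        # correlation between its input and the neuron's output.
        y = w @ x                  # neuron output for this input pattern
        return w + lr * y * x

    w = np.array([0.1, 0.1, 0.1])
    for _ in range(10):
        w = hebbian_update(w, x=np.array([1.0, 0.0, 1.0]))
    print(w)  # weights on the two active inputs grow; the silent one stays put

    (Pure Hebbian updates grow without bound; real schemes add normalization. The rule is local and unsupervised, which is part of why the parent says it doesn't map well onto efficient digital implementations.)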

    • by CanarDuck ( 717824 ) on Thursday February 26, 2015 @08:05AM (#49135985)

      Last time I looked there was no application of ANNs which couldn't be solved more efficiently by other algorithms ... and the best ANNs used spiking neurons with Hebbian learning which are not amenable to efficient digital implementation.

      Is it possible that the last time you checked was a long time ago? Deep neural networks are again all the rage now (e.g., huge teams working with them at Facebook and Google) because:

      1. They have resulted in a significant performance improvement over previously state-of-the-art algorithms in many application tasks,
      2. Although they are computation-heavy, they are amenable to massive parallelization (modern computational power is probably the main reason why they have improved significantly with respect to ANNs of the '80s and '90s, given that the main architecture itself has not changed a lot, except possibly for the "convolution" trick, which effectively introduces hard-coded localization and spatial invariance; a sketch of that trick follows below).

      Check the Wikipedia page for "convolutional neural networks" as well as other /. entries: http://slashdot.org/tag/deeple... [slashdot.org] , and from yesterday http://tech.slashdot.org/story... [slashdot.org] .
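      To make the "convolution trick" in point 2 concrete, here's a toy NumPy sketch (my own illustration, not code from TFA or any library): one small kernel is reused at every image position, which is exactly the hard-coded localization and spatial invariance mentioned above.

      import numpy as np

      def conv2d(image, kernel):
          # 'Valid' 2D convolution: the same few kernel weights are shared
          # across all spatial positions instead of learning one weight per
          # pixel, giving locality and (approximate) translation invariance.
          ih, iw = image.shape
          kh, kw = kernel.shape
          out = np.zeros((ih - kh + 1, iw - kw + 1))
          for y in range(out.shape[0]):
              for x in range(out.shape[1]):
                  out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
          return out

      image = np.random.rand(8, 8)                  # stand-in "image"
      edge = np.array([[-1.0, -1.0, -1.0],
                       [-1.0,  8.0, -1.0],
                       [-1.0, -1.0, -1.0]])         # classic edge-detection filter
      print(conv2d(image, edge).shape)              # (6, 6) feature map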

      • by lorinc ( 2470890 )

        Last time I looked there was no application of ANNs which couldn't be solved more efficiently by other algorithms ... and the best ANNs used spiking neurons with Hebbian learning which are not amenable to efficient digital implementation.

        Is it possible that the last time you checked was a long time ago? Deep neural networks are again all the rage now (e.g., huge teams working with them at Facebook and Google) because:

        1. They have resulted in a significant performance improvement over previously state-of-the-art algorithms in many application tasks,
        2. Although they are computation-heavy, they are amenable to massive parallelization (modern computational power is probably the main reason why they have improved significantly with respect to ANNs of the '80s and '90s, given that the main architecture itself has not changed a lot, except possibly for the "convolution" trick, which effectively introduces hard-coded localization and spatial invariance).

        To be fair, it always seems to me that (1) and (2) are very closely related. The CNNs that won recent computer vision benchmarks are the only methods so far that have used that much processing power. Not that they're less efficient than others, though. It's just that I would love to see other methods given that much engineering, tuning, and dedicated computational power, and see how they compare.
        Also, note that when it comes to classification, the standard is to throw away the last layer and train a linear SVM on the penultimate layer.
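        For the curious, that last-layer recipe looks roughly like the following sketch (assuming scikit-learn; the random arrays are just stand-ins for real penultimate-layer activations and labels):

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        features = rng.normal(size=(200, 64))   # stand-in for CNN penultimate-layer activations
        labels = rng.integers(0, 5, size=200)   # stand-in for image class ids

        # Discard the net's own softmax output layer and fit a linear SVM
        # on the learnt features instead.
        clf = LinearSVC(C=1.0)
        clf.fit(features, labels)
        print(clf.score(features, labels))      # training accuracy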

      • by SpinyNorman ( 33776 ) on Thursday February 26, 2015 @10:41AM (#49137255)

        Compute power is only part of the reason for the recent success of neural nets. Other factors include:

        - Performance of neural nets increases with the amount of training data you have, almost without limit. Nowadays big datasets are available on the net (plus we have the compute power to handle them).

        - We're now able to train deep (multi-layer) neural nets using backprop, whereas it used to be considered almost impossible. It turns out that initialization is critical, as are various types of data and weight regularization and normalization.

        - A variety of training techniques (SGD + momentum, AdaGrad, Nesterov accelerated gradients, etc.) have been developed that both accelerate training (large nets can take weeks/months to train) and remove the need for some manual hyperparameter tuning.

        - Garbage in, garbage out. Your success in recognition tasks is only going to be as good as the feature representation available to the higher layers of your algorithms (whether conventional or neural net). Another recent advance has been substituting self-learnt feature representations for laboriously hand-designed ones, and there is now a standard neural net recipe of autoencoders + sparsity for implementing this (the sketch after this list ties the initialization, momentum, and sparsity points together).

        - And a whole bunch of other things...
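        To make a few of those bullets concrete, here's a toy Python/NumPy sketch of my own (not from the article): a tiny sparse autoencoder with Glorot-style initialization, trained by SGD with momentum. All the sizes and constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def glorot(fan_in, fan_out):
            # Scale initial weights by layer size so activations neither
            # explode nor vanish -- one of the tricks that made deep
            # backprop training practical.
            limit = np.sqrt(6.0 / (fan_in + fan_out))
            return rng.uniform(-limit, limit, size=(fan_in, fan_out))

        X = rng.normal(size=(100, 20))      # stand-in for raw input data
        W = glorot(20, 8)                   # careful initialization
        V = np.zeros_like(W)                # momentum buffer
        lr, mu, l1 = 0.01, 0.9, 1e-3

        for _ in range(500):
            H = np.maximum(0.0, X @ W)      # encode (ReLU hidden layer)
            X_hat = H @ W.T                 # decode with tied weights
            err = X_hat - X
            dH = err @ W + l1 * np.sign(H)  # reconstruction + sparsity gradient
            dH[H <= 0.0] = 0.0              # ReLU gradient mask
            dW = (X.T @ dH + err.T @ H) / len(X)
            V = mu * V - lr * dW            # SGD + momentum update
            W += V

        print(np.mean(err ** 2))            # reconstruction error after training

        The point of the sparsity penalty is that each input ends up described by a few strongly-active hidden units, which is the self-learnt feature representation mentioned above.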

        As Newton said, "If I have seen further it is by standing on the shoulders of giants"... there are all sorts of surprising successes (e.g. language translation) and architectural advances in neural nets that are bringing the whole field up.

        These aren't your father's neural nets.

      • Neural networks were the rage when I was in grad school at UCSD, and genetic algorithms a bit later; a rage across many departments. They did some good work, though there was this attitude that only their favorite methods counted for anything and that more traditional AI was not worth discussing. But you should use all techniques if you can; otherwise it's like trying to build a circuit using only capacitors.

    • Nowadays (typically deep, convolutional) neural nets are achieving state of the art (i.e. better than any other technique) results in most perception fields such as image recognition, speech recognition, and handwriting recognition. For example, Google/Android speech recognition is now neural net based. Neural networks have recently achieved beyond-human accuracy on a large scale image recognition test (ImageNet - a million images covering thousands of categories including fine-grained ones such as recognizin

    • Last time I looked there was no application of ANNs which couldn't be solved more efficiently by other algorithms ...

      This is true, but someone has to write those more efficient algorithms. ANNs learn and program themselves. Once an ANN has been trained to solve a problem, it can often be trimmed to a minimal implementation, making it more efficient, but no longer trainable.
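      A toy illustration of that trimming step, magnitude pruning, where near-zero weights of a trained net are simply dropped (the threshold and sizes here are made up for the example):

      import numpy as np

      def prune(weights, threshold=0.05):
          # Zero out connections whose learned weight is negligible. The
          # pruned net is smaller and cheaper to run, but retraining it
          # from this sparse state is no longer straightforward.
          mask = np.abs(weights) >= threshold
          return weights * mask, mask.mean()

      rng = np.random.default_rng(1)
      W_trained = rng.normal(scale=0.1, size=(256, 128))  # stand-in trained layer
      W_pruned, kept = prune(W_trained)
      print(f"kept {kept:.0%} of the weights")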

  • Sounds a bit reminiscent of the Eschaton [tvtropes.org]...

  • by CrimsonAvenger ( 580665 ) on Thursday February 26, 2015 @09:03AM (#49136355)

    This sounds like the LRF from Heinlein's Time for the Stars.

    They were required to spend their money researching things whose payback was so far in the future that no one else would touch it.

    And they kept making embarrassing amounts of money as a result of the products of their research. Wonder if this lot will do the same?

  • Should they imitate how we imagine the mind to work, as a Cartesian wonderland of logic and abstract thought that could be coded into a programming language? Or should they instead imitate a drastically simplified version of the actual, physical brain, with its web of neurons and axon tails, in the hopes that these networks will enable higher levels of calculation? It's a dispute that has shaped artificial intelligence for decades.

    I suspect that to get "true" AI, both of these will have to work together.

  • Its "System Development Foundation [virginia.edu]" not "System Development Corporation" and Charlie's full name is Charles Sinclair Smith [diogenesinstitute.org]. He's semi-retired now and living the next county over from me in southeast Iowa where we've been collaborating on a couple of projects -- one of which is to photosynthesize all of the CO2 effluent from US fossil fuel power plants [diogenesinstitute.org] (as Charlie got his start co-founding the Energy Information Administration of the DoE [eia.gov] under Carter).

    It's ironic that in the '80s I was living in La Jolla, whi
