


A Common Logic To Seeing Cats and the Cosmos 45
An anonymous reader sends this excerpt from Quanta Magazine:
"Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why these algorithms, or biological learning in general, work.
Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos. The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called "renormalization," which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, "a cat" regardless of its color, size or posture in a given video.
"They actually wrote down on paper, with exact proofs, something that people only dreamed existed," said Ilya Nemenman, a biophysicist at Emory University.
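As a toy illustration of the coarse-graining idea behind renormalization (a simplified sketch of Kadanoff-style block spins, not the paper's variational-RG-to-RBM construction), here is one block-spin step on a 2D Ising configuration: each 2x2 block of +/-1 spins is replaced by a single spin via majority rule, describing the system at a larger scale without tracking every component.

```python
# Toy sketch: one step of block-spin renormalization on a 2D Ising
# configuration. Each 2x2 block of +/-1 spins collapses to one spin
# by majority rule, halving the lattice's linear size.
import numpy as np

def block_spin_step(spins: np.ndarray, b: int = 2) -> np.ndarray:
    """Coarse-grain an LxL array of +/-1 spins by bxb majority rule."""
    L = spins.shape[0]
    assert L % b == 0, "lattice size must be divisible by block size"
    blocks = spins.reshape(L // b, b, L // b, b)
    sums = blocks.sum(axis=(1, 3))
    # Majority rule; ties break toward +1.
    return np.where(sums >= 0, 1, -1)

rng = np.random.default_rng(0)
config = rng.choice([-1, 1], size=(8, 8))
coarse = block_spin_step(config)
print(config.shape, "->", coarse.shape)  # (8, 8) -> (4, 4)
```

Iterating this step yields descriptions of the system at progressively larger scales, which is the loose analogy the paper formalizes between RG flow and the successive layers of a deep network.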
Re: (Score:1)
... taking averages and culling the irrelevant information.
Nope.
Re: Better known as... (Score:1)
Cats like sticking their ass in the air. I will be impressed if it can properly identify a cat's ass or Soulskill. There may not be much difference, really.
Re: (Score:3)
Re: (Score:2)
I also think they're underestimating cats. But if they're connecting deep learning systems to telescopes (which is not explicitly stated), then when a cat is positively identified, perhaps millions of light-years away and hundreds of thousands of light-years across, I guess we'll be sorry.
And then all we would need to do to contact extraterrestrial beings is turn our solar system into a giant can opener and broadcast the sound for about ten seconds. This would have the added benefit of allowing our study of the seemingly faster than light response cats tend to exhibit...
Re: (Score:1)
Amazing.
An AC modded down to -1 for a crudely shortened summary. Actually, no.
And the next poster, another AC, says 'nope'.
Has the mod (wo)man posted as AC to strengthen her statement?
While the AC's comment didn't really do the article justice, the paper by Kadanoff does what the parent is saying; and that is what renormalization is about.
http://www.studiolo.org/Mona/M... [studiolo.org] and http://etd.lsu.edu/docs/availa... [lsu.edu]
are prior art. The former if you're more into the arts, the latter if you're more into physics.
too many words (Score:1)
Re: (Score:1)
Re:too many words (Score:5, Informative)
This article is way overblown. This is not the kind of paper that is likely to attract significant attention in the deep learning community. And the person whom they got to say it was important, Ilya Nemenman, is not someone I have heard of.
Move along. Nothing to see here.
Re:too many words (Score:5, Interesting)
Could you comment on some of the claims in the abstract?
1. Deep learning is a broad set of techniques that uses multiple layers of representation...
Agreed- that's what "deep" implies.
Is multi-scale analysis a primary component of 'deep learning'?
This may be true in vision, but not in general (e.g. in linguistics tasks and in speech, there is usually not a natural notion of scale).
2. "relatively little is understood theoretically about why these techniques are so successful at feature learning and compression."
True... deep learning methods are not very easy to analyze (personally I am skeptical that there is much point in trying very hard to analyze them).
"We construct an exact mapping from the variational renormalization group..." Is this not new, not correct, or is this simply not of much use to deep learning?
I think the closest answer is that it's not of much use. I didn't read the paper super carefully (and I'm not a physicist, so I'm not familiar with the renormalization group), but I imagine the analogy is not very close and only applies in specific cases, e.g. in convolutional nets or something like that.
The renormalization group theory is so general and powerful, it's had profound impacts on many areas of theoretical and mathematical physics. Do you think this can't or won't impact the field of deep learning? If deep learning has multi-scale analysis at its heart, it appears on the surface that RG should be a good treatment. Have there been attempts to use RG for deep learning aside from the present work?
If the connection is real, it would seem to suggest that perhaps deep learning may have something to offer physics, if it really is "employing a generalized RG-like scheme." Do you have any comment on this?
I haven't read the paper in detail but I just don't think it's plausible that there is a very interesting connection as they are such different things.
To pick a random example, imagine you are a botanist and someone told you there is a connection between hydroelectric dams and oranges. Even if there is a connection, it's probably not something that is going to help you very much, and you probably wouldn't be so excited to read the paper explaining the purported connection.
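For what it's worth, the one place the analogy seems concrete is the commenter's own guess about convolutional nets: stacked pooling layers repeatedly coarse-grain their input, much like iterated block transformations in real-space renormalization. Here is a hedged sketch of that idea only (it is an illustration, not the paper's variational-RG construction, which maps RG onto restricted Boltzmann machines):

```python
# Sketch: repeated 2x2 average pooling builds a multi-scale hierarchy
# of an image, the loose convnet analogue of iterated coarse-graining
# in real-space renormalization.
import numpy as np

def avg_pool_2x2(x: np.ndarray) -> np.ndarray:
    """Replace each 2x2 patch by its mean, halving both dimensions."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

image = np.arange(64, dtype=float).reshape(8, 8)
scales = [image]
while scales[-1].shape[0] > 1:
    scales.append(avg_pool_2x2(scales[-1]))
print([s.shape for s in scales])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```

Note that each pooling step preserves the global mean while discarding fine detail, which is the "describe the large scale without the exact microscopic state" flavor the summary attributes to renormalization.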
Re: (Score:3)
As somebody who has worked on artificial neural networks in the past and holds a physics degree, I don't think this assessment is wrong.
I think at this point this is more of a curiosity. Interesting in its own right, but not something that I would expect to yield new and improved algorithms.
Re: (Score:2)
Of course! How do you solve the problem of identifying digital representations of cats? Imagine identifying a single cat in a box. Then two cats in a larger box, and so on, until you can identify any arbitrary cat configuration in the universe via machine learning. Genius.
Err... except how do you tell if the cat is alive or dead?
Re: (Score:2)
Re: (Score:1)
The person quoted was cited as an unrelated, independent expert. Having a name recognizable in at least one of the domains would certainly help with that.
well, i thought we couldn't know (Score:2)
Everything is a cat (Score:1)
I too can write software that categorises everything as a cat.
Re: (Score:2)
Cats ARE from the Cosmos.
They are superior beings from a far far far away galaxy.
Can anyone prove they evolved on Earth? Does the Bible say God created them?
No and No!
See.
I think you might be on to something.
So who invented buttered toast? And if you strap it to the cosmos and drop it, what happens?
Re: (Score:2)
Gravity.
Re: (Score:1)
Gravity.
Too bad... I was going for Levity....
Re: (Score:2)
It is known that local application of buttered-toast/feline combinations generates levitons in the surrounding environment, while gravitons are generated but remain internal to the cat/toast system (levity is conserved); therefore it is possible that what we perceive as the cosmos is actually a cat with a piece of buttered toast strapped to its back.
The arxiv paper (Score:4, Insightful)
Time is short (Score:1)
The AI is coming.
Re: (Score:3)
That's OK. It looks like we can open a can of kitty food and distract it.
Re:Time is short (Score:4, Informative)
Re: (Score:1)
That's what I was just thinking right meow.
Re: (Score:2)
Kind of sad (Score:2)
That we might make an artificial intelligence greater than human intelligence and it will sit around watching lolcats.
Dang (Score:2)
This is not what sci-fi had in mind (Score:1)
"Dave, I cannot open the pod bay doors, but I can show you a cat video."
Obligatory Abstruse Goose (Score:3)
Let's hope this approach works better than the current state of the art. [abstrusegoose.com]
Re: (Score:2)
We know better than to get excited about silly nonsense like this.
Answer to Google's non-CAPTCHA? (Score:1)
Interesting how this popped up days after Google's announcement that it will 'phase out' CAPTCHAs in favor of 'identify the picture' games (amongst other things) featuring, you guessed it, cats.