"Deep" learning is not deep (Score:2)
It is not "deep" in the things it can do. It is just deep in the network used, which means the net can be built generically and trained further. It will still give worse results than a shallow, custom-designed neural net. We are already seeing severe limitations in what training can do, and the "deeper" the net, the worse these get. Even carefully trained networks can be tricked and exhibit surprising behavior. It is time to acknowledge that neural nets are a nifty special-purpose tool with specific limitations, but not a path to "intelligence" at all.
Re: (Score:2)
You might want to check out what OpenAI's GPT-3 can do, or better yet what "OpenAI Codex" can do. Yes, they're still one-trick ponies, but it's rather counter-intuitive how far one trick can take you, if you pick the right trick.
https://www.youtube.com/watch?... [youtube.com]
GPT-3 is just a powerful type of "language model". It can only do one thing - predict the next word in some text that you feed it. Of course you can then feed the word it predicted/generated back into itself and have it predict/generate the *next* word.
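The feed-the-output-back-in loop described here can be sketched in a few lines of Python. The `BIGRAMS` table and `generate` function below are hypothetical toys standing in for a real model; a system like GPT-3 does the same thing with a learned probability distribution over tens of thousands of tokens instead of a hand-written lookup table.

```python
import random

# Toy stand-in for a language model: a bigram table mapping each word
# to the words that may follow it. Purely illustrative.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran", "sat"],
    "sat": ["down"],
    "ran": ["away"],
}

def generate(prompt, n_words, seed=0):
    """Autoregressive generation: predict the next word, append it,
    then feed the extended text back in to predict the one after that."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:
            break  # the toy model has no prediction for this word
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 4))
```

The key property this illustrates: each prediction depends only on the text generated so far, so any mistake becomes part of the input for every later prediction.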
Re: (Score:2)
You might want to check out what OpenAI's GPT-3 can do, or better yet what "OpenAI Codex" can do. Yes, they're still one-trick ponies, but it's rather counter-intuitive how far one trick can take you, if you pick the right trick.
Not really. With some more relevant background (which I have), these are nice tricks, but not that impressive. For example, the average entropy of an English word in a text is just around 2.5 bits, so guessing the next word is not actually that hard. Of course the guessing errors pile up and eventually the whole thing descends into chaos and nonsense, because GPT-3 has zero understanding of what it does and zero understanding of the real world. Scaling up "dumb" does not remove the dumb. It just makes it a b
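To put the ~2.5 bits figure in perspective: an entropy of H bits corresponds to a perplexity of 2^H, i.e. roughly 2^2.5 ≈ 5.7 equally-likely choices for the next word on average. A minimal sketch, using a made-up next-word distribution (the probabilities here are illustrative, not measured from any real corpus):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-word distribution: a few likely words, then a tail.
probs = [0.4, 0.2, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05]
H = entropy_bits(probs)
print(f"entropy: {H:.2f} bits, perplexity: {2 ** H:.1f}")
```

Even at only a handful of effective choices per word, that per-word uncertainty compounds multiplicatively over a long passage, which is consistent with the drift into nonsense the comment describes.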
Re: (Score:2)
Our own intelligence is built out of dumb components ... ultimately neurons, biochemicals and the laws of physics.
Your argument is like saying the laws of physics can never produce intelligence if you scale them up. But they do.
You could also argue that humans doing "intelligent" stuff like designing rockets, or whatever, makes it hard to see that we're really just a mass of dumb old chemicals. And you'd be right.
You either didn't watch the OpenAI Codex demo video I linked, or you didn't understand what you w
Re: (Score:2)
Our own intelligence is built out of dumb components ... ultimately neurons, biochemicals and the laws of physics.
Pure conjecture. And not even a very convincing one. Physicalism is religion, not Science. Since it is conjecture, any conclusions drawn from it are also conjecture.
Here is a hint: You cannot use an argument by elimination unless you have a complete and accurate model of the system giving you _all_ possibilities. Currently known Physics is _known_ to be incomplete and that makes this approach invalid as a proof technique. This is really very basic.
As to your invalid ad-hominem-type argument, I have been foll
Re: (Score:2)
> Pure conjecture. And not even a very convincing one.
The last 50 years of neuroscience would beg to differ. The biochemical operation of neurons is at this point understood in fairly exquisite detail, and it's just good old-fashioned chemistry.
I've no idea what you mean by the laws of Physics being incomplete ... no doubt there are new discoveries to be had, but nonetheless the behavior of the universe, and everything in it, is *perfectly* predicted by a combination of quantum theory and the theory of relativity.