Physicist Proposes New Way To Think About Intelligence
An anonymous reader writes "A single equation grounded in basic physics principles could describe intelligence and stimulate new insights in fields as diverse as finance and robotics, according to new research, reports Inside Science. Recent work in cosmology has suggested that universes that produce more entropy (or disorder) over their lifetimes tend to have more favorable properties for the existence of intelligent beings such as ourselves. A new study (pdf) in the journal Physical Review Letters led by Harvard and MIT physicist Alex Wissner-Gross suggests that this tentative connection between entropy production and intelligence may in fact go far deeper. In the new study, Dr. Wissner-Gross shows that remarkably sophisticated human-like "cognitive" behaviors such as upright walking, tool use, and even social cooperation (video) spontaneously result from a newly identified thermodynamic process that maximizes entropy production over periods of time much shorter than universe lifetimes, suggesting a potential cosmology-inspired path towards general artificial intelligence."
Oh, he's back from his tour of the universes? (Score:3, Insightful)
How was the weather?
Re: (Score:2)
Why is this down modded?
It is the correct response: insightful sarcasm aimed at this "scientist's" complete lack of supporting evidence. Because, of course, we have not discovered one, let alone many, intelligent species in other universes. We do not even know if any other universes ever existed or ever will.
Re:Oh, he's back from his tour of the universes? (Score:5, Insightful)
Re: (Score:2)
Re: (Score:3, Insightful)
But if we don't ask "what if," how is there any advancement? Yes, we'd most likely be wrong -- it's only through error that we find the truth. What I fail to understand is how science could advance without speculation.
The problem - eloquently expressed in The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next [amazon.com] - arises when theoretical physics comes up with hypotheses that are untested (and even potentially untestable) but people start treating them as grounded theories. That is essentially no different from the intelligent design argument: a position that relies on unprovable speculation that you just have to take on faith.
Re:Oh, he's back from his tour of the universes? (Score:4, Informative)
How is this paper not a scientific approach to empirical data? We have empirical observations of a wide range of animal/human behaviors. The authors propose a toy mathematical model that reproduces key features of several interesting observed behaviors. This is perfectly good science, just like saying "hey, an F=G*m_1*m_2/r^2 force between massive objects recreates the observed motions of the heavenly bodies" --- a predictive, testable mathematical model that can be compared with measurements of the motions and behaviors of actual critters to see how well it works.
Re: (Score:2, Interesting)
Yeah, like string theory.
The problem is that the approach is essentially definitional: they created a software model that does X. X "looks like" the sort of thing an animal or an "intelligence" might do, so the investigator postulates the "physical process of trying to capture as many future histories as possible" as "intelligence."
The problem comes from the analogy of the behavior of th
Re: (Score:3)
String theory stands in a "philosophical gray area" because it's (currently) too bloody complicated to calculate predictions for things we can currently measure, and the predictions string theory can make involve effects visible only at energy scales far too high to reach with current experimental technology. Also, string theories have a zillion free parameters, so you might be able to match them to basically any observed process.
The paper's mechanism doesn't have these problems: it predicts behavior accessible
Re:Oh, he's back from his tour of the universes? (Score:4, Insightful)
Your opinion is the exact reason that most pre-graduate school science classes are garbage...the professors teaching the subject have no imagination and therefore cannot present topics in anything but a dry, unenthusiastic drawl that kills the motivation of potential creative thinkers to seek out new knowledge and to advance our species' understanding of the nature of things. The author is not stating that this new research is anything close to finding a 99.9% solid theory...FTA:
Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.
Their results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems. In other words, more research needs to be done, but their initial review and experimental process, based on recent advancements in other fields, points to a result, which he has now published a paper about. First, how is this not science? And second, how is this not exactly the type of science that moves our knowledge and understanding forward in the best way? Even if he is proven wrong, his results are enticing and can lead others to consider how they might be applied differently.
Re: (Score:2)
You characterize their claims in the most anodyne way possible. These quotes, also from their paper, are magical realism more appropriate to a J. G. Ballard novel:
Re: (Score:2)
There is some physical basis for the paper. One of the great mysteries early in the 1600s was why two pendulum clocks would synchronise either in phase or out of phase (http://en.wikipedia.org/wiki/Odd_sympathy). This was investigated by Christiaan Huygens. It turns out each clock exerts a tiny force on its surroundings, and that was enough to create a pair of coupled, driven oscillators.
Relevant xkcd (Score:5, Insightful)
Re: (Score:2)
From TFA:
To the best of our knowledge, these tool use puzzle and social cooperation puzzle results represent the first successful completion of such standard animal cognition tests using only a simple physical process. The remarkable spontaneous emergence of these sophisticated behaviors from such a simple physical process suggests that causal entropic forces might be used as the basis for a general—and potentially universal—thermodynamic model for adaptive behavior.
So, yah, XKCD nailed it... clearly trying to maximize the overall diversity of accessible future paths of their worlds.
Re:Relevant SMBC (Score:5, Insightful)
And here's a relevant SMBC:
http://www.smbc-comics.com/?id=2556 [smbc-comics.com]
Re: (Score:2)
Re: (Score:2)
http://xkcd.com/793/ [xkcd.com]
My field is <mate selection>, my complication is <social transactions in symbolic discourse>, my simple system is <you> and the only equation I need is <you're not getting any>. Thanks for offering to prime my pump with higher mathematics. But you know, if you'd like to collaborate on a section on this intriguing technique of speaking in angle brackets to deliver a clue where no clue has gone before, perhaps we should meet for coffee—if you can refrain you
Re: (Score:3)
Mods: feel free to stop by xkcd and let him know how insightful he is, but the link itself is entirely derivative.
There's no need to throw a wobbly just because you didn't think of it first. Maximize your disorder, dude!
Re: (Score:2)
Re:Relevant xkcd (Score:4, Informative)
nintendo! (Score:2, Interesting)
Interesting idea. http://techcrunch.com/2013/04/14/nes-robot/
That guy basically took a random generator and 'picked' good results to build on. The input, however, is essentially chaos.
Re: (Score:2)
But you do get self-organisation in nature: reaction-diffusion equations can create spots, stripes, tip-splitting, and scroll-wave patterns from random initial conditions.
when I want to maximize entropy ... (Score:3)
... I burn stuff. Now I can feel smarter about it. Win!
Re: (Score:2)
The point of the paper is that intelligent behaviour maximizes long-term, not immediate, entropy gain.
Re: (Score:2)
(Isn't the heat-death of the universe a process that results in maximal long-term entropy growth?)
Re:when I want to maximize entropy ... (Score:4, Interesting)
The point in the paper that addresses the "burn shit to be smart!" concept is that the "intelligence" is operating on a simplified, macroscopic model of the world, which doesn't pay attention to the microscopic entropy of chemical bonds (increased by setting stuff on fire). In this simplified "critter-scale" world, shorter-term entropy gain *is* the driving compulsion. The toy model "crow reaching food with a stick" example wasn't driven by the crow thinking "gee, if I don't eat now, I'll be dead next year, so I'd better do something about that." Instead, the problem was "solved" by the crow maximizing entropy a few seconds ahead --- e.g. it moves to reach the stick, because there are a lot more system states available if the stick can be manipulated instead of just lying in the same place on the ground. The "intelligent behavior" only needs to maximize entropy on the time-scale associated with completing the immediate task --- a few seconds --- rather than "long term" considerations about nutritional needs.
Re: (Score:2)
This model posits that an entropy-maximizing motivation (regardless of any food-eating motivation) would allow the crow to "figure out" how to dislodge the food from its nook (without consciously setting the goal of "dislodge food from nook"). Once the food is dislodged, some other motivation (like instincts to grab yummy-smelling stuff with your beak) would be needed to model/explain what completes the getting-food-in-stomach cycle. However, in addition to a grab-food-with-beak ability/instinct (which is o
Re: (Score:2)
Re: (Score:2)
The PRL journal article is not "pop lit," and precisely defines the "causal path entropy S_c" in a technically correct manner. Admittedly, loose use of the word "entropy" is causing a lot of confusion among the less-scientifically-literate folks posting to Slashdot, so your formulation of "maximizes possible future states of local environment" is indeed better for avoiding confusion in non-technical discussion. Nevertheless, "maximize entropy" (in the formal sense described in the article) is not an incorre
Re: (Score:2)
Speak for yourself. When I barbecue a marshmallow, I rather enjoy the immediate entropy gain.
Re: (Score:2)
There's no way humankind will get out of this solar system if they're all fighting over the last remaining tanks of propane, and given what we know about humanity, if there is to be an intelligent agent that speeds up the heat death, we are probably destined to be it.
Hrm... maybe hoarders are actually hyperintelligent beings?
Intelligence a man made idea. (Score:3, Interesting)
Intelligence was invented by man as a way to make us seem better than the other animals in the world.
Then we further classified it so we could rank people.
So it isn't surprising that if we want to find intelligent life outside of Earth, we need to change the rules again; we also need to change the rules of what intelligence is, given that we have created technology that emulates or exceeds us in many of the areas we used to classify intelligence.
Intelligence is a man-made measurement, and I expect it will always be in flux. You shouldn't dismiss ideas, or automatically accept them as good, just because of some number granted on a fluctuating scale.
Re:Intelligence a man made idea. (Score:5, Interesting)
We are better than other animals in the world. By any objective measure we can move faster, go higher, lift more weight, survive in more hostile environments, and do a great deal more, using our intelligence. There's no animal that can do something better than we can, with a few exceptions like tortoises with very long lifespans, but we'll get there too. Now whether or not that means we are more worthy in some objective way is a totally different question.
Re:Intelligence a man made idea. (Score:5, Interesting)
For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.
-- Douglas Adams
Re: (Score:2)
Thank you. ;-)
Re: (Score:2)
He wasn't. He asked the mice about it.
Re: (Score:2, Insightful)
We are better than other animals in the world. By any objective measure we can move faster, go higher, lift more weight, survive in more hostile environments, and do a great deal more, using our intelligence. There's no animal that can do something better than we can, with a few exceptions like tortoises with very long lifespans, but we'll get there too. Now whether or not that means we are more worthy in some objective way is a totally different question.
Better? Not really. More resourceful? Yes, definitely. And we have to be. Without the use of tools, we'd still be stuck in the Serengeti, treed by lions and tigers. Because, as a species, we are physically weak (probably more so today than 100,000 years ago, but still weak compared to an orangutan in how much we can lift), slow (the fastest man CAN outrun a horse, but no one could outrun a cheetah on the straightaway), and able to tolerate only a small range of temperatures. (Without clothes, we woul
Re: (Score:3)
Our achievements aren't invalidated simply because they require materials that we don't directly grow as part of our body. We are the fastest because we build jets and rockets. We are the strongest because we build bulldozers and forklifts. We are the deadliest predator because we can build rifles, fishing boats, and hydrogen bombs. Our brains, which allow us to build all of those amazing things, are part of our body as much as the cheetah's leg muscles are part of its body. We've externalized and expedited
Re:Intelligence a man made idea. (Score:4, Insightful)
BS. Take your basic household feline. It's tricked its owners into feeding, watering, and petting it. Hell, it has even tricked them into taking out the dooty. No other life form comes close to that kind of intelligence.
Re: (Score:2)
" few exceptions like tortoises" More like 50% exceptions if you ignore bacteria which I assume make up 99% of all life.
And I don't know about your other sentences. There are lots of things animals can do that no machine we've built lets us do. For example, there is no machine yet built that can, with or without a human occupant, scamper up a tree and jump from branch to branch. Hell, we can hardly make a machine that can walk, or even operate on anything other than a road or floo
Re: (Score:2)
And we have yet to spread our species to any other planetary bodies, while bacteria reaching Earth from Mars or from asteroids is a reasonable theory.
We do not make particularly harmonious and effective societies, and relative to our size our building endeavours are rather unimpressive. And relative to any size, our buildings are incredibly uncomfortable and inefficient.
Re: (Score:2)
Humanity is in a place where we're just smart enough to be able to cause things to happen, but not quite smart enough to know what the effects are.
Is that a mark of intelligence? I would argue it is not, until humanity as a whole realizes this.
Re: (Score:2)
Bacteria would outrank humans if we looked deeply at your criteria. Obviously bacteria can travel at least as fast as humans, since humans are always loaded with bacteria. Bacteria persist by stunning levels of reproduction. Bacteria can wipe out a human easily. Bacteria can thrive in places that humans can never reach. Bacteria can alter their environment and also enjoy very few restraints upon their prosperity. Some can even reproduce in boiling water.
Re: (Score:2)
"So it isn't surprising if we want to find intelligent life outside of earth, then we need to change the rules again"
I think you are underestimating the human ability to deny facts that are staring them in the face. We did not need to redefine intelligence when we learned that tool use is rather common. Or that a .5 pound bird might be better at mathematics than a college student.
Re: (Score:2)
and yet all you can do is type it.
Re: (Score:2)
True,
But some concepts are based on more solid definitions: ones where we can measure the thing the same way every time.
Link to article (Score:4, Informative)
http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf [alexwg.org]
Am I missing something? (Score:3, Insightful)
This looks eerily like a physicist who has just opened a biology textbook, is restating the idea that 'intelligence' is the product of an evolutionary selection process because it's a uniquely powerful solution to the class of problems that certain ecological niches pose, and is now attempting to add equations....
Is there something that I'm missing, aside from the 'being alive means grabbing enough energy to keep your entropy below background levels' and the 'we suspect biological intelligence of having evolved because it provides a fitness advantage in certain ecological niches' elements?
Re:Am I missing something? (Score:5, Insightful)
This is what it seems to be from a quick read. It would also explain why he would publish an AI paper in a physics journal, rather than in, you know, an AI journal: probably because he was hoping to get clueless physicists who aren't familiar with existing AI work as the reviewers.
Which isn't to say that physicists can't make good contributions to AI; a number have. But the ones who have an impact and provide something new: 1) explain how it relates to existing approaches, and why it's superior; and 2) publish their work in actually relevant journals with qualified peer reviewers.
Re: (Score:2)
1) explain how it relates to existing approaches, and why it's superior
This. This is why we go to school and study for 20-something years before being accepted as an expert in a field.
If you don't know what's already out there, you'll likely be reinventing it.
Re:Am I missing something? (Score:4, Insightful)
I grew up right next to the Lawrence Livermore National Laboratory. My dad and the vast majority of my friends' moms and dads worked there for a long time as physicists. Being around these people for 35 years has taught me something. They are morons. They know physics, but literally nothing else, besides of course math.
It's one of those strange situations where they can be utterly brilliant in their singular field of study but absolutely incompetent at literally everything else. I've known guys with IQs in the 160s who couldn't for the life of them live on their own, given their inability to cook or clean or even drive a car. I knew one who was 45 years old and had never had a driver's license. His wife drove him everywhere, or he walked (occasionally the bus if the weather was poor). He didn't do this for ideological reasons like climate change blah blah; he did it because he couldn't drive. He failed the driver's test for years until he gave up trying.
Whenever a physicist starts talking about something other than physics, I typically roll my eyes and ignore them. It's just intellectual masturbation on their part.
Re: (Score:3, Interesting)
This. I've been a physics student for a third of my life and I've come to the conclusion that I cannot live with other physicists for precisely this reason. Poked my nose into the maths & compsci faculty for a bit, but they were no better.
In any case, in this concrete situation: the paper mentioned in TFA gives not even one hint on how to construct an AI, and is chock-full of absurd simplifications of a complicated system.
Re: (Score:2)
I've been a physics student for a third of my life and I've come to the conclusion that I cannot live with other physicists for precisely this reason.
If I may offer a counter-example: during my time as a grad student in Physics at an anonymous Ivy League University whose name is a color, our department intramural softball team made it to the semi-finals, and I regularly played chamber music with other accomplished musicians within the department. (and, yeah, I played a lot of pinball too)
Bill Burr (Score:3)
We live in a society defined by division of labor. The physicist figured that out, as have many video
Re: (Score:2)
And in the process the "P-man" becomes a social pariah incapable of communicating with people. He loses out on the "simple" things in life, like backyard BBQs, ball games, and good conversations with friends.
Sorry, but the isolated brilliant guy isn't a real person. That's Dr. House on TV, and he wasn't even liked in fiction. The real thing is even worse. I'd highly recommend you change the paradigm and enjoy life's simple pleasures. Life is short after all, and you really only get one crack at it.
Re: (Score:2)
By the way, P-man is not myself. I enjoy driving and have done so for over 40 years. In my household I am the one who shops, does laundry and cleans. My similarity to P-man is that I enjoy the company of my own thoughts -- and luckily this is not yet a crime, geek.
As to the "isolated bril
Re:Am I missing something? (Score:5, Funny)
I think the problem of uninformed physicists has been addressed by proper scientific research before:
http://www.smbc-comics.com/?id=2556 [smbc-comics.com]
Re:Am I missing something? (Score:5, Informative)
Yes, what you're missing is the entire point of the paper. Here's my attempt at a quick summary:
Suppose you are a hungry crow. You see a tasty morsel of food in a hollow log (that you can't directly reach), and a long stick on the ground. The paper poses an answer to the question: what general mechanism would let you "figure out" how to get the food?
Many cognitive models might approach this by assuming the crow has a big table of "knowledge" that it can logically manipulate to deduce an answer: "stick can reach food from entrance to log," "I can get stick if I go over there," "I can move stick to entrance of log," => "I can reach food." This paper, however, proposes a much more general and simple model: the crow lives by the rule "I'll do whatever will maximize the number of different world states my world can be in 5 seconds from now." By this principle, the crow can reach a lot more states if it can move the stick (instead of the fewer states where the stick just sits in the same place on the ground), so it heads over towards the stick. Now it can reach a lot more states if it pokes the food out of the hole with the stick, so it does. And now, it can eat the tasty food.
The paper shows a few different examples where the single "maximize available future states" principle allows toy models to "solve" various problems and exhibit behavior associated with "cognition." This provides a very general mechanism for cognition driving a wide variety of behaviors that doesn't require the thinking critter to have a giant "knowledge bank" from which to calculate complicated chains of logic before acting.
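If it helps make that concrete, here is the rule rendered as a few lines of Python. To be clear, this is NOT the paper's actual model (that one works with path integrals over phase-space trajectories under thermal noise); it's just the discrete cartoon of "take whichever step keeps the most states reachable within tau moves," with the grid size, tau, and move set all invented for illustration:

W, H, TAU = 9, 9, 4
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def neighbors(cell):
    # All legal one-step moves (including standing still) inside the box.
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 0 <= x + dx < W and 0 <= y + dy < H]

def reachable(cell, steps):
    # Breadth-first count of the distinct cells reachable within `steps` moves.
    frontier, seen = {cell}, {cell}
    for _ in range(steps):
        frontier = {n for c in frontier for n in neighbors(c)} - seen
        seen |= frontier
    return len(seen)

pos = (0, 0)  # start jammed in a corner, where future options are fewest
for _ in range(12):
    # Greedy "causal entropy" move: go wherever keeps the most futures open.
    pos = max(neighbors(pos), key=lambda c: reachable(c, TAU))
print(pos)

Run it and the critter should drift away from the corner and settle near the middle of the box, which is qualitatively the paper's first (particle-in-a-box) result.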
Re: (Score:2)
Many cognitive models might approach this by assuming the crow has a big table of "knowledge" that it can logically manipulate to deduce an answer... This paper, however, proposes a much more general and simple model... the crow lives by the rule "I'll do whatever will maximize the number of different world states my world can be in 5 seconds from now." By this principle, the crow can reach a lot more states if it can move the stick... This provides a very general mechanism for cognition driving a wide variety of behaviors, that doesn't require the thinking critter to have a giant "knowledge bank" from which to calculate complicated chains of logic before acting.
There may be an interesting insight in there, but it doesn't seem to me to solve the problem. It seems to me that you haven't changed the mechanism, but rather the motivation for acting -- from "hunger" to "maximizing states". Actually, I'm not even sure you've changed the motivation, but rather reframed it.
It seems like you haven't removed the need for a "knowledge bank" or "chain of logic", unless you've described the mechanism for how "maximized future states" is converted into "crow behavior". Does t
Re: (Score:3)
The interesting thing about changing the motive from "hunger" to "maximizing states" is that (a) the same "maximizing states" motivation works to explain a variety of behaviors, instead of needing a distinct motive for each, and (b) "maximizing states" simultaneously provides a "motive" and a theoretical mechanism for achieving that motive --- "I'm hungry" doesn't directly help with solving the food-trapped-in-hole problem, while the simple "maximize states" motive (precisely mathematically formulated) actu
Re: (Score:2)
Many cognitive models might approach this by assuming the crow has a big table of "knowledge" that it can logically manipulate to deduce an answer: "stick can reach food from entrance to log," "I can get stick if I go over there," "I can move stick to entrance of log," => "I can reach food." This paper, however, proposes a much more general and simple model: the crow lives by the rule "I'll do whatever will maximize the number of different world states my world can be in 5 seconds from now." By this principle, the crow can reach a lot more states if it can move the stick (instead of the fewer states where the stick just sits in the same place on the ground), so it heads over towards the stick. Now it can reach a lot more states if it pokes the food out of the hole with the stick, so it does. And now, it can eat the tasty food.
But the crow could reach even more states if it broke the stick into a thousand little pieces and scattered them all over the place. No tasty food here.
Re: (Score:2)
And if the crow instinctually understood undergraduate-level stat mech, it would know setting the stick on fire would provide even more state possibilities. The point is that, in a simple "toy model" of the world including a small number of objects that don't break into pieces, accessible state maximization produces "useful" results. Presumably, whatever mental model the crow has to understand/approximate/predict how the universe works is rather simplified compared to an overly-accurate model that calculates
Re: (Score:2)
It's just another sensational summary. Happens all the time.
Scientist: I discovered an interesting new feature of the McFluffington-Mickel babbletransform that could have implications in finding more selective drugs to target tumor cells.
Media: MIT SCIENTIST FINDS CURE FOR CANCER!
Scientist: I said nothing of the sort!
Re: (Score:3)
Yes, it's a simplified toy model that may not perfectly describe the exact actions of a crow. However, within the model assumptions (which aren't unreasonable approximations of the real world), the actions make sense. One component of the model is the time length tau that the critter "thinks ahead" to maximize entropy. Considering flying away to find food elsewhere requires taking more distant times into consideration; within a few-second "think ahead" horizon, there's nothing else for the crow to do than deal wi
Re: (Score:3)
Thanks for the additional references. Like the paper's authors, I'm a physicist, and personally ignorant of the status of cutting-edge cognitive science research, so I don't make any claims about whether this research is particularly novel or useful in the cognitive science field. But it was at least novel and interesting to me, and cool to see a "physicist's approach" with a simple implemented mathematical model generating "complex, intelligent" results.
Re: (Score:2)
The idea is that the urge to resist entropy yields a competitive advantage and leads to intelligence. They built some software to demonstrate this, but I can't tell if the source code was released (it seems like it wasn't, but I don't have a subscription to find ou
Re: (Score:2)
The idea is that the urge to resist entropy yields a competitive advantage and leads to intelligence.
Actually, the opposite: "intelligence" functions by seeking to maximize entropy. Note, however, we are talking about an approximate "macroscopic scale" entropy of large-scale objects in the system rather than the "microscopic" entropy of chemical reactions, so "intelligence" isn't about intentionally setting as many things as you can on fire ("microscopic" entropy maximization). So, the analogue statement to "all the gas molecules in a room won't pile up on one side" is "an intelligent critter won't want to
Re: (Score:2)
Re: (Score:2)
My "critter not going into a corner" example was based on the first toy model in the paper, a particle that drifts towards the center of a box when "driven" by entropy maximization. In some of the more "advanced" examples, there are more complex factors coming into play that may maximize entropy by "ending up in a corner," depending on how the problem is set up. However, if you read the paper (instead of just glancing at videos), the mathematical formalism that drives the model is all about maximizing entro
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The critter is assumed to have a certain capacity for expending energy to do work on their environment (parametrized by their "temperature" T_c) --- needing to expend energy is not a barrier. With the stick balanced up, you can quickly and easily swing the stick to many other combinations of position and velocity. When the stick is dangling down, it takes more time rocking back and forth to "swing it up" into many positions. If the critter was very strong (high T_c, able to exert much greater forces than gr
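If anyone wants to see the shape of that computation, here's a crude Monte Carlo stand-in with a toy damped pendulum in place of the paper's cart-and-pole model: try a candidate torque, roll out random futures, count the distinct coarse-grained end states, and apply whichever torque keeps the most open. Every constant below is invented, and I make no promise this particular parameterization actually swings the thing up; it only illustrates the sample-and-count loop, not the paper's path-integral formalism:

import math, random

random.seed(1)
DT, G, DAMP, MAX_U = 0.05, 9.8, 0.3, 4.0  # all made-up parameters

def step(th, om, u):
    # Damped pendulum: th = 0 is hanging down, th = pi is upright.
    om += (-G * math.sin(th) - DAMP * om + u) * DT
    th += om * DT
    return th, om

def state_spread(th, om, first_u, horizon=30, samples=60):
    # Apply first_u, then random torques, and count the distinct
    # coarse-grained states the sampled futures end up in.
    ends = set()
    for _ in range(samples):
        t, o = step(th, om, first_u)
        for _ in range(horizon - 1):
            t, o = step(t, o, random.uniform(-MAX_U, MAX_U))
        ends.add((round(math.cos(t), 1), round(math.sin(t), 1), round(o, 1)))
    return len(ends)

th, om = 0.0, 0.0
for _ in range(200):
    u = max([-4.0, -2.0, 0.0, 2.0, 4.0],
            key=lambda c: state_spread(th, om, c))
    th, om = step(th, om, u)
print("cos(theta) =", round(math.cos(th), 2), "(-1 would mean upright)")

Note the horizon is only 30 steps (1.5 simulated seconds), which is the point made upthread: short-horizon state counting drives the behavior, not long-term planning.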
My somewhat pedantic but sincere question (Score:2)
Actually, the opposite: "intelligence" functions by seeking to maximize entropy.
Don't you mean that intelligence "functions by seeking to maximize the entropic gradient"?
Re: (Score:2)
A key component of the paper's model is that entropy maximization is not just "local" maximization of the gradient, but total entropy maximization over an interval:
Inspired by recent developments [1–6] to naturally generalize such biases so that they uniformly maximize entropy production between the present and a future time horizon, rather than just greedily maximizing instantaneous entropy production, we can also contemplate generalized entropic forces over paths through configuration space rather than just over the configuration space itself.
So, indeed, entropy maximization --- not just instantaneous entropy gradient maximization, which might "miss" solutions requiring passing through low-entropy-gradient regions to reach an even higher entropy final state --- is important. Of course, the result of maximizing entropy over a time interval is the maximization of the average gra
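For reference, since half the thread is arguing over the word "entropy": as I read the PDF linked elsewhere in this discussion, the two central definitions are, roughly (check the paper before quoting me),

S_c(X, \tau) = -k_B \int P(x(t) \mid x(0)) \, \ln P(x(t) \mid x(0)) \, Dx(t)

i.e. a Shannon/Gibbs-style entropy over the possible paths x(t) through configuration space within the time horizon \tau, and the "causal entropic force"

F(X_0, \tau) = T_c \nabla_X S_c(X, \tau) \big|_{X_0}

which pushes the system up the gradient of that path entropy, with the strength set by the "temperature" T_c mentioned earlier in the thread.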
Re: (Score:2)
physicists do this a lot. I remember, many years ago, a physicist delving into philosophy and getting nearly an entire Discover mag article devoted to him. He essentially took the abstract way of talking about physical phenomena, removed the equations, and then applied it to philosophy and logic and stuff.
I hate when physicists begin delving outside their field. XKCD had it right.
Re:Am I missing something? (Score:5, Informative)
Suggesting that the purpose of intelligence in this man's random musings might be to increase the background levels of entropy for your own benefit.
That's close, I think. I am not a physicist and I skimmed the equations, but here's my take on what they're proposing. Physical systems have states, which can be described by a state vector. The state of these systems evolves according to some set of rules that describes how the state vector changes over time. They've built a simulator in which the probability of a certain state transition is computed by looking at how many different paths (in state space, i.e. future histories of the system) are possible from the new state, in such a way that the system tries to maximise the number of possibilities for the future. In one example, they have a particle that moves towards the centre of a box, because from there it can move in more directions than when it's close to a wall.
They then set up two simple models mimicking two basic intelligence tests, and find that their simulator solves them correctly. One is a cart with a pendulum suspended from it, which the system moves into an upright position because from there it's easiest (cheapest energetically, I gather) to reach any other given state. The other is an animal intelligence test, in which an animal is given some food in a space too small for it to reach, and a tool with which the food can be extracted. In their simulation, the "food" is indeed successfully moved out of the enclosed space, because it's easier to do various things with an object when it's close compared to when it's in a box. However, in neither case does the algorithm "know" the goal of the exercise. So they've shown that they've invented a search algorithm that can solve two particular problems, problems which are often considered tests of intelligence, without knowing the goal.
Then, they use this to support the hypothesis that intelligence essentially means maximising future possibilities. Another way of saying this, I think, is that an intelligent creature will seek to maximise the amount of power it has over its environment, and they've translated that concept into the language of physics. That's an intriguing concept, relating to the concept of liberty, power struggles between people at all scale levels, scientific and technological progress, and so on. I can't imagine this idea being new though. So it all hinges on to what extent this simulation adds anything new to that discussion.
On the face of it, not much. You might as well say that they've found two tests for which the solution happens to coincide with the state that maximises the number of possible future histories. The only surprising thing then is that their stochastically-greedy search algorithm (actually, without having looked at the details, I wouldn't be surprised if it turned out to be yet another variation of Metropolis-Hastings with a particular objective function) finds the global solution without getting stuck in a local minimum, which could be entirely down to coincidence. It's easy to think of another problem that their algorithm won't solve, for example if the goal were to put the "food" into the box, rather than taking it out. Their algorithm will never do that, because that would increase the future effort necessary to do something with it. Of course, you might consider that pretty intelligent, and many young humans would certainly agree, although their parents might not. It would be interesting to see how many boxed objects you need before the algorithm considers it more efficient to leave them neatly packaged rather than randomly strewn about the floor, if that happens at all.
There's another issue in that the examples are laughably simple. While standing upright allows you to do more different things, no one spends their lives standing up, because it costs more energy to do that as a consequence of all sorts of random disturbances in the environment. The model ignores this completely. Similarly, you could
Make it work (Score:3)
Choice (Score:2, Interesting)
Intelligence: the ability to delve into the past and reach into the future in order to craft the present and manipulate the probability of eventualities. The greater the ability, the greater the intellect -- the power of choice.
Re: (Score:2)
Maintaining disorder (Score:3)
It appears to me that the algorithm is trying to maintain entropy or disorder, or at least keep open as many pathways to various states of entropy as possible. In the physics simulations, such as balancing and using tools, this essentially means that it is simply trying to maximize potential energy (in the form of stored energy due to gravity or repulsive fields - gravity in the balancing examples, and repulsive fields in the "tools" example).
While this can be construed as "intelligence" in these very specific cases, I don't think it is nearly as generalized or multipurpose as the author makes it out to be.
Spherical cows! (Score:5, Funny)
You can tell this is a physicist's paper. It lacks spherical cows, but only because the toy models were set up in 2D. So, instead, we get a crow, chimpanzee, or elephant approximated by circular disks.
Intelligence is inherited (Score:2)
This is so sad (Score:2, Interesting)
The universe developed intelligence as a way of winding itself down (maximizing entropy) faster ... which will destroy all intelligence ... which is a tragedy, because the winding down was necessary to create us ... and the universe WANTED TO SEE US SUFFER.
Re: (Score:2)
That might explain why the Universe is trying to kill us. Those asteroids periodically buzzing the Earth were sent on purpose; the Universe's aim is just a bit off. Sooner or later, it will get the target sighted in. Sometimes it takes the form of Gaea, who periodically tosses out an earthquake or, when she's really pissy, a supervolcano... just for a little recreational resurfacing.
The Universe hates intelligence, we're all dead.
I'm not sleeping in.. (Score:3)
I'm maintaining the maximum number of possible outcomes for the day, in harmony with the laws of nature. :)
Silly paper that completely misses the point (Score:5, Informative)
Here's a review of this paper by a researcher who actually works in the field of AI and cognitive psychology:
Interdisciplinitis: Do entropic forces cause adaptive behavior? [wordpress.com]
A few choice quotes:
and after he explains what the paper's about and how utterly empty it is, he offers some advice to authors:
Re: (Score:3)
and after he explains what the paper's about and how utterly empty it is, he offers some advice to authors:
The authors were just trying to maximize the number of possible future states for their idea.
I have an untidy office (Score:2)
Lots of entropy here, more than most! Does that mean that I am super intelligent?
maximizes number of future states (Score:2)
the article is self-contradictory -- it says, "It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal."
this is not true, because it then goes on to say it is "trying to capture as many future histories as possible".
so there IS a goal -- it maximizes the number of future states -- exactly the same way a negamax search can maximize the mobility parameter in a chess engine search.
in other words,
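To make that chess-engine comparison concrete, here's a minimal negamax where mobility is the entire evaluation function. The "game" is a made-up toy (a shared token on a strip that either player may advance 1-3 cells), so treat it as an illustration of the structure, not of any real engine:

SIZE = 20

def legal_moves(pos):
    # Either player may advance the shared token 1-3 cells, if it fits.
    return [d for d in (1, 2, 3) if pos + d <= SIZE]

def negamax(pos, depth):
    # Score a position by mobility (number of legal moves), searching
    # `depth` plies ahead; the opponent's mobility counts against us.
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return len(moves), None  # leaf score = mobility
    best_score, best_move = float("-inf"), None
    for d in moves:
        score, _ = negamax(pos + d, depth - 1)
        if -score > best_score:
            best_score, best_move = -score, d
    return best_score, best_move

print(negamax(0, 4))  # (score, best first move)

Structurally it's the same trick as the paper's: pick the action that leaves you the most options. The paper's contribution is deriving that drive from thermodynamics rather than hard-coding it as a heuristic.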
Re: (Score:2)
Yes, the model "intelligence" does have the one goal of "future history maximization." The interesting thing is that this one particular goal can produce a variety of behaviors --- instead of requiring each behavior to be motivated by its own specific and non-generalizable goal. Instead of needing a brain with specific goals for "walk upright," "get tool to augment manipulative abilities," "cooperate with other to solve problem," plus a heap of other mechanisms to achieve said goals, the simple entropy-maxi
Procrastination (Score:3)
The premise of the claim is that procrastination is the ultimate goal of intelligence, with procrastination defined as keeping open the widest range of possible options by avoiding all actions that would decisively limit that range.
This would seem, even on the surface, to ignore the many situations where intelligent life must take the narrow path, sacrificing procrastination to the pursuit of a single goal. Once through a narrow path we may find a wide vista of prospects again before us. But without taking such narrow paths at significant times, by always hesitating at the crossroads for as long as possible, we may find ourselves with Robert Johnson, sinking down.
Also, the claim that the natural goal of choice is to maximize future choice is entirely circular. It's like saying the goal of walking is to maximize future walking, or the goal of eating to maximize future eating: there's something to it, but it's not quite true. Also, a great deal of research shows that people strive to avoid choice, for the most part.
Re: (Score:2)
The claim of the paper isn't that "entropy maximization" is the sole motivating factor for all behaviors. For example, after their toy model critter succeeds in knocking the "food" out of the hole using the "tool," some other guiding mechanism probably takes over to make it eat the tasty food (which decreases the accessible degrees of freedom in their simple model compared to keeping the food and tool nearby to toss about). So, indeed, there are plenty of actions that require different behavioral models to
There's an interesting trend here (Score:2)
Compare to Terrence Deacon's Incomplete Nature, which: "meticulously traces the emergence of this special causal capacity from simple thermodynamics to self-organizing dynamics to living and mental dynamics" (Amazon).
(Deacon's book is good, though has been criticized as drawing heavily from prior work: "This work has attracted controversy, as reviewers[2] have suggested that many of the ideas in it were first published by Alicia Juarrero in Dynamics of Action (1999, MIT Press) and by Evan Thompson in Mind i
Dense and jargony (Score:2)
First of all, the paper is steeped in jargon. Phrases such as (2nd para) "characterized by the formalism of" instead of "described by" obfuscate the meaning and confuse the reader.
Count the number of uses of the verb "to be" (is, are, "to be", were, &c). It's everywhere! Nothing runs or changes, everything "is running" or "is changing". Passive voice removes the actor in a paper describing - largely - actions.
Useless words and phrases litter the landscape, such as "To better understand", "for concretene
Summary of paper (Score:2)
Here is my summary of the paper, from the author's website [alexwg.org]
Entropy is usually defined as a function of the macroscopic state of a system at a given time. If we assume that the system evolves so as to move in the direction of maximum entropy at any time, then this defines some dynamics. What the authors propose is a forward-looking dynamic where the system moves in the direction that maximizes entropy at some future point. This automatically builds forward-looking (i.e. intelligent) behaviour into th
Re: (Score:2)
150 different components of intelligence
J.P.Guilford *classic*
CC.
Re: (Score:3)
I am always amused when I see these posts on $(any_topic) that reveal the inability of the poster to recognize things happen on a variety of levels.
In this case, intelligence refers not to the subset of humans capable of posting on slashdot, or playing music. Intelligence in this context refers to the evolution of a brain capable of making decisions based on stimuli as well as experience. An earthworm would qualify as intelligent. It takes a whole lot more steps to get from amino acid soup to an earthworm'
Re: (Score:2)
I think we both agree that their sample size of one planet is not statistically significant. If he wants to be taken seriously, this guy should be funding the hell out of SETI.
Hahahahahahaha, sorry, that last sentence was too hard to type without laughing.
Re: (Score:2)
Re: (Score:2)
We all saw Watson win at chess and Jeopardy; I don't think it would do so well playing Texas hold'em against some tournament champions.
Well, Watson will do better at poker after Number One explains the concept of bluffing to it/him.
Re: (Score:2)
The paper has nothing to do with the "design" and/or evolution of intelligence. It proposes a general mechanism by which "intelligent" brains may be able to "figure out" how to perform a wide variety of tasks (by maximizing future available states based on a simplified internal world model). Plain old evolutionary selective pressures would favor critters with brains good at carrying out this type of cognition.