Science

Physicist Proposes New Way To Think About Intelligence 233

An anonymous reader writes "A single equation grounded in basic physics principles could describe intelligence and stimulate new insights in fields as diverse as finance and robotics, according to new research, reports Inside Science. Recent work in cosmology has suggested that universes that produce more entropy (or disorder) over their lifetimes tend to have more favorable properties for the existence of intelligent beings such as ourselves. A new study (pdf) in the journal Physical Review Letters led by Harvard and MIT physicist Alex Wissner-Gross suggests that this tentative connection between entropy production and intelligence may in fact go far deeper. In the new study, Dr. Wissner-Gross shows that remarkably sophisticated human-like "cognitive" behaviors such as upright walking, tool use, and even social cooperation (video) spontaneously result from a newly identified thermodynamic process that maximizes entropy production over periods of time much shorter than universe lifetimes, suggesting a potential cosmology-inspired path towards general artificial intelligence."
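For the mathematically inclined, the central object in the paper is a "causal entropic force": the gradient of the entropy of the distribution over accessible future paths up to a time horizon τ. Transcribing the paper's formulation as I read it (see the linked PDF for the exact definitions and caveats):

```latex
\mathbf{F}(\mathbf{X}_0, \tau)
  = T_c \,\nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau)\big|_{\mathbf{X}_0},
\qquad
S_c(\mathbf{X}, \tau)
  = -k_B \int_{x(t)} \Pr\!\big(x(t) \mid x(0)\big)
    \ln \Pr\!\big(x(t) \mid x(0)\big)\, \mathcal{D}x(t)
```

Here k_B is Boltzmann's constant, the path integral runs over all trajectories x(t) of duration τ starting from the current macrostate X_0, and T_c is a "causal path temperature" that sets the strength of the force. The claim is that a system pushed around by this single force reproduces the behaviors described above.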
  • by Anonymous Coward on Monday April 22, 2013 @08:36AM (#43514883)

    How was the weather?

• Why is this modded down?

It is the correct response: insightful sarcasm aimed at this "scientist's" complete lack of any supporting evidence. Because, of course, we have not discovered one, let alone the many, intelligent species in other universes. We do not even know whether any other universes ever existed or ever will.

• Respectfully asking, what's wrong with saying, "What if?" You are correct, we haven't discovered any of what you described. But what I fail to understand is why you are so quick and so adamant to cite what we don't know and imply that speculation is pointless. The impression I get from your post is that we're better off limiting ourselves to what we do know--which eventually just leads us to an endless loop, because we never move beyond what we don't know.
  • Relevant xkcd (Score:5, Insightful)

    by Karganeth ( 1017580 ) on Monday April 22, 2013 @08:40AM (#43514907)
    • From TFA:

      To the best of our knowledge, these tool use puzzle and social cooperation puzzle results represent the first successful completion of such standard animal cognition tests using only a simple physical process. The remarkable spontaneous emergence of these sophisticated behaviors from such a simple physical process suggests that causal entropic forces might be used as the basis for a general—and potentially universal—thermodynamic model for adaptive behavior.

So, yeah, XKCD nailed it... physicists are clearly just trying to maximize the overall diversity of accessible future paths of their worlds.

    • Re:Relevant SMBC (Score:5, Insightful)

      by mTor ( 18585 ) on Monday April 22, 2013 @10:10AM (#43515611)

      And here's a relevant SMBC:

      http://www.smbc-comics.com/?id=2556 [smbc-comics.com]

    • by epine ( 68316 )

      http://xkcd.com/793/ [xkcd.com]

      My field is <mate selection>, my complication is <social transactions in symbolic discourse>, my simple system is <you> and the only equation I need is <you're not getting any>. Thanks for offering to prime my pump with higher mathematics. But you know, if you'd like to collaborate on a section on this intriguing technique of speaking in angle brackets to deliver a clue where no clue has gone before, perhaps we should meet for coffee—if you can refrain you

  • nintendo! (Score:2, Interesting)

    by Anonymous Coward

    Interesting idea. http://techcrunch.com/2013/04/14/nes-robot/

That guy basically took a random generator and 'picked' good results to build on. However, the input is basically chaos.

    • by mikael ( 484 )

But you do get self-organisation in nature: reaction-diffusion equations can create spots, stripes, tip-splitting, and scroll-wave patterns from random initial conditions.
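For anyone who wants to see that on their own machine, here is a minimal Gray-Scott reaction-diffusion sketch in Python/NumPy. This is my own illustration, not code from TFA or the paper; the parameter values are the commonly quoted spot-forming regime:

```python
import numpy as np

def laplacian(Z):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

# Gray-Scott model: U + 2V -> 3V, V decays; f is the feed rate, k the kill rate.
n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
rng = np.random.default_rng(0)
# Seed a noisy square in the middle; ordered patterns grow out of this disorder.
U[n//2-8:n//2+8, n//2-8:n//2+8] = 0.5
V[n//2-8:n//2+8, n//2-8:n//2+8] = 0.25 + 0.05 * rng.random((16, 16))

Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065  # illustrative spot-forming regime
for _ in range(5000):
    reaction = U * V * V
    U += Du * laplacian(U) - reaction + f * (1 - U)
    V += Dv * laplacian(V) + reaction - (f + k) * V

print("V range after 5000 steps:", V.min(), V.max())  # structured spots, not uniform noise
```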

  • by fche ( 36607 ) on Monday April 22, 2013 @08:47AM (#43514941)

    ... I burn stuff. Now I can feel smarter about it. Win!

    • by Hentes ( 2461350 )

The point of the paper is that intelligent behaviour maximizes long-term, not immediate, entropy gain.

      • by fche ( 36607 )

        (Isn't the heat-death of the universe a process that results in maximal long-term entropy growth?)

      • by femtobyte ( 710429 ) on Monday April 22, 2013 @10:41AM (#43515895)

        The point in the paper that addresses the "burn shit to be smart!" concept is that the "intelligence" is operating on a simplified, macroscopic model of the world, which doesn't pay attention to the microscopic entropy of chemical bonds (increased by setting stuff on fire). In this simplified "critter-scale" world, shorter-term entropy gain *is* the driving compulsion. The toy model "crow reaching food with a stick" example wasn't driven by the crow thinking "gee, if I don't eat now, I'll be dead next year, so I'd better do something about that." Instead, the problem was "solved" by the crow maximizing entropy a few seconds ahead --- e.g. it moves to reach the stick, because there are a lot more system states available if the stick can be manipulated instead of just lying in the same place on the ground. The "intelligent behavior" only needs to maximize entropy on the time-scale associated with completing the immediate task --- a few seconds --- rather than "long term" considerations about nutritional needs.

      • by gtall ( 79522 )

        Speak for yourself. When I barbecue a marshmallow, I rather enjoy the immediate entropy gain.

  • by jellomizer ( 103300 ) on Monday April 22, 2013 @08:49AM (#43514949)

Intelligence was invented by man, as a way to make us seem better than the other animals in the world.
    Then we further classified it down so we can rank people.

So it isn't surprising that if we want to find intelligent life outside of Earth, we need to change the rules again. We also need to change the rules of what intelligence is, given that we have created technology that emulates or exceeds us in many of the areas we use to classify intelligence.

Intelligence is a man-made measurement, and I expect it will always be in flux. However, you shouldn't dismiss ideas, or automatically accept them as good, just because of a number granted on a fluctuating scale.

    • by Intrepid imaginaut ( 1970940 ) on Monday April 22, 2013 @09:38AM (#43515359)

We are better than other animals in the world. By any objective measure we can move faster, go higher, lift more weight, survive in more hostile environments, and a great deal more using our intelligence. There's no animal that can do something better than we can, with a few exceptions like tortoises with very long lifespans, but we'll get there too. Now whether or not that means we are more worthy in some objective way is a totally different question.

      • by femtobyte ( 710429 ) on Monday April 22, 2013 @09:45AM (#43515429)

        For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.

        -- Douglas Adams

      • Re: (Score:2, Insightful)

        by Anonymous Coward

We are better than other animals in the world. By any objective measure we can move faster, go higher, lift more weight, survive in more hostile environments, and a great deal more using our intelligence. There's no animal that can do something better than we can, with a few exceptions like tortoises with very long lifespans, but we'll get there too. Now whether or not that means we are more worthy in some objective way is a totally different question.

Better? Not really. More resourceful? Yes, definitely. And we have to be. Without the use of tools, we'd still be stuck in the Serengeti, treed by lions and tigers. Because, as a species, we are physically weak (probably more so today than 100,000 years ago, but still weak compared to an orangutan in how much we can lift), slow (the fastest man CAN outrun a horse, but no one could outrun a cheetah on the straightaway), and able to tolerate only a small range of temperatures. (Without clothes, we woul

        • by sqrt(2) ( 786011 )

Our achievements aren't invalidated simply because they require materials that we don't directly grow as part of our body. We are the fastest because we build jets and rockets. We are the strongest because we build bulldozers and forklifts. We are the deadliest predator because we can build rifles, fishing boats, and hydrogen bombs. Our brains, which allow us to build all of those amazing things, are part of our body as much as the cheetah's leg muscles are part of its body. We've externalized and expedited

      • by gtall ( 79522 ) on Monday April 22, 2013 @10:46AM (#43515959)

BS. Take your basic household feline. It's tricked its owners into feeding, watering, and petting it. Hell, it has even tricked them into taking out the dooty. No other life form comes close to that kind of intelligence.

      • " few exceptions like tortoises" More like 50% exceptions if you ignore bacteria which I assume make up 99% of all life.

And I'm not sure about your other sentences. There are lots of things that animals can do that no machine has yet let us do. For example, there is no machine yet built that can, with or without a human occupant, scamper up a tree and jump from branch to branch. Hell, we can hardly make a machine that can walk, or even operate on anything other than a road or floor.

• And we have yet to spread our species to any other planetary bodies, while bacteria reaching Earth from Mars or from asteroids is a reasonable theory.

We do not make particularly harmonious and effective societies, and relative to our size our building endeavours are rather unimpressive. And relative to any size, our buildings are incredibly uncomfortable and inefficient.

      • Humanity is in a place where we're just smart enough to be able to cause things to happen, but not quite smart enough to know what the effects are.

        Is that a mark of intelligence? I would argue it is not, until humanity as a whole realizes this.

      • by b4upoo ( 166390 )

Bacteria would outrank humans if we looked deeply at your posting. Obviously bacteria can travel at least as fast as humans, since humans are always loaded with bacteria. Bacteria persist through stunning levels of reproduction. Bacteria can wipe out a human easily. Bacteria can thrive in places that humans can never reach. Bacteria can alter their environment and enjoy very few restraints on their prosperity. Some can even reproduce in boiling water.

    • "So it isn't surprising if we want to find intelligent life outside of earth, then we need to change the rules again"

      I think you are underestimating the human ability to deny facts that are staring them in the face. We did not need to redefine intelligence when we learned that tool use is rather common. Or that a .5 pound bird might be better at mathematics than a college student.

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday April 22, 2013 @08:51AM (#43514973) Journal

This looks eerily like a physicist who has just opened a biology textbook and is restating the idea that 'intelligence' is the product of an evolutionary selection process, because it's a uniquely powerful solution to the class of problems that certain ecological niches pose, while attempting to add equations....

    Is there something that I'm missing, aside from the 'being alive means grabbing enough energy to keep your entropy below background levels' and the 'we suspect biological intelligence of having evolved because it provides a fitness advantage in certain ecological niches' elements?

    • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Monday April 22, 2013 @09:03AM (#43515071)

      This is what it seems to be from a quick read. It would also explain why he would publish an AI paper in a physics journal, rather than in, you know, an AI journal: probably because he was hoping to get clueless physicists who aren't familiar with existing AI work as the reviewers.

Which isn't to say that physicists can't make good contributions to AI; a number have. But the ones who have an impact and provide something new: 1) explain how it relates to existing approaches, and why it's superior; and 2) publish their work in actually relevant journals with qualified peer reviewers.

      • 1) explain how it relates to existing approaches, and why it's superior

        This. This is why we go to school and study for 20-something years before being accepted as an expert in a field.

        If you don't know what's already out there, you'll likely be reinventing it.

    • by geek ( 5680 ) on Monday April 22, 2013 @09:18AM (#43515195)

I grew up right next to the Lawrence Livermore National Laboratory. My dad and the vast majority of my friends' moms and dads worked there for a long time as physicists. Being around these people for 35 years has taught me something. They are morons. They know physics, but literally nothing else, besides of course math.

It's one of those strange situations where they can be utterly brilliant in their singular field of study but absolutely incompetent at literally everything else. I've known guys with IQs in the 160s who couldn't for the life of them live on their own because of their inability to cook or clean or even drive a car. I knew one who was 45 years old and had never had a driver's license. His wife drove him everywhere, or he walked (occasionally the bus if the weather was poor). He didn't do this for ideological reasons like climate change, blah blah; he did it because he couldn't drive. He failed the driver's test for years until he gave up trying.

      Whenever a physicist starts talking about something other than physics, I typically roll my eyes and ignore them. It's just intellectual masturbation on their part.

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        This. I've been a physics student for a third of my life and I've come to the conclusion that I cannot live with other physicists for precisely this reason. Poked my nose into the maths & compsci faculty for a bit, but they were no better.
In any case, in this concrete situation: the paper mentioned in TFA gives not even one hint on how to construct an AI and is chock-full of absurd simplifications of a complicated system.

        • I've been a physics student for a third of my life and I've come to the conclusion that I cannot live with other physicists for precisely this reason.

          If I may offer a counter-example: during my time as a grad student in Physics at an anonymous Ivy League University whose name is a color, our department intramural softball team made it to the semi-finals, and I regularly played chamber music with other accomplished musicians within the department. (and, yeah, I played a lot of pinball too)

• Bill Burr, on one of his MMPC episodes, talked about trying to learn Spanish. At first he was cursing that it was his third try and asking what was wrong with him. Then later he said that it just came down to the fact that he didn't really need to learn it. Europeans need to know multiple languages. Americans don't. Doesn't mean we are stupid, incompetent, etc. It affects whether we learn language number two, though.

    We live in a society defined by division of labor. The physicist figured that out, as have many video

        • by geek ( 5680 )

And in the process the "P-man" becomes a social pariah incapable of communicating with people. He loses out on the "simple" things in life, like backyard BBQs, ball games, good conversations with friends.

          Sorry, but the isolated brilliant guy isn't a real person. That's Dr. House on TV, and he wasn't even liked in fiction. The real thing is even worse. I'd highly recommend you change the paradigm and enjoy life's simple pleasures. Life is short after all, and you really only get one crack at it.

• Enjoying walks and simplifying one's life have nothing to do with becoming a social pariah [thefreedictionary.com]. Next you will be ordering everyone around, telling them to socialize, drink beer, or whatever it is you think is "healthy".

            By the way, P-man is not me. I enjoy driving and have done so for over 40 years. In my household I am the one who shops, does laundry, and cleans. My similarity to P-man is that I enjoy the company of my own thoughts -- and luckily this is not yet a crime, geek.

            As to the "isolated bril

    • by Anonymous Coward on Monday April 22, 2013 @09:19AM (#43515201)

      I think the problem of uninformed physicists has been addressed by proper scientific research before:

      http://www.smbc-comics.com/?id=2556 [smbc-comics.com]

    • by femtobyte ( 710429 ) on Monday April 22, 2013 @09:43AM (#43515407)

      Yes, what you're missing is the entire point of the paper. Here's my attempt at a quick summary:
      Suppose you are a hungry crow. You see a tasty morsel of food in a hollow log (that you can't directly reach), and a long stick on the ground. The paper poses an answer to the question: what general mechanism would let you "figure out" how to get the food?

      Many cognitive models might approach this by assuming the crow has a big table of "knowledge" that it can logically manipulate to deduce an answer: "stick can reach food from entrance to log," "I can get stick if I go over there," "I can move stick to entrance of log," => "I can reach food." This paper, however, proposes a much more general and simple model: the crow lives by the rule "I'll do whatever will maximize the number of different world states my world can be in 5 seconds from now." By this principle, the crow can reach a lot more states if it can move the stick (instead of the fewer states where the stick just sits in the same place on the ground), so it heads over towards the stick. Now it can reach a lot more states if it pokes the food out of the hole with the stick, so it does. And now, it can eat the tasty food.

The paper shows a few different examples where the single "maximize available future states" principle lets toy models "solve" various problems and exhibit behavior associated with "cognition." This provides a very general mechanism for cognition driving a wide variety of behaviors, one that doesn't require the thinking critter to have a giant "knowledge bank" from which to calculate complicated chains of logic before acting.
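To make that rule concrete, here is a tiny sketch (my own toy illustration, not the paper's model or code) of an agent that picks whichever move leaves the most distinct cells reachable within a short horizon, and, as a side effect, walks itself out of a dead-end corridor with no explicit goal:

```python
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def reachable(pos, walls, horizon):
    """Return the set of cells reachable from pos within `horizon` steps."""
    frontier, seen = {pos}, {pos}
    for _ in range(horizon):
        frontier = {(x + dx, y + dy)
                    for (x, y) in frontier
                    for (dx, dy) in MOVES
                    if (x + dx, y + dy) not in walls}
        seen |= frontier
    return seen

def pick_move(pos, walls, horizon=5):
    """Pick the move that maximizes how many distinct states remain
    accessible within the horizon: a crude stand-in for the paper's
    'maximize accessible future paths' rule."""
    best, best_count = (0, 0), -1
    for dx, dy in MOVES:
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt in walls:
            continue
        count = len(reachable(nxt, walls, horizon))
        if count > best_count:
            best, best_count = (dx, dy), count
    return best

# A dead-end corridor opening to the right: walls above, below, and behind.
walls = {(x, y) for x in range(-6, 1) for y in (1, -1)} | {(-7, 0)}
pos = (-4, 0)  # start deep inside the corridor
for step in range(8):
    dx, dy = pick_move(pos, walls)
    pos = (pos[0] + dx, pos[1] + dy)
    print(step, pos)  # the agent drifts out into the open, no goal needed
```

Nothing in the code mentions food, exits, or goals; counting reachable states is the whole "motivation."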

      • Many cognitive models might approach this by assuming the crow has a big table of "knowledge" that it can logically manipulate to deduce an answer... This paper, however, proposes a much more general and simple model... the crow lives by the rule "I'll do whatever will maximize the number of different world states my world can be in 5 seconds from now." By this principle, the crow can reach a lot more states if it can move the stick... This provides a very general mechanism for cognition driving a wide variety of behaviors, that doesn't require the thinking critter to have a giant "knowledge bank" from which to calculate complicated chains of logic before acting.

        There may be an interesting insight in there, but it doesn't seem to me to solve the problem. It seems to me that you haven't changed the mechanism, but rather the motivation for acting-- from "hunger" to "maximizing states". Actually, I'm not even sure you've changed the motivation, but rather reframed it.

        It seems like you haven't removed the need for a "knowledge bank" or "chain of logic", unless you've described the mechanism for how "maximized future states" is converted into "crow behavior". Does t

• The interesting thing about changing the motive from "hunger" to "maximizing states" is that (a) the same "maximizing states" motivation works to explain a variety of behaviors, instead of needing a distinct motive for each, and (b) "maximizing states" simultaneously provides a "motive" and a theoretical mechanism for achieving that motive --- "I'm hungry" doesn't directly help with solving the food-trapped-in-hole problem, while the simple "maximize states" motive (precisely mathematically formulated) actually does.

      • by hweimer ( 709734 )

        Many cognitive models might approach this by assuming the crow has a big table of "knowledge" that it can logically manipulate to deduce an answer: "stick can reach food from entrance to log," "I can get stick if I go over there," "I can move stick to entrance of log," => "I can reach food." This paper, however, proposes a much more general and simple model: the crow lives by the rule "I'll do whatever will maximize the number of different world states my world can be in 5 seconds from now." By this principle, the crow can reach a lot more states if it can move the stick (instead of the fewer states where the stick just sits in the same place on the ground), so it heads over towards the stick. Now it can reach a lot more states if it pokes the food out of the hole with the stick, so it does. And now, it can eat the tasty food.

But the crow could reach even more states if it broke the stick into a thousand little pieces and scattered them all over the place. No tasty food here.

        • And if the crow instinctually understood undergraduate level stat mech, it would know setting the stick on fire would provide even more state possibilities. The point is that, in a simple "toy model" of the world including a small number of objects that don't come into pieces, accessible state maximization produces "useful" results. Presumably, whatever mental model the crow has to understand/approximate/predict how the universe works is rather simplified compared to an overly-accurate model that calculates

    • More strictly speaking, they are talking about the idea of 'will' (that is my understanding). How does the computer, or a human, decide what to do, or indeed choose to do anything? Why do humans care at all, and how can we make computers care?

The idea is that the urge to resist entropy yields a competitive advantage and leads to intelligence. They built some software to demonstrate this, but I can't tell if the source code was released (it seems like it wasn't, but I don't have a subscription to find out).
      • The idea is that the urge to resist entropy yields a competitive advantage and leads to intelligence.

        Actually, the opposite: "intelligence" functions by seeking to maximize entropy. Note, however, we are talking about an approximate "macroscopic scale" entropy of large-scale objects in the system rather than the "microscopic" entropy of chemical reactions, so "intelligence" isn't about intentionally setting as many things as you can on fire ("microscopic" entropy maximization). So, the analogue statement to "all the gas molecules in a room won't pile up on one side" is "an intelligent critter won't want to

• I'm not sure that's right; if you watch the video in the summary, all the examples tend towards more order. They actually have an example where two critters end up in a corner.
          • My "critter not going into a corner" example was based on the first toy model in the paper, a particle that drifts towards the center of a box when "driven" by entropy maximization. In some of the more "advanced" examples, there are more complex factors coming into play that may maximize entropy by "ending up in a corner," depending on how the problem is set up. However, if you read the paper (instead of just glancing at videos), the mathematical formalism that drives the model is all about maximizing entro

            • Ah, you're right, good call.
            • It's hard for me to imagine that balancing a stick maximizes entropy. It requires constant energy input to keep it there. How does that work?
• The critter is assumed to have a certain capacity for expending energy to do work on its environment (parametrized by its "temperature" T_c) --- needing to expend energy is not a barrier. With the stick balanced up, you can quickly and easily swing the stick to many other combinations of position and velocity. When the stick is dangling down, it takes more time rocking back and forth to "swing it up" into many positions. If the critter were very strong (high T_c, able to exert much greater forces than gravity
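One way to sanity-check that claim numerically (my own sketch, not anything from the paper): roll out random bounded-torque trajectories of a pendulum from the hanging and upright states and count how many distinct (angle, velocity) bins the endpoints cover. With weak actuation, the upright start should cover far more:

```python
import numpy as np

def count_final_states(theta0, n_traj=2000, horizon=3.0, dt=0.01,
                       u_max=2.0, g_over_L=9.8, bins=24, seed=0):
    """Simulate random bounded-torque pendulum trajectories and count
    how many distinct (angle, velocity) bins the endpoints occupy ---
    a crude proxy for 'accessible future states'."""
    rng = np.random.default_rng(seed)
    steps = int(horizon / dt)
    theta = np.full(n_traj, theta0, dtype=float)  # 0 = hanging, pi = upright
    omega = np.zeros(n_traj)
    for _ in range(steps):
        u = rng.uniform(-u_max, u_max, n_traj)      # random weak actuation
        omega += (-g_over_L * np.sin(theta) + u) * dt
        theta += omega * dt
    # Bin the endpoints; count occupied bins.
    ti = np.floor((theta % (2 * np.pi)) / (2 * np.pi) * bins).astype(int)
    oi = np.floor(np.clip(omega, -10, 10) / 20 * bins + bins / 2).astype(int)
    return len(set(zip(ti.tolist(), oi.tolist())))

print("hanging down:", count_final_states(theta0=0.0))    # few occupied bins
print("balanced up: ", count_final_states(theta0=np.pi))  # many more bins
```

The upright state stores energy that can be spent on many different swings, which is exactly the "more combinations of position and velocity" point above.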

        • Actually, the opposite: "intelligence" functions by seeking to maximize entropy.

          Don't you mean that intelligence "functions by seeking to maximize the entropic gradient"?

• A key component of the paper's model is that entropy maximization is not just "local" maximization of the gradient, but total entropy maximization over an interval:

Inspired by recent developments [1–6] to naturally generalize such biases so that they uniformly maximize entropy production between the present and a future time horizon, rather than just greedily maximizing instantaneous entropy production, we can also contemplate generalized entropic forces over paths through configuration space rather than just over the configuration space itself.

So, indeed, entropy maximization --- not just instantaneous entropy-gradient maximization, which might "miss" solutions requiring passing through low-entropy-gradient regions to reach an even higher-entropy final state --- is important. Of course, the result of maximizing entropy over a time interval is the maximization of the average gradient.
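In discrete time, the same idea can be paraphrased like this (my notation, a paraphrase of the paper's path-integral definition, not a quote from it): choose the present action that maximizes the entropy of the distribution over whole state sequences out to the horizon,

```latex
S_c(x_0, \tau) = -k_B \sum_{x_{1:\tau}} \Pr(x_{1:\tau} \mid x_0)
  \ln \Pr(x_{1:\tau} \mid x_0)
```

Greedy instantaneous entropy-production maximization is the one-step special case; a longer horizon lets the dynamics "see through" a low-entropy bottleneck to the larger set of paths beyond it.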

    • by dywolf ( 2673597 )

Physicists do this a lot. I remember many years ago a physicist delving into philosophy and getting nearly an entire Discover magazine article devoted to him. Essentially he took the abstract means of talking about physical phenomena, took out the equations, and then applied it to philosophy and logic and stuff.

I hate it when physicists start delving outside their field. XKCD had it right.

  • Choice (Score:2, Interesting)

    by rtb61 ( 674572 )

Intelligence: the ability to delve into the past and reach into the future in order to craft the present and manipulate the probability of eventualities. The greater the ability, the greater the intellect, the power of choice.

    • by dargaud ( 518470 )
      I like the following definition of intelligence which is short and goes way beyond 'the ability to do maths': "the ability to reach a correct solution given incomplete information".
  • by Dan East ( 318230 ) on Monday April 22, 2013 @08:58AM (#43515027) Journal

It appears to me that the algorithm is trying to maintain entropy or disorder, or at least keep open as many pathways to various states of entropy as possible. In the physics simulations, such as balancing and using tools, this essentially means that it is simply trying to maximize potential energy (in the form of energy stored against gravity or repulsive fields: gravity in the balancing examples, repulsive fields in the "tools" example).

    While this can be construed as "intelligence" in these very specific cases, I don't think it is nearly as generalized or multipurpose as the author makes it out to be.

  • by femtobyte ( 710429 ) on Monday April 22, 2013 @09:04AM (#43515081)

    You can tell this is a physicist's paper. It lacks spherical cows, but only because the toy models were set up in 2D. So, instead, we get a crow, chimpanzee, or elephant approximated by circular disks.

  • Just like any other physical trait.
  • This is so sad (Score:2, Interesting)

    by rpresser ( 610529 )

The universe developed intelligence as a way of winding itself down faster ... which will destroy all intelligence ... which is a tragedy, because the winding down was necessary to create us ... and the universe WANTED TO SEE US SUFFER.

    • by gtall ( 79522 )

That might explain why the Universe is trying to kill us. Those asteroids periodically buzzing the Earth were sent deliberately; the Universe's aim is just a bit off. Sooner or later, it will get the target sighted in. Sometimes it takes the form of Gaea, who periodically tosses out an earthquake or, when she's really pissy, a super volcano... just for a little recreational resurfacing.

      The Universe hates intelligence, we're all dead.

  • by xtal ( 49134 ) on Monday April 22, 2013 @09:54AM (#43515475)

    I'm maintaining the maximum number of possible outcomes for the day, in harmony with the laws of nature. :)

  • by mTor ( 18585 ) on Monday April 22, 2013 @10:06AM (#43515577)

    Here's a review of this paper by a researcher who actually works in the field of AI and cognitive psychology:

    Interdisciplinitis: Do entropic forces cause adaptive behavior? [wordpress.com]

A few choice quotes:

Physicists are notorious for infecting other disciplines. Sometimes this can be extremely rewarding, but most of the time it is silly. I've already featured an example where one of the founders of algorithmic information theory completely missed the point of Darwinism; researchers working in statistical mechanics and information theory seem particularly susceptible to interdisciplinitis. The disease is not new; it formed an abscess shortly after Shannon (1948) founded information theory. The clarity of Shannon's work allowed metaphorical connections between entropy and pretty much anything. Researchers were quick to swarm around the idea, publishing countless papers on "Information theory of X" where X is your favorite field deemed in need of a more thorough mathematical grounding.

    and after he explains what the paper's about and how utterly empty it is, he offers some advice to authors:

By publishing in a journal specific to the field you are trying to make an impact on, you get feedback on whether you are addressing the right questions for your target field, instead of simply whether others in your field (i.e. other physicists) think you are addressing the right questions. If your results get accepted, then you also have more impact, since they appear in a journal that your target audience reads, instead of one only your field focuses on. Lastly, it is a show of respect for the existing work done in your target field. Since the goal is to set up a fruitful collaboration between disciplines, it is important to avoid E.O. Wilson's mistake of treating researchers in other fields as expendable or irrelevant.

    • and after he explains what the paper's about and how utterly empty it is, he offers some advice to authors:

By publishing in a journal specific to the field you are trying to make an impact on, you get feedback on whether you are addressing the right questions for your target field, instead of simply whether others in your field (i.e. other physicists) think you are addressing the right questions.

      The authors were just trying to maximize the number of possible future states for their idea.

• Lots of entropy here, more than most! Does that mean that I am super intelligent?

• The article is self-contradictory — it says: "It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal."

This is not true, because it then goes on to say it is "trying to capture as many future histories as possible".

so there IS a goal — it maximizes the number of future states — exactly the same way a negamax search can maximize the mobility parameter in a chess engine search.

    in other words,
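In case the chess-engine reference is unfamiliar, below is a minimal negamax whose only leaf evaluation is mobility (the count of legal moves), i.e. the fixed "maximize future options" goal described above. This is my own sketch in Python, with a toy take-1-to-3-stones game bolted on so it actually runs; it is not from the paper or any real engine:

```python
class Nim:
    """Toy game: remove 1-3 stones from a pile. Mobility = legal move count."""
    def legal_moves(self, stones):
        return [n for n in (1, 2, 3) if n <= stones]
    def apply(self, stones, n):
        return stones - n

def negamax(state, depth, game):
    """Negamax whose leaf evaluation is mobility: the side to move prefers
    positions that keep more options open --- 'maximize future states'
    recast as an ordinary fixed search goal."""
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return len(moves)  # mobility as the terminal score
    return max(-negamax(game.apply(state, m), depth - 1, game) for m in moves)

def best_move(state, depth, game):
    return max(game.legal_moves(state),
               key=lambda m: -negamax(game.apply(state, m), depth - 1, game))

print(best_move(7, 3, Nim()))  # picks the move leaving the opponent fewest options
```

Swap in any game exposing `legal_moves` and `apply` and the search behaves the same way: it hoards options for itself and starves the opponent of them.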

• Yes, the model "intelligence" does have the one goal of "future history maximization." The interesting thing is that this one particular goal can produce a variety of behaviors --- instead of requiring each behavior to be motivated by its own specific and non-generalizable goal. Instead of needing a brain with specific goals for "walk upright," "get tool to augment manipulative abilities," "cooperate with others to solve problem," plus a heap of other mechanisms to achieve said goals, the simple entropy-maximization principle covers them all.

  • by wytcld ( 179112 ) on Monday April 22, 2013 @11:05AM (#43516149) Homepage

    The premise of the claim is that procrastination is the ultimate goal of intelligence, with procrastination defined as keeping open the widest range of possible options by avoiding all actions that would decisively limit that range.

    This would seem, even on the surface, to ignore the many situations where intelligent life must take the narrow path, sacrificing procrastination to the pursuit of a single goal. Once through a narrow path we may find a wide vista of prospects again before us. But without taking such narrow paths at significant times, by always hesitating at the crossroads for as long as possible, we may find ourselves with Robert Johnson, sinking down.

Also, the claim that the natural goal of choice is to maximize future choice is entirely circular. It's like saying the goal of walking is to maximize future walking, or the goal of eating is to maximize future eating: there's something to it, but it's not quite true. Also, a great deal of research shows that people mostly strive to avoid choice.

    • The claim of the paper isn't that "entropy maximization" is the sole motivating factor for all behaviors. For example, after their toy model critter succeeds in knocking the "food" out of the hole using the "tool," some other guiding mechanism probably takes over to make it eat the tasty food (which decreases the accessible degrees of freedom in their simple model compared to keeping the food and tool nearby to toss about). So, indeed, there are plenty of actions that require different behavioral models to

  • Compare to Terrence Deacon's Incomplete Nature, which: "meticulously traces the emergence of this special causal capacity from simple thermodynamics to self-organizing dynamics to living and mental dynamics" (Amazon).

(Deacon's book is good, though it has been criticized as drawing heavily from prior work: "This work has attracted controversy, as reviewers[2] have suggested that many of the ideas in it were first published by Alicia Juarrero in Dynamics of Action (1999, MIT Press) and by Evan Thompson in Mind in Life

  • First of all, the paper is steeped in jargon. Phrases such as (2nd para) "characterized by the formalism of" instead of "described by" obfuscate the meaning and confuse the reader.

    Count the number of uses of the verb "to be" (is, are, "to be", were, &c). It's everywhere! Nothing runs or changes, everything "is running" or "is changing". Passive voice removes the actor in a paper describing - largely - actions.

Useless words and phrases litter the landscape, such as "To better understand" and "for concreteness".

• Here is my summary of the paper, from the author's website [alexwg.org]

Entropy is usually defined as a function of the macroscopic state of a system at a given time. If we assume that the system evolves so as to move in the direction of maximum entropy at any time, then this defines some dynamics. What the authors propose is a forward-looking dynamic where the system moves in the direction that maximizes entropy at some future point. This automatically builds forward-looking (i.e. intelligent) behaviour into the dynamics.
