
The Human Mind is a Bayes Logic Machine

lexxyz writes "Apparently the human mind can predict the distribution type for a given sample of results. A study found in The Economist has shown that a group of minds working on single pieces of data can together generate the statistical model used to represent a given sample. Note that it takes a group of people to be able to accurately predict the behaviour of something, not a single individual."
This discussion has been archived. No new comments can be posted.

  • by yagu ( 721525 ) * <`yayagu' `at' `gmail.com'> on Thursday February 02, 2006 @09:32PM (#14631624) Journal

    From the fine article:

    They suggest that the Bayesian capacity to draw strong inferences from sparse data could be crucial to the way the mind perceives the world, plans actions, comprehends and learns language, reasons from correlation to causation, and even understands the goals and beliefs of other minds.
    Phew! Once I read that, I realized I didn't have to read the rest of the article, having now taken a large enough "sparse" sample.

    An added benefit: I already know what all of the posts are going to say, including this one!

  • A study found in The Economist has shown that a group of minds working on single pieces of data can together generate the statistical model used to represent a given sample. Note that it takes a group of people to be able to accurately predict the behaviour of something, not a single individual.

    Huh?
  • So,,, (Score:3, Funny)

    by Eightyford ( 893696 ) on Thursday February 02, 2006 @09:45PM (#14631694) Homepage
    The key to successful Bayesian reasoning is not in having an extensive, unbiased sample, which is the eternal worry of frequentists, but rather in having an appropriate "prior", as it is known to the cognoscenti. This prior is an assumption about the way the world works--in essence, a hypothesis about reality--that can be expressed as a mathematical probability distribution of the frequency with which events of a particular magnitude happen.

    So is this more evidence that creativity and regular intelligence do not get along too well?
  • Working "together"? (Score:5, Informative)

    by GuyMannDude ( 574364 ) on Thursday February 02, 2006 @09:49PM (#14631720) Journal

    A study found in The Economist has shown that a group of minds working on single pieces of data can together generate the statistical model used to represent a given sample. Note that it takes a group of people to be able to accurately predict the behaviour of something, not a single individual.

    Well, that's a somewhat misleading summary. These people were not knowingly collaborating. Each person would have had to answer the questions independently (not knowing what the other respondents' answers were) in order for Bayes to be applicable. Each person's response counts as a piece of evidence or clue in inferring the underlying probability distribution. Their answers are combined using Bayes's rule by an external third party (the researchers). So, yes, this technically counts as a group of minds working together, but I think the way this summary was worded might give people the wrong impression.
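
    Something like this toy sketch shows the kind of pooling I mean (my own illustration, not the researchers' actual code -- the Gaussian noise model, the prior, and the numbers are all made up): each respondent answers alone, and only afterwards does a third party fold the independent answers together with Bayes's rule.

        def pool_independent_guesses(guesses, noise_sd, prior_mean, prior_sd):
            """Posterior over an unknown quantity, given independent noisy guesses
            and a Gaussian prior (conjugate normal-normal update)."""
            prior_prec = 1.0 / prior_sd ** 2
            like_prec = len(guesses) / noise_sd ** 2
            post_prec = prior_prec + like_prec
            post_mean = (prior_mean * prior_prec + sum(guesses) / noise_sd ** 2) / post_prec
            return post_mean, (1.0 / post_prec) ** 0.5

        # Five people who never spoke to each other guess a man's total lifespan:
        print(pool_independent_guesses([70, 82, 75, 78, 73], noise_sd=8.0,
                                       prior_mean=75.0, prior_sd=20.0))

    The product of likelihoods behind that update is only valid because the guesses are independent; once people have heard each other's answers, the arithmetic no longer applies.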

    Think about it this way: if you lock a bunch of people in a room together and have them come up with an answer, the "strong" personalities in the room are likely to have a heavy influence on the "weaker" ones. People who aren't really firm in their opinions are going to be influenced -- whether they realize it or not -- by people who sound confident. The article makes a big to-do about the fact that Bayesian techniques allow you to get good answers with a small number of people working on the problem. But the key is that those people have to be working independently, because it's going to be damn difficult to identify and subtract out the cross-correlation of members influencing each other.

    I'm making (what I hope is) an important point. I think business people who read this article, or even slashdotters who read the above summary, may get the impression that small meetings are a great way to arrive at strikingly effective solutions. That's not what Bayes techniques are about. If you want to put a small group of people to work on a problem, you'd better separate them; otherwise Bayes's rule is not strictly applicable.

    GMD

    • If you want to put a small group of people to work on a problem, you'd better separate them, otherwise Bayes's rule is not strictly applicable.

      I think that is a really good point.

      No doubt, someone is hurriedly working out a new astrology^H^H^H^H^H^H^H^H^Hpersonality inventory to sell to Human Resources departments.

      From TFA:

      And "forever" is not a mathematically tractable quantity, so Dr Griffiths and Dr Tenenbaum abandoned their analysis of this set of data.

      They should have thought of that before the

      • "Some 52% of people predicted that a marriage would last forever when told how long it had already lasted. As the authors report, "this accurately reflects the proportion of marriages that end in divorce", so the participants had clearly got the right idea. But they had got the detail wrong. Even the best marriages do not last forever. Somebody dies. And "forever" is not a mathematically tractable quantity, so Dr Griffiths and Dr Tenenbaum abandoned their analysis of this set of data."

        Perhaps it wasn't a fo
        • "If you don't settle your statistical methods before starting to analyze the data, then it ain't science."

          You misunderstand the nature of Bayesian statistics. The data and the initial prior determine the analysis; the analysis generates a prediction, which becomes the new prior. It not only tests hypotheses but generates new hypotheses. You can construct an accurate Bayesian model from nearly any initial prior, given sufficient data.
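
          A toy illustration of that washing-out (my own sketch, not from TFA; the true rate and both priors are invented): two very different Beta priors land in nearly the same place once enough observations arrive.

            import numpy as np

            rng = np.random.default_rng(0)
            flips = rng.random(500) < 0.7          # 500 observations, true success rate 0.7

            for a0, b0 in [(1, 1), (50, 5)]:       # a flat prior vs. a strongly biased one
                a = a0 + flips.sum()               # Beta-Bernoulli conjugate update
                b = b0 + (~flips).sum()
                print(f"prior Beta({a0},{b0}) -> posterior mean {a / (a + b):.3f}")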

          Actually, I don't know anything about Bayesian statistics. However, fr

          • Nonsense (Score:3, Informative)

            by Anonymous Coward

            The Scientific Method requires that the hypothesis makes the predictions before they are tested.

            The "Scientific Method" [dharma-haven.org] is a myth perpetrated by elementary school science textbooks. Actual, practicing scientists (of which I am one) do not adhere to any cookbook "method", and in particular hypotheses (let alone their predictions) do not always precede data. In fact, it is quite common for it to be the other way around, especially when you don't know much about the system being studied (exploratory data ana

            • Actual, practicing scientists (of which I am one) do not adhere to any cookbook "method", and in particular hypotheses (let alone their predictions) do not always precede data.

              I guess that's kinda my point.

              In fact, in this case the analysis method was decided upon before the data

              I didn't get that from The Economist article.

              (not that it has to be);

              That's where we disagree.

              it's just that the data collection method was screwed: it allowed respondents to give non-numerical answers ("infinity") when

    • Hmm, this might be useful with Juries ... http://www.imdb.com/title/tt0050083 [imdb.com]
    • Think about it this way: if you lock a bunch of people in a room together and have them come up with an answer, the "strong" personalities in the room are likely to have a heavy influence on the "weaker" ones. People who aren't really firm in their opinions are going to be influenced -- whether they realize it or not -- by people who sound confident.

      That's exactly what happens in every jury room, focus group, and committee meeting on the planet or any other place where a group of humans are expected to come

        Think about it this way: if you lock a bunch of people in a room together and have them come up with an answer, the "strong" personalities in the room are likely to have a heavy influence on the "weaker" ones. People who aren't really firm in their opinions are going to be influenced -- whether they realize it or not -- by people who sound confident.

        If you also think:
        Polls make people vote for either the party that leads in the polls or the biggest polled opponent party.

        Or, to put it another way:

        If you thin
    • Nice summary and point; thanks. :)
  • Imagine... (Score:3, Funny)

    by virtualXTC ( 609488 ) on Thursday February 02, 2006 @09:50PM (#14631726) Homepage
    WOW!!

    Imagine a beowulf cluster of these!!!
  • by the_humeister ( 922869 ) on Thursday February 02, 2006 @10:07PM (#14631822)
    I call prior art on psychohistory
  • by Anonymous Coward
    "Imagination is more important than knowledge."
  • by postbigbang ( 761081 ) on Thursday February 02, 2006 @10:32PM (#14631957)
    It's not all about data and results. It's also about pre-formed boundaries, or domains within which answers usually (and some might say 'logically') fall.

    This is one of those elementary, goosey sorts of tomes (if you RTFA) where a bunch of nerds go around with a bad hypothesis and come to an 'enlightened' conclusion.

    Consider the techniques that surround Wolfram's expostulations -- that the world is algorithmic, and language ill-describes these algorithms, loosely defining them as processes. These set up boundaries within which we derive domains where answers must lie.

    Proving that with just a few data points within a tight algorithm you'll get the right answer is just hilarious-- of course you will. The domain fits, and so the answer must. The domain gets defined by a number of experience points as hidden references that allow the frequentists to get magic (e.g. hidden and historical) inferences to the answer. This is where the phenomenon of the trick question makes us all so frustrated.

    My point? Inference has predefined boundaries, and so of course Bayesian logic doesn't require a bunch of data to lead to a correct conclusion, because the boundaries are already so tightened that only those who randomly guess, and don't use historical data points (e.g. their freaking memories), are going to blow the answers.

    Sigh.
    • "Proving that with just a few data points within a tight algorithm that you'll get the right answer is just hilarious-- of course you will. The domain fits, and so the answer must. The domain gets defined by a number of experience points as hidden references that allow the frequentists to get magic (e.g. hidden and historical) inferences to the answer. This is where the phenomenon of the trick question makes us all so frustrated."

      So how long does my group have to camp this spot before my production unit can
    • Inference has predefined boundaries, and so of course Bayesian logic doesn't require a bunch of data to lead to a correct conclusion, because the boundaries are already so tightened that only those who randomly guess, and don't use historical data points (e.g. their freaking memories), are going to blow the answers.

      I'm not sure I agree with you. What is remarkable about these experiments is not that the population gets the right answer, where right answer here is a number, like 42. It's that the populati
      • And so, the number 42 as an example. Life is a dice roll.

        Consider the homeopaths who believe that water has the 'signature' or inference of ingredients mixed in it.... tinctures as it were. This is memory long after the connection.... an impression. If you have enough impressions, boundaries can be defined. Humans have thousands of impressions daily from the moment of sentience/self-awareness. These impressions, cause and effects, algorithms tried, tested, discarded, accepted, rejected, faith
  • Overhyped (Score:1, Insightful)

    by Anonymous Coward
    The headline claims that "The Human Mind is a Bayes Logic Machine"

    While the article concludes:

    How the priors are themselves constructed in the mind has yet to be investigated in detail. Obviously they are learned by experience, but the exact process is not properly understood. Indeed, some people suspect that the parsimony of Bayesian reasoning leads occasionally to it going spectacularly awry, with whatever process it is that forms the priors getting further and further off-track rather than converging on

  • ...is to pose the unanswered questions to this collective Bayesian mind:

    Given how old the universe is, how long does it have until it collapses back on itself? Hey, at least we'll have the probability of the right answer!
  • by BobGregg ( 89162 ) on Thursday February 02, 2006 @11:34PM (#14632294) Homepage
    Back in 1995, when I was at Carnegie Mellon, a researcher did a project in the planetarium at the Carnegie science museum. He had programmed a "joystick" to receive reflections from a set of reflective paddles held by the people in the audience. Each paddle had two different sides (red and green); depending on which side you held up, a different signal got sent back to the main processor (positive or negative, respectively). The overall "direction" taken by the game was determined by the sum of the responses - so if everyone held up "red", it was 100% positive; but if everyone held up "green", it was 100% negative; and so on, with straight linear interpolation.

    The first game was Pong. Up and down were controlled directly, if cumulatively, by the audience. You would think that control would be spotty, and that controls would overshoot. Instead, the audience was INCREDIBLY accurate in its overall response; even when the game got very fast, the audience played very, very well against the computer.

    There were several games presented, but the last was a flight simulator, flying a plane through a set of rings. The left half of the audience controlled up and down; the right half controlled left and right. Again, you would think this would be nearly impossible to control - but the audience never missed a single ring, even when the game got fast.

    Individually, it's doubtful that many members of the audience could have played any of the games as well as we saw the group play cumulatively. It was a clear and very effective demonstration that there was some sort of statistical model at work in the interplay of all those minds.
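
    Out of curiosity, here is a rough simulation of that effect (purely my own toy model -- the 65% accuracy figure, the 200-person audience, and the step sizes are invented, and a real audience surely behaves differently): averaging many mediocre players' votes cancels most of the individual noise.

      import numpy as np

      rng = np.random.default_rng(1)
      n_people, n_steps = 200, 500
      ball = np.cumsum(rng.normal(0, 0.2, n_steps))      # slowly wandering target height
      crowd_paddle = solo_paddle = 0.0
      crowd_err = solo_err = 0.0

      for y in ball:
          # each audience member votes up (+1) or down (-1), correct only 65% of the time
          want = np.sign(y - crowd_paddle)
          votes = np.where(rng.random(n_people) < 0.65, want, -want)
          crowd_paddle += 0.3 * votes.mean()             # paddle follows the average vote

          want_solo = np.sign(y - solo_paddle)
          solo_paddle += 0.3 * (want_solo if rng.random() < 0.65 else -want_solo)

          crowd_err += abs(y - crowd_paddle)
          solo_err += abs(y - solo_paddle)

      print(f"mean error -- crowd: {crowd_err / n_steps:.2f}, single player: {solo_err / n_steps:.2f}")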
  • by SiliconEntity ( 448450 ) on Friday February 03, 2006 @12:29AM (#14632582)
    Here is the paper:

    http://web.mit.edu/cocosci/Papers/prediction10.pdf [mit.edu]

    It begins:

    If you were assessing the prospects of a 60-year-old man, how much longer would you expect him to live? If you were an executive evaluating the performance of a movie that had made 40 million dollars at the box office so far, what would you estimate for its total gross?


    These questions have specific "right" answers, which can be arrived at by having the proper mental model for how lifespans and movie grosses are distributed. See how good a job you could do, without peeking, just based on your prior knowledge about the world.
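
    Here is a rough sketch of the sort of calculation involved, as I read the paper (the Gaussian lifespan prior and its numbers are my own stand-ins, and predict_total is just an illustrative helper): treat the observed value as a uniform draw from an unknown total, weight candidate totals by the prior, and report the posterior median.

      import numpy as np

      def predict_total(t_now, prior_samples):
          """Posterior median of the total, given that it has already reached t_now
          and that t_now was a uniform draw from [0, total]."""
          totals = prior_samples[prior_samples >= t_now]
          weights = 1.0 / totals                  # likelihood of seeing t_now under each candidate total
          order = np.argsort(totals)
          cdf = np.cumsum(weights[order]) / weights.sum()
          return totals[order][np.searchsorted(cdf, 0.5)]

      rng = np.random.default_rng(0)
      lifespans = rng.normal(75, 16, 200_000)     # stand-in Gaussian prior over lifespans
      lifespans = lifespans[lifespans > 0]
      print("Guess for the 60-year-old:", round(float(predict_total(60, lifespans))))

    I used the posterior median rather than the mean because, as I read it, that is the estimator the authors compare people's answers against.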
  • Alright, I understand that there are circumstances in which the errors of the individuals in a population average towards zero, but it's clearly not a very broadly applicable effect. It is interesting to consider the question, "what are the structural requirements on problems that allow error-cancellation to be applied to refine the result?", and to consider how this might be applied to political organization, to cancel out errors such as the current POTUS.
    • I have thought about this as well. One conclusion with which I am very uncomfortable is that democracy IS correct. But then I remember that money, media and deception can cause problems in democracy. It would be interesting to see how more "noise" and an inaccurate representation of the situation affect these types of solutions.
    • I think the explanation is simple: For a Bayesian logic machine it's the same as for any logic machine (i.e. computer):

      Rule 1: If you put garbage in, you get garbage out.

      People are fed a lot of garbage all the time. Feeding garbage to politicians is called lobbying, while feeding garbage to the public is done by the mass media.

      Of course, Bayesian inference depends very much on a reasonable prior. If your prior tells you that something is impossible, then no amount of evidence can change this, except if ther
      • by Anonymous Coward
        For example, imagine a Bayesian spam filter that came with a pre-installed distribution saying that every mail in which the word Viagra appears has a 99.99999999% probability of being spam. A pharmacist would then have a very hard time convincing that filter that the Viagra orders he gets are not spam.

        Not at all. Bayesian spam filters typically work by forming a weighted combination of the spam probabilities of all the words in the email, not just one word. Even if "viagra" by itself has a 99.99999999% p
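
        A bare-bones version of that combination (a naive-Bayes-style sketch, not any particular filter's actual code; the per-word probabilities are invented) shows one spammy word being outvoted by the rest of a legitimate order:

          import math

          def spam_probability(word_probs):
              """Combine per-word spam probabilities, assuming the words are independent."""
              log_spam = sum(math.log(p) for p in word_probs)
              log_ham = sum(math.log(1.0 - p) for p in word_probs)
              return math.exp(log_spam) / (math.exp(log_spam) + math.exp(log_ham))

          # "viagra" alone looks 99% spammy, but the order form's other words pull it back:
          print(spam_probability([0.99, 0.05, 0.10, 0.02, 0.15]))   # roughly 0.002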
      • > If your prior tells you that something is impossible, then no amount of evidence can change this

        My Prior tells me that, with the Ori, anything is possible!

        Sorry, I'm most of the way through the first page and I haven't seen a single SG:SG1 reference, despite the word "Prior" coming up so many times.
    • Groupthink and incorrect data. The experiments in the article were conducted upon individuals, who were given accurate, impartial information and asked to extrapolate results. In such a situation, human intelligence works very well.

      Democracy involves giving groups of people information of varying accuracy. People thus make their decisions based on what other people think, and upon incorrect and subjective data. Unsurprisingly, this works out less well.
    • In order to explain the relative failure of democracy, I'd say it's like putting a whole lot of people on a boat in the middle of some ocean they don't know, and letting them vote on which direction to go to reach the closest land, without them knowing where they are, and while giving them misleading clues about where the closest land might be.

      Maybe I could have found a better example, but the main idea is there: the people on the boat could hardly vote for the right direction(s) (many paths lead to Rome,

  • The news piece is just plain wrong in the intro. Frequentist interpretations of probability are widely discredited and have been for quite some time (on the order of 50 years I believe). The modern interpretation of probability comes out of normative decision theory, which is based on subjective probabilities - i.e., your beliefs about the world (see here [wikipedia.org] for some general background). Your beliefs should be coherent, which means consistent with observable facts, but they are not objective.

    To understand

    • One other interpretation of probability is that it represents real propensities in the world; I think this is particularly relevant in the case of quantum physics. This rests on the idea that causality includes stochastic determination, so that when you say there is a 10% chance of rain tomorrow, you are actually talking about some real propensity of the world.

      I also don't think your example is a good way of debunking frequentist interpretations. A frequentist would argue that the weather forecast should
      • Yes, I concede that I may have overstated the situation and implied that subjective probability interpretations have universal support. But to my knowledge it is by far the best of the widely known interpretations.

        I am confident that frequentist interpretations have been passed by. Although this view is still taught (i.e., intro to probability usually involves counting colored marbles in an urn or some such), it is more for its conceptual simplicity than a deeply-held conviction that it is the best

  • Look at the citation on the frequency distribution graphs. They source the illustration to Wikipedia. I don't think I've ever seen a respectable publication cite Wikipedia as a source for a story about something OTHER than Wikipedia itself. Granted, they didn't cite a "fact" but rather used a graphic, but that's still something of a vote of confidence from The Economist.
    • Perhaps they needed a quick pic, and instead of making their own, someone just searched Google Images for one in the public domain. Lots of people do this; it does not mean they approve of or support the source, it just means they are too lazy to check.

              -dZ.
  • After reading the article and, of course, ignoring the complex math behind the entire thesis, I have to say that it seems so obvious.

    >> "That might explain the emergence of superstitious behaviour, with an accidental correlation or two being misinterpreted by the brain as causal. A frequentist way of doing things would reduce the risk of that happening. But by the time the frequentist had enough data to draw a conclusion, he might already be dead."

    The human mind has to perform an unthinkable (pun full
    • but it offers the somewhat counter-intuitive notion that accurate predictions can be made based on very little information, and gathering extensive amounts of data first might not always be the best approach.


      I don't know if you looked at the article or not (I only skimmed it very briefly), but I think there is more data than you realize. Sure, the question provides too little data, but people's past experiences provide a huge amount of data for them to draw inferences from. This data is already gathered and interp
    • I think you need to factor in that both the thinking itself and the perception of the outcome of an event are filtered through a given individual's biases and prejudices. Mind you, I'm not discussing simply the traditional use of the word "prejudice," but also (as easy examples) anyone who's a conspiracy theory adherent, or a dogmatic democrat/republican, and so forth. As a previous /. story (last week?) pointed out, dogmatic political adherents actually stop thinking about, and inste
  • Well, how many more "studies" do we need to come to the conclusion that the brain is a database engine that applies statistical rules to the queries it processes? All the brain actually does is pattern matching on the input; it then produces an output, and the whole experience is fed back again to the brain, etc.

    Scientists have known this for years! I once saw on the show "Incredible But True" an artificial mini-brain based on a neural network that could optically recognize people.

    Of course there is a reason machin
  • For more on this topic, check out The Wisdom of Crowds [wikipedia.org] by James Surowiecki [randomhouse.com].

    Surowiecki gives many examples of how aggregated knowledge of a lot of fools usually beats the experts. The research cited in TFA begins to explain the mechanism by which that works.
  • by radtea ( 464814 ) on Friday February 03, 2006 @12:03PM (#14635226)

    One of the fundamental modern Bayesian papers is Jaynes' "How Does the Brain Do Plausible Reasoning?", which can be found on the web along with lots of other interesting things. [wustl.edu] Jaynes' conclusion is that we must be Bayesians under the skull. It's a compelling paper, even now.

    These experimental results are exactly what Jaynes' theory predicts, which is a very nice confirmation of his work. But they are not the "discovery" of anything--they are empirical confirmation of something we already knew. When light-bending by gravity was measured, it was not a discovery; it was the confirmation of a theoretical prediction. This is the same.

  • by Stephen Samuel ( 106962 ) <samuel@nOSPAm.bcgreen.com> on Friday February 03, 2006 @12:19PM (#14635365) Homepage Journal
    That would explain why humans are so wired up to seek explanations of how things work. The explanations feed our Bayesian rule pool and .....

    Ugh! There I go again.

    • This ties into some thoughts I have had as to why our short-term memory decreases with age. Short-term memory is used to remember patterns and data required to learn and make rules about one's environment, but it is no longer needed once we get to the auto-pilot of old age.
  • To prove the point, they actually did such a reversal in the case of telephone-queue waiting times. Traditionally, these have been assumed to follow a Poisson distribution, but some recent research suggests they actually follow a power law.
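
    That distinction matters for the kind of prediction TFA describes. A quick toy comparison (my own numbers, just sketching the two assumptions): under an exponential, Poisson-process model the expected extra wait stays flat no matter how long you've already been on hold, while under a power law it keeps growing.

      import numpy as np

      rng = np.random.default_rng(2)
      exp_waits = rng.exponential(5.0, 1_000_000)                   # memoryless (Poisson-process) waits
      pareto_waits = (1.0 - rng.random(1_000_000)) ** (-1.0 / 1.5)  # power-law (Pareto, alpha=1.5) waits

      for name, waits in [("exponential", exp_waits), ("power law", pareto_waits)]:
          for held in (1, 5, 20):
              extra = waits[waits > held] - held
              print(f"{name:12s} held {held:2d} min -> roughly {extra.mean():6.1f} more to go")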

    Has there ever been clearer proof that statisticians are completely and utterly inept? How much data must there be on call waiting times? How many billions of samples from waiting time distributions must there be? The systems are computerised and the data is trivial to
