Math Medicine The Almighty Buck

Psychologists Don't Know Math 566

stupefaction writes "The New York Times reports that an economist has exposed a mathematical fallacy at the heart of the experimental backing for the psychological theory of cognitive dissonance. The mistake is the same one that mathematicians both amateur and professional have made over the Monty Hall problem. From the article: "Like Monty Hall's choice of which door to open to reveal a goat, the monkey's choice of red over blue discloses information that changes the odds." The reporter John Tierney invites readers to comment on the goats-and-car paradox as well as on three other probabilistic brain-teasers."
  • Seems to make sense (Score:5, Interesting)

    by 26199 ( 577806 ) * on Thursday April 10, 2008 @05:41PM (#23030026) Homepage

    The psychologists were claiming that if you choose X over Y then you are more likely to choose Z over Y because your *choice* causes bias against Y. (This fits the observed data).

    The new suggestion is that if you choose X over Y then you are more likely to choose Z over Y because the choice indicates prior bias against Y. The important part being that this holds even if the bias against Y is so small that it is hard to detect. The only thing required is that there is a fixed "preferred order" of the three.

    At least, that's what I understand from the article. Given the field, I also understand that I am most probably wrong :)

  • by ZombieRoboNinja ( 905329 ) on Thursday April 10, 2008 @05:42PM (#23030032)
    Marilyn vos Savant explained the problem in Parade magazine, and a whole bunch of math professors wrote in to tell her that she was wrong... turns out it's kind of a bad idea to play "gotcha" with someone who has an IQ of 228.
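    For anyone still unconvinced about Monty Hall itself, the switch advantage is easy to check empirically. A minimal Monte Carlo sketch in Python (door numbering and trial count are arbitrary choices):

```python
import random

def monty_trial(switch: bool) -> bool:
    """One round: prize behind a random door; player picks door 0;
    Monty opens a random goat door; player optionally switches."""
    prize = random.randrange(3)
    pick = 0
    # Monty opens a door that is neither the player's pick nor the prize
    opened = random.choice([d for d in (1, 2) if d != prize])
    if switch:
        pick = next(d for d in (0, 1, 2) if d not in (pick, opened))
    return pick == prize

trials = 100_000
stay = sum(monty_trial(False) for _ in range(trials)) / trials
swap = sum(monty_trial(True) for _ in range(trials)) / trials
print(f"stay ~ {stay:.3f}, switch ~ {swap:.3f}")  # ~0.333 vs ~0.667
```

    Staying wins only when the initial pick was right (1/3 of the time); switching wins the other 2/3, which is exactly what vos Savant said.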
  • Indeed (Score:5, Interesting)

    by sustik ( 90111 ) on Thursday April 10, 2008 @06:03PM (#23030204)
    This reminds me of the story my high school teacher told me:

    Some researchers involved in psychology (social behaviour etc.) came to high schools and drew up the friendship graph of the class. (Maybe school works differently where you live; we had a class of 30-40 students attending exactly the same lectures.)

    They assumed friendship to be mutual (if not, then it was not considered friendship). One clever cookie made the observation that almost always there is a group of 6 students who are all friends with each other (a clique), or alternatively a group of 4 students who do not like each other.

    There were excited discussions among the researchers about what social forces cause one of these situations to always occur.

    They were somewhat disillusioned when our math teacher explained Ramsey's theorem to them. Since R(6, 4) is between 35 and 41, one can indeed expect either a friendship or hateship clique to appear with quite high probability... (This does not mean that properties of the friendship graph are not worth examining, but one needs to know the math to do it properly.)
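    Ramsey numbers as large as R(6, 4) are far beyond brute force, but the smallest nontrivial case, R(3, 3) = 6 (any party of six contains three mutual friends or three mutual strangers), can be verified exhaustively. A quick Python sketch:

```python
from itertools import combinations, product

# Exhaustively verify R(3, 3) = 6: every red/blue coloring of the 15 edges
# of K6 ("friends" vs "strangers") contains a monochromatic triangle.
edges = list(combinations(range(6), 2))        # 15 edges
triangles = list(combinations(range(6), 3))    # 20 possible triangles

def has_mono_triangle(coloring) -> bool:
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in triangles)

# 2^15 = 32768 colorings -- small enough to check them all
assert all(has_mono_triangle(c) for c in product((0, 1), repeat=15))
print("Every 2-coloring of K6 has a monochromatic triangle.")
```

    The same check on K5 finds colorings with no monochromatic triangle, which is why the bound is exactly six.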
  • by yuna49 ( 905461 ) on Thursday April 10, 2008 @06:11PM (#23030294)
    Indeed. There's considerable evidence in favor of reductions in cognitive dissonance as a motivating psychological force from other types of studies and other disciplines. For instance, in my field of political science, the evidence is pretty overwhelming that citizens systematically misperceive candidates' positions to make them more similar to the citizens' own preferences. Voters often engage in "projection," believing that candidates they prefer hold positions like the voters' own, even when those aren't the positions the candidates actually hold. The opposite process also occurs, where voters believe that candidates they dislike hold positions those voters dislike regardless of the candidates' true preferences. My own dissertation research on voters for the British Liberal Party in the 1960's and 1970's also confirmed these hypotheses.
  • by DynaSoar ( 714234 ) on Thursday April 10, 2008 @06:25PM (#23030408) Journal
    TFA has been adequately refuted, so I'll forego more on that. And despite the inflammatory nature of the title and claims here, it is unfortunately too correct too often.

    I've been told by "superiors" to perform certain analyses because "everyone does", and they gave me references which supposedly showed these were proper. When I looked these up, the authors not only made no claims supporting their necessity, but both stated that the researcher should know enough about what they're doing to know what analyses to perform. I took my instructions to the statistics consultant for our department, and without my showing him the references he made the same claims as both authors, contradicting the rationale given by those who gave me the instructions. I've seen many cases of psychologists performing statistical analyses based on their knowledge of how to use SPSS et al., rather than any fundamental grasp of the maths required by the design. Perhaps the most egregious error is their faith in fMRI analyses via statistical probability mapping, when the correction factor required by the 10^4 to 10^5 simultaneous T-tests means that any one result significant at the traditional collective p < .05 level must have an individual p value in the 10^-6 to 10^-9 range. That's a hell of a requirement for a single test, and very unlikely to actually exist. "Figure the odds" applies, and they don't seem to grasp that they don't grasp it.
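    The multiple-comparisons arithmetic here is easy to reproduce. Under a simple Bonferroni correction (one common choice, not necessarily the one any given fMRI package uses), the per-voxel thresholds for 10^4 to 10^5 tests come out around 5e-6 to 5e-7:

```python
# With n independent tests each run at level alpha, the chance of at least
# one false positive is 1 - (1 - alpha)^n.  A Bonferroni correction divides
# alpha by n to hold the family-wise error rate at 0.05.
alpha = 0.05
for n in (10**4, 10**5):
    family_wise = 1 - (1 - alpha) ** n   # essentially 1 for n this large
    corrected = alpha / n                # per-voxel significance threshold
    print(f"n={n}: uncorrected FWER ~ {family_wise:.4f}, "
          f"Bonferroni per-test p < {corrected:.1e}")
```

    Run uncorrected, tens of thousands of voxel-wise t-tests are all but guaranteed to produce "significant" blobs by chance alone.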

    On the other hand, some of us can apply such analyses as tensor calculus and Gabor transforms to dendritic electrical fields, showing where each of those are correct and where each fail, and can correctly apply nonlinear, N-dimensional statistical testing of time/frequency maps produced by continuous wavelet transform. But of those of us who can do these things, I know of none who learned of them, much less how, within the confines of a psychology department. (Well, except for the Gabor stuff, as used and taught by Karl Pribram, that being the only case I know of).

    "Everything I Needed To Know I Learned At The Santa Fe Institute". No, not everything, but that'd make a hell of a book.
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Thursday April 10, 2008 @06:34PM (#23030492) Homepage Journal
    There are several problems with all of this. The original experiment does not appear to have any control group, it is unclear if the population sampled was genuinely random, the size of group tested seems to have been extremely small for a meaningful statistical study, and (perhaps most important of all), it assumes that mammalian vision is uniform greyscale AND that the candy was monochromatic.

    (That last pair of points are important. Monkeys do not see all colours with equal clarity. Neither do humans, which is why monitors actually have more real-estate set aside for blue than for anything else. Complicating things, colours are usually the product of mixing. They are not "pure". We don't know what the monkeys saw, therefore cannot tell if their decision was influenced by their ability to even see the treats.)

    Personally, I have developed a skepticism of such observational science. Too many possible explanations, yes, but more importantly too little experimentation to eliminate alternatives. If an explanation is put forward and then acted upon, especially in an area like psychology where those being acted upon are likely vulnerable groups, it's important to make sure the explanation is likely to be correct. Likely to be possible isn't good enough.

    What would I suggest? Well, in the 1950s through to the last few years, options have been limited. These days, though, you can take fMRIs, MRIs and CAT scanners into the field. During the Chernobyl accident, it was fairly standard procedure for MRIs on trucks to be used to scan farm animals for contamination. See the brain in action as it makes the choices. See when the choice is made and which neural pathways were involved. Much better than speculating about what's going on. If you want more data, scientists decoded the optic fibre transmissions of cats ten years ago, or thereabouts. We can literally see if that plays a part in the decision.

    You still end up doing statistics, sure, but with far more numbers that have far more meaning behind them and far less room for interpretation.

  • by MrNaz ( 730548 ) * on Thursday April 10, 2008 @07:24PM (#23030928) Homepage
    Psychologists don't know math, nor do they know anything about psychology.

    Personally, I have nothing but disdain for the psychological fraternity; I've never once seen a solution to a non-trivial problem other than "treat depression with Valium". Psychology and psychiatry in my books are just the business of carrying out chemical lobotomies and charging exorbitant fees for it.

    That's my view anyway, and I'm aware that it's a sweeping generalisation that is probably not universally true.
  • by a whoabot ( 706122 ) on Thursday April 10, 2008 @07:31PM (#23030984)
    If anyone wants some interesting stuff to read about irrational choices that humans have near-pathologies in that they constantly make them across the board, read some Tversky and Kahneman, or Robyn Dawes. Here's an example about a Tversky and Kahneman experiment characterised by Dawes in Rational Choice in an Uncertain World:

    "...Tversky and Kahneman offered each subject a bet. They would roll a fair die with four green (G) and two red (R) faces, and the subject made a choice between betting that the sequence RGRRR or the sequence GRGRRR would occur. This bet cost the subject nothing, and if the chosen sequence occurred the subject received $25. ... When subjects were asked to make a hypothetical choice for the $25 payoff, two-thirds chose the second sequence. When Kahneman and Tversky actually rolled the die and offered to pay the $25 in hard cash, two-thirds [the same proportion] chose the second."

    So two-thirds of people will choose the option that is necessarily less likely, and can easily be seen upon simple reflection to be less likely, because RGRRR occurs in GRGRRR, but GRGRRR also has an additional G, which has a 1/3 chance of not showing up there. The reason people choose this way is because they simply don't think straight: They know that G's are more likely than R's in the example, so they just choose the sequence with more G's. This is the compound probability fallacy, as psychologists call it: People decide between options by comparing something like the averages of the probabilities of the individual events in the sequence ([P1 + P2 ... + Pn]/n versus [P'1 + P'2 ... + P'n]/n), instead of comparing the actual probabilities of the sequences themselves ([P1 * P2 ... * Pn] versus [P'1 * P'2 ... * P'n]). This fallacy occurs constantly in people.
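    The two sequence probabilities are easy to compute exactly. (Strictly, the experiment asked whether the sequence appears somewhere in a run of rolls, but since RGRRR is contained in GRGRRR, the comparison comes out the same way.)

```python
from fractions import Fraction

# Fair die with four green and two red faces
p = {"G": Fraction(2, 3), "R": Fraction(1, 3)}

def prob(seq: str) -> Fraction:
    """Probability that a run of len(seq) rolls is exactly this sequence."""
    out = Fraction(1)
    for face in seq:
        out *= p[face]
    return out

a, b = prob("RGRRR"), prob("GRGRRR")
print(a, b, a > b)              # 2/243 vs 4/729 -- the shorter one wins
assert b == a * Fraction(2, 3)  # the extra leading G costs a factor of 2/3
```

    Appending any face at all can only multiply the probability by something less than 1, which is the whole point the subjects missed.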

    The same work I quoted has a nice bit about Bertrand Russell and how he wrote in his journal that his doctor gave him advice exhibiting another sort of irrational-reasoning pathology people have: that of confusing inverse probabilities: "But [my doctor] didn't say what proportion of the total population are insane and drunken respectively, so that his argument is formally worthless." Dawes writes after this account: "As head of a department with a clinical psychology program, one of my goals was to help students learn to think like Russell."

    Anyway, to note, Robyn Dawes, psychologist: probably knows math; then again, he has a "Doctorate in Mathematical Psychology" as it's called.
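    The inverse-probability confusion Russell's doctor committed is just a missing application of Bayes' rule. A sketch with invented base rates (every number below is purely illustrative, exactly the information Russell complained his doctor never supplied):

```python
# The doctor's fallacy: P(insane | drunkard) is not P(drunkard | insane).
# All base rates here are made up for illustration only.
p_insane = 0.01              # hypothetical prevalence of insanity
p_drunk = 0.10               # hypothetical prevalence of drunkenness
p_drunk_given_insane = 0.50  # hypothetical conditional

# Bayes' rule inverts the conditional:
#   P(insane | drunk) = P(drunk | insane) * P(insane) / P(drunk)
p_insane_given_drunk = p_drunk_given_insane * p_insane / p_drunk
print(p_insane_given_drunk)  # 0.05 -- a tenth of the 0.5 it gets confused with
```

    Without the two marginal rates, the inverted probability is simply undetermined, which is what Russell meant by "formally worthless."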
  • by yuna49 ( 905461 ) on Thursday April 10, 2008 @07:56PM (#23031202)
    An excellent question!

    The relationship is obviously bi-directional. Determining the direction of causality is thus a difficult matter, and one that preoccupied folks in my discipline for quite some time. One method is to use an "instrumental variables" approach (see any advanced econometrics text for details), but perhaps a more accessible answer comes from my own research.

    The Liberal Party was often seen as "between" the Conservative and Labour monoliths. I focused my attention on the preferences of voters who switched from one of the major parties to the Liberals between elections. (We have "panel" surveys where the same people are interviewed over time, which helps to eliminate problems of misperceived past voting behavior.) Now it turns out that voters who switched to the Liberals usually saw them as taking positions in opposition to the party from which the switchers came. Sometimes those views were, in fact, contrary to espoused Liberal positions. For instance, on the question of entry into the European Economic Community, the forerunner of today's EU, former Conservative voters who supported entry were more likely to switch to the Liberals, while former Labour voters who opposed entry made the same switch. This pattern recurred across a number of issues. The most parsimonious explanation is that voters who disagreed with their normal party for whatever reasons were more likely to defect to the Liberals, using them as an instrument to express displeasure regardless of the Liberals' true position. (In the case of the EC the Liberals were consistently pro-Market; the other parties tended to waver.) Voting Liberal was "easier" than moving all the way over to the opposing major party. That meant that voters would tend to "project" their own views onto the Liberals rather than being persuaded to support the Liberals because of agreement with that party's positions.

    Most of the traditional literature on American voting behavior focuses on the role of "party identification" as a primary determinant of issue opinions rather than the other way round. Voters often seem not to tote up the various stances of parties and candidates as a method of determining which party to support. Many people have Democratic or Republican partisanships because of family and social factors. People "inherit" partisanships from their parents or adapt to conform to the social roles they adopt in adulthood. These prior partisan dispositions then color their interpretations of events and campaign issues.

    Let me tell you a story about my grandmother. She emigrated from Ireland in the late 19th century and lived outside Boston for the rest of her life. Despite the fact that most Irish Catholics living around Boston voted Democrat in her lifetime, she was a stolid Republican for the entire time I knew her. Her Republicanism wasn't based on support for that party's positions; it originated in the 1928 Presidential election when the Catholic (and "Wet") Al Smith ran as the Democratic candidate. Smith lost that year because anti-Catholic "Drys" in the Southern states defected to the Republicans. My grandmother felt that the Democrats failed to work hard enough for Smith because of his Catholicism, and so she started voting Republican. She was unfazed by the rather substantial evidence that showed that the Democrats in this period supported policy positions much closer to her own views. By the way, after Kennedy was shot in 1963 she claimed she had voted for JFK in the 1960 election, but we all knew she'd voted for Nixon.

  • Re:Hmmm.... (Score:3, Interesting)

    by jorghis ( 1000092 ) on Thursday April 10, 2008 @08:16PM (#23031352)
    I watched 21 and I am pretty sure that they didn't botch the Monty Hall problem. It seemed weird that it would be in a senior level math course at a top notch engineering school, but the way they described it was mathematically correct.
  • Re:Inaccurate? (Score:5, Interesting)

    by jpfed ( 1095443 ) <jerry...federspiel@@@gmail...com> on Thursday April 10, 2008 @08:23PM (#23031428)
    The percentages were a joke. But I do mean in all seriousness to suggest that psychology researchers are often averse to math and tolerate math errors in papers. Psychology is often only quantitative insofar as there are certain numerical rituals associated with null hypothesis significance testing that researchers must use to be accepted by other researchers.

    It's kind of depressing, so I try to make light of it when I can.
  • Re:Hmmm.... (Score:5, Interesting)

    by dbIII ( 701233 ) on Thursday April 10, 2008 @08:29PM (#23031468)
    I find it even funnier that it is an economist saying this. Admittedly some economists are really mathematicians who have wandered in to try to bring some professionalism to a bunch of fortune tellers, but in general economists get a bad reputation every time the field attempts to assert itself as a science. Years ago when I had the misfortune to do an engineering economics subject I was astounded to find that the university-level economics text we were using had one version of the compound interest formula solved for every variable - it was assumed that economics students could not do introductory algebra.
  • by CastrTroy ( 595695 ) on Thursday April 10, 2008 @09:36PM (#23031836)
    Makes me think of probably the most famous psychologist, Dr. Phil. He got hugely popular pretty much just by telling people what they needed to hear. Nothing he says is profound or thought provoking. He basically just tells you the way it is. Most people aren't really willing to do that, so I applaud him for doing it. However, I don't think that he's really doing much that anybody else couldn't do.
  • And true... (Score:4, Interesting)

    by mutube ( 981006 ) on Thursday April 10, 2008 @09:59PM (#23031966) Homepage
    My experience at the University of Edinburgh ("a good uni") was that psychologists really don't know math. I spent ~6 months being subjected to lectures on statistical theory about chi-squared and normal distribution that frankly didn't make any sense: "Why do we add +1 here?" "Because it works"

    Seriously.

    At the end of the course we were given a summary lecture that (shock horror, ladies fainting at the back) gave us a FORMULA that explained the whole point of what we'd been taught. I wasn't the only person who, at this point, suddenly realised wtf they had been blabbering on for the past 2 months... and more to the point, how much crap they'd been talking. Psychologists were taking formulae based on reason and using them to support conjecture. That's not inflammatory, it's fact.
  • Re:Inaccurate? (Score:3, Interesting)

    by shermozle ( 126249 ) on Thursday April 10, 2008 @10:01PM (#23031984) Homepage
    I had a somewhat heated discussion with someone who called herself a psychologist but hadn't studied statistics. To my thinking, statistics is central to psychology being called a science. Without statistics you're trading in conjecture and anecdote. When I said psychology without stats isn't science, it didn't go down too well.
  • by PRMan ( 959735 ) on Thursday April 10, 2008 @10:09PM (#23032024)

    There is another problem.

    Not to brag, but I have very acute taste buds. So much so, that when I was in high school, I would put M&Ms in my mouth with my eyes closed and be able to tell which color it was with nearly 100% accuracy.

    The reason I could do this is that the dyes actually taste different.

    • Dark Brown - Lots of red dye
    • Tan - Anyone could taste these
    • Orange - Red dye and yellow dye
    • Yellow - Yellow dye only
    • Green - Blue dye and yellow dye

    Now that they have added pure red and pure blue (not to mention make-your-own-color), I can no longer perform this amazing feat. Still, the point is that most animals have a more advanced sense of smell and taste than humans and it is quite possible that they simply prefer the smell or taste of one dye over another. Or maybe they hate one color because it smells or tastes bad to them.

  • Re:Hmmm.... (Score:3, Interesting)

    by Brandybuck ( 704397 ) on Thursday April 10, 2008 @11:14PM (#23032408) Homepage Journal

    but in general economists have a bad reputation every time there is an attempt to assert itself as a science.


    True. But not all schools of economics try to make themselves a science. It's a difference in methodology. The Austrian School is a notable example, because they specifically reject scientific positivism. The Neoclassicists are obsessed with deriving mathematical formulas, and the Monetarists are obsessed with scientific predictability.

    I sympathize with the Austrians, but realize that the Neoclassicists have a point. It's like population biology. You can derive population curves all you want, but you will still not be able to exactly predict the population of caribou. There are simply far too many variables to account for, plus a whole truckload of randomness. In addition, as Hayek noted, it is impossible to know many of these variables beforehand.

    But some of the formulas are very useful. The demand curve, for example. As long as you know that it's a simplified representation of an ever-changing aggregate of a single price in a continuum of prices, then it is useful. But once you start treating it as an objective reality, then you start making horrible economic mistakes (Marx and Keynes spring to mind).

    Austrians do tend to be, more than other economists, political ideologues. But their rejection of mathematics in favor of individual human action has led to two of the greatest insights in economics: marginal utility and time preference.
  • Re:Hmmm.... (Score:5, Interesting)

    by wildsurf ( 535389 ) on Thursday April 10, 2008 @11:25PM (#23032490) Homepage
    Here's an even better problem:

    Suppose Monty Hall gives you a choice of two envelopes. Each envelope contains a check, and one of them is written for TWICE the amount of the other. So you pick an envelope.

    Now, Monty gives you the chance to switch envelopes. (Assume Monty always gives you the chance to switch.) Logically, since your envelope contains X, the other envelope contains either 0.5X or 2X, each with 50% probability... so the expected value of switching envelopes is 0.5 * (0.5X + 2X), or 1.25X. So, you should switch.

    But here's the tricky part: Monty now gives you the chance to switch back! Since your new envelope contains Y, then by the same logic as above, the expected value of switching back is 1.25Y... So you should switch back. Right?

    Clearly, something is wrong with this chain of thinking. Can you figure out what it is?
  • base 9.1 (Score:2, Interesting)

    by jschen ( 1249578 ) on Thursday April 10, 2008 @11:37PM (#23032576)

    Evidently, psychologists prefer base 9.1. It's not that they are bad at math, but that the world at large doesn't understand base 9.1. As for why psychologists prefer base 9.1, I haven't the faintest clue. I'm not a psychologist. I'm one of those bad at math organic chemists.

    99 (base 9.1) + 10 (base 9.1) = [(9 * 9.1^1) + (9 * 9.1^0)] + [1 * 9.1^1] (base 10) = 100 (base 10)
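    Taking the joke literally, nothing stops you from evaluating positional notation with a non-integer base; a quick Python sketch:

```python
# Evaluate a digit string "in base b" literally, allowing non-integer b.
def from_base(digits, b):
    """Interpret digits (most significant first) positionally in base b."""
    return sum(d * b ** i for i, d in enumerate(reversed(digits)))

# 99 (base 9.1) + 10 (base 9.1) = (9*9.1 + 9) + (1*9.1) = 90.9 + 9.1
total = from_base([9, 9], 9.1) + from_base([1, 0], 9.1)
print(round(total, 9))  # ~100.0, matching the joke
```

    Of course nothing guarantees unique representations in a base like 9.1; the arithmetic above just expands the digits the way the parent's identity does.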

  • Re:Hmmm.... (Score:3, Interesting)

    by wildsurf ( 535389 ) on Friday April 11, 2008 @12:54AM (#23032926) Homepage
    Yes, but you're missing the REAL puzzle, which is that even the FIRST switch (with the calculated 1.25x expected return) doesn't gain any information. By symmetry, the expected return on the initial switch MUST be exactly 1.0x, yet the simple math says 1.25x. Where does the math/logic go wrong?
  • Re:Hmmm.... (Score:2, Interesting)

    by Refenestrator ( 1060918 ) on Friday April 11, 2008 @01:32AM (#23033082)

    The amount of money in the envelope you choose and the probability of winning by switching are not independent. As it turns out, although the percentages gained or lost are different, the actual dollar amount is the same on average. This is true no matter what probability distribution Monty chooses.

    For example, suppose he uniformly chooses one of $100, $200 and $400 for one envelope, and puts double that amount in another envelope. Suppose you choose an envelope and then switch. The envelope you initially choose has a 1/6 chance of being $100 (gain $100 by switching), a 1/3 chance of $200 (gain $50 on average by switching), 1/3 chance of $400 (gain $100 on average by switching), and a 1/6 chance of $800 (lose $400 by switching). Overall, you gain (1/6 * $100 + 1/3 * $50 + 1/3 * $100 - 1/6 * $400) = $0 on average by switching.
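    A quick simulation of this exact setup (the $100/$200/$400 distribution is the parent's example) confirms that the average gain from always switching is zero:

```python
import random

# Low envelope holds $100, $200, or $400 (uniformly); the other holds double.
# Pick one at random, then always switch; record the gain from switching.
def trial() -> int:
    low = random.choice([100, 200, 400])
    envelopes = [low, 2 * low]
    random.shuffle(envelopes)
    kept, other = envelopes
    return other - kept   # positive if switching helped, negative if not

n = 200_000
avg_gain = sum(trial() for _ in range(n)) / n
print(f"average gain from switching ~ ${avg_gain:.2f}")  # hovers near $0
```

    The "1.25X" argument fails because X is not independent of whether you hold the larger envelope; conditioning on the actual dollar amount, as above, makes the gains and losses cancel exactly.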

  • Re:Inaccurate? (Score:3, Interesting)

    by 19061969 ( 939279 ) on Friday April 11, 2008 @01:39AM (#23033122)

    I haven't found this in my experience, but then it might depend upon what kind of circles you move in. If it's the "lower end" of the scale in terms of ability, then yes I would agree, but the same probably goes for any subject. Otherwise, I would disagree. I majored in psych and did a PhD in the topic. My external examiner for my viva was a Cambridge statistician and ex-math olympian (Alan Dix if you're curious) who focused his interests on human-computer interaction, I think because of the challenges that field gave him which math or stats didn't provide.

    I have found, though, that when it comes to "certain numerical rituals associated with null hypothesis significance testing", there are far more problems in the hard sciences: biologists turning data into z-scores for comparison and then performing inferential analyses on those normalized data is one example.

    Perhaps it might have been more accurate to say that "poor / less able psychology researchers are often averse to math and tolerate math errors in papers"

  • by rohan972 ( 880586 ) on Friday April 11, 2008 @03:18AM (#23033552)
    Neurotics build castles in the air,
    Psychotics live in them,
    Psychologists collect the rent.

  • by why-is-it ( 318134 ) on Friday April 11, 2008 @08:53AM (#23035052) Homepage Journal

    I had a somewhat heated discussion with someone who called herself a psychologist but hadn't studied statistics. To my thinking, statistics is central to psychology being called a science. Without statistics you're trading in conjecture and anecdote. When I said psychology without stats isn't science, it didn't go down too well.

    When I took my degree (double major: CS and Psych) all psychology undergrads were required to take courses in statistics and scientific methodology. I find it hard to believe that someone with a degree in psychology from an accredited university never studied any stats.

    That said, there are many sub-disciplines in psychology. I studied cognitive psychology, and there was a fair bit of maths involved. Someone who wanted to be a clinical psychologist would not need to be devoted to statistics, just as I was not devoted to learning how to help clients via talk therapy and the medical model.

    I went to a conference where the cognitive psychologists and clinical psychologists reviewed the same case study and made suggestions on how to help a client who was an alcoholic and suffered from bouts of severe depression. The clinicians believed that they needed to identify and resolve the root cause of the depression in order to end the alcohol dependency, whereas the cognitives believed that the client needed to stop drinking first, because alcohol is a depressant.

    At the time, I could not help but recall the story of the Petit Prince, and the episode in which he met the drunkard.
