The Fallacy of Hard Tests
Al Feldzamen writes in with a blog post on the fallacious math behind many specialist examinations. "'The test was very hard,' the medical specialist said. 'Only 35 percent passed.' 'How did they grade it?' I asked. 'Multiple choice,' he said. 'They count the number right.' As a former mathematician, I immediately knew the test results were meaningless. It was typical of the very hard test, like bar exams or medical license exams, where very often the well-qualified and knowledgeable fail the exam. But that's because the exam itself is a fraud."
Worthless (Score:4, Insightful)
Re:Worthless (Score:5, Insightful)
1 for right answer.
-1/4 for wrong answer.
0 for no answer.
Done.
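For what it's worth, the arithmetic behind that scheme is easy to check. A quick sketch in Python (assuming five options per question, as on the old SAT, which is what makes -1/4 the neutral penalty; the general neutral penalty for k options is 1/(k-1)):

```python
from fractions import Fraction

def expected_guess_score(k, penalty):
    """Expected score of one blind guess on a k-option question,
    scored +1 if right and -penalty if wrong."""
    p_right = Fraction(1, k)
    return p_right * 1 + (1 - p_right) * (-penalty)

print(expected_guess_score(5, Fraction(1, 4)))  # 0    -- guessing is neutral
print(expected_guess_score(4, Fraction(1, 4)))  # 1/16 -- still pays a bit
print(expected_guess_score(4, Fraction(1, 3)))  # 0    -- 1/(k-1) is neutral
```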
Re:Worthless (Score:5, Funny)
Well, as a (happily) married man, and considering the six-odd-billion global population, I'd say it isn't quite impossible to score with a girl. You just have to learn that there is no correct answer, especially on multiple choice. ;)
Re: (Score:3, Insightful)
Women will date, dream of, and marry men. They do none of those things with boys.
Re: (Score:3, Insightful)
If you fail people for being less than perfect, they won't LEARN anything. This is how you teach people HOW to learn.
Re:Worthless (Score:4, Funny)
So it's rather like darts? You start with a constant number of points, say 300, and then with each answer you give points are subtracted from your total score. The game ends when you inevitably reach 0.
Re:Worthless (Score:5, Insightful)
-1/4 for wrong answer.
0 for no answer.
Re: (Score:2, Insightful)
Wouldn't
+1 for right (patient lives),
0 for no answer (she knows she doesn't know and maybe consults a colleague),
-1e38 for wrong (patient dies)
be more appropriate weightings?
Many medical specialists could use a tuneup on the difference between confidence and arrogance...
--
phunctor
Re:Worthless (Score:5, Insightful)
No. Everyone makes mistakes sometimes; a doctor who concentrates all his efforts on avoiding them will end up sending all his patients to see one expert or another. Not only does this overload the experts (who are supposed to see only a tiny subset of the patients, after all), but it also means it takes longer to get diagnosed. And in the long run, it means that only risk-takers will become doctors in the first place, which is not good for anyone.
The worst case is if the experts also start doing this: trying to offload the patient, and therefore the risk, to someone else as soon as possible. That will lead to the people with actual serious illnesses dying, since no one will actually diagnose them in the hurry to send them to someone else before they have a chance to die on them.
So no, your weightings are not appropriate. You can't assign virtually infinite negative weight to failure and expect anyone to try - at least anyone you want performing medicine.
Re: (Score:3, Interesting)
Have you ever seen that episode of Scrubs where they take that wealthy hospital donor to every department to try to figure out what is wrong with them, but no one knows?
Re:Worthless (Score:5, Insightful)
When I took my board exams I studied old exams for weeks. The information in the exams wasn't really stuff directly from the curriculum; we covered the material but the focus was slightly different. In any case large portions of the information required to be regurgitated for the exam could be classified as "background" - stuff you need to be aware of but doesn't directly affect you in your daily work.
The exam WAS multiple choice and I credit test-taking skills as much as my education for passing on the first try. Logic and the process of elimination can increase your odds to about 50/50 in most cases.
Re:Worthless (Score:5, Insightful)
People will also want the doctor to do "something" even if nothing is wrong, because they don't want to feel dumb for going in when nothing was wrong. They want confirmation that something was actually wrong so they don't feel foolish. Add to that the fact that most people have to pay a co-pay for a doctor's office visit and want to "get their money's worth."
So sometimes picking "no action" can be very hard to do.
Re:Worthless (Score:5, Interesting)
Another thing: most people want to feel like the doctor at least LOOKED for something. One doctor I went to recently made me wait 40 minutes to see him, then looked at me for like 30 seconds and prescribed something. Yes, that makes sense if you know what it is straight off and know what to do about it, but you might just wanna look for other things that I might have.
Re:Worthless (Score:4, Insightful)
Here in Australia we have paid sick leave for permanent employees, but companies typically require that you present a doctor's certificate to prove you were sick. So even when you know that you only have a head cold and should be home in bed staying warm and keeping your fluids up, you have to track down a doctor and wait in their office for them to write on a bit of paper that you really are too sick to go to work and that you should be home in bed...
On the flip side, my husband was misdiagnosed by a number of doctors for over 15 years; he had severe sleep apnea to the point where he was having fits and seizures, memory loss, and paranoia. It looks like I am finally getting a diagnosis, after 20 years of intrusive tests, for why I have near-constant nausea, indigestion, and vomiting.
If the doctors didn't have to sausage-factory all the people who *know* what's wrong and what they have to do, they would probably have more time to spend with people who actually need help.
Re: (Score:3, Insightful)
I don't expect my doctor to actually *do* anything, curatively speaking; I just expect him to be on my side when I have to tell my job I'm out for a few days getting over a cold.
Re:Worthless (Score:4, Insightful)
I can only suppose that there are times when doing nothing beats doing something. But you seem to be saying that, because such situations do occur, it would be healthy to punish medical errors so severely that most doctors' first instinct is to do nothing, run another test, etc. Even though there may be times when that state of affairs would help certain patients, on balance I think it would make medical care worse.
Re:Worthless (Score:4, Informative)
I should immediately point out that IANAD but I hope to play one in front of an admissions committee soon, so I may be talking out of my rear. However, the above seems to be the sentiment of most doctors I've spoken to. I just got done with the MCAT recently, so this topic is a bit close to my heart! An interesting site with a good take on the situation is here [pandabearmd.com].
Re:Worthless (Score:4, Funny)
I don't know how freely the doctors admit mistakes, but my friend tells me about his colleagues' mistakes every once in a while, so they aren't exactly secrets.
Re:Worthless (Score:5, Insightful)
Just for an example, say we were doing a geography test on the states of the United States and their capitals. I know 1/2 of them, and another guy knows 1/4 of them. Each question is a simple four-option multiple-choice question: State X, which is its capital? A, B, C, or D. The thing is, even for the ones I don't know, I can eliminate 1/2 of the potential answers (on average), as I know them, while the other guy, on average, can only eliminate 1/4 of them. So I would get 50% from knowing the answers, plus about 1/2 of the remainder on guesses. The other guy would get 25% from knowing them, and only 1/3 of the rest on guesses. And that's just the basic mathematical flaw in his reasoning.
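A quick check of that arithmetic (a sketch; the elimination rates are assumed to be exactly as stated above):

```python
def expected_percent(n_questions, frac_known, options_left):
    """Expected percent score: known answers are certain; the rest are
    guessed uniformly among the options that couldn't be eliminated."""
    known = n_questions * frac_known
    guessed = (n_questions - known) / options_left
    return 100 * (known + guessed) / n_questions

# Knows 1/2, eliminates two of four options on the rest (guess among 2):
print(expected_percent(50, 0.50, 2))  # 75.0
# Knows 1/4, eliminates one of four options on the rest (guess among 3):
print(expected_percent(50, 0.25, 3))  # 50.0
```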
Re:Worthless (Score:5, Informative)
His argument is that the harder the test, the less relevant knowledge of the actual answers is to determining your relative score. As a result, on a very hard test, two test-takers with vastly different levels of knowledge of the correct answers do not, on average, end up with scores that reflect that difference.
The "educated guess" does not contradict that argument. Again, the harder the test, the smaller the difference between the number of potentially correct answers you can eliminate and the number he can eliminate. With a sufficiently hard test, "educated guessing" makes no difference whatsoever.
So basically, with a multiple-choice, count-only-the-correct-answers test, increasing the difficulty is not an effective means of improving the test's ability to filter out candidates with lesser knowledge of the subject matter. Increasing the difficulty only increases the degree to which randomness impacts the results.
This is true, well known, and not very controversial. However, you would of course need to examine the specific tests in question to determine whether they are effective; they may have other features that mitigate this effect. Also, his analysis is purely mathematical. It doesn't take into account the likelihood that a challenging test creates social pressure that influences people to self-filter. It could be argued that most of these tests are not testing the taker's knowledge of the material so much as the taker's ability to study and to react to the pressure that the tests provide.
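That collapse is easy to see numerically. A minimal Monte Carlo sketch (all parameters assumed for illustration: 100 four-option questions, takers with 60% and 30% base knowledge, and a difficulty factor that scales down what either of them knows):

```python
import random

def avg_score(n_questions, p_know, trials=2000):
    """Average score when each question is known with probability p_know
    and blindly guessed (1-in-4 chance) otherwise."""
    total = 0
    for _ in range(trials):
        for _ in range(n_questions):
            if random.random() < p_know or random.random() < 0.25:
                total += 1
    return total / trials

# As difficulty rises (d scales down what either taker knows), the gap
# between a strong taker (60% base) and a weak one (30% base) collapses
# toward pure guessing noise:
for d in (1.0, 0.5, 0.2, 0.05):
    strong = avg_score(100, 0.60 * d)
    weak = avg_score(100, 0.30 * d)
    print(f"difficulty x{d}: strong ~{strong:.1f}, weak ~{weak:.1f}, "
          f"gap ~{strong - weak:.1f}")
```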
Re: (Score:3, Insightful)
Mmm. I'm not sure that would be a desirable feature; that'd bias the test situation in favour of arrogant idiots. For some professions confidence may be more desirable than knowledge (marketing?), but for a doctor I think one would prefer someone who is reluctantly right to someone who is confidently wrong.
Re:Worthless (Score:5, Interesting)
Actually, the problem here is that his example is a total worst-case scenario and doesn't tell us what the 'pass' level is. The tests mentioned are not relative-knowledge tests; they are pass/fail tests. In other words, I don't care how much Joe knows compared to Bill; all I care about is whether Joe demonstrates the necessary level of knowledge to pass. In that case, assuming the test maker has the slightest clue, the pass mark in the example would likely be at 75%+ (you need about 1/2 right legitimately, and 1/2 of the remainder right on guesses or better), meaning its difficulty is fine: it has correctly blocked both people, since they didn't show the necessary level of knowledge.
He might have a point IF he qualified this to scaled-result tests (i.e., the top X people pass regardless of their scores; only relative position counts), but he didn't. And even in that case he'd have to analyze the distribution of all test-takers, not just two. Once again, his math works, but it doesn't support his argument.
Re:Worthless (Score:5, Interesting)
No, just no. The point of the tests is to determine who is over a particular threshold of knowledge and who isn't. The method being called a fraud fails to do that accurately. Since randomness has a proven substantial impact on those tests, that threshold becomes blurred. To make matters worse, the harder the test, the MORE randomness affects the score. As a result, the test results are meaningless at any scale. His examples were simplified to illustrate the essential math behind them; he does not need more than two people to compare, since the math is equally applicable no matter how many are tested. He also does not need to set a scale, because the math is equally applicable to any bar you might set.
The point of the article was to illustrate that these hard tests are meant to establish a minimum required level of knowledge; however, because only correct answers are counted, randomness incurs a great penalty to the accuracy of the attempted measurement of knowledge. He is suggesting, and rightly so, that a test in which guessing has an expected net effect of zero would much more accurately measure the knowledge of the participants.
What this really comes down to is accuracy and precision. We assume that a test score can be equated to a measurement of knowledge, and for your benefit (it's completely irrelevant) we'll assume that a passing score is 60%.
The article's math indeed illustrates this point very clearly. The unspoken point is that in tests such as these, designed to set standards to be met, it is a fraud to use a test with low accuracy at measuring actual knowledge. The precision gained by penalizing guessing allows the test to be much fairer in its administration.
http://en.wikipedia.org/wiki/Accuracy [wikipedia.org]
Re: (Score:3, Informative)
Statistics show that this would be very unlikely for 5 tests with questions pulled from a common pool.
The odds of WAGing (wild-ass guessing) a multiple-choice question are 25% per question. When distributed over a hundred questions, it's very unlikely that random guessing will score above 30% or below 20%, and that's for guessing the entire test.
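The exact figures come straight from the binomial distribution (sketch; assumes 100 independent four-option questions, all blindly guessed):

```python
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k right answers in n guessed questions."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 100, 0.25  # 100 questions, 1-in-4 chance per blind guess
p_20_to_30 = sum(binom_pmf(n, k, p) for k in range(20, 31))
p_above_30 = sum(binom_pmf(n, k, p) for k in range(31, n + 1))
print(f"P(20 <= score <= 30) = {p_20_to_30:.3f}")  # roughly 0.8
print(f"P(score > 30)        = {p_above_30:.3f}")  # roughly 0.1
```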
Re: (Score:3, Interesting)
A normal person may score "wildly differently" on a 300-question exam from one attempt to the next, but the variance will be based more on differences in preparation, physical and mental comfort, stress, and how much sleep he got the night before.
The art
Re: (Score:3, Insightful)
True, but the article's math does nothing to support that case as the difficulty of the test does NOTHING to hurt or help this. The test format in that case is the problem, and his example again doesn't help because this test wouldn't be used for those guys who get 55% on the test, it would be looking for those in the 75%+ range (pass could even be set at 90% maybe even 100%, he never sets it) and this on a tri
Measurement and item response theory (Score:4, Interesting)
"A multiple choice question might only have one right answer and its point value is the exact same as that of something much easier (especially, when on the harder on, the wrong choice might even be 'righter' than the correct choice on the easy question) -- but thats why there is an entire field of psychometrics out there to ensure that these sorts of exams are doing what they say they are."
Seems to me like that is more an example of psychometricians being forced to accept a less than valid form of test scoring. The proper way to do things has to incorporate Rasch's principle that the likelihood that a given test-taker will give the correct answer (on a question that is valid for the quantity it being used to measure) depends on the product of the easiness of the question and the ability of the test-taker. For that matter, lumped scores (pass-fail, ranking, or absolute) on professional proficiency exams - which by their nature must test disparate quantities with various non-linear contributions to professional qualification - cannot properly be interpreted as measurements of anything without a well-thought out unified criterion that describes the contributions and dependencies of the various quantities measured by the questions to the overall measurement of professional competence.
The problem there (Score:5, Interesting)
Let's say it's 20 questions, 4 possible answers each. He'll know 5 of those and has to guess 15. There's even a 1-in-a-billion chance that he'll get all 20 right (4^15 = 2^30 = approx. 1 billion). If you gave that test in China, by now you'd have at least one guy who pulled exactly that stunt.
There's also the issue of how well those questions fit your and his domain of knowledge. Let's say you can't possibly test _all_ the questions, because that's usually the case. You can do it for state capitals, but you can't possibly cover a whole domain like medicine or law.
There are 50 states, you know 25, the other guy knows, say 12 (rounded down), so it's not impossible that the 20 questions are all from the 25 you don't know, but include all 12 that guy knows. In fact, assuming a very very very large domain (much larger than 50, anyway), there's about 1 in a million chance that all 20 questions will be from the 50% you don't know.
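Both figures check out (a two-line verification):

```python
# Chance of blind-guessing all 15 unknown four-option questions right:
print(4 ** 15)    # 1073741824 -- about 1 in a billion
# Chance that all 20 questions land in the half of a huge domain you
# don't know:
print(0.5 ** 20)  # ~9.5e-07 -- about 1 in a million
```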
Now, when testing states, that doesn't carry a deeper moral, because (at least theoretically) all states are equally important. In other domains, like medicine, law, even CS, that's not the case: stuff ranges from vital basics to pure trivia that no one gives a damn about. (Or not within the scope of the problem at hand: e.g., if I'm hiring a Java programmer, asking questions about COBOL would be just trivia.)
And a lot of "hard tests" are "hard" just by including inordinate amounts of stuff that's unimportant trivia. E.g., if I'm giving a test for a unix admin job, I can make it arbitrarily "hard" by including such trivia as "in which directory is Mozilla installed under SuSE Linux?" It's stuff that won't actually affect your ability to admin a unix box in any form or shape. The fact that SuSE does install some programs in different directories is just trivia.
(And if that sounds like a convoluted imaginary example, let's just say that some "hard" certification exams ask exactly that: where is program X installed in distribution Y? And at least one version of Sun's Java certification asked such idiotically stupid trivia as which package class X is in, or whether class Y is final. Who cares about that trivia? It takes less than half a second to get any IDE to fill in the package for you. E.g., in Eclipse it just takes a CTRL+SPACE.)
And in view of that previous point, including trivia in an exam just to make it "hard" is outright counter-productive. There is a non-null chance that you'll pass someone who memorized all the trivia, but doesn't know the basics.
Not all knowledge is created equal, and that's one point that many "hard" exams and certifications miss. If a lawyer doesn't know the intricacies of Melchett vs The Vatican, who cares? In the unlikely situation that they need it, they can google it. If they don't understand habeas corpus, on the other hand, they're just unfit to be a lawyer at all. Cramming trivia into an exam can get you just that kind of screwed-up situation: you passed someone who happened to know that Melchett vs The Vatican is actually a gag question, and that the case name appears in Stephen Fry's "The Letter," yet flunked someone with a solid grasp of the basics who knows how to extrapolate from there and where to get more information when he needs it.
Rewarding random guesswork is worse. Probably the most important thing one should know is what he _doesn't_ know, so he can research it instead of taking a dumb, uninformed guess. Most real-life problems aren't neatly organized into 4 possible answers, so it can be a monumental waste of time to just take wild guesses and see if something works. I've seen entirely too many people wasting time trying wrong guess after wrong guess instead of just doing some research. E.g., I've actually witnessed a guy trying every single bloody combination of *, &, and nothing in front of every single variable in a C function, because he never understood how pointers work.
Re:The problem there (Score:4, Insightful)
If I'm hiring a Java programmer, asking questions about COBOL would be just trivia.
If you ask that sort of question of a prospective programmer, you'll find out more about the person's technical depth, which may be of value. The guy who 'learned Java' because he read about it somewhere, or because an 'advisor' told him it was a way to 'get ahead,' is gonna be Mister Lightweight looking for a 'career,' not a practitioner who takes a broad approach.
Further, it will help sort the candidates out. The ones who contrive 'fake' knowledge of COBOL can be rooted out and eliminated. Those who are willing to say 'I am not sure I know, but that's an interesting question' get points; those who automatically start thinking about where to find the answer get even more points.
And, of course, the question will help to sift out anybody with actual COBOL knowledge, because anybody with skill in COBOL who is applying for a Java position is obviously an unstable nut.
Re: (Score:2, Insightful)
1. What number am I thinking of?
a) cheese
b) galaxy
c) 3
d) 1
A person who knows (literally) nothing has a 1 in 4 chance of getting it right. A person who knows what a number is has a 1 in 2 chance. You stick one hundred questions on a test, and someone who is versed in the material will score better.
Re: (Score:2, Insightful)
Has the author of this blog got any scientific results to back up his claims? The NY State Bar has a statistical analysis of who passed its bar exam. http://www.nybarexam.org/NCBEREP.htm [nybarexam.org]
like bar exams or medical license exams, where very often the well-qualified and knowledgeable fail the exam.
IMHO there are only two reasons why the well-qualified and knowledgeable fail such exams: they didn't study, or they studied the wrong materials. We have all had that one exam we did REALLY poorly on, and we would like to blame someone other than ourselves for our bad grade. This post
Sometimes guessing is a good thing (Score:3, Interesting)
So tests that force students to do a lot of guessing may still be good tools for evaluating their professional qualifications.
A doctor or lawyer who can guess right may be superior to one who plods to the right answer only after many expensive lab tests or hours of legal research. That's not to say that
The real problem with multiple choice tests (Score:2)
A close second was counting neg
Yuck (Score:2)
His point about only counting the correct answers is rather silly. In a test where each question is either right or wrong, counting the wrong answers into the score does not add any information (you can tell how many are wrong if you know how many are right). The only thing it does is change the scaling of the resulting score.
There may be unanswered questions (Score:4, Interesting)
That's why the article talks about only counting right ones. To discourage guessing, there should be a difference between picking a wrong answer and not picking an answer at all.
Re:There may be unanswered questions (Score:4, Insightful)
Fate, it seems, is not without a sense of irony.
Re: Yuck (Score:3, Insightful)
With 4 choices for each question on a 100-question test, the average student (student A) who knows 50% of the answers will get about 62 correct if they guess entirely at random when they don't know the answer (50, plus 50/4 correct guesses). The average student who knows only 25% of the material (student B) will get about 44 correct using the same strategy.
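Those numbers are just expected values (sketch; blind uniform guessing over four options assumed whenever the answer isn't known):

```python
def expected_correct(n_questions, n_known, n_options=4):
    """Expected number right: sure answers plus blind uniform guesses."""
    return n_known + (n_questions - n_known) / n_options

print(expected_correct(100, 50))  # 62.5  -> "about 62" for student A
print(expected_correct(100, 25))  # 43.75 -> "about 44" for student B
```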
Re: (Score:2)
You're wrong. There are three ways you can handle a question: answer correctly, answer incorrectly, or not answer.
The fact that tests count only correct answers means the subtle difference between not answering and answering incorrectly is lost.
Take for an ex
Re: (Score:2)
But most tests are like you say - and Jack can take advantage of his ability to know that he doesn't know (which proves he knows something!).
The fallacy of penalizing guessing (Score:2)
Basically, multiple choice tests wh
Statistical exam using Multiple choice (Score:2)
Each question had 5 options, and only one was correct. A correct answer gave 5 points; an incorrect answer gave -1 point.
Now, as the smart reader can work out, 4 x (-1) + 5 = 1, so guessing still pays off... especially if one or more of the options are very unlikely to be correct.
Did the teacher design this test incorrectly, since guessing was rewarded? Well, actually, the only test of real-life application of statistics
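Per guess, that's +0.2 on average. The payoff is easy to verify (sketch; five options, one correct, blind guessing assumed):

```python
# Expected points from one blind guess under +5 right / -1 wrong scoring:
ev_blind = (1 / 5) * 5 + (4 / 5) * (-1)
print(ev_blind)  # 0.2 -- blind guessing pays on average

# And ruling out one implausible option makes it pay even better:
ev_informed = (1 / 4) * 5 + (3 / 4) * (-1)
print(ev_informed)  # 0.5
```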
Re: (Score:2)
My "favorite" multi-choice exams at school (or 'multi-guess' as we called them) were the ones where getting the first question in a series of several wrong, would doom you to getting all of the questions in the series wrong because the answers were all dependent on calculations from the answer of the first question! Of course, being m
Not Worthless (Score:3, Insightful)
Though some of his logic was overblown (see the comments made directly on his blog), I think his larger point has some merit. In fields which require lots of studying before beginning as a professional, such as medicine and law, you always hear that you have to be absolutely brilliant to 'get in'. The fact of the matter is that this is not the case: you should be darn smart, but you needn't be the best student in the world to be successful as a doctor. Many of the students who go to law or medical school (I'd guess most) are completely qualified for positions in their respective fields, but by the same token, are not necessarily any more qualified than their peers: they've all studied the same material, had the same experience in the lab, and know the whole picture within a reasonable approximation of each other.
Yet to maintain the level of exclusivity that these careers have, there must be some way to select a subset of the candidates to proceed, and at this point, there are few distinguishing features among them. Some will be far and away brilliant, and will easily get a career regardless; but the majority can't be differentiated from one another. So, how should it be decided who is a doctor and who isn't? By making a test that's so hard it amounts to a randomising function, and then selecting a subset of top scorers to pass. Passing doesn't mean one is inherently more qualified; it just means one guessed better on that day. This also explains why people can pass on their second or third try: they are no better than their competitors the next time around, but eventually one will guess luckily, and get in. It'd be interesting to do some statistical analysis on how many tries it takes people to 'pass' a particular exam, and see if the results fit probabilistic models: If the results of such analysis fit too well, the test is too hard, whereas if they deviate greatly from probabilistic expectations, then the test is more likely to be an actual test of one's knowledge.
To be sure, there will be some individuals who can pass based entirely on their knowledge, just as there will be some individuals who simply aren't cut out for life as a lawyer that will fail the exam. But ultimately, it allows the higher-ups to select candidates for job positions based on the single indisputable criterion of the candidate having passed an exam, thus avoiding any messy issues when someone complains about them choosing a particular candidate in lieu of one better qualified.
Time for a terrible analogy, since it's 0300 here: Really hard exams are the bouncers at the door to the club of medical careers.
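The statistical analysis proposed above is easy to prototype. A sketch (assuming the extreme case where every sitting is an independent pass-with-probability-p coin flip, so attempt counts should come out geometric; the 35% figure is taken from the summary):

```python
import random

def attempts_until_pass(p_pass):
    """Sittings needed to pass, if each sitting is pure chance."""
    attempts = 1
    while random.random() >= p_pass:
        attempts += 1
    return attempts

# If a 'hard' exam were really a lottery with a 35% pass rate, attempt
# counts across candidates should look geometric: about 35% pass on the
# first try, 0.65 * 35% on the second, and so on.
samples = [attempts_until_pass(0.35) for _ in range(100_000)]
for k in range(1, 5):
    print(f"passed on attempt {k}: {samples.count(k) / len(samples):.3f}")
```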
I had a teacher... (Score:5, Funny)
I got an A, with an average of 58% in the class.
For the 2-hour final, he got up at the 1-hour point, and yelled: "The test is over. All pencils down." We just sat there dumbfounded for about 10 seconds, and then he said, "Just kidding. I always wanted to do that."
Ya, a real great pal there!
Worst teacher I had in college. He didn't last long
Re:I had a teacher... (Score:4, Insightful)
I practically meditate before a final exam on how to make the environment as comfortable as possible, clearly explain in advance what the procedures will be like, and keep everything in the same rhythm as all my prior tests. Just freaking out students in a final exam because you're a sadist is utterly unacceptable. Jesus.
My experience (Score:5, Interesting)
If you ticked "confident" and you were wrong: -2
If you ticked "confident" and you were right: +2
If you ticked "unsure" and you were wrong: 0
If you ticked "unsure" and you were right: +1
I guess the point is that it's advantageous to guess, but only if you choose the lesser-scoring option.
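The expected values bear that out (sketch; four options and blind guessing assumed):

```python
# Four options, blind guess, under the confident/unsure scheme above:
p_right = 1 / 4
ev_confident = p_right * 2 + (1 - p_right) * (-2)  # -1.0:  punished
ev_unsure    = p_right * 1 + (1 - p_right) * 0     # +0.25: still pays a bit
print(ev_confident, ev_unsure)
```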
Well, that explains it. (Score:5, Funny)
News for nerds?: yes[ ] no[x]
Stuff that matters?: yes[ ] no[x]
Clearly the editorial process is fraudulent - as this is a multiple choice, it is obvious that guessing tends to count much more than knowledge.
From this we can conclude one of two things:
1) Zonk is bad at guessing
2) The author is speaking out of his ass
Tempting as it is, I am going to stick with 2... But I could, of course, be guessing.
Multiple choice is bad for testing knowledge anyway (Score:3, Insightful)
What? (Score:2)
I'm confused (or he is)
I'll tell you about hard tests... (Score:2)
He provided copies of all former tests, along with answers and how to solve them, to the local copy store for students to buy (amazingly, this prof DIDN'T try to trick you on them; the only
I find Mr. Feldzamen's post hard to believe. (Score:5, Interesting)
Re:I find Mr. Feldzamen's post hard to believe. (Score:5, Insightful)
It doesn't actually suggest anything other than that 50% of people who apply pass. I can design an exam which is very easy; I then say that only 50% will pass. It could be that the "cut" is that anyone who scored 9+ out of ten passes and everyone else fails. Or I could flip a coin. The pass rate is no guide to how hard an exam is, nor to how good a test of the candidates' abilities it is. It might be both hard and rigorous, but you can't infer that just from the pass rate.
TWW
Disturbing (Score:5, Insightful)
Like I said, disturbing.
Re: (Score:3, Insightful)
It's done the exact same way for engineers as doctors and lawyers; what they're talking about here is the professional licensing exam, not the exams given in school. The exams in law school (and I believe medical school) tend not to be multiple choice.
Law school exams, for example, tend to revolve around very long, very hard, very convoluted essays. They
Choices (Score:3, Funny)
A) Perform a colonoscopy
B) Perform open heart surgery
C) Tickle him
D) Fart
E) Refer him to Cowboy Neil
I'm going to Mexico for my next check up. At least you'll get tequila first....
He is totally and completely wrong. (Score:5, Interesting)
Ugh. I just wrote a pretty polite reply at his page after skimming his idiotic article. Now that I've read it, I'm actually angry.
This guy knows NOTHING about testing. Nothing. He isn't even to the level of Classical Testing Theory (CTT), which is really not much more than means and Pearson correlations, and is nowhere near how high-stakes (and even medium- and low-stakes, increasingly) multiple choice (MC) tests work now, and how they have worked for many many years.
IAAP (I am a psychometrician). A big part of what I do for a living is design a particular MC test, pilot the items, and interpret the results. But I don't just count up the correct items and give you the percentage. Why? Because that would be insane. You can guess on those.
Oh, but he says this:
But suppose the grading attempts to adjust for guessing. There is no way of knowing what is in the mind of the test-taker, so the customary practice is to subtract, from the number correct, some fraction of the number wrong.
--Which is just fine until I tell you I have NEVER heard of dealing with guessing that way on a professional-level test.
As a general rule, we don't do any easy mathematics. At all.
Here is part of the output for a test I'm working on right now:
This is generated by RUMM2020, a tool for Rasch analysis. The Rasch model was developed in the 60s as an ideal model of item response. These are the stats on 3 items of this test. The two most important columns are Location and Probability.
The location is the item difficulty. Given the sample's performance on this item, and given their ability, how hard is this item? Item 35 is quite difficult; item 36, quite easy.
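For the curious, item and person locations live on the logit scale of the dichotomous Rasch model, where the probability of a correct response depends only on the gap between ability and difficulty. A minimal sketch (the numbers are made up, not taken from this RUMM2020 run):

```python
import math

def rasch_p_correct(ability, difficulty):
    """Dichotomous Rasch model: probability that a person of the given
    ability answers an item of the given difficulty correctly. Both
    parameters are on the same logit scale as the Location column."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

# One person of average ability (0 logits) against an easy, a matched,
# and a hard item:
for item_difficulty in (-2.0, 0.0, 2.0):
    p = rasch_p_correct(0.0, item_difficulty)
    print(f"item at {item_difficulty:+.1f} logits: P(correct) = {p:.2f}")
# ~0.88, 0.50, 0.12 -- even odds exactly when ability matches difficulty
```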
The probability is the p value for the chi square. Basically, if it's 0.05 or below, that item is operating significantly (statistically significantly, that is) outside of the model. It displays poor "fit." We generally toss these items before going on to the next step (ideally, these are weeded out during pilot testing, before the test goes live--in this case, it is an experimental test of a construct I'm not even sure exists anymore, but I digress). If an item has poor fit with the model, it is too much of a loose cannon, and its results cannot be trusted. This is what the benighted blogger (is there any other kind?) was whining about. That item is hard not because it is good, but because it is evidently stupid. The responses are all over the place, which means people were probably just guessing. Out it goes before it ruins any examinees' lives.
The next step is to get person locations. In the case of people, these numbers indicate the person's ability. This is calculated by looking at their performance on the items, given their difficulty (Which is calculated based on people's performance on them! Incestuous! But given a large enough sample, it all works out to a fine enough grain to be useful). Here is the output for the people:
So, the first person didn't do so hot; the last did pretty well (these usually top out at 3ish). As you can see in "DataPts," there were 125 items on this test. I started with 160. Do you hear that, Mr. Unexpected "Truths?" We have your back! We're not just handing you a naked score based on our crap items. WE PULL THE CRAP ITEMS.
That location score will usually be rescaled to something prettier, since no one would really like to see something like
Re:He is totally and completely wrong. (Score:4, Interesting)
I just want to point out that I never saw anyone use it (or anything else similarly complex) at university, even in math departments. Typically, grading the test is just a matter of summing up right answers (perhaps with partial credit) and then chopping the distribution into three parts: As, Bs, and Cs. A good reason for that is that people perceive the grading as fair when they can predict how much answering a particular question will benefit them.
Re:He is totally and completely wrong. (Score:4, Informative)
Well, a poorly written item will always be out-fitting. If the answer doesn't match the question, then everyone has to guess. If everyone has to guess, the information curve (a great graph I'd love to show you, but can't here, and I need to go to bed) will be about flat. There should be a big hump showing that the item gives us a lot of information about people a certain number of standard deviations above or below the mean. Questions like you describe won't have that.
Also, I wrote about this in another comment, but a lot of the items you get on high-stakes tests don't really go toward your score. They are actually pilot items that the company is trying out a few thousand times to see if they work correctly before they start contributing to anyone's score.
I don't know any cheap item-writers, though. Everyone I know in this field has at least a master's degree, and most have PhDs. We don't come cheap.
As for being a good or bad item writer, there are just a handful of rules to follow to avoid the big blunders. After that, it's all about taking them for a spin and seeing how they handle. As I've said a few times now, there's no telling what will happen when you release these things into the wild. I've written items that I doted on and cared for and nurtured and cuddled and put my all into, fully expecting that they would grow up to be model items, ones that the other items would look up to and aspire to becoming, only to be totally and utterly betrayed by them in real-world piloting, my time and devotion wasted, finally having to drag them out back and shoot them in the back of the head. On the other hand, there are sometimes items you add to a section last-minute, just trying to get the number up for piloting or whatever, and find that you have written some ridiculously wonderful item purely by accident.
It gets easier with practice, though. To be fair, I'm not a very good item-writer. But that is why I, especially, need the stats.
The worst multiple choice, ever (Score:3, Interesting)
Yes, No, Sometimes, Maybe, Unknown
Then he had questions like: 1. Some scientists believe that P=NP?
To which, of course, you could argue ANY answer is correct.
That being said, this blog post comes across as the usual whining we've all done, or had to put up with, through the years. No testing methodology is perfect, and everyone tests differently on different kinds of tests. Fact is, though, they're pretty damn good. It's a common belief that millions of people who are otherwise idiots are graduating with great grades while millions of geniuses can't test well, but that's horseshit. The majority of people manage to test at their level of understanding. The fact that people actually notice the odd idiot who guesses well is the exception that proves the rule.
Medical Specialist Exams have an Oral Component (Score:4, Informative)
Also, the US has a strange system of certifying specialists. After completing residency (usually based on putting in your hours), you can practice medicine under the appellation 'board-eligible.' Once you've passed your exams, you can be called 'board certified.'
In Canada, you can't practice at all unless you pass your board (Royal College) exams. The exams are reputedly harder in Canada as well (from those I know who have written both).
I want to know what my students know (Score:4, Interesting)
I give multiple choice exams with between 100 and 200 questions, and 4 possible answers.
Each correct answer is worth 2 points; they need to answer 50 correctly to get 100.
They don't HAVE to answer any question, or any number of questions. If they can answer 30 questions, they can get a D. Any question answered incorrectly is -1 point. This serves two purposes:
it prevents guessing, and it forces the student to consider whether they actually know the answer or just think they do.
I typically give 4 of these per semester. After the first one I usually get several complaints because they're not used to testing in this way. After the second I usually get one or two stating they can't break the habit of answering every question. After the final, I get many compliments and high marks on my evaluations, and the students tell me they are much more confident in what they've learned than from any other class. I've had occasion to run across previous students from years past, and they claim they still remember more from my class than from others.
I've had administrators forbid me to do it this way. I did it anyway. When they saw the results, they relented, and many suggested the process to others.
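The arithmetic behind that deterrent (sketch; four options per question assumed):

```python
# +2 for a right answer, -1 for a wrong one, 0 for a blank.
# Expected points from one blind four-option guess:
print((1 / 4) * 2 + (3 / 4) * (-1))  # -0.25: blind guessing loses points
# Eliminating one option brings a guess to break-even...
print((1 / 3) * 2 + (2 / 3) * (-1))  # 0.0
# ...and a genuine 50/50 is worth taking:
print((1 / 2) * 2 + (1 / 2) * (-1))  # 0.5
```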
Re:warning moronic blog post linked (Score:5, Insightful)
I won't claim his post is correct or not, but he claims the methodology behind such tests is wrong and lets less-educated people pass by guessing, while more-educated people try to pass without guessing and fail.
People see the tests produce poor selection and make the tests harder and harder in an attempt to remedy this (which won't work, since it's the methodology of the test that's wrong).
Then you come here and support his opinion 1:1 by claiming tests are too easy (i.e., should be harder) and idiots pass through.
Ironic, isn't it.
all multi-choice questions suck, bad design (Score:2)
If the question said to pick the 'odd ones out,' each worth n%, it would be better.
There is no wrong or right unless the answer key says so. But did the person designing the questions have a degree in writing/psychology/reading as well?
It's easy to tell a rote learner from a true genius; even Hawking flunked a lot at school.
Re:Check the statistics not the mathematics! (Score:5, Funny)
See, I KNEW they were good for something. Let me guess: the reason you opted for MBAs over mice is that there are far fewer protests when you do cruel medical experiments on MBA students than on mice, correct?
Re: (Score:2)
English courses are the strangest. I took a literature course, and our final was a multiple-choice test on several books we had to read. I felt like I was in 4th grade all over again. However, my personal experience has been that En
Re: (Score:2)
I went to a very small college for undergrad, so my experience is different, but I've had friends who went to huge state unis, and describe many classes where there was virtually no interaction with the professor besides multiple-choice tests. They are heavily used because they can be easily graded via automated systems. (Fill-in-the-bubble, aka "Scantron" forms, usually.) All quizzes, tests
Re: (Score:2)
Anyway, what he wanted to do was weight the subtraction factor so that if you have no knowledge and just guess on everything, you get an average score of zero. For true or false, this subtraction factor is 1.
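Solving for that factor takes one line (sketch; k equally likely options assumed):

```python
from fractions import Fraction

def neutral_penalty(k):
    """Penalty per wrong answer that makes a blind guess on a k-option
    question average zero: solve (1/k) * 1 - ((k - 1)/k) * p = 0."""
    return Fraction(1, k - 1)

print(neutral_penalty(2))  # 1    -- true/false, as stated above
print(neutral_penalty(4))  # 1/3
print(neutral_penalty(5))  # 1/4  -- the familiar quarter-point penalty
```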
Re: (Score:2)
Especially as one of the key elements of a properly written test is the wording of the questions, which is generally specifically written to confuse the "guesser." Very few people will actually (as the article presupposes) simply randomly choose an answer; most will attempt to read the question as a guide for t
That's not the only mistake (Score:2)
So it's not just a "typo" that distracts; it supports a completely faulty conclusion.
One is left wondering what kind of mathematics background the author had. Also, noting the dittography earlier
Re: (Score:2)
And, and...ohhhh!...Shiny!