
Crackpot Scandal In Mathematics 219

ocean_soul writes "It is well known among scientists that the impact factor of a scientific journal is not always a good indicator of the quality of the papers in the journal. An extreme example of this was recently uncovered in mathematics. The scandal centers on one El Naschie, editor-in-chief of the 'scientific' journal Chaos, Solitons and Fractals, published by Elsevier. This is one of the highest-impact-factor journals in mathematics, but the quality of the papers in it is extremely poor. The journal has also published 322 papers with El Naschie as (co-)author, five of them in the latest issue. Like many crackpots, El Naschie has a kind of cult around him, with another journal devoted to praising his greatness. There was also a discussion about the Wikipedia entry for El Naschie, which was supposedly written by one of his followers. When it was deleted by Wikipedia, his followers even threatened legal action (which never materialized)."
This discussion has been archived. No new comments can be posted.

  • I don't get it (Score:3, Insightful)

    by tsstahl ( 812393 ) on Tuesday December 23, 2008 @05:31PM (#26216319)
    In the immortal words of Tom Hanks, I don't get it.

    If the guy is a well known crackpot, what harm is happening? Obviously, I am not a citizen of this sub-world and could use the enlightenment.
  • Re:I don't get it (Score:5, Insightful)

    by ocean_soul ( 1019086 ) <tobias.verhulst@nOSpAM.gmx.com> on Tuesday December 23, 2008 @05:45PM (#26216461)
    In fact, the crackpottery of El Naschie's papers is obvious even to most grad students (you should read some; they are in fact rather funny). The bigger problem is that, by repeatedly citing his own articles, his journal gets a high impact factor. People who have absolutely no clue about math, like the ones who decide on the funding, conclude from the high impact factor that the papers in this journal must be of high quality.
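
    The arithmetic behind that complaint is easy to sketch. The two-year impact factor is just the number of citations a journal's articles from the previous two years receive this year, divided by the number of citable items it published in those two years, and citations from the journal's own pages count toward the numerator. A minimal Python illustration with made-up numbers, not any journal's real figures:

        # Hypothetical figures for illustration only -- not real data for any journal.
        citable_items_2006_2007 = 400     # articles published in the two-year window
        external_citations_2008 = 300     # 2008 citations coming from other journals
        self_citations_2008 = 900         # 2008 citations from the journal's own papers

        def impact_factor(citations, citable_items):
            """Two-year impact factor: citations in year Y to items from years
            Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
            return citations / citable_items

        print(impact_factor(external_citations_2008, citable_items_2006_2007))
        # 0.75 -- what outside interest alone would give

        print(impact_factor(external_citations_2008 + self_citations_2008,
                            citable_items_2006_2007))
        # 3.0 -- the same journal once heavy self-citation is counted

    Nothing in the headline number distinguishes the two cases, which is exactly the problem described above.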
  • Re:Prejudice (Score:3, Insightful)

    by Bill, Shooter of Bul ( 629286 ) on Tuesday December 23, 2008 @05:56PM (#26216597) Journal
    Science is built on reputation. If you have a good reputation, then people take your work seriously. If one of your publications is a hoax or fraud, your career is over. This smells like yesterday's fish's sweaty socks.

    Hell, expect Blagojevich to issue a statement tomorrow disavowing any association with this guy or his publications. It smells that bad.
  • by Pinckney ( 1098477 ) on Tuesday December 23, 2008 @05:57PM (#26216603)

    The summary claims Chaos, Solitons and Fractals has a high impact factor. The blog linked to, however, does not assert this, and I see no source for it. He also co-edits the International Journal of Nonlinear Sciences and Numerical Simulation, which the blog asserts "flaunt[s] its high 'impact factor'." The link to the IJNSNS praising him is broken, so I can't confirm that.

    It looks to me like some crackpot got a journal. However, it doesn't seem particularly devastating. Nobody has based work on his articles purely on the basis of the "Impact Factor." I don't think anyone else is taking him seriously. At worst, libraries have paid to subscribe.

  • Re:I don't get it (Score:5, Insightful)

    by Daniel Dvorkin ( 106857 ) * on Tuesday December 23, 2008 @06:02PM (#26216669) Homepage Journal

    Oh, snap!

    Seriously? There's a lot of high-quality CS research out there in the journals and conference papers; of course there's also a lot of crap. But I'd say most of the crap comes from wishful thinking rather than pure crackpottery. If nothing else, if you try to implement something that doesn't work, you'll know immediately -- thus CS at least potentially has a built-in reality check that pure math lacks. I rather suspect that whether or not a CS journal demands working code from its authors is a strong predictor for the quality of the articles which appear in that journal.

  • Re:I don't get it (Score:5, Insightful)

    by BitZtream ( 692029 ) on Tuesday December 23, 2008 @06:06PM (#26216719)

    Much like anyone with a working knowledge of CS probably has the ability to verify the CS research, math is a rather logical science which is often pretty easy to verify. Sure sure, there are things that are hard to confirm based on the amount of calculations that must be performed and irrational numbers and all that (infinity is a bitch to test), but those things exist in CS as well.

    It's silly to somehow imply they are vastly different from each other; they are in fact almost identical.

  • Re:I don't get it (Score:4, Insightful)

    by Daniel Dvorkin ( 106857 ) * on Tuesday December 23, 2008 @06:13PM (#26216787) Homepage Journal

    Researchers in just about every field build on layers of other researchers' work. There simply isn't time to go back and verify every result in the reference tree of every article you cite -- if you did that, you'd never get any original work done! Creating code that compiles and executes properly doesn't guarantee that everything you've based that code on is correct, of course, but it's a good sign. I'm not aware of any equivalent reality check in pure math. Now, I know relatively little about the field (applied CS and statistics is my game, specifically bioinformatics), so I'll happily accept a correction on this point.

  • Re:I don't get it (Score:5, Insightful)

    by dujenwook ( 834549 ) on Tuesday December 23, 2008 @06:16PM (#26216833)
    Yeah, I read through a bit of the cited paper and got a few good laughs out of it. Maybe he's being published more for the humorous aspect of it all than for the actual information.
  • by __roo ( 86767 ) on Tuesday December 23, 2008 @06:23PM (#26216899) Homepage

    This turns out to be a problem space with some really interesting conclusions. I spent some time over the last few years working with researchers from MIT, UCSD and NBER to come up with ways to analyze this sort of problem. They were focused specifically on medical publications and researchers in the roster of the Association of American Medical Colleges. They identified a set of well-known "superstar" researchers, and traced the network of their colleagues, as well as the second-degree social network of their colleagues' colleagues among other "superstars".

    I built a bunch of software to help them analyze this data, which we released as GPL'd open source projects (Publication Harvester [stellman-greene.com] and SC/Gen and SocialNetworking [stellman-greene.com]). I've gotten e-mail from a few other bibliometric researchers who have also used it. Basically, the software automatically downloads publication citations from PubMed for a set of "superstar" researchers, looks for their colleagues, and then downloads their colleagues' publication citations, generating reports that can be fed into a statistical package.

    They ended up coming up with some interesting results. Here's a Google Scholar search [google.com] that shows some of the papers that came out of the study. They did end up weighting their results using journal impact factors, but the actual network of colleague publications served as an important way to remove the extraneous data.
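
    Publication Harvester is the real tool; purely as a hypothetical sketch of the general approach described above -- pull a "superstar" researcher's records from PubMed, then tally who shares their bylines -- something along these lines against NCBI's public E-utilities endpoints would be a starting point. The author name is a placeholder, and real bibliometric work needs far more careful author-name disambiguation than this:

        import requests
        from collections import Counter

        EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

        def pubmed_ids(author, max_results=50):
            """Search PubMed for an author (format 'Lastname FM') and return PMIDs."""
            r = requests.get(f"{EUTILS}/esearch.fcgi", params={
                "db": "pubmed", "term": f"{author}[Author]",
                "retmax": max_results, "retmode": "json"})
            return r.json()["esearchresult"]["idlist"]

        def coauthor_counts(pmids):
            """Fetch article summaries and count how often each co-author name appears."""
            r = requests.get(f"{EUTILS}/esummary.fcgi", params={
                "db": "pubmed", "id": ",".join(pmids), "retmode": "json"})
            result = r.json()["result"]
            names = Counter()
            for pmid in pmids:
                for author in result[pmid].get("authors", []):
                    names[author["name"]] += 1
            return names

        if __name__ == "__main__":
            ids = pubmed_ids("Smith J", max_results=20)   # placeholder "superstar"
            print(coauthor_counts(ids).most_common(10))

    The second-degree network in the study above would repeat the same walk starting from each colleague, which is exactly where crude name matching starts to break down.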

  • by Anonymous Coward on Tuesday December 23, 2008 @06:27PM (#26216935)

    The one thing that separates crackpots from "Real Scientists" is who gets grant money.

    No, the one thing that separates crackpots from real scientists is whether their predictions stand up to experimental verification.

  • by rm999 ( 775449 ) on Tuesday December 23, 2008 @06:38PM (#26217071)

    I hate to say this because I realize how naive it is, but who cares about the quality of journals? Perhaps it's because I'm interested in a more applied field, but I judge papers by their results, generality, accuracy, clarity, and sometimes the author - not by which journal happened to publish them.

    IMO most journals have been killing themselves off in the recent past. While running themselves as businesses may have worked when they served a useful purpose, all they do nowadays is impede openness and transparency. Want to read Professor A's conclusions? You'd better pay 100 dollars to some publisher owned by a huge conglomerate, because they own that paper (which was often written under a grant funded by taxpayers). This is unacceptable in the internet age.

    IMO, all self-respecting researchers should avoid submitting to journals that do not freely provide all content online.

  • mixture of people (Score:3, Insightful)

    by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Tuesday December 23, 2008 @06:42PM (#26217109)

    The people who most directly care about especially quick-to-skim summaries of quality (like impact factor) are people judging the output of professors. If you're not familiar with a sub-field, how do you separate the professor who's published 20 lame papers in questionable venues from the professor who's published 20 high-quality papers in the top journals of his field? You look at some sort of rating for the venues he's published in.

    For reading papers, I agree it's not quite as relevant. I still do do a first pass of filtering by using my subjective views of publication quality, though. I'm more likely to give some surprising-sounding claim a thorough evaluation if it was published in a reputable journal than if it was published as a PDF on the internet, or in some obscure conference. You can't read everything, and the well-known conferences and journals in my area provide one level of vetting that I can rely on.

  • by sharkette66 ( 256044 ) on Tuesday December 23, 2008 @06:49PM (#26217207)

    When the first questionable but exciting buzzwords come to life, just explain away the doubters with more buzzwords that sound even better!!

  • Elsevier, just like other large commercial publishers of scientific journals, offers libraries a significant discount if they subscribe to their whole catalog.
    By including crappy, useless and inexpensive (for them) journals, they can siphon more money out of universities and into the pockets of their shareholders, as is their god-given duty as capitalists.

  • Re:I don't get it (Score:2, Insightful)

    by jman11 ( 248563 ) on Tuesday December 23, 2008 @08:55PM (#26218361)

    I think you're overestimating the problem here. Sure, not every researcher goes through every step of every proof. However, a lot of researchers go through a lot of the steps (and previously went through the intermediate steps), and a few researchers go through them all.

    Sanity is preserved by writing the proof down in a way that people can understand. Otherwise research papers in mathematics would all be about 5 lines long.

    The sanity check provided by successful compilation is a lot less meaningful. It merely says the syntax of the code is correct, nothing more.

  • Re:I don't get it (Score:3, Insightful)

    by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Tuesday December 23, 2008 @11:40PM (#26219403)

    Look up Metamath, Coq, and Mizar. I suppose your definition of "genuine" problems may influence your opinion of these projects. Coq was recently used to verify a completely formal proof of the four color theorem, which I would classify as a genuine mathematical problem considering that no traditional proof has yet been found.
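
    For anyone who hasn't seen a proof assistant: the point of Metamath, Coq, Mizar, and similar systems is that the proof is a machine-checked object rather than prose a referee skims. A trivial example in Lean (nothing to do with the four color theorem, just to show what "the checker either accepts it or it doesn't" looks like):

        -- The file only type-checks if the term after := really proves the statement.
        theorem sum_comm (a b : Nat) : a + b = b + a :=
          Nat.add_comm a b

        -- Proofs can also be built with tactics; small arithmetic goals are
        -- discharged automatically.
        example (n : Nat) : n + 0 = n := by
          simp

    A paper full of undefined symbols and hand-waving, of the kind discussed in this thread, simply would not get past such a checker.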

  • by ErkDemon ( 1202789 ) on Wednesday December 24, 2008 @12:31AM (#26219717) Homepage
    The academic publication system arguably pioneered many of the SEO techniques - self-linking, linking to your mates, mutual cross-linking networks, adding lots of outgoing high-value links to your material to improve index rankings, and so on.

    If you're a new researcher in an obscure field, one of the best ways to advance is to assemble a group of researchers interested in similar topics, hold a conference so everyone can get to know each other, publish the conference proceedings, and then you all publish like crazy citing each other's papers - this then bootstraps the whole group's rankings.

    Another thing you see happening is that journals are wary of single-author papers, so it's kinda accepted that if you write a paper all by yourself, you invite a few mates to be co-authors, to make the thing look more legitimate and to improve its statistical ranking (with three authors rather than one, a paper has three times as many incoming and outgoing author links, and correspondingly greater connectivity).

    If you see a paper published in a major journal, with fifty authors, you know that each one of those fifty people probably can't personally vouch for every detail in the paper. If they could, you wouldn't need fifty of them.

    You also know that when you go through the most cited authors in physics and find someone who appears to be named as a co-author on ~300 successfully published papers a year, that person is probably a head of department, and may not have actually read all the papers with their name on them, let alone been involved in any of the scientific work.
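
    As a toy illustration of that connectivity point (plain counting, not any database's actual ranking formula): a single-author paper contributes no co-authorship links at all, while a paper with n names on it contributes n(n-1)/2 of them, so padded bylines grow the network quickly. The names below are hypothetical:

        from itertools import combinations

        def coauthor_edges(papers):
            """Each paper is a list of author names; return the set of
            unordered co-author pairs the papers generate."""
            edges = set()
            for authors in papers:
                for a, b in combinations(sorted(set(authors)), 2):
                    edges.add((a, b))
            return edges

        solo   = [["Smith"]]                       # one author: no co-author links
        padded = [["Smith", "Jones", "Garcia"]]    # same paper with two invited co-authors

        print(len(coauthor_edges(solo)))    # 0
        print(len(coauthor_edges(padded)))  # 3 -- every pair of names becomes an edge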

  • Re:I don't get it (Score:5, Insightful)

    by ErkDemon ( 1202789 ) on Wednesday December 24, 2008 @01:21AM (#26219977) Homepage
    Well, there's been some research suggesting that some authors may not necessarily have bothered reading all of the material that they cite. It's easy to cut and paste items from the citation list of an earlier peer-reviewed paper, and just assume that the contents of those papers are what the earlier author has said.

    I guess that the way to test this would be to get a non-existent paper listed in Physics Abstracts, and cited in one or two major papers, and then see how many subsequent papers simply add the citation to their own list.

  • by lysergic.acid ( 845423 ) on Wednesday December 24, 2008 @01:21AM (#26219983) Homepage

    which is why academic publishing is seriously screwed up. the public pays taxes to fund most academic research, but then researchers have to pay journal publishers in order to get their papers published. and in return, the publishers retain the copyright to all public research, keeping it out of the hands of the tax payers who funded it (and charging Universities up the ass to have access to their own research).

    people used to justify this commingling of academia with commercial interests by the peer-review process involved in journal publication, but the peer-review process provided by academic journals clearly isn't working here. at this point, it would be far better for Universities to publish their own research papers, allowing public research to be made freely available to students, researchers, and anyone else who might be interested in it.

    research papers could be published in online databases where they would be archived for easy public access. it's easy enough for independent writers to self-publish and distribute their writings online, so it should be no problem for Universities to do the same. the peer-review process of papers submitted for publication could be handled either by the University itself, or different Universities could get together and form an agreement whereby they would review one another's papers for free. this would keep academic research purely non-commercial and eliminate potential conflicts of interest.

    eliminating/bypassing commercial publishing houses would also mean that societally beneficial projects like Google Book Search wouldn't be stonewalled by greed-driven publishers, and public good could be placed before corporate interests for once. Wikipedia is nice and all, but serious research would greatly benefit from all academic research being made freely available in a searchable online database for all to access. after all, public research isn't very useful if no one has access to it.
