Crackpot Scandal In Mathematics 219
ocean_soul writes "It is well known among scientists that the impact factor of a scientific journal is not always a good indicator of the quality of the papers in the journal. An extreme example of this was recently uncovered in mathematics. The scandal concerns one El Naschie, editor-in-chief of the 'scientific' journal Chaos, Solitons and Fractals, published by Elsevier. This is one of the highest impact factor journals in mathematics, but the quality of the papers in it is extremely poor. The journal has also published 322 papers with El Naschie as (co-)author, five of them in the latest issue. Like many crackpots, El Naschie has a kind of cult around him, with another journal devoted to praising his greatness. There was also a discussion about the Wikipedia entry for El Naschie, which was supposedly written by one of his followers. When it was deleted by Wikipedia, they even threatened legal action (which never materialized)."
I don't get it (Score:3, Insightful)
If the guy is a well known crackpot, what harm is happening? Obviously, I am not a citizen of this sub-world and could use the enlightenment.
Re:I don't get it (Score:5, Interesting)
The harm, I think, is that he's not a well-enough-known crackpot; a respectable publisher (Elsevier) has given him a journal as his own private playground. This makes it more difficult for non-crackpots trying to enter the field (e.g. grad students) to sort the wheat from the chaff. It also allows other crackpots to come off as more credible by citing crackpot articles which have a veneer of respectability. Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.
Re:I don't get it (Score:5, Insightful)
Re:I don't get it (Score:5, Insightful)
It's a scam by Elsevier WRT bundling subscription (Score:5, Insightful)
Elsevier, just like other large commercial publishers of scientific journals, offers libraries a significant discount if they subscribe to their whole catalog.
By including crappy, useless and inexpensive (for them) journals, they can siphon more money out of universities and into the pockets of their shareholders, as is their god-given duty as capitalists.
Re:It's a scam by Elsevier WRT bundling subscripti (Score:5, Insightful)
which is why academic publishing is seriously screwed up. the public pays taxes to fund most academic research, but then researchers have to pay journal publishers in order to get their papers published. and in return, the publishers retain the copyright to all public research, keeping it out of the hands of the tax payers who funded it (and charging Universities up the ass to have access to their own research).
people used to justify this commingling of academia with commercial interests by the peer-review process involved in journal publication, but the peer-review process provided by academic journals clearly isn't working here. at this point, it would be far better for Universities to publish their own research papers, allowing public research to be made freely available to students, researchers, and anyone else who might be interested in it.
research papers could be published in online databases where they would be archived for easy public access. it's easy enough for independent writers to self-publish and distribute their writings online. so it should be no problem for Universities to do the same. the peer-review process of papers submitted for publication could be handled either by the University itself, or different Universities could get together and form an agreement whereby they would review one another's papers for free. this would keep academic research purely non-commercial and eliminate potential conflicts of interest.
eliminating/bypassing commercial publishing houses would also mean that societally beneficial projects like Google Book Search wouldn't be stonewalled by greed-driven publishers, and public good could be placed before corporate interests for once. Wikipedia is nice and all, but serious research would greatly benefit from all academic research being made freely available in a searchable online database for all to access. after all, public research isn't very useful if no one has access to it.
Re: (Score:2)
cool. ibiblio.org [ibiblio.org] is more of a digital archive & online library provided freely in the spirit of open information exchange (they're part of the University of North Carolina, i believe), but they also maintain a collection of open access journals and allow users to submit their own research papers to the collection.
hopefully these kinds of open access archives will catch on at more universities and convince academia that commercial journals aren't necessary.
Re:I don't get it (Score:5, Funny)
Re: (Score:2)
Time for a duel between El Naschie and SCIgen [mit.edu]?
Re: (Score:2)
And another article on the problem of where to publish the article describing that measure.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I thought self-citations don't count for impact-factor ratings... if they do, then it seems like a fairly obvious solution to the problem.
Re: (Score:2)
Since google similarly uses links to pages to compute their pagerank, they combat this problem constantly. People do all kinds of stuff, from buying or swiping the registration of a reputable domain name, to posting spam on forums hosted at .gov domains, to setting up complicated interwoven sets of cross-linking domains to fake "grass-roots" popularity.
It would be great if google could reveal more
Re: (Score:2)
So, he is good at gaming the "pagerank" of the scientific community.
Maybe we should let Google recalculate the Impact Factor and come up with a better formula.
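A PageRank-style recalculation of journal impact could be sketched roughly as below. The journal names, the toy citation graph, and the idea of simply stripping self-citation edges first are all illustrative assumptions, not how Thomson (or Google) actually computes anything:

```python
# Toy sketch: rank journals PageRank-style over a citation graph,
# dropping self-citation edges first. Names and numbers are invented.

def strip_self(links):
    """Remove self-citation edges (a journal citing itself)."""
    return {src: [t for t in targets if t != src]
            for src, targets in links.items()}

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each journal to the journals it cites."""
    nodes = list(links)
    n = len(nodes)
    rank = {j: 1.0 / n for j in nodes}
    for _ in range(iterations):
        new = {j: (1.0 - damping) / n for j in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # dangling journal: spread its rank evenly
                for j in nodes:
                    new[j] += damping * rank[src] / n
        rank = new
    return rank

# "csf" cites itself heavily; once self-edges are stripped, nobody cites it,
# so it ends up with only the baseline random-jump score.
graph = strip_self({
    "annals": ["inventiones"],
    "inventiones": ["annals"],
    "csf": ["csf", "csf", "csf", "annals"],
})
ranks = pagerank(graph)
```

The point of the sketch is only that, once self-links are ignored, a journal that nobody else cites cannot inflate its own score no matter how often it cites itself.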
Re: (Score:2)
Imagine if a computer science "journal" based on Hollywood's portrayal of how computers work were being published by the ACM, and you have some idea of how big a problem this is.
Hmm... so you're saying the majority of the CS papers out there -don't- do this?
Re:I don't get it (Score:5, Insightful)
Oh, snap!
Seriously? There's a lot of high-quality CS research out there in the journals and conference papers; of course there's also a lot of crap. But I'd say most of the crap comes from wishful thinking rather than pure crackpottery. If nothing else, if you try to implement something that doesn't work, you'll know immediately -- thus CS at least potentially has a built-in reality check that pure math lacks. I rather suspect that whether or not a CS journal demands working code from its authors is a strong predictor for the quality of the articles which appear in that journal.
Re:I don't get it (Score:5, Insightful)
Much like anyone with a working knowledge of CS probably has the ability to verify the CS research, math is a rather logical science which is often pretty easy to verify. Sure sure, there are things that are hard to confirm based on the amount of calculations that must be performed and irrational numbers and all that (infinity is a bitch to test), but those things exist in CS as well.
It's silly to somehow imply they are vastly different from each other; they are in fact almost identical.
Re:I don't get it (Score:4, Insightful)
Researchers in just about every field build on layers of other researchers' work. There simply isn't time to go back and verify every result in the reference tree of every article you cite -- if you did that, you'd never get any original work done! Creating code that compiles and executes properly doesn't guarantee that everything you've based that code on is correct, of course, but it's a good sign. I'm not aware of any equivalent reality check in pure math. Now, I know relatively little about the field (applied CS and statistics is my game, specifically bioinformatics) so I'll happily accept a correction on this point.
Re: (Score:2, Insightful)
I think you're overestimating the problem here. Sure, not every researcher goes through every step of every proof. However, a lot of researchers go through a lot of the steps (and previously went through the intermediate steps), and a few researchers go through them all.
Sanity is preserved by writing the proof down in a way that people can understand. Otherwise research papers in mathematics would all be about 5 lines long.
The sanity check provided by successful compilation is a lot less meaningful. It
Re: (Score:2)
That's what those 7-10 years of study you did are for. By the time you reach the cutting edge of research, you had plenty of time to take courses on all relevant fields and learn every important result that gets used. If you try to contribute to the cutting edge without knowing 95% or more of the results referenced in a typical paper on y
Re: (Score:2)
Actually, it can take researchers YEARS to debug a very complex proof of something like the Riemann hypothesis. To begin to understand the proof, even a Riemann scholar might need to first learn entire fields of mathematics.
I've heard it said of de Branges at Purdue that there are few people capable or willing to confirm his work because of the difficulty involved.
Re: (Score:2)
Re: (Score:2)
> CS at least potentially has a built-in reality check that pure math lacks
The truth is that in many parts of CS it's easy to publish a tissue of lies because there is no policy of publishing source code with algorithms, you just publish timings for the one case that worked and graphs of the output for the trivial case that nobody else cares about. (I'm sure lots of people in graphics will be nodding their heads at this right now...) Mathematicians, on the other hand, are expected to provide proofs, and
Re: (Score:2)
Your suspicion is unfounded; for example, how does one create working code to demonstrate the halting problem?
"If nothing else, if you try to implement something that doesn't work, you'll know immediately"
The only thing you will "know immediately" is that your code doesn't work and you will now have your own version of the haltin
Re: (Score:2, Interesting)
Formal proof languages cannot prove anything like what is proven in modern mathematics.
Formal proofs are a bit of fun for the CS people to play with but nowhere near able to handle genuine problems in mathematics.
It's like trying to describe to someone a book by describing which pixels are black and white on the page. Sure it can be done, but it's going to look like gibberish and give no useful picture until someone puts the pixels together to make the page.
Re: (Score:3, Insightful)
Look up metamath, coq, and mizar. I suppose your definition of "genuine" problems may influence your opinion of these projects. coq was recently used to verify a completely formal proof of the four-color theorem, which I would classify as a genuine mathematical problem considering that no traditional proof has yet been found.
Re: (Score:2)
"Is a false statement" is a false statement. Now go prove it.
Re: (Score:3, Funny)
10 Words (Score:2)
Re: (Score:2, Offtopic)
Acadamia never ceases to amaze me. Note, spelled wrong on purpose BECAUSE OF ALL THE NUTS!
Re:I don't get it (Score:5, Interesting)
Glad to be of service.
You realize, of course, that the only reason I was able to use a computer analogy is that we're talking about pure math. If we had been talking about CS, I'd have had to go with a car analogy right off the bat.
Re: (Score:2)
Acadamia never ceases to amaze me. Note, spelled wrong on purpose BECAUSE OF ALL THE NUTS!
That could be a funny joke, but you really need to work on your delivery.
Re: (Score:2)
Re:I don't get it (Score:5, Interesting)
And it gets worse when money becomes involved. Pseudoscientists and crackpots often try to find "investors" for their schemes, and even a layman who performs due diligence can be fooled when publishers like Elsevier become enablers for pseudoscience. When the paper shows up in an INSPEC or Web of Science search, how is the person being scammed supposed to know that the paper isn't really legitimate?
Many "free energy" scam artists already have patents for their nonsensical inventions, thanks to the laxity of the USPTO. It'll get worse unless these "pseudo-journals" are exposed and publicized to the greater science and engineering community, as well as the public at large. I had never heard of El Naschie before today, because I'm not a mathematician; thanks to this article, more people like me will now keep an eye out for his future "work".
Re: (Score:2)
Ah, so his math is wrong, and because the point of a peer-reviewed journal is being missed, the bad papers continue to be considered correct.
That's pretty crapulous.
Re: (Score:2)
Well, they're not really considered correct as such. You see, no one actually reads papers anymore.
Re:I don't get it (Score:5, Insightful)
I guess that the way to test this would be to get a non-existent paper listed in Physics Abstracts, and cited in one or two major papers, and then see how many subsequent papers simply add the citation to their own list.
There are no total guarantees. (Score:2)
You never do know for sure, even with major journals:
Problematic physics experiments [wikipedia.org].
If someone submits a paper on experimental physics, the journal referees typically aren't in a position to say that the experiment really happened the way that the authors say. As long as the claimed results are roughly in line with what people expect, and nothing see
Elsevier seems particularly prone to being "gamed" (Score:3, Interesting)
They were heavily taken in by "cold fusion researchers," a canard in three dimensions if ever I heard one, 20 years back. Perhaps they occupy the same place in scientific literature as S&P and Moody's do in careful review of bonding and finance? Down Illinois' way, they call it "pay to play."
Re: (Score:3, Interesting)
As a published scientific author-
Elsevier sucks!
They have bought important 'name' journals and charge for everything (including your pre-prints) that they possibly can. Many reputable departments are boycotting their publications now.
They even bought me a nice dinner once in Tokyo. I guess that was a sign they were making too much money and had no idea how to spend it.
Mathematicians should use more car analogies (Score:3, Funny)
Mathematicians use all sorts of funky ancient Greek symbols to express their thoughts. It's like trying to read an APL program.
If mathematicians could represent their concepts in car analogies, maybe ordinary folks would be able to understand what all the fuss is about.
At least, here, on Slashdot, where the car analogy is the lingua franca.
And the mathematicians might have some fun with it. How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car an
Re: (Score:3, Funny)
And the mathematicians might have some fun with it. How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car analogy?
Oh that one is too easy - all you have to do is imagine driving on the Cross-Bronx Expressway during rush hour and you have the concept down pat!
The infinite-dimensional corresponds to the amount of time it takes to get to your destination, separable Hilbert spaces are where you are and the space just ahead of the car in the other lane moving faster than you which you can never seem to reach unless you go under the separable space under the truck, and isomorphism is what you think of the other testoster
Re: (Score:3, Funny)
First, assume a perfectly spherical car of uniform density...
Mart
Re: (Score:2)
Okay, Car analogies:
Topological Hairy Ball Theorem: It is impossible to drive your SUV in a path that covers the whole planet without crossing your own tracks.
Fixed point theorem: If you drive from LA to San Francisco all of one day, and back all of the next along the same path, you are guaranteed to hit at least one spot at the exact same time you hit that same spot on the way up.
Combinatorics: You are tired of carhenge stealing your glory and want to create a car-amid (car-pyramid). if you want your carami
Re: (Score:2)
How would you express the concept of isomorphic, infinite-dimensional, separable Hilbert spaces with a car analogy?
Ok. So you've got this car with a one cylinder engine you see, and you put it into drive. Now nano-seconds before the first cylinder fires, you take a positional mark of exactly where the car is as well as the height of piston in the cylinder. Then it fires and moves forward and you make another mark for the car's position and the bottom travel of the piston. Then you jam it into reverse and ta
I get all my science from timecube.com (Score:5, Funny)
Yeah, you really have to be careful out there... that's why I get all my astronomy and mathematical insight (as well as web design hints) from http://www.timecube.com/ [timecube.com]
And if it ain't there, then I just look it up on wikipedia
Re: (Score:2)
For everything else - history, geography, astronomy, you name it - just pop on over to http://www.truthism.com/ [truthism.com].
Re: (Score:2)
"First and foremost, this website is not a hoax or joke"
That's the warning there in BIG RED LIGHTS
Re: (Score:2)
Re: (Score:2)
YOU ARE EDUCATED STUPID
Err... (Score:5, Funny)
Ohhh! Right right! This is the article. Slashdot is now a primary source!
Re: (Score:3, Informative)
Of the 31 papers not written by El Naschie in the most recent issue of Chaos, Solitons and Fractals, at least 11 are related to his theories and include 58 citations of his work in the journal.
And it's actually a theoretical-physics journal, with a relatively high impact factor of 3.025 for 2007.
Re: (Score:3, Interesting)
On a related note, in some fields there is a greater tendency to cite. I would consider an IF of 3 relatively low in biology for example, but it's decent in bioinformatics. The IF is for granting agencies who'd rather judge your work by the journal it's in, rather than actually reading the article or looking up its citation count in Google Scholar (if it's been around a while).
Incidentally, I've noticed that good open access journals in biology/bioinformatics are getting better IFs these days, so that mo
Re:Err... (Score:4, Informative)
Where's the article?
Second link in the summary:
http://golem.ph.utexas.edu/category/2008/11/the_case_of_m_s_el_naschie.html [utexas.edu]
Re:Err... (Score:4, Informative)
Second link in the summary
I really hate summaries that link bomb you and don't give you any clue which one is the main article.
Impact Factor (Score:2, Interesting)
--
JimFive
Re:Impact Factor (Score:4, Interesting)
Excluding references to the same journal is too harsh a criterion, since a lot of high quality papers get published in high quality journals. What should be perhaps excluded, though, is self-citation (whether to your own articles in the same or a different journal). Also, papers published in a journal by a journal editor shouldn't count.
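A minimal sketch of the modified count proposed above might look like the following. The data model (each citation as a pair of author sets) and the specific exclusion rules are just one way to realize the idea, invented for illustration:

```python
# Sketch of a "modified impact factor": drop self-citations (any overlap
# between citing and cited author lists) and drop cited papers authored by
# the journal's own editor. Data model and numbers are invented.

def modified_impact_factor(citations, n_citable_items, editor):
    """citations: list of (citing_authors, cited_authors) pairs, where
    each element is a set of author names. Counts only 'clean' citations
    and divides by the number of citable items, as the usual IF does."""
    clean = 0
    for citing, cited in citations:
        if citing & cited:
            continue  # self-citation: a shared author on both sides
        if editor in cited:
            continue  # paper by the journal's editor: doesn't count
        clean += 1
    return clean / n_citable_items
```

With real data one would also restrict to the usual two-year citation window, which the sketch omits for brevity.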
Re: (Score:3, Interesting)
A principal supervisor o
Re: (Score:2)
If you still want to keep self-references, we could calculate modified impact factor as
modified impact factor = num
How did he get the high impact factor? (Score:5, Informative)
How did El Naschie game the system?
According to Elsevier, his impact factor is 3.025 [elsevier.com], which does seem high compared to Elsevier titles like Advances in Applied Mathematics (founded by Gian-Carlo Rota, who was a respectable mathematician).
It's clear from the samples that El Naschie's articles are complete garbage, and I'm sure no respectable mathematician would want to publish in what's effectively a crackpot's vanity press. This is obviously the scientific journal version of Googlebombing.
So how did he pull this off? Is he citing himself, and if so, where?
Re:How did he get the high impact factor? (Score:4, Informative)
Pick any of his recent papers and chances are good that most of the citations are to his own past papers. So, yes, that's how he's pulling it off: he cites himself ten times or so in each of his papers, and because he writes half the papers in each issue, that inflates the impact factor.
that seems odd, if so (Score:3, Informative)
Citation indices like citeseer [psu.edu] distinguish self-citations from non-self-citations; if you pick some random paper that has both, you'll see a tally like "81 citations -- 7 self". Does Thomson Scientific not actually bother to do that in computing its impact factor?
Re: (Score:2)
So just add "nofollow" to self-references
Problem solved.
Re: (Score:2)
No, then it takes 2 people. Or, if you check that case too, 3 people. (...)
Re: (Score:2)
How did El Naschie game the system?
He got posted on slashdot!
In all seriousness, he's clearly churning out self-referential articles, which probably accounts for half of his references. The other half are by his "students". I remember that the Church of Scientology was doing this with websites, to mess with search engine ranks, which is very similar to Impact Factor.
On an aside... is El Naschie related to Dirty Sanchez?
a perennial problem in bibliometrics (Score:5, Interesting)
If you want to automatically determine what constitutes a good journal purely from data, the definition is something like: is frequently cited by other good journals. Obviously, there's a circularity there. Various techniques attempt to mitigate it, but none are perfect, and indeed most are rather simplistic and easy to game. It's basically hard to distinguish, purely from citation data, a vibrant community of legitimate research from a vibrant community of crackpots.
In real life, most academics get around the circularity problem by starting with a set of "known good" journals that are determined by consensus in the field rather than algorithms (though this may sometimes be controversial). That lets them take into account more subjective things such as status of a research community (crackpots or not?). For example, as the linked article points out, the Annals of Mathematics is generally accepted as a top-quality venue for mathematics.
If you wanted, you could then construct an Annals-centric view of mathematical impact automatically by seeing how frequently other journals are cited by papers in Annals. This is what happens informally as journals gain and lose reputation: a promising new venue often first comes to a community's attention because its articles begin to be cited in "known good" journals.
But just taking all journals with no starting point, and attempting to extract from the citation graph which ones are "good" purely from the links, is doomed to failure, because there just isn't enough information in there to make the distinctions people want to make.
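The "known good" seeding idea above can be sketched as trust propagation in the spirit of personalized PageRank/TrustRank: reputation restarts only at the consensus seed set and flows along citation edges, so a closed clique of mutually citing journals earns nothing. The graph, seed set, and journal names here are invented:

```python
# Sketch: propagate reputation from a trusted seed set. A cluster of
# mutually citing journals with no inbound citations from reputable
# journals never accumulates any reputation. Graph and names invented.

def seeded_reputation(cites, seeds, damping=0.85, iterations=50):
    """cites: journal -> list of journals its papers cite.
    Being cited by a reputable journal confers reputation; the random
    restart goes only to the seed set, not to all journals."""
    nodes = set(cites) | {t for ts in cites.values() for t in ts}
    rep = {j: (1.0 / len(seeds) if j in seeds else 0.0) for j in nodes}
    for _ in range(iterations):
        new = {j: ((1.0 - damping) / len(seeds) if j in seeds else 0.0)
               for j in nodes}
        for src, targets in cites.items():
            if targets:
                share = damping * rep[src] / len(targets)
                for t in targets:
                    new[t] += share
        rep = new
    return rep
```

This is exactly why the choice of seeds matters: the algorithm only distinguishes "connected to the consensus" from "not connected," which is the subjective judgment the commenter says can't be extracted from the citation graph alone.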
Re: (Score:2)
The one thing that separates crackpots from "Real Scientists" is who gets grant money. Look at String Theory or Post-modernism. Prestigious journals in both endeavors were both hoaxed: Post-modernism by the infamous Sokal affair (http://en.wikipedia.org/wiki/Sokal_affair) and String Theory by the Bogdanov Affair (http://en.wikipedia.org/wiki/Bogdanov_Affair). There are also a lot of dubious things going on in the softer sciences that are heavily politicized. Meanwhile a lot of good fundamental physics
Re: (Score:3, Interesting)
So I guess the string theorists are all crackpots? I might even agree with you :).
Re: (Score:2)
Re: (Score:2)
So maybe we need a Bayesian Impact Factor (BIF)? Start with some distribution for journal reputation (say, the results of a survey of university faculty and other researchers working in the area) as the prior, and then calculate a posterior based on observed citation data.
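One toy realization of that BIF: treat a journal's quality as the unknown probability that a citation to it comes from an independent (non-self, non-clique) source, encode the survey result as a Beta prior, and update with observed citation data. The prior encoding and all the numbers are invented for illustration:

```python
# Sketch of a "Bayesian Impact Factor": a Beta prior from a reputation
# survey, updated with a binomial observation of how many citations are
# independent. All parameters and numbers are invented.

def bayesian_impact(prior_mean, prior_strength, independent_cites, total_cites):
    """Beta(alpha, beta) prior with the given mean and pseudo-count
    strength, updated by conjugacy; returns the posterior mean."""
    alpha = prior_mean * prior_strength          # prior "successes"
    beta = (1.0 - prior_mean) * prior_strength   # prior "failures"
    alpha += independent_cites
    beta += total_cites - independent_cites
    return alpha / (alpha + beta)
```

A journal with a glowing survey reputation but whose citations turn out to be overwhelmingly self-generated would see its posterior score collapse as data accumulates, which is the behavior the parent is asking for.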
Re:a perennial problem in bibliometrics (Score:5, Insightful)
This turns out to be a problem space with some really interesting conclusions. I spent some time over the last few years working with researchers from MIT, UCSD and NBER to come up with ways to analyze this sort of problem. They were focused specifically on medical publications and researchers in the roster of the Association of American Medical Colleges. They identified a set of well-known "superstar" researchers, and traced the network of their colleagues, as well as the second-degree social network of their colleagues' colleagues among other "superstars".
I built a bunch of software to help them analyze this data, which we released as GPL'd open source projects (Publication Harvester [stellman-greene.com] and SC/Gen and SocialNetworking [stellman-greene.com]). I've gotten e-mail from a few other bibliometric researchers who have also used it. Basically, the software automatically downloads publication citations from PubMed for a set of "superstar" researchers, looks for their colleagues, and then downloads their colleagues' publication citations, generating reports that can be fed into a statistical package.
They ended up coming up with some interesting results. Here's a Google Scholar search [google.com] that shows some of the papers that came out of the study. They did end up weighting their results using journal impact factors, but the actual network of colleague publications served as an important way to remove the extraneous data.
Re: (Score:3, Insightful)
I hate to say this because I realize how naive it is, but who cares about the quality of journals? Perhaps It's because I'm interested in a more applied field, but I judge papers by their results, generality, accuracy, clarity, and sometimes author - not what journal happened to publish them.
IMO most journals have been killing themselves off in the recent past. While running themselves as businesses may have worked when they served a useful purpose, all they do nowadays is impede openness and transparency.
mixture of people (Score:3, Insightful)
The people who most directly care about especially quick-to-skim summaries of quality (like impact factor) are people judging the output of professors. If you're not familiar with a sub-field, how do you separate the professor who's published 20 lame papers in questionable venues from the professor who's published 20 high-quality papers in the top journals of his field? You look at some sort of rating for the venues he's published in.
For reading papers, I agree it's not quite as relevant. I still do do a fi
yeah (Score:2)
That always struck me as somewhat funny about the term "impact factor". In normal speech, having an impact means having an impact on something. These factors seem to be trying to avoid the question of what you're measuring an impact on by choosing something really broad, like "impact on the advance of science". But it shouldn't be a surprise that that's more or less unmeasurable.
Impact Factors are a Joke (Score:2, Informative)
The problem with impact factors is that they don't measure the quality of the papers, they just measure the number of times they're referenced. The thought is that the number of times a paper is referenced is proportional to its quality. Sort of like the concept behind Google Page Rank - more inward-pointing links means that the site is "better". ... Except that relying solely on incoming links doesn't work too well if people start to game the system. Google, which made its name with the power of Page Rank, h
Is it really a high impact factor journal? (Score:4, Insightful)
The summary claims Chaos, Solitons and Fractals, has a high impact factor. The blog linked to, however, does not assert this, and I see no source for it. He does also co-edit the International Journal of Nonlinear Sciences and Numerical Simulation, which the blog asserts "flaunt its high 'impact factor'." The link to the IJNSNS praising him is broken, so I can't confirm that.
It looks to me like some crackpot got a journal. However, it doesn't seem particularly devastating. Nobody has based work on his articles purely on the basis of the "Impact Factor." I don't think anyone else is taking him seriously. At worst, libraries have paid to subscribe.
Re: (Score:2)
Re: (Score:2)
> At worst, libraries have paid to subscribe.
You got to the heart of the matter. Ultimately this is the primary complaint that Baez is directing at Elsevier. There's also the issue that it makes a bit of a mockery of the publication process and suggests some things need improving. But Baez is a long-time campaigner against high journal prices and I think that was one of the reasons he felt so strongly about this issue. Elsevier distribute this journal as part of a larger package that libraries pay for an
Not exactly newest news... (Score:2, Interesting)
Slashdot is a bit late in reporting this news... I tried to submit it earlier [slashdot.org] when it was fresher.
The problem at heart is that one of the biggest and evillest academic publishers, Elsevier, has been supporting a crackpot.
This shows that Elsevier isn't doing enough to promote the quality of research, and worse, libraries are paying huge fees with tax money for worthless journals. The problem here is bundling; university libraries have to buy journals in bundles, one of which may contain crackpo
EL Naschie Affair (Score:4, Interesting)
Re:EL Naschie Affair (Score:5, Informative)
The Bogdanov affair is a little different. I did PhD research in theoretical physics but I was a bit unsure about the work of the Bogdanovs. There were bits of it that I could nitpick at and say it was definitely mistaken, but overall it was a little tricky to judge the bigger ideas without being a specialist in their particular subfield. The Bogdanovs had some smart people fooled. It's a very good hoax.
El Naschie's writing looks like nonsense even to non-specialists (though I guess you still need a degree in mathematics or physics). There's no way it could fool even beginners in the areas his work covers. That makes it all the more astonishing that he survived with Elsevier for so long. Apathy I guess.
*shakes head sadly* (Score:5, Funny)
Alas, something I discovered to my sorrow over the years is that sufficiently specialized math is indistinguishable from gobbledygook (and vice versa).
In other news... (Score:2)
...there's a such thing as an "impact factor."
Job security through obscurity in mathematics (Score:5, Interesting)
People used to say about a mathematician or physicist that "what he is doing is so important that only a few people in the world can understand what he is talking about."
In a few cases it was actually true.
Also, there were mathematicians who believed that the highest form of mathematics was work that had no practical application. There was a story that the inventor of matrix theory expressed pride that he had invented a form of mathematics with absolutely no practical use. Little did he know how extensively his work could be used. He would have been appalled.
There still seems to be a feeling that the less people are able to understand a paper in a math journal, the more important the paper is likely to be.
At one time I was a subscriber to the Annals of Mathematical Statistics. Papers in math journals usually assume that you know every paper previously written by the author and the others in the field. There is often very little introductory material and no tutorial material in these papers. Even if you have a general understanding of the topic, you can't follow the papers because they are written very concisely, and assume that nothing needs to be explained if it was ever published anywhere else. You may have to backtrack for years of someone's papers and still not be able to understand the paper you are trying to read.
This is probably a combined consequence of "publish or perish" in academia and page limits in journals. It is often hard to tell if a given paper makes any sense or is useful.
I guess you could call it job security through obscurity.
Re: (Score:2)
Re: (Score:2)
I feel I have to defend the practice of skipping introductory material in mathematical papers. If every technical paper had a substantial tutorial section, then it would be twice as big or more. And what would be the
How insulting! (Score:2, Funny)
Buzzword Ponzi Scheme (Score:2, Insightful)
When the first questionable but exciting buzzwords come to life, just explain away the doubters with more buzzwords that sound even better!!
Deleting from the library would help (Score:2)
Crackpot Crackkettle Crackblack (Score:3, Funny)
It does not help the cause of the sole source of criticism, a math blog from U. Texas, to have ostensibly technical criticism asserting incorrectness but admitting ignorance (see second link in summary). The author takes issue with some points in an article with which he has some experience. However, he points out several things that he has no knowledge of and admits as much. He then asserts, from this admitted position of ignorance, that the material with which he is not familiar is somehow fraudulent. To make that claim valid the author would have to be able to determine it with certainty, but he can't.
This technical criticism is produced in support of a posting elsewhere in the same blog, whose author makes the same sort of assertions and likewise fails to support most of them. In fact he can produce partial support for only one, and then claims support from others that is never produced. Some of this supposedly comes from his own administration, which he admits does not support his pursuing the matter.
I take no position with regard to the central issue. I've seen a couple of journals with very incestuous editorial policies and staffs. It makes it hard for others to get published. However, the situation evolved into this because those people did a lot of work with each other, not because any of it was fraudulent, so this can happen in the absence of any wrongdoing.
Claims of wrongdoing are extremely serious, as are occurrences of it. Such claims should be supportable. The claims made in TFA that are supportable are not evidence of wrongdoing, and the claims of wrongdoing are unsupported and, by their authors' own admission, unsupportable. As far as I can tell this is a single blog's flamefest with more crackpot value than what they claim is due their target.
In short, the accusers appear to be embedded in at least as much pot and crack as they accuse others of, failing utterly to differentiate themselves from the kettle. They may have a valid point, but they fail to show it, instead making themselves look all the worse through the use of reciprocal psychoceramics.
Re: (Score:2)
The above post, modded "interesting", should be modded "funny". For those who didn't get the joke, it appears to be an ironic attempt to apply the "El Naschie" technique in a Slashdot comment. He's basically throwing around a lot of unfounded complicated nonsense signifying nothing in an attempt to get modded up, much as "El Naschie" does in his papers, when really it's just, IMHO, stream-of-consciousness drooling-on-the-keyboard crackpot spew.
Re: (Score:2)
You may be interested to know that the blog's author [ucr.edu] invented the Crackpot Index [ucr.edu].
Not a whole lot to see here... (Score:2)
Basically what's being said here is that the academic publication system is vulnerable to the sorts of SEO attacks that briefly caused search engines to be befuddled by sites full of interlinked pages full of nonsense text and viagra ads. The academic publication system just moves a little slower, so it's going to take them a little longer to update things.
Re: (Score:2)
What do you mean "briefly"? Wikipedia is still the top hit for most Google searches!
I wouldn't really say this affects academic publishing as a whole, though; these "impact scores" are pretty much an academic exercise, and nobody really pays attention to them (unless they happen to coincide with preconceived opinions).
Journal SEO (Re:Not a whole lot to see here...) (Score:3, Insightful)
If you're a new researcher in an obscure field, one of the best ways to advance is to assemble a group of researchers interested in similar topics, hold a conference so everyone can get to know each other, publish the conference proceedings, and then you all publish
Research Paper generator (Score:2)
Looks like this El Naschie (is he a Mexican wrestler?) is using the Mathematics equivalent of SCIgen - An Automatic CS Paper Generator [mit.edu].
Re: (Score:3, Insightful)
Hell, expect Blagojevich to issue a statement tomorrow disavowing any association with this guy or his publications. It smells that bad.
One company, 2000 journals ... (Score:2)
Well-known scientific publishers tend to be well-known because of a few stand-out "gems" in their inventory that get all the attention. Most people never see the rest of the iceberg.
This is a commercial publisher who puts out 2000 different journals. That's an awful lot of iceberg.