Science

New Ideas for Scientific Publishing Online

Albert Hybl is an Associate Professor of Biophysics at the University of Maryland who believes traditional scientific publications are often controlled by editorial cliques that don't necessarily select the best or most original articles available. He says that Open Source, Internet-based scientific journals are the wave of the future. Jon Katz touched on this subject last month. Today Professor Hybl tells us exactly how "Slashdot-style" scientific e-publishing could gradually replace the old way, even though, he says, "this will put a lot of Journal publishers out of business, and they're going to do a lot of kicking and screaming before they go."

Open Source Scientific Publication

Just as Gutenberg is credited with creating the movable-type printing press, Harold Varmus, director of the National Institutes of Health, may be regarded as the creator of the E-Library.

The printing press provided a means to distribute multiple copies of political pamphlets, advertising posters, legal documents, novels -- and Journals containing scientific discoveries.

Scientific Societies use these Journals to disseminate research findings. The expense of Society membership and the additional fees for subscriptions to multiple Journals have become prohibitive for both individual investigators and institutional libraries. It is simply not feasible to expect a scientist, researcher, or member of the public to subscribe to all of them, or even to a few specialized Journals.

PubMed provides a powerful search engine for locating biomedical articles. PubMed searches are superior to browsing the specialty Journals. However, once you have used PubMed to find an article of interest, you must still locate a library that houses the Journal in which it appears and photocopy the sought-after article there.

Harold Varmus proposes that an open access e-repository be established to maintain permanent on-line and downloadable archives of scientific literature. The most obvious advantage of this to the researcher is immediate access to any published report via a hyperlink from the PubMed database. E-reports can also contain more information than print Journals, including larger data sets in various formats, pictures with greater detail, or even movies. Many of the costs associated with the publication of a Journal are avoided. Cited literature [footnotes] can also be hyperlinks, which simplifies in-depth background analysis for serious researchers.

Harold Varmus's proposal describes two methods for submission of a new report that could operate side-by-side. The first is to use the established editorial boards, and the second would be through a publicly available preprint repository.

European backers of Varmus's Proposal tend to favor the first, "closed" method of submission. Their claim is that by sticking to the traditional method there is less chance that the database would be flooded by poor-quality reports. An unstated reason for their desire to maintain editorial control might be that delayed publication gives the group that reviews the data extra time to analyze it and extract ideas for future research before it is made available to the world.

With the second submission method, each submitted report would only need to be given a cursory review to eliminate voodoo science (SPAM for health care scams or unhealthy foods, etc.) before it was placed, unedited and unreviewed, into the preprint repository, where any interested party could read it. Each "preprint" report could be given a version number like most Open Source Software projects use. Perhaps the "development" version could show editorial strike-outs and new text in different colors from the original. The next higher, "stable" version would be the reviewed, edited, author-corrected copy. Still higher versions might contain supplementary information. Even after they are published, the lower versions should be archived and accessible for historical use.

The Harold Varmus Proposal would require an article to obtain two favorable reviews, perhaps from members of established editorial boards, before it was transferred from the preprint repository to the general repository. Varmus also touches on the possibility of more open reviewing "in which critiques of the scientific reports are accessible and signed." (Today, most scientific papers are reviewed anonymously.) I suggest that, in addition to solicited reviews, signed, unsolicited reviews should also be considered.
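
As a rough illustration only (none of this is spelled out in Varmus's proposal), the version labels and the two-favorable-review promotion rule described above could be modeled along the following lines; the field names and the threshold logic are assumptions made for the sake of the sketch.

```python
# Illustrative sketch: one way to model the preprint -> general repository
# workflow described above. Field names and the two-review threshold are
# assumptions for illustration, not part of the proposal's text.
from dataclasses import dataclass, field

@dataclass
class Report:
    title: str
    version: str = "0.1-development"              # like an open source "development" branch
    reviews: list = field(default_factory=list)   # (reviewer, favorable) pairs
    repository: str = "preprint"

    def add_review(self, reviewer: str, favorable: bool) -> None:
        self.reviews.append((reviewer, favorable))
        self._promote_if_ready()

    def _promote_if_ready(self) -> None:
        favorable = sum(1 for _, ok in self.reviews if ok)
        if favorable >= 2 and self.repository == "preprint":
            self.repository = "general"
            self.version = "1.0-stable"           # reviewed, edited, author-corrected copy

paper = Report("Open Source Scientific Publication")
paper.add_review("editorial_board_member", favorable=True)
paper.add_review("unsolicited_signed_reviewer", favorable=True)
print(paper.repository, paper.version)            # general 1.0-stable
```

The earlier "development" versions would stay archived alongside the stable one, as suggested above; this sketch only tracks the state transition itself.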

Summary

The electronic submission, publication, editing, indexing, archiving, retrieval and utilization of scientific reports, abstracts, and data is taking a significant turn, and Varmus's Proposal may help make that a turn for the better.

I, like Varmus, believe that since most scientific research is funded by the public, the public should have free access to it from open E-libraries via the Internet, and that the only person who should be allowed to claim "ownership" of a scholarly article is the person who wrote it.

No Scientific Society or Journal Publisher should be allowed to hold a copyright on scientific knowledge. The researcher is the only one who has the right to claim, "I discovered it and I reported it."


Open Source / Open Science

  • by Anonymous Coward
    First, I would like to say that I love the idea, but...

    ...for us non-tenured folk, it would be risky to squander our resources on an internet journal. It takes quite a few articles in respected peer review journals to obtain tenure, and until an online journal gains an adequate level of respect (e.g. like an APA or Psychonomics journal, for my field at least), I'm not about to risk wasting any of my precious articles.

    Of course, once I get tenure, I'll be happy to participate. But until then, sorry folks!
  • > Wide distribution of the paper materials make ex-post facto modifications (i.e. rewriting history) much more difficult and almost impossible to hide.

    Cryptographic hashes (e.g. MD5) and digital signatures make rewriting quite difficult. Add in a protocol that goes through a trusted server which adds a timestamp and such, and you can get as secure as you want/need.
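
    A minimal sketch of that idea, assuming a SHA-256 fingerprint plus a recorded UTC timestamp (in a real archive an independent, trusted service would sign the hash-and-time pair; that part is only simulated here):

```python
# Sketch: fingerprint a submitted paper and record when it was seen.
# SHA-256 is used instead of MD5 as a safer choice for detecting
# tampering; the "trusted timestamp server" is reduced to a local clock.
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """Return a hex digest of the file contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def timestamp_record(path: str) -> dict:
    """Bundle the digest with a UTC timestamp; any mirror can later
    recompute the digest to prove the text has not been rewritten."""
    return {
        "file": path,
        "sha256": fingerprint(path),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

if __name__ == "__main__":
    print(json.dumps(timestamp_record("preprint_v1.ps"), indent=2))
```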

  • People who don't care for the Slashdot system have less of a tendency to read Slashdot or to become moderators or to use their moderation points...
  • Of course, there is no real need to have individual journals, if the database can be searched flexibly.

    For the pull model of reading papers, you are correct. However, journals also provide a useful push model: you get shown things you wouldn't otherwise have looked up. This helps people follow developments outside of a subspecialty.

  • This is an idea whose time should have come a few years ago and indeed which has been anticipated by the journal publishers since they became aware of the Internet. I worked for one of the largest scientific publishers early in my career. From the sidelines in IT support, my impression was that the business "strategists" (I use the word loosely) were deeply worried that the Internet would allow the scientific community to bypass them and were desperately scrambling to find a way of harnessing the net that would still keep them in the loop. This took the form of multiple electronic publishing projects with few overarching goals or standards, offering little added value to customers and heading nowhere.

    The community has the tools to take on the role of the publishers themselves - they're the same ones the publishers are using (LaTeX, SGML, Perl, Apache...). What they lack so far is the organisational infrastructure to recreate the peer review and editorial process (I assume that these are still required). It is interesting to finally hear rumbles of discontent, motivation and progress.

    Should they succeed in the DIY approach, it sounds as if the publishers will have no one to blame but themselves - a situation analogous to the record companies vs. the online world. Scientific publishing has been a licence to print money for years: Robert Maxwell funded the expansion of his empire through revenue from subscriptions to Pergamon Press journals (one of the few industries where customers pay a year in advance of product delivery, which occurs in installments!).

    Ade_
    /

  • White people are savages.

    They eat with swords.
  • It was lead, not iron. Gutenberg was a jeweler by trade and was familiar with the many fun uses of soft metals. By using lead the printer could maintain a large, cheap set of type and easily refresh or modify his selection through the use of a few wooden forms in which new type was cast.

    jim
  • As much as I respect Eric's work in promoting nanotech, these ideas all came from Ted Nelson's work, specifically the Xanadu project. Read Literary Machines for an example of a good flight of fancy on how such a system should look, but completely ignore it when it comes to actually implementing code. As the guy who fronts the box and bandwidth for crit.org, I also need to point out that such a system is inherently broken because it does not place the burden of paying for the annotation upon the person doing the markup.

    jim
  • You neglect to mention the chief benefit of the journal system, namely that articles undergo some kind of review so that the journal maintains its reputation for quality. While I'm not claiming that such a thing is impossible for a startup e-journal, it would face all the usual challenges any startup faces in establishing itself. Most of us wouldn't understand articles outside our field, and it would be difficult to differentiate the genuine science from the crackpots. This would pose difficulties for your system of public review. I certainly think that a public database of scientific articles would be a great boon (we could actually use the Internet for the purpose it was designed for, porn^H^H^H^H^H^H, scientific research). But we shouldn't discard the benefits of peer review and the accountability of an editorial body.
    --sam
  • > I agree that peer-review is vital to filter
    > out crackpots and commercial propaganda. But
    > rather than the traditional editorial board
    > approach, why not slashdot style moderation?

    What about INcredible papers? Quite a lot of things were generally rejected at first (e.g. discovery of the first platypus, RADAR, motor vehicles) because they were so new to the scientific community at large.

    That's where SlashDot-ish review would really help, because as well as the "accepted by the establishment" peer review winners, you could make available the "really controversial and hated by peers but otherwise generally liked" and similar categories.
  • Now there's an education in a nutshell!

    Now, who's dumb enough to say something like "but that doesn't happen in my discipline"? G'wan, I dare you! (-:
  • Who cares if there's too much information provided? It would be silly to read all of it - or to throw out the baby with the bathwater.

    In the "real" world (-: is the "real" computer world real? your call :-) you have newsgroups, email lists, chat channels and such like, representing your raw material. It would not be difficult to maintain a list of "recognised" people and ratings for them, like the "posting history" on SlashDot and DejaNews.

    The next layer down is moderated groups, lists and channels. Still pretty much a free-for-all, but some sense prevails. SlashDot and FreshMeat are examples of this level. Real-world science conferences fall somewhere near this level. Consider how much a scientist spends participating in a conference. Why do they do it?

    Finally, you have peer-review sites and news/list/channel digests where experts consult all of the above plus their own knowledge and experience to produce more-or-less formal peer-reviewed output. This is, more or less, the computer equivalent of a journal.

    What open source science would offer is essentially free, instant and potentially filtered access at any level of this brouhaha, in parallel with the existing buddy networks in email and snail-mail. Remember that email and usenet largely exist because of ArpaNet and similar educational networks.

    Nobody is forced to read a word of the public-access low-quality crud. But it is always available if you're desperate! There is no paper equivalent of this at all, and when you're desperate or in a hurry, it could be a Godsend.

    If you're not desperate, read the moderated stuff, but do it instantly, not for $hundreds/$thousands per year, and not with the author paying $100-$700 per page for the privilege of seeing it printed.

    For example, Joe Random Poster's magnetic antigravity machine may not have worked, but in the process he may have solved a magnetics problem in a useful way. Sometimes a complete neophyte in your area of expertise is exactly the person you need to ask about a problem, because you can't see the forest for the trees. Finally, as is the case with the 3M PostIt Note inventor, sometimes people invent things because they don't know that what they're trying is impossible - and how many such inventions would be seeded from a public knowledge repository?
  • I almost posted my own comment, but "Stupendous Man" expressed well most of what I was going to say. So, I'll just add my two cents here. FYI, Stupendous got off easy on the publication charges, my last paper was ca. $5,000. The one before that was $4,000.

    The problem of cliques in scientific publishing is severe. Oceanography in academia, for example, lags other sciences (to the point of being laughable) because little clubs of tired, nasty, old men control at least one major American journal. Anything new is beaten down if it doesn't fit in the dogma of the 1960s.

    But, you must balance control by cliques against crack-pot (and dishonest) science, of which there is plenty (www.quackwatch.com catalogs fad, quack medicine, for example).

    As one who has published a variety of scientific papers, I'm forever in the debt of the nice, astute reviewers who catch logical flaws, misstatements, and other screw ups before the world sees the paper.

    So, what paradigm protects against quackery and fraud, while breaking the cliques, and catches embarrassing goofs? I like the idea of a "dynamic" paper that the authors change (or not) as comments come in (the /. paradigm). But when do you cut it off, and what is the final version for citations? And in ten years, how do you retrieve it?
  • My wife is a biochemist. I don't know whether the field is rife with these kinds of problems, but I have heard of examples of all 3.

    Perhaps the worst is the "sit on research" tactic mentioned in #3. Reviewers simply should not be able to stall their peers while they complete their own research in the same field.
  • ..a 2-year delay, I mean. If you have something urgent to share with your colleagues and the public, there are plenty of other means available. But a record that will sit on a library shelf for generations to come should be stable and not rushed out. A lot of the stuff you pull from xxx.lanl.gov changes before it is finally (if at all) published. So you treat it accordingly.
  • ..just press the big yellow button.
  • by Axe ( 11122 )
    ..interesting. Not spam.
  • Just check the bibliography in your latest publication. Most articles will be several years old. Important experiments are not done daily.
    Quick communications are very important, of course. That's what the web was developed for [stanford.edu]. But IMO pre-print quality is not acceptable for a major trade publication. Different styles for different purposes.
  • Another problem with these "credibility scores" is that they are largely independent of field. When you submit an article to a scientific journal, you suggest some possible reviewers, people you feel are qualified to comment on your work, and the article is sent to one or more people (possibly one of the ones you suggested) in your field.

    So while I, for example, am probably qualified to discuss new research in semi-empirical quantum chemistry of conjugated polymers, my comments on a paper on organic synthesis of biomolecules are likely to be useless, if not plain wrong.

  • My dad teaches at Caltech, and published this paper [caltech.edu] on his website.
  • We're really trying to hide the accomplishments of Chinese culture from the rest of the world.

    Pasta, rocketry, steel.. You'd think they would have also managed to invent a FORK!!! :)
  • This is a problem if people use proprietary data formats that can only be read by proprietary software that runs on proprietary OS's that are no longer supported on modern hardware. If anyone is dumb enough to do that, I probably don't want to read their stuff anyway. Even dumber people may archive on physical media that can't be read by newer hardware. I guess their only hope is to put their old tapes and disks into cryogenic storage in case future generations can somehow revive them.

    But I can't imagine that future computers will be incapable of running Linux or emulating it. Given that, they can access anything you can read on your Linux box today.

    As to the cost, my home hard drive can hold the equivalent of $1 million worth of printed academic journals (and of course my next one will hold $10 million worth). If these materials were freely distributable in digital form, they could be archived in thousands of places around the world at a fraction of the cost of keeping them on paper at the few elite universities that can still afford a comprehensive collection.

    But academics are an amazingly conservative bunch. As long as taxpayers keep giving them a blank cheque, they will keep squandering resources on obsolete information technology.
  • Funny how Gutenberg always gets the credit for inventing movable type when Bi Sheng in China invented it in 1045, a good four hundred years before ol' Gutenberg.

    Maybe for our next trick we should give Charles Lindbergh credit for discovering Europe? :-)
  • This is slightly off the main topic (and I'd just like to say that I think when we can find a way of certifying scientific information has been adequately peer reviewed, electronic versions will be the new medium of choice ... until then, we are stuck with the established media, and must deal with that.) ... and with that in mind:

    There are newsgroups out there where you can post such technical questions (try bionet.molbio.methds.reagnts). However, given that at some point you will have to reference your methods and reagents, you can really only take informal suggestions as just that - suggestions. (Conferences are great places to meet experts in your field who can give you some feedback, btw - and you can often email them later on when you have other concerns.)

    People can give you invaluable hints, but IMO you learn the most when you try to work through things yourself. Yeah, it sucks when things don't work perfectly the first time round, but you also end up learning far more as you try to figure out what happened (or did not happen, whatever the case might be).

    There are some journals and abstracts online that you can do a quick search for your particular research methods. Yes, you will have to ultimately read a few articles and make some choices as to which methods are right for you, but that is part of the whole scientific thing! Part of research is learning to review literature and decide what is appropriate to use in your own investigation. Part of what you are learning is independence, as well as scientific methodology. (This is something I have seen undergrads and neophyte grad students have trouble grasping ... but eventually if they stick around, they figure it out!)

    I am curious about what you are doing in confocal microscopy, sounds like it might be an interesting project. Good luck in your research,

    YS (PhD, Microbiology, The University of Calgary)
  • Admittedly small-time as yet, too small-time for Rob to even put it on his "sites that use Slash" page, but here it is: GerOL: Gerontology Online [slashdot.org]. It's a journal about life extension, both theoretical and practical. With most of the jargon linked to Slashdot's Everything site. I know there aren't many bio-geeks on this list, but check it out and see if you can make any sense of it, or suggest improvements. Oh, and it runs Slash, the best bunch of web-zine scripts around!
  • so where's the link? argh..

    well, here's a starting link I found real quick...

    http://www.ornl.gov/TechResources/Human_Genome/home.html [ornl.gov]


  • This is definitely the way that science _should_ go for two interlocked reasons:
    1. the dissemination of information is retarded by the current system;
    2. a central mechanism of science is unduly influenced by commercial considerations.

    It's probable that the historical development of our publishing mechanisms (private societies hiring publishers is essentially an outgrowth of the vanity press, and the inherently high material cost was borne by gentlemen-scientists and their associations), coupled with the need for stability in the social evaluation of scientific merit, has led to many non-rational, non-ideal features which need re-designing. Obviously the reduction of the material costs of publishing _should_ lead to lower journal prices and hence easier access to information. Do we really want to retard the development of science in universities/countries that are too poor to pay for information? The idea of originality and intelligence being wasted so that a company can make a buck is revolting.

    It would be quite a problem to achieve this though. The most likely way for it to succeed would be with backing from established influential leaders in the field who were motivated by ideological considerations. Hope it happens!
  • The issue of rights and ownership of the document is an important one. Although you did mention it, it was hardly given the amount of space it deserves in a piece like this. Rather than being some small issue to tack onto the bottom of your proposal, it should in fact be addressed first, before getting into the logistics of the matter.

    It is fine and noble to say that these ideas should belong solely to the author, but it has not been the case thus far in the print publishing world.

    The fact of the matter is that these journals will not go away, even with the creation of a public forum online. So, while they still exist, they will do whatever they can to keep their rights over this information. One idea might be for them to 'open' the electronic version of the paper, but retain print rights.

    So it sounds like a good idea once you have the issue of rights settled, but I think you should concentrate more on convincing the community that they will want their information to be 'open' before spending too much time planning the logistics of it.

    -Lisa

  • So, here we have the problem:

    Long term storage.
    Cause: Electronic format is too short-lived.
    Possible solutions:
    a)
    Print everything to paper a fixed period of time later. (That is, when the dust has settled and most that is worthwhile about it has been said and done.)

    or
    b)
    Mass-mirroring of the information, as is done for kernels and the like. Have several "libraries" maintain the information at once.

    or
    c) both

    or
    d) Carve the texts in the earth's bedrock, hence trusting it to the next race that will destroy the earth ;-)
    ..
    uhm, I mean, find some other alternative :)

    Either way, the current system is not good enough, and I suspect everyone knows it. I personally vote for c. Whatever the solution, it will be hard to do, require hard work, and need funding in some way.
  • I agree that open review should happen; in fact, I think it's inevitable. The Internet IS bringing more power to the individual. No longer should we expect to see so many top-down pre-established hierarchies of power; we should get used to the bottom-up emergent organization fostered by the massive connectivity of the Net.

    Examples of publishing include the already mentioned physics archive and JAIR [jair.org] which is published online. It is still reviewed in a traditional manner but has plans for an open review process.

    Also see the Interactive Paper Project [uiuc.edu] for some technology that already allows open review (I think it's a better approach than slashdot for papers). Another option would be that company that allows users to "post" messages to websites.

    My point is that the only real barrier is the established publish-or-perish publishing-house culture and, like any culture, it is just a matter of time before it evolves to match the available technology.

    --Books used to be only for monks, then came the printing press--

    Of course, I have no idea how long it will take. Soon, I hope.

    Jose
  • A lot of people have been talking about the problem of too much volume in an open publishing system. The sad truth is that this is already a problem with print.

    The academic publishing industry has ballooned under the weight of faculty and students "needing" to publish a certain volume for appointment/tenure/merit pay/etc. The pressures have created a treadmill of more and more journals to fill the ever-increasing number of pages necessary for everyone to get out n papers per year. Being unable to keep up with everything published in one's own field is a common complaint.

    As it is, "moderation" is done individually by selecting the best-known journals and papers mentioned by colleagues.

    Maybe the idea of an open, Slashdot-style moderation acting in the place of peer-review would actually help this. Obviously, there would be no prestige in having a paper ranked 0 or 1 (on the /. scale). To get a high ranking a paper would simply have to be good. Perhaps reality would set in and the academic community would realize that most grad students (myself included) simply don't have several good papers in them.

    Or maybe not, what do I know?

    Greg

  • One of the fundamental economic assumptions is that consumers have perfect information. The sad fact of history is that we don't have it, mostly because perfect information is costly to obtain.

    With the internet, the costs have dropped drastically, almost to nothing. So now information is more widely available and easily disseminated.

    We've seen this with music, movies, tv and we will continue to see it with all content. Information will be free. Like water choosing the path of least resistance, information finds a way out.
  • Wasn't the oldest fork yet discovered by archeologists found not too long ago at a dig in northern China? No joke... they did it all first baby...
  • In astronomy, there is a very popular preprint server at Los Alamos [lanl.gov] where many people submit electronic copies of papers they expect to get published.

    However, I usually don't peruse this server. I instead read journal tables of contents (often sorted by topic) or (even better!) newsletters focusing on my particular subfield (star formation). This is for the same reason that I read Slashdot: I don't have the time/energy to peruse everything. Instead, I put my trust in a "portal" of some nature to pre-filter my reading material for me.

    I do think it is very valuable to have large archives of electronic materials for reference research (like the NASA/ADS abstract and article [harvard.edu] service), which is much more efficient than a trip to the library. But I believe the parallel between open source software and scientific reporting breaks down at the bleeding edge. "Beta" versions of scientific papers are either not worth much (for lack of credibility) or even dangerous (as "bugs" may propagate in the literature).
  • As said by many others, the key is trust. I trust Cmdr Taco, Hemos, etc to filter my tech news for me because I've enjoyed the articles they've chosen.

    But for the scientific articles I choose to read (which affect my career, unlike Slashdot which is entertainment for me), I need a deeper level of trust. A 100 year old journal with an editor who I have met is more worthy of trust (IMO) than a few-years-old abstract/preprint server run by people I know nothing about.

    I do hope that an open source type scheme is attempted, but it may take time to build the level of faith that we place in the established journals.
  • Preprints KICK ASS!!

    Of course, the web was invented by physicists for JUST THIS PURPOSE. Back in the day at CERN [www.cern.ch], they wanted a way to share documentation across heterogeneous documentation systems. Tim Berners-Lee (I think that's his name) came up with a subset of SGML known as HTML and the HTTP, ran a server and client on a NeXT box, and voila! In the beginning it was just stuff like schedules, proposals, budgets and the like, but the pre-prints are only a small leap in imagination.

    A wealthy eccentric who marches to the beat of a different drum. But you may call me "Noodle Noggin."

  • I think the relevant issue is not whether AOL'ers can see your paper. The issue is whether or not they have power to review your work. In a completely open system, there is a huge amount of potential for abuse.

    Imagine if mathematical journals allowed anybody who wanted to read the article to review it. The result could be disastrous! Say the paper deals with pure math. The paper will state its assumptions and methods in the proof as fact. It may appear to a "layman" who has good mathematical knowledge that the paper is legitimate and proves the conclusion accurately, so he might up the rating on that article. This could similarly happen with lots of interested reviewers who aren't quite up to snuff. Thus the article could end up with a really credible score. Then a person who works in that field could look at the paper and discover that one of the theorems is used incorrectly or doesn't actually prove the result. He reviews it down, but since 100 less qualified reviewers didn't catch the mistake, the article still comes across as credible.

    This in itself might not be awful, but what happens if researchers start reading the paper because of its credibility? Being pressed for time, they don't actually verify each step of the proof; they just use the result in a proof of their own. The result is that after about 4 generations of this, a large portion of the work that is being published could be incorrect! The effect could be much more pronounced in an experimental discipline where very few readers actually have the resources to repeat the experiments.

    The bottom line is this. As a scientist, I really don't want to spend my time reading articles that aren't valid. For this reason I don't see big problems with the current review process. I do think there might be room for pre-print repositories that would be unmoderated, but that official stamp of approval makes all the difference for people in the discipline.

    A more addressable problem is the issue of journal cost. I know that there are currently exploratory efforts by universities to "unionize" and demand things from journal publishers. One especially popular suggestion is to continue publishing as it currently is, but after 3 months, all works would go into the public domain and would be freely available on the web. This would allow journals to survive, but would also grant free access to the works.

    I agree that there is a serious problem with the way research is published today, but I do not think that an open review process is the answer. Scientists and Universities will continue subscribing to journals and will not be able to take advantage of more open efforts at publication if there is no guarantee of the validity of the work.
  • And just how would one stop the system from being completely overloaded with papers published by some crackpot?
    Yes, peer review / open source publishing is great, but it also means that _anybody_ can publish. That means a whole lot of crap to wade through before getting at the really useful gems.
  • However, I do agree with the major concerns for health care providers. If a research article takes one position on a health condition and this is open to the public, this could damage health care treatment. For example, if some newfangled cancer treatment got published in a journal online and I were a cancer patient, knowing about it could present problems in dealing with my health care provider. What if the published research is wrong and incorrect? What if it's misleading to the untrained eye?

    The biomed literature is already widely released into the press, e.g. "If you are an overweight diabetic male, eat 7.3 eggs a day (but not less than 6 or more than 9), and live with cats, you will have a lower probability of spontaneous human combustion". People who are interested in finding out what's out there for cancer are going to dig up a lot of the medical literature anyway, as well as a lot of stuff from unconventional sources. The traditional literature, which is frequently not annotated with commentary, can be difficult to understand and easily misinterpreted. Furthermore, a lot of stuff that passes through peer review proves to be incorrect or misleading anyway.

    If we can attach commentary from reputable sources to each "article", then the lay person can either try to read everything critically or just choose who he/she wants to believe (e.g. "I'm just going to do whatever Dr.Koop thinks is best in this situation"). They may not understand the statistics behind the NASCET trial themselves, but with ample commentary attached, they can probably understand the bottom line and get a sense of how controversial the conclusions of the paper are.

    The internet will make it possible to combine vast amounts of outcomes information that could provide the statistical power to prove or disprove ideas that could not be adequately tested by traditional methods, whereby a single center (or a few centers) collects a statistically inadequate, small sample of something and publishes it in a peer-review journal. Sort of like the power of the CDC, but for everything.

  • If you have something urgent to share with your colleagues and the public, there are plenty of other means available. But a record that will sit on a library shelf for generations to come should be stable and not rushed out.

    Scientific models have always been and will always be imperfect and subject to constant revision. So why spend two years polishing up your (now old) research so that it can sit on the shelf for generations? Your article will most likely be supplanted anyway in a few years. Better to release a reasonable snapshot and move on to bigger and better things.

    There just aren't a lot of scientific articles that are worthy of being carved into stone, and many that people would consider worthy are mainly of historical interest. Even if it is a revolutionary paper (Einstein's paper on the photoelectric effect, etc.), you don't want to polish it for two years. Just get it out there and let it impregnate other people's thoughts like a seminal paper should.

  • Slashdot style moderation will work for scientific journals without the slashdot style moderation. The article is the "submission", and the comments are the "moderation" (or peer review, if you will). Comments by people who are established workers in a field would get highest billing, and would not be anonymous.

    Several journals that I read have such a system in place already. At the end of the article you can read a few paragraphs about what others in the field thought of it. So you can see if your trusted guru X thought Y's paper on flux capacitors and oscillating overthrusters was BS or not. Much more valuable than just having the article "pass" peer review and get published. Of course, they have to limit it to three or four commentators at the end of each article because of publishing limitations. Online journals won't have this limitation. There is no reason that the vocal experts in a field who are involved in traditional peer review can't migrate to online peer review. These people enjoy their power and probably won't want to give it up when online journals start to gain readership.

    Scientific information traditionally published in journals should be free. Lots of libraries have had their funding for journals slashed, and plenty of "well-funded" libraries have had to cut back too. Alas, there will be many clusters of books and journals that go online for those who pay a monthly subscription, such as with MD Consult [mdconsult.com].

    Yes, science and life in general would be utopian if we could be judged in a fair manner on our work. However, everything, including our beloved /., has biases. I don't see how anything in this article will really lead to a change in the way scientific works are reviewed and given credibility. Texas A&M still employs a chemistry prof who tried to prove he could turn lead into gold. I'm sure there would be some online group willing to publish this "research". Likewise, there would be some people who read it and believe it works. However, most people will see it as junk and not bother with it.

    The same is true of modern journals. Every library gets Nature, Science, all the big-name journals. These start to get prestige, so the competition for articles increases. Someone has to filter the stuff because you can't print it all. That person or group is human and therefore subject to human nature. This means being biased, susceptible to influences, etc. I don't see where going online and publishing your own stuff will really make any change unless you can get a computer to review it. Even then, the person who writes the algorithm to decide the value of your work will put their own weighted bias on certain things.

  • Paul Ginsparg's e-print server [lanl.gov] began as an automated e-mail archive of high-energy physics theory papers in 1991. xxx.lanl.gov (now also known as arXiv.org [arxiv.org]) is the primary source of online literature in Physics and Mathematics.
  • In physics, authors have published on the web for a long time now. In Aug '91 (yes, eight years ago), a pre-print server was set up at Los Alamos [lanl.gov]. It initially was just for high energy theoretical physics, but now covers a lot of subjects, and is heavily used. Authors send their papers in (normally before sending them to a normal journal), and everyone can see the new papers the following day. These days, every piece of new research (in high energy physics at least) can be found here. Interestingly, this hasn't stopped the journal publishers much. Papers are still sent to them, and most institutions involved in the subject still subscribe to them. The big difference is that now researchers see each other's results as soon as they're ready to publish, and that the papers are much easier to get to. Seems to be a good thing.
  • Slashdot can't post everything, so even though each one of us has submitted interesting and worthwhile articles, they may never get posted.
    It has nothing to do with being Open Source (IMHO) but has more to do with the persons in charge.
    The same goes for review processes in general. The Internet is a great way to get published. But when you start submitting your work for review so that it will be published, whether at Slashdot or anywhere else, you run into the same "cliques".
  • Non-repudiation can handle the "rewriting of history" problem. Simply refuse articles that haven't been signed electronically. Now, a disreputable journal site could allow authors to recall a paper for errors and resubmit a redacted paper, but how is this different from the present review and edit process?

    The problem with 60s punch card data is that the specs may not be available, which makes writing translation tools problematic. If e-journal submissions are required to use a published specification in order to be e-published and reviewed, presumably these specifications will stay available over time. This means tools will be written to allow continued access to legacy publications as specifications change.
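
    A minimal sketch of the "refuse unsigned articles" rule, assuming Ed25519 signatures from the third-party cryptography package (any public-key signature scheme would do; the names here are illustrative only):

```python
# Sketch: the archive keeps the author's public key on file and rejects
# submissions whose signature does not verify. A redacted resubmission
# needs a fresh signature, so history cannot be rewritten silently.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()
public_key = author_key.public_key()      # what the archive stores

paper = b"Body of the submitted article, byte for byte."
signature = author_key.sign(paper)

def accept_submission(public_key, paper_bytes, signature) -> bool:
    try:
        public_key.verify(signature, paper_bytes)
        return True
    except InvalidSignature:
        return False

print(accept_submission(public_key, paper, signature))                  # True
print(accept_submission(public_key, paper + b" (edited)", signature))   # False
```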
  • In addition to the physics/mathematics archive mentioned by several others, there is a cognitive science archive:
    http://cogprints.soton.ac.uk/ [soton.ac.uk]

    There is no place for real discussions or peer reviews, however. Most online journals I've seen are pretty weak, just copying what a paper journal does except with lower quality and funding.

    The physics archive is at http://xxx.lanl.gov/ [lanl.gov]

  • Actually, you can turn lead to gold, if I remember my particle fizziks correctly. Just bombard it with heavy particles until the lead nuclei absorb enough protons and neutrons to become unstable, and decay (through a convoluted chain of decays) into gold nuclei. Of course, then you have a tiny amount of expensive, unstable, radioactive gold. It's still cheaper at this point in time to dig it out of the ground.

    #include "disclaim.h"
    "All the best people in life seem to like LINUX." - Steve Wozniak
  • It seems that the only shift needed is not that the publisher must be eliminated, but that the media on which the articles are published must evolve.


    The Los Alamos e-print project shows that the electronic distribution and searching of this knowledge works. And it works very well.


    The next stage (which is already happening) is the elimination of the printed publication. But not all publications need to disappear... only those that have specialized to the point of being just a container of papers (have you ever seen Physical Review A or B or C or D, etc.?). Opinion and field reviews need to have support for the lateral effect that the electronic media hasn't yet perfected (think of 'I saw this article next to another one I was looking for in the same issue'). Portals are OK but still limited.


    Printed paper has had one advantage over the electronic format... it can be read after many, many years. Try to read a 5 1/4 floppy today. And worse, only those texts written in TeX or SGML or PostScript are really readable today after ten years. So a careless jump into the e-print business could really be a disaster if some sort of standards are not carefully thought out.


    Regarding the peer review mechanism... it will be with us for a long time, because even though it may be a bit biased, it is the only way to filter out the noise. Hopefully it will coexist with a repository of freely submitted material (like Los Alamos), so that those who want to do their own filtering have the choice to look for themselves and build their own filters.
  • A couple of reasons this won't work:

    1) Unchallenged orthodoxy. Consider how hard it is for any pro-MS or anti-Linux comments to get published here. Each is quickly moderated down. This same effect will occur with e-journals- most of the folks interested enough to read the comments will most likely be workers in that field, who are used to the orthodox position.

    2) Who can rate? Others have posted here with the exact same comment, and it's perfectly true. If I write a quantum chemistry paper (my ex-field), are you qualified to judge? After all, I can moderate posts here discussing the Linux kernel, something I have no experience with at all.

    3) And finally, a combination of the two. There may be two or more camps, each pushing its own orthodoxy. Perhaps all the Christian fundamentalists will moderate down any paper on human evolution. After all, there are a lot of them, far more than evolutionary biologists. Similar things will occur in psychology (Scientologists), medicinal chemistry (homeopaths) and so forth.

    Not really a workable scheme. The current peer review process works very well, and can be easily extended to e-journals. Keep what works.

    Eric

  • Libraries all over the world have MAJOR problems with the preservation of old books and documents. Actually, it's not really the books from centuries ago that are causing problems, but mainly the books from this century... we're using a different kind of paper than we used to. But regardless of what type of paper we use, it won't last forever.

    An electronic archive takes up less space, it's trivial to mirror, and copying the data to more modern media is easy. Yes, it will take an effort to manage this in a consistent way. However, this is a much easier process than the preservation of the millions of books that are falling apart at this moment.

    (And if you insist on having a paper archive, you can always print out the electronic articles.)
  • Or rather, should not be a problem. The money saved by cutting short the greedy publishers is more than enough to solve problems like this.
  • 1. Peer review is of fundamental importance. But that should be easy to implement, once the idea of replacing old-style publishers with free electronic archives is to some extent accepted by the scientific community. Peer review has always been paid for by the universities (and not by the publishers), so not much will change there. The trick is to get the infrastructure working, but with the money saved by libraries that stop buying old-style journals, that should definitely be possible.

    2. Yes, the scientific community needs to recognize electronic journals. Things just have to start to get rolling.

    3. Why would an electronic archive be harder to maintain than a paper journal?

    4. Scientists of this world, unite!!!
  • | Give each reader and each paper a credibility score. The
    | credibility of the reader is based on the average credibility
    | of each paper they have submitted. Each reader can give a
    | credibility score to each paper they read. The score for the
    | paper will be the submitted scores, weighted by the
    | credibility of each reviewer.

    What you're describing is a popularity contest.
    Truth is not established by consensus.
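
    For reference, the quoted scheme amounts to a weighted average, roughly as sketched below (the names and numbers are assumptions); it also illustrates the popularity-contest objection: enough low-credibility raters can swamp a single expert.

```python
# Sketch of the quoted "credibility score" scheme: a paper's score is the
# average of reader ratings weighted by each reader's own credibility, and
# a reader's credibility is the mean score of the papers they have written.

def paper_score(ratings):
    """ratings: list of (reader_credibility, score_given) pairs."""
    total_weight = sum(w for w, _ in ratings)
    if total_weight == 0:
        return 0.0
    return sum(w * s for w, s in ratings) / total_weight

def reader_credibility(own_paper_scores):
    return sum(own_paper_scores) / len(own_paper_scores) if own_paper_scores else 0.0

many_laymen = [(1.0, 5)] * 100   # 100 low-credibility readers rate the paper 5
one_expert = [(5.0, 1)]          # one expert rates it 1
print(round(paper_score(many_laymen + one_expert), 2))   # ~4.81: sheer numbers win
```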


    Since you are, as you said, a newbie here, I will tolerate the offtopic post and try to give you a hand. While I don't know the answer to your question, I suggest you check the help section at Linux.com. Their IRC chat server was especially useful to me, and I have had numerous questions answered quickly and without a ton of flames there; I definitely recommend it highly.


    Tell a man that there are 400 Billion stars and he'll believe you
  • Here's a decent suggestion: in another section of Slashdot (perhaps a slashbox), one could post questions dealing with research techniques. I've only begun my experiences in the research community, but I've found that digging up that one piece of obscure information on how to do (insert here) just right, or how to do two specific things at once while ruining neither, can be a laborious process of looking through article after article, reading their methodology.

    Specifically, I've been working with confocal microscopy and different fluorescent dyes. If I had the ability to ask a specific, technical question of a large audience of technical people and receive an answer within a few days, that could save me an afternoon of searching (or days of searching, ugh). I'm sure others feel the same way.

    I feel that Slashdot has enormous potential for expanding its technology discussions to not only computers and internet-related tech, but to biotech, engineering, and physics.

    Howard Salis Rutgers University, N.J

  • Concerning reviewing journal articles for methods, I must have depleted at least $20 worth of copy cards in the nearby Tech/Medicine library. The graduate student I work for insists that I look up everything and be told as little as possible; so while it's quite frustrating, I suppose it has gotten me used to reading journal articles.

    Currently, I'm helping the aforementioned graduate student (Jane Tjia) study phagocytosis of keratinocytes. We're using the confocal microscope to quantify the fluorescence of conjugate antibodies of a certain type of integrin.

    Howard Salis Biochemical Eng., Rutgers U.

  • The entire issue here seems to boil down to quality vs. quantity. The open-source way of publishing is definitely aimed at opening up the publishing field to more people, hence increasing quantity. However, the current peer-review board system enforces a quality that is hard to come by. The idea of having a review-board is, in my opinion, the only way to combat a barrage of unqualified papers. However, if the entire reason for creating the paper database was to steer away from biased review-boards, the mission really hasn't been accomplished. Like it or not, a peer review is the best way to go for a decent, well-justified paper.
  • I argue that even in an open source/electronic publication, certain biases will form based on the opinions/ideas of the people who use the resource. Any major publication, even a respectable one, is judged to have certain editorial biases. The Washington Post is regarded as a slightly "left-wing" publication, which is not to say that there are no "right-wing" editorials or articles. The source of this bias is not readily identifiable. It can be attributed to many factors: the editors, the owners (publishers), and of course the general readership. Though perhaps not as good an example, slashdot exhibits these same trends, with a bias towards open source. Again, there are many reasons for this bias, primarily the readership it attracts, as well as the editors. It clearly exhibits targeted editorialization. The moderation system only serves to enhance such biases; I've never seen a post trashing linux and advocating windows, however eloquent, moderated UP :-)
    Modern scientific journals have the same type of bias in the types of scientific information they publish. Again, this is due to the readership AND the editors. When forming an electronic publication, even one that is "moderated" by the readership, the same types of biases will eventually form. Especially as the site gains a reputation, people with similar interests in subjects will frequent it. It is very nearly *impossible* to accurately represent many different scientific ideas without forming some kind of bias in the publication, be it electronic or not. It then becomes an issue of finding a publication that suits your needs. Electronic publishing simply makes it easier (and cheaper) for less well known researchers to publish their works, and also easier for researchers to search works without browsing through years of paper journals. I heartily endorse electronic publications, and think that they help make editorial biases irrelevant by allowing users to better filter the items THEY are interested in, but I don't necessarily expect them to do much to eliminate reader/publisher biases.

    Spyky
  • We know that a website that is hit more frequently is of greater interest to advertisers. I would encourage a slashdot-style website for scientific journals that encourages non-scientists to participate in the process. Though it may seem a distraction, it becomes documented evidence to those who issue research grants, and to politicians who approve funding, that this is of interest to a part of the voting public. Links should be provided to politicians who will be voting on budgets (let's say NASA's), with perhaps even a form letter that can be sent via e-mail to make it convenient for people to voice their support.

    Scientists will also become more seasoned in their ability to communicate their ideas and concepts to lay-people, and members of the business community can make suggestions about how a particular discovery or direction of research could be commercially applied. Ethics questions brought up in an open forum might be disturbing to a scientist, but he will be better able to handle such questions in person if he has already had the chance to respond to them in a forum. Or, if the concern is real and something he didn't consider, he might modify the experiment to address it; it could mean safer science in the future, and we all want that :) !!!

    NASA badly needs a site like this to demonstrate its popularity at a grass-roots level so it can get the political support for the budget money it needs. Of course, I don't want to get SPAM from a scientific website I merely logged onto asking for money; but I would willingly submit my name to an email list to keep me updated on when critical scientific funding is going to be voted on.
  • Sorry but I don't buy your arguments. They sound like old school thinking of "we must protect the public from themselves".

    If anything, an open display of papers accompanied by slashdot-style discussions should improve the general public's knowledge and relationship with the scientific community. With open discussions, the regular folks would be able to point out their concerns and questions in public and get answers from the best in the field from all over the world (and not just their local doctor/scientist).

    What if the published research is wrong and incorrect? What if it's misleading to the untrained eye?

    Then the discussion that follows the paper will quickly point out these problems, just as has happened many times here on /.
  • I didn't see this mentioned in the other posts so . . .

    A large database of papers, like xxx.lanl.gov, could be used by journals the same way sunsite.unc.edu and others are used by free software companies; or the same way other web articles are used by slashdot.

    While anything meeting certain minimum guidelines like a format or content focus would be accepted by the database (by mirroring or linking), the established journals would provide selection and review. If you can't go through 1000 articles a day, you would go to your favorite portal which would provide a digest of rigorously selected articles.

    Basically, the journals would be value-added resellers. (I hate that term, I think it's inaccurate.)

    There are surely a variety of IP issues that would have to be worked out, and the journals and databases would need some compensation for server space, collection efforts, and review. But it shouldn't be too hard to work out; Cygnus, Slashdot and the for-profit Linux distributions are proofs-of-principle and good examples.

    Each portal/journal could implement its own style of open review as well, and some might even link to articles and provide only commentary from their servers.

    If the IP issues of such a database can be worked out, there are many business models that could spring up around it. The big open databases could be compensated by restricting HTTP-REFERRER to subscribing journals or people directly browsing the server. That might be too restrictive, or too easy to work around; I haven't run a web server, so I don't know what's possible or feasible - suggestions?
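
    As a rough sketch of the Referer idea (the subscriber list, port and paths are assumptions, and as suspected the header is supplied by the client and easily spoofed, so this is a soft restriction at best):

```python
# Minimal WSGI sketch: serve full text only to requests arriving via a
# subscribing journal's site or to people browsing the server directly.
from urllib.parse import urlparse
from wsgiref.simple_server import make_server

SUBSCRIBING_JOURNALS = {"journal.example.org", "portal.example.edu"}

def app(environ, start_response):
    referer = environ.get("HTTP_REFERER", "")
    host = urlparse(referer).netloc
    direct_browse = referer == ""          # no Referer header at all
    if direct_browse or host in SUBSCRIBING_JOURNALS:
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"article full text here"]
    start_response("403 Forbidden", [("Content-Type", "text/plain")])
    return [b"access is limited to subscribing journals"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```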

  • Why?

    May I suggest that you offer us some reason to "check it"? Your post is about as helpful as a spam-o-gram.

  • Why would an electronic archive be harder to maintain than a paper journal?


    The main reason is that paper lasts for several centuries before it must be copied. Electronic media last for a few years or a few decades -- we don't really know how long tapes, disks and even CDs will retain their information perfectly.

    Equally important is that (almost) every person has the software needed to interpret printed material built into his body. But electronic archives require a layer of software between the media and the person. The software changes every few years. The format of the data on the media changes every few years. The hardware required to run the software required to read the media changes every few years.

    You may say, "No -- just keep the old hardware and the old software for several decades, don't bother to update;" but that position doesn't work very well in practice. How many institutions can afford to keep 20-year-old computers and OSes and software running? And who would make the NEW literature available on the OLD media?

    It's a real mess. I believe that paper is still the best way -- by far -- to archive scientific journals for the long term.
  • Well I don't know if it is the same everywhere, but at my school, I can just walk into the periodicals section, and pick up any journal I feel like and read it.
    It really isn't that hard, and I recommend it highly; there is all sorts of interesting stuff out there.
  • Currently, in physics at least, some of the most prestigious journals (Physical Review Letters http://prl.aps.org/ to name one) have a limit of 4 pages. This makes it very difficult to write articles that are accessible to the non-expert. With science getting more specialized, I feel the move to electronic publishing to remove artificial length limits like these would be of great value. Writing and understanding articles would become vastly simpler - and thus reviewing articles could also become faster and easier. It is generally acknowledged that Phys. Rev. Letters articles are incomprehensible to researchers not already very familiar with the subject of the article.
    One difficulty with existing pre-print servers is that articles placed there are formatted for publication in journals where physical space is a very precious commodity.
    A last point: with web publication, data graphs are represented digitally, and thus it is easier to recover the data points themselves. However, there is so much more room available in web based publishing that people could actually include their raw data (something almost unheard of now if you have more than a hundred or so points.)
  • While other scientific fields are attempting to construct large databases of research, the Human Genome Project has been publishing gigabytes of its findings for years. I suggest that anyone who has any interest in looking at data way over their head look for this database and browse through it. It has all the codons found to date, and is completely cross-referenced. Very cool. If there were to be a repository of published sci papers, this would be the way to do it.
  • by Anonymous Coward
    Not new. Scientists have been talking about this since before the web. A decade ago we were all using e-mail to swap preprints in TeX, and common repositories sprung up at various places. The biggest (in my fields at least) is xxx.lanl.gov.

    The problem with these -- and this was well-known at the start -- is the editorial and review process (or lack thereof). Five years ago at CERN there was some sort of conference on faster, more widespread and community-based systems than traditional peer-review-by-old-fogies.

    There are some good ideas going around, but scientists are naturally going to be cautious about changing the present system: after all, there's an awful lot of pseudoscience knocking at the door, trying any trick to look respectable. In light of that, and the awesome responsibility "science" has in today's society, I think the caution is understandable.

    BTW, in the earlier discussions of this topic (e.g. that conf. at CERN), I didn't get the impression that the traditional journal publishing houses were trying, Luddite-fashion, to hold on to their current niche. Fewer and fewer libraries are subscribing to the ever-more-expensive journals, and the printers know they're in a shrinking market. Everyone, including the publishing houses, is well aware of the fact that killing large numbers of trees isn't the efficient solution we're looking for.
  • by Anonymous Coward
    I'm subscribed to a few journals, and I figure the open-source method of having journals would reduce the cost of journals (paying upwards of $200 a year for a journal is crap).

    However, I'm not so sure that we should open it up to people outside the discipline. This is my major concern. I don't want non-mathematicians reviewing SIAM [siam.org], just as I'm sure that non-physicians have no business reviewing medical journals.

    Most hip journals already post their articles, or most of their articles, online. SIAM is the Society for Industrial and Applied Mathematics; mathematics publications aren't that sexy, so they attract a limited audience.

    My major concern is that if we start pushing biomed articles, it'd be a good idea for academics to push open discourse. However, I do agree with the major concerns for health care providers. If a research article takes one position on a health condition and this is open to the public, this could damage health care treatment. For example, if some newfangled cancer treatment got published in an online journal and I were a cancer patient, knowing about it could create problems in dealing with my health care provider. What if the published research is wrong and incorrect? What if it's misleading to the untrained eye?

    Another topic that we have to talk about is whether or not most people would a) be interested in and b) be able to understand topics in journals. I realise that the Slashdot reader is more intelligent than your average web surfer, but realistically, would your average AOL user be able to adequately understand stuff published in the NEJM (New England J. of Med)? Would they really care?

    All I'm saying is that we can't go nuts and start open-sourcing everything. Patience...
  • The main problem I see with this or any totally open review system is that the credibility of the reviewer comes into question.

    When I review papers for a conference, the on-line forms require that I rate myself as a reviewer (often as an "expert" or not). The conference committee members are made up of respected researchers and there is an implicit assumption that these members will pick reviewers that know what they are talking about.

    If I rate myself a non-expert, I cannot recommend a paper for acceptance or rejection. I can only make comments and weak suggestions (i.e. a "weak accept" or "weak reject"). Is this good or bad? I could argue it either way.

    I'm not saying that a totally open review process can't work. I'm just saying it needs to have some thought put into it. Ultimately the conference committee members are responsible for choosing the papers (based on the reviews they receive and their own opinions). Who takes on that responsibility in an open system?

    One final point is that nothing prevents researchers from publishing material on the web. I often see pre-published papers made available for download. The consumer understands that the papers have not gone through a formal review process yet.

  • The problem with this is that you're trusting the general public to make decisions on the validity of a scientific paper. The average person does not have the knowledge to do so. In order to have an effective system, you need experts to actually peer-review the material, to assure that there are no factual errors in the material, or procedural errors in the experiments.

    That's the problem I see with this type of site. Any sort of crackpot scientist could put up an article and get a bunch of media attention, without there being any peer-review of his work first. Do we really want crackpot scientists getting easier access to the media?
  • I agree that peer-review is vital to filter out crackpots and commercial propaganda.

    Why?

    The various LANL preprint servers [lanl.gov] have been running since 1991 with no such scoring system. Everything that's submitted from somebody associated with a university or research institution is simply accepted.

    People are pretty knowledgeable about their own fields; it doesn't take more than skimming the abstract to see if it's written by a crackpot, and only a bit more to see if it's interesting to them. (And what interests them need not interest even someone working on very similar problems.) People looking for info outside their fields shouldn't be reading the current state-of-the-art; they need to get up to speed first with, e.g., review articles.

    So I'd tend to just skip the `rating' totally; other than an ego boost/dasher, I think most scientists would skim with, in Slashdot terms, the threshold set to -2 anyway.

  • I would have to say, applying "open source" to science will be screwed up at every turn.

    A lot of modern journals are on-line now, but they require a subscription (of sometimes over $300). So, net access has started, that's a given.

    The problem is accessibility, not peer review. Peer review is valid, very very valid. You can't do away with it, or the result will be the "waste basket journals" that now exist in print being on the web. There are many "free" journals that will publish anything, send it to everyone they can, and not peer review, and they don't care, because they get advertising money to print the journal.

    I sketched out a very elaborate plan for an online journal about 3 months ago, and it is possible, but it would require a total "rewrite" of the whole peer review process. And, like it or not, a whole lot of money is involved. In my plan, there would be extensive peer review, yet it would be completely accessible to anyone free of charge across the net. And the peer review would be structured so that "old boys" and "grudge reviewers" would be cut out of the loop. (Yes, sometimes scientists who are working on similar projects can shred a journal submission without making any valid points, and the editor is forced to "accept" this review, because it comes from "an expert" in that field.)

    The whole process can't be summed up in a "slashdot" story, it's very intense, and a very serious subject. But, I can say, there is a way, and from what I have seen that needs to be done, it will be a long way off, and we will be subject to years worth of "screwed up implementations" of the process before anyone fixes it.

    Being that the scientific publishing world is a very high-dollar business, I won't spell out how I think it can be done _correctly_, because I don't want to see someone else who is already exploiting scientists get a better idea to further rape the real solid scientists out there.

    Now, if there were about a dozen strong-willed, hard-working scientists with the support of about a dozen hard-working coders who were masters of system admin, web servers, and php3/sql, I would be happy to share the ideas I have _privately_ with them, but only if they were willing to work on it, debate it, and develop something that worked the way _science_ should (meaning the good work doesn't go unnoticed because it's not popular, the bad work is pointed out quickly and the authors are shown the real holes in their assumptions by real helpful experts, and the scientists benefit from the process instead of further feeding the wealth of the publishing world).

    Don't get me going on this, because it's an ugly mess, and slashdotting solid science isn't the answer. For one, slashdot is a "discussion/news" forum, and if a story isn't "accurate" it fades away into the archives before someone invests half a million dollars and 3 years of work trying to build on that idea.

  • I saw that coming. I would LOVE to go. I would truly love to go. I would even be willing to do a presentation on how I feel that, in the right hands, open source, scientific publication, and the right kind of "model" could truly change how science views the net, computing, etc.

    I personally have struggled with getting Linux and FreeBSD accepted at my work. I know what it's like. Also, I have dealt quite frequently with a range of quality and peer review in the publication of scientific studies, and I know what stands a chance of working and what wouldn't.

    I'd give anything just to get a chance to go to that conference... But, as it now stands, I am very busy getting some new work published, writing up to graduate this fall, and getting things together to present at the 218th ACS National Meeting and 2000 PittCon. :-( So, all my money and most of my time is wrapped up in that.

    If there is anyone out there going, please write me, and let me know if you could take notes, tap the talks, get me a program, or anything...

  • wow....you are that desperate for money and glory eh...

    You wanna know how much I make? It's a matter of public record... $12,000/year. Yeah, a true sign of someone money-hungry, huh? That's a riot; I needed a laugh this morning.

    No, I'm not money or fame hungry. I am just pissed off at the Scientific Publishing industry, which charges outrageous amounts for "subscriptions," copyrights every printed word in their pages even though they don't even understand what the work is or means, and has the nerve to charge scientists if they want/need to publish a figure in color in their papers.

    There are too many existing problems, and of all the companies out there that would exploit open source, the publication companies are probably the worst I can think of.

    It can be done, but I think it's time that the power be put back in the hands of the people actually doing the science, not the publishers. I repeat: I would be happy to lay out a realistic plan for getting things back in the hands of the scientists, at no charge, without any claim of "ownership" or ask for fame. But if you want me to tell someone who already charges $3000 for a 24-issue subscription how to make more money, you're insane.

    Open Source has made significant inroads in lab science. But I really don't want to see the publication end of things continuing to be dominated by rich giants who don't care about the science, only the profit margin. Giving them another low-cost tool to exploit science is pretty low on my list of things to do today.

    Publish or Perish is a joke. I have never seen a hard-working scientist have a lack of things to publish. Yes, they should be publishing, or else they are either 1) not working, 2) pushing bad theories, or 3) already raking in lots of money because they are working with an industry and patenting everything under the sun.

    You think publish or perish is bad now; think about how much junk science would be out there if you eliminated the peer review process completely. Sure it's flawed, but not that bad; authors can always "resubmit" articles to the _same_ journal, and the journal will send it out to a totally new set of reviewers. I think it could be better, and I have some ideas of how, but... I'd rather see them done by the community, for science, not for money.

  • 3. Why would an electronic archive be harder to maintain than a paper journal?


    This is the kicker really. Paper is physical. While this is looked down upon by most techies it really has several advantages:

    Wide distribution of the paper materials makes ex-post-facto modifications (i.e. rewriting history) much more difficult and almost impossible to hide.

    There are no format or obsolescence issues involved. I can go to the right library and read scientific literature from the 3rd century B.C. while demographic data from the mid-60s on punch cards is basically lost forever. The rapid changes to information format and medium in the information age means that keeping things in a format available to all would be a massive and expensive project over time.


    Dead trees work. So far nothing electronic has come close to this for the long term storage of information.

    jim

    However, I'm not so sure that we should open it up to people outside the discipline. This is my major concern. I don't want non-mathematicians reviewing SIAM, just as I'm sure that non-physicians have no business reviewing medical journals.

    While I agree that having non-physicians review medical journals would probably be a mistake, how do we say whether someone is a mathematician?

    Only someone with a degree in that area is a mathematician?

    I would suggest that there are a lot of people that do not have a degree in a certain area of math or science but that still know a great deal.

    I think instead the way to go is a /. type of moderating system along with guest critics for the more interesting papers. These guest critics would quickly develop a reputation as to their intellectual powers, clear thinking, and fairness.

    We could also give a higher starting number to the comments of recognised scientists or experts. This would function similarly to the way anonymous cowards start at -1 on slashdot whereas people who are logged in start higher.

    Perhaps we could even have someone's starting point be figured by some sort of average of how they have been promoted or demoted in past postings. Start new posters with a 0 and let them climb their way up or down from there over time.
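
    A rough sketch of that starting-point rule, in Python with invented numbers (a plain average is just one possible choice):

      # karma_start.py -- sketch: a poster's starting score is the average of the
      # net moderation (ups minus downs) their past postings received; 0 if new.
      def starting_score(past_moderations):
          """past_moderations: list of net moderation totals, one per past posting."""
          if not past_moderations:
              return 0.0                         # brand-new posters start at zero
          return sum(past_moderations) / float(len(past_moderations))

      # e.g. someone whose last four postings netted +2, +3, -1 and +4:
      print(starting_score([2, 3, -1, 4]))       # -> 2.0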

    Another topic that we have to talk about is whether or not most people would a) be interested in and b) be able to understand topics in journals. I realise that the Slashdot reader is more intelligent than your average web surfer, but realistically, would your average AOL user be able to adequately understand stuff published in the NEJM (New England J. of Med)? Would they really care?

    Probably not. But the fact that your average AOLer doesn't care about a subject doesn't mean that you want to prevent him from reading about it if the urge strikes. Saying that he can't handle certain information is a mistake. There are plenty of places to read about junk science (like the supermarket rags); having searchable access to "real" science would be a nice balance.
  • Well Slashdot moderation is probably too democratic and anonymous for some. The mod scores don't have a lot of credibility behind them.

    I always thought the best overall solution, which would work for anything from scientific peer reviews to Slashdot or even entertainment reviews, would be to have non-anonymous reviewers, where everyone who wants to be a reviewer can give a rating to as many things as they want to. Then each reader simply picks the reviewers they consider to be the most credible, or that best match their own tastes. (And then sorts or filters items based on those reviewers' scores.)

    You could even have "virtual" reviewers that are made up of composites of other reviewers. A totally democratic virtual reviewer would be an average of everyone's scores, a Slashdot-style virtual reviewer would be an average of a randomly selected sample of reviewers, and a "board" would average the scores of some select group.
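
    To make that concrete, a small Python sketch with invented reviewers and ratings (a real system would pull per-reviewer scores from a database):

      # virtual_reviewers.py -- sketch: composite "virtual" reviewers built by
      # averaging the scores of real reviewers chosen in different ways.
      import random

      RATINGS = {                                # made-up: reviewer -> {item: score}
          "winchester": {"paper-a": 4, "paper-b": 1},
          "bourbaki":   {"paper-a": 5, "paper-b": 2},
          "opyros":     {"paper-a": 2, "paper-b": 5},
      }

      def composite(item, reviewers):
          """Average the scores the chosen reviewers gave to one item."""
          scores = [RATINGS[r][item] for r in reviewers if item in RATINGS[r]]
          return sum(scores) / float(len(scores)) if scores else None

      everyone = list(RATINGS)                   # fully democratic virtual reviewer
      sample   = random.sample(everyone, 2)      # Slashdot-style random sample
      board    = ["winchester", "bourbaki"]      # a hand-picked "board"

      print(composite("paper-a", everyone), composite("paper-a", sample), composite("paper-a", board))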

    Do it that way, and it can have just as much credibility and high signal-to-noise ratio as a medical journal, or as much variety as Usenet -- it would all be up to the reader to choose who is the most trustworthy.

    And, of course, readers would pick different review settings for different topics. You might pick Charles Emerson Winchester III as your medical reviewer, Bourbaki-the-math-board for your math stuff, and Opyros-the-metal-dude for your music reviews. Whatever.

  • These issues have been discussed in great detail in sci.math.research over the last few weeks. The final conclusion: lots of people continue to disagree with lots of other people. One of the best ideas that came out, in my opinion, is that the journal's real job is lending credibility to a paper. This purpose might be better served in the future by journals publishing reviews of articles available electronically, instead of the articles themselves. Of course, not every journal that's being published today could survive that way, but I don't think that's any real loss at all.
  • The problem with slashdot-style peer review is that it might lead to slashdot-style posts. I can see it now:
    Today on the Quantum Physics Review - Harvard professor Robert Gregors posted an article detailing a simple method of producing measurable quantities of strange quarks. Dr. Vanesh Purgabedi of Princeton responded with 'FIRST POST!!!!!!!!!!!!!!!!!!!!!!' This was promptly moderated down to -1. A physicist calling himself 'Lord Voltron' observed that 'RED HAT SUCKS!!! I hate them.' This was moderated to a 3 for 'Insightful'. MIT graduate student Allen Andrews observed that the energy requirements of Gregors' method would render it impractical for even the most well-funded institutions. He was then accused by several 'Anonymous Particles' of being an 'AzzL1ck1ng M$ Wh0r3.' and moderated to -1 for being flamebait.
    --Shoeboy
  • I think this could work, for a couple of reasons.

    1. This is what we have now. You can determine the validity of an article by seeing how many other researchers cite it. Not a perfect method, but it does work. Of course, it implies that most master's theses are pure crap, as they are never cited.
    2. A mathematical technique for quantifying and validating this approach has been developed. I don't remember where I read this, but it was 5-7 years ago. The writer indicated that the technique could be used for ranking college football teams or for separating scientific research articles from pseudoscientific ones. You start by assigning values to some teams/articles, and an iterative process based on who played those teams or who cited those articles eventually produces values for all teams/articles (a rough sketch follows at the end of this comment).

    So while /. moderation itself would not be good science, a formalized method of review by ALL peers could be successfully implemented. I think. :)
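
    A rough guess at the kind of iterative scheme point 2 describes, as a Python sketch; I'm not claiming this is the exact technique from that article, and the citation graph, seed values and damping constant below are invented purely for illustration:

      # cite_rank.py -- sketch: iteratively score articles from who cites whom.
      # Seed a few trusted articles, then let credibility flow along citations.
      CITATIONS = {                  # made-up graph: article -> articles it cites
          "a": ["b", "c"],
          "b": ["c"],
          "c": [],
          "crank": ["crank"],        # a self-citing outlier stays at zero
      }
      SEEDS = {"a": 1.0}             # articles given initial credibility by hand

      scores = {paper: SEEDS.get(paper, 0.0) for paper in CITATIONS}
      for _ in range(50):                         # iterate until roughly stable
          new = {paper: SEEDS.get(paper, 0.0) for paper in CITATIONS}
          for paper, cited in CITATIONS.items():
              share = scores[paper] / len(cited) if cited else 0.0
              for target in cited:
                  new[target] += 0.85 * share     # pass most credibility downstream
          scores = new

      print(sorted(scores.items(), key=lambda kv: -kv[1]))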
  • ...a monthly email with a dozen or so article ID #s...
  • The thing is, stuff published on the web is evanescent. We've all encountered broken links, or links to pages that aren't what they used to be, etc. Sites come and go, and even archive sites eventually offline the stuff.

    The nice thing about paper journals is that you can usually find a copy *somewhere*. Maybe in some obscure library at the university of outer gondwanaland, maybe even only on microfiche, but the stuff doesn't go away just because somebody typed 'rm *' or reformatted the hard drive or rearranged their web site. I can dig up papers published fifty and a hundred years ago -- and sometimes that is very worth doing (among other things, consider the issue of prior art in patents). Try finding a web document from even five years ago, or five months in some cases.

    This is particularly relevant with articles that may not be "politically correct", whatever that might mean in a given context. Aside from the ease of simply destroying the article in question, it is comparatively trivial to change the offending article. Remember Winston Smith's job in Orwell's "1984"?

    I'm all for web publication -- when the pages are there it certainly simplifies finding them, and hyperlinking the citations to the originals and the summaries to the raw data, etc., would be wonderful. But we need to give some thought too to how this stuff gets archived for accessibility ten, fifty, or a hundred years from now, and how the electronic copies avoid mutation. (A PGP signature or published checksum, perhaps?)
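
    As a minimal sketch of that last parenthetical, in Python: record a cryptographic digest when an article is archived, and anyone can recompute it later to detect silent edits (a real archive would presumably also want proper detached PGP signatures, which this does not attempt):

      # fingerprint.py -- sketch: compute a digest of an archived article so that
      # after-the-fact changes to the file can be detected by re-checking it.
      import hashlib

      def fingerprint(path):
          digest = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(8192), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      # usage: publish fingerprint("paper.ps") alongside the paper (even in print);
      # readers compare it against a freshly computed digest of their own copy.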
  • To this general end, I was quite impressed with Stefano Ghirlanda's Free Science Campaign [zool.su.se]. In particular, there is a lot of useful information decoding the copyright policies of the various academic publishers. Take a look!

    From the Preamble:

    When authors of scientific papers submit their manuscripts for publication in scientific journals, they are frequently asked to sign a copyright-transfer agreement to the publishers of the journal. After such a transfer, the authors may retain little freedom to use their own papers. For example, some copyright agreements forbid authors to make their works available on a web page: you might be reading something more interesting than this, now!

    We feel that such copyright policies greatly reduce the freedom of scientists and researchers to exchange information and ideas. In our view, what is important is making scientific literature fully available to all scientists, free of the restrictions that are imposed today. Who owns the copyrights is a secondary issue (please read our objectives for more information).

    If you are a scientist or researcher, or simply an interested person, please read on.

  • One point missed here (probably because there are very few biologists reading /.) is that in biology (or in life science in general) the amount published is much greater than in physics. Further, it is often very difficult for a non-expert to know whether an experiment was carefully done or not. (How long was the incubation time, what cell line was used, etc. etc.) Finally, no life scientist writes papers in TeX, and reading a double-spaced Word manuscript is much more difficult than reading a properly formatted paper.

    So what an e-journal needs to provide is:
    1) A ranking (peer review, /.-style, or otherwise) that tells me (a) whether this paper is correct and (b) how important it is
    2) Someone formatting the paper into a nice layout.
    3) A good capacity for me to search all papers, new papers, papers with keywords, etc.
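
    Point 3 is the easiest piece to sketch; assuming papers are stored with simple keyword lists (the records below are invented), a first cut at search is little more than:

      # paper_search.py -- sketch: keyword search over a toy list of paper records.
      PAPERS = [
          {"title": "Incubation times in cell line X", "year": 1999,
           "keywords": {"cell line", "incubation"}},
          {"title": "A faster codon lookup", "year": 1998,
           "keywords": {"codon", "genome"}},
      ]

      def search(keywords, newer_than=None):
          wanted = set(keywords)
          for paper in PAPERS:
              if wanted & paper["keywords"] and (newer_than is None or paper["year"] > newer_than):
                  yield paper["title"]

      print(list(search(["codon"])))              # all matching papers
      print(list(search(["incubation"], 1998)))   # only papers newer than 1998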
  • One reason that there is a stigma attached to non-peer-reviewed and net-based papers is that you can write any old garbage. I should know; my most cited paper (by a factor of 5) is in an electronic journal. (It's not garbage, but it is an informational rather than a scientific piece.)

    Publication lists are often used by funding boards to assess the credibility of a proposal from an author. While less than ideal, this is necessary because it would be impractical for every funding board to review every aspect of every proposer's work - they have neither the time nor the breadth of expertise. If they included non-peer-reviewed publications, then there would be some people who like writing who would churn out heaps of garbage papers.
    (I've reviewed some which are clearly rubbish. I've also asked for the editorial board to pick another reviewer when I disagree with the author concerned).

    At the same time it would be wrong to deny that there are a few fields in which a viewpoint has achieved 'monopoly status', and alternate views cannot be expressed, although it is uncommon for all the journals in a field to be so dominated unless the field is very narrow. Experience shows that the situation usually resolves itself as the current generation of reviewers retires, although that is frustrating for individual authors at the time.

    Given the balance between not abusing the nation's tax dollars on projects with no scientific merit, and not holding up the progress of science, I think the balance in the current system is about right, even if it goes wrong in a few specific cases. If peer-reviewed electronic publishing can reduce the number of bad cases, I'm all for it.
  • This is partly dealt with by weighting reviews by the author's credibility, and an author has no credibility until they have published something credible.

    There are openings for abuse; for example, a small group could submit papers and lend each other support. A more complex formula for the credibility of papers and authors would help. It also won't solve the problem of communicating an unpopular idea.

    The system could also implement the traditional approach, by simply giving the editorial board and their chosen reviewers credibility, and giving none to anyone else.

    What I slightly object to in the current system is that my best and worst peer-review papers are given equal weight in any funding review, and all are given equal weight with papers from scientists both better and worse than myself.
  • The one thing I think they'll kick and scream about the most is the notion of "peer reviewed" content, notably that the status-quo published-journal system is reviewed by professionals. My girlfriend is an oncologist (always digging up papers from somewhere) and some med friends of hers were discussing this the other night at dinner; there really seems to be a stigma attached to non-official net-papers. I think that's bunk myself, but perhaps a well-established site could develop its own mechanisms for earning the publicly accepted standing that its content is "peer reviewed", and still maintain the open-source ethic...
  • by Anonymous Coward on Wednesday July 28, 1999 @03:17PM (#1778508)
    Back when I was in grad school, my research happened to make a notable contribution to a hot topic at the time. I was (usually with other authors) submitting papers to IEEE journals at a rate of about 1 per 3-6 months. I also attended several conferences and got to know a lot of the major contributors in my research area.

    Typically, every submission got sent to 3 experts for review. My professor (and one of his colleagues) even forwarded to me several papers they were asked to review. I noticed a couple of things regarding peer review:

    1. For every submission, there was a 50/50 chance that none of the three reviewers would know what the paper is really about. Part of this problem is that the IEEE journal editors simply can't know about all the topics being researched in their area and would often pass the submission to the wrong experts. These "experts" (in the wrong subject matter) often wouldn't give a shit about trying to learn about the topic at hand and would just give it a bad review. You could tell this was happening when you got your submission back with comments and criticisms that simply made no sense. I was asked to review a couple of papers that were out of my research area, and I did my best to research the prior literature on the subject and give a fair review, but many professors didn't have the time and would slam the paper rather than pass it on to somebody who would understand it.

    2. There is an "old boys" network present in every field of research. Most of these operate like an old fashioned closed businessman's club. Once you are accepted into the fold, everyone else kisses your ass and gives great reviews to any paper with your name on it, regardless of whether it is worthy of publication. The people who make it to this stage rarely make useful research contributions anymore, but they get their names on lots and lots of papers. All of the lesser known researchers practically beg the big boys to co-author their paper, thus virtually guaranteeing its publication. Some of the members of the "old boys" network have unbelievable egos that require constant stroking if you plan on ever making a name for yourself in the field.

    3. There are some researchers who will go to great lengths to stab you in the back. I remember the case of one little-known French researcher who wrote a landmark paper in our field. In the review process, it was sent to one SOB who was a big player in the field but was notorious for being a backstabber and had questionable intelligence (I never knew how he got so well known in the first place). The SOB managed to sit on the review and delay the paper's publication for over a year while he attempted to figure out what the French guy had done and duplicate his results. He sat on it so long, in fact, that he got his own paper published on the subject before the French guy's and stole all the credit. This dickhead, and several others like him, were also notorious for attacking their peers (and especially their peers' grad students) at any conference where one of their sponsors or potential sponsors was present.

    I left grad school thoroughly disgusted with the whole research community. Your status was measured by how many papers you published, not how many real contributions you made. There were too many people who capitalized on the original research of others by pumping out lots of papers covering slight variations in the application of the aforementioned original research. To keep up, you had to sell out and produce lots of junk papers with the right names on them. I managed to get published in IEEE journals 6 times in 2 years, but I'm only proud of two.
  • by Anonymous Coward on Wednesday July 28, 1999 @08:25AM (#1778509)
    This proposal is reminiscent of Drexler's suggestions for effective use of hypertext as outlined (for instance) in Engines of Creation (which is web-available, but I'm too lazy to find it right now). In fact, when he was describing hypertext in that context (before it was layered on top of the internet to create the Web as it exists today - which actually only implements about a third of the features of his proposed system) his discussion seemed to think of it primarily as a means of publishing scholarly information, reviewing it, and correcting for the lag-times and lack of back-links in conventional print media.

    This proposal doesn't seem to implement backlinks, but the availability of updating to correct bad information makes up for some of that.

    Some benefit might be had by looking at integrating some of the features found in the CritSuite (authored by Ka-Ping Yee and to be found at crit.org [crit.org]), which uses a proxy-server method to implement many of the missing features of true hypertext as a layer on top of existing WWW content.

    A good versioning system with history information available (analogous to CVS) is also desirable - this is somewhat reminiscent of Daniel Dennett's "Multiple Drafts Model" (which he used as a metaphor/model to describe how the mind handles memory, but which could equally well be taken at face value as a method for handling bona-fide document drafts in an electronic environment), but with better memory.
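
    As a toy illustration of such a drafts-with-history store, in Python (an invented interface; a real system would sit on CVS or a database rather than an in-memory list):

      # drafts.py -- sketch: keep every submitted revision of a paper, with history.
      import datetime

      class Draft(object):
          def __init__(self):
              self.revisions = []                 # (timestamp, note, text) tuples

          def submit(self, text, note=""):
              self.revisions.append((datetime.datetime.now(), note, text))
              return len(self.revisions)          # 1-based revision number

          def current(self):
              return self.revisions[-1][2]

          def history(self):
              return [(i + 1, when, note)
                      for i, (when, note, _) in enumerate(self.revisions)]

      paper = Draft()
      paper.submit("First preprint text...", "initial submission")
      paper.submit("Revised after referee comments...", "post-review revision")
      print(paper.history())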

    I'm kind of skeptical here: a poorly implemented system is in some ways worse than no system at all, because it can lead to complacency. As long as careful consideration is given to which features are desirable, this could be a Very Good Thing.

    One characteristic to watch for is how much centralized control is given to "editors" over content and filtering. Automated filtering methods, trust networks, reputational ranking, etc. have been fairly well-developed ideas for years. It would be an unfortunate oversight to fall back on print-age social technologies when something better is available.
  • by MaxZ ( 3197 ) on Wednesday July 28, 1999 @09:16AM (#1778510) Homepage
    Academic publishing seems to be mostly based on the reputation of authors and reviewers (in fact, some papers originating from students of influential professors get published even though they are total crap).

    The other flaw in the current system is usually hidden from the masses - most professors don't review the papers themselves, but their grad students do. The professors rarely have the time to go in-depth on the paper, check the math, etc.
    Sometimes the name of the grad student is attached to the review when it's sent out, and sometimes not. So it's impossible to determine the reputation of the real reviewer of the paper.
  • by pal ( 16076 ) on Wednesday July 28, 1999 @09:17AM (#1778511)

    as stated previously, without an editorial process, preprint servers are largely useless because of the amount of garbage one must wade through to get something useful.

    well, there are free, online journals that exist. they use the same peer-review process that print journals use, but eliminate the cost by eliminating the printer! for examples, The Electronic Journal of Combinatorics [combinatorics.org] has been around since 1994. if you look at the list of editors [combinatorics.org], you're sure to recognize a few names.

    i believe that this is the answer. the free availability of this information is what is sought after.

    - pal
  • by Trojan ( 37530 ) on Wednesday July 28, 1999 @08:30AM (#1778513)
    Subscriptions to scientific journals easily cost $3,000-$5,000 a year. For this, university libraries get a bunch of scientific articles written by scientists (who are mostly fully paid by universities and science foundations), and reviewed by the same group of people. All of this costs the publishing company about $0.00. The publisher then has to do a bit of editing and finishing up. It's practically a free lunch. And Elsevier (to name just one) is raising its prices by 10% each year.

    On top of that, almost all journals demand that the scientist sign a Transfer of Copyright Agreement. If you're not careful, you could be sued for publishing your own paper on your homepage.

    But now there's internet. We don't even need a press anymore. Potentially, there's lots of (library) money available that can be used to replace the old-style publishers. All it takes is for scientists to unite.
  • by nstrug ( 1741 ) on Wednesday July 28, 1999 @08:55AM (#1778514) Homepage
    The scientific journal publication system is ripe for overhaul. I submitted a manuscript to a journal that shall remain nameless in February this year. Four weeks ago they contacted me to say that the Copyright Transfer form (yes, you have to give them your copyright before they will consider publishing) had gone missing and could I please send another one; of course the manuscript was still sitting on the editor's desk. Last week I heard that the paper had been sent to a sub-editor whose job it will be to send copies (that I had to make) to reviewers for peer review. Sometime in early 2000 I will probably get reviewers' comments and amendments. I will then resubmit and hear nothing for another six months. If all goes to schedule the paper _may_ be published by late 2000 - two years after I did the original research. And for all this the journal will charge me (or rather NASA, and therefore the US taxpayer) an exorbitant sum - $115 per page and $700 for each colour plate - and as my research area is satellite imagery I can't help but use a lot of colour.

    A subscription to this journal, I might add, costs several hundred dollars per year and of course it does not pay any of its reviewers (like most academic journals, it is considered an 'honour' to be asked to review a manuscript.)

    Someone, somewhere is making a lot of money out of the whole journal scam.

    Nick

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Wednesday July 28, 1999 @08:03AM (#1778515) Homepage Journal
    There's an Open Source / Open Science conference happening in October at Brookhaven National Laboratories, Long Island, New York. I'll be speaking. I expect this to be a discussion topic.

    Thanks

    Bruce

  • by teraflop user ( 58792 ) on Wednesday July 28, 1999 @08:16AM (#1778516)
    I agree that peer-review is vital to filter out crackpots and commercial propaganda. But rather than the traditional editorial board approach, why not slashdot style moderation?

    Give each reader and each paper a credibility score. The credibility of a reader is based on the average credibility of each paper they have submitted. Each reader can give a credibility score to each paper they read. The score for a paper will be the average of the submitted scores, weighted by the credibility of each reviewer.

    Now, it is easy to make foul-ups in a paper the first time round, so an initial submission could be made to an editorial area. People could add comments slashdot-style, and the authors could use these to revise the paper for final submission.

    When I started reading this, I hated it. But now I love it! If someone launches a journal in my field this way, I'll gladly submit a paper. Of course, there is no real need to have individual journals, if the database can be searched flexibly.
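
    A bare-bones Python sketch of the weighting just described, with invented numbers (how much credibility a brand-new author starts with is left as a policy choice):

      # weighted_review.py -- sketch: a paper's score is the credibility-weighted
      # average of reader scores; an author's credibility is the average score of
      # the papers they have submitted so far.
      def paper_score(reviews):
          """reviews: list of (reviewer_credibility, score_given) pairs."""
          total_weight = sum(w for w, _ in reviews)
          if total_weight == 0:
              return 0.0
          return sum(w * s for w, s in reviews) / total_weight

      def author_credibility(own_paper_scores, default=0.5):
          return sum(own_paper_scores) / len(own_paper_scores) if own_paper_scores else default

      # example: two credible reviewers like the paper, one low-credibility reader doesn't
      print(paper_score([(0.9, 4), (0.8, 5), (0.1, 1)]))   # ~ 4.28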
  • In the various subfields of Physics, this idea of a public "preprint server" has been implemented for some time: check out the Los Alamos [lanl.gov] Physics Preprint server.

    I've been active in research (astronomy) for the past ten years or so, and I've had many conversations with other researchers on the future of scientific publication. Some of the main points are:

    1. Review/moderation is necessary. There are a _lot_ of people who have crackpot theories about the universe, and some of them aren't shy. Without refereeing of some sort, the number of scientifically worthless -- see definition below -- papers will grow to the point that they may swamp the worthwhile papers. At that point, many users will stop using the archive.

    Note on "scientifically worthless": science is an enterprise which depends on its workers adhering to a set of rules, such as understanding basic physical principles, checking the existing literature, creating falsifiable hypotheses, verifying new results, repeating experiments, etc. Papers describing ideas which aren't developed along these rules are, by definition, scientifically worthless.

    2. Scientists depend on their publication records to land good jobs, and to advance in those jobs. At the moment, in astronomy at least, the existing electronic archives are NOT viewed as "real publications". There's a little bit of a chicken-and-egg problem: until the electronic archives are taken seriously, many people won't publish in them exclusively. But if everyone publishes elsewhere, why take electronic archives seriously?

    3. Many people, myself included, worry a great deal about the use of electronic archives 10 or 20 years hence. I have paged through bound journals dating back more than 100 years, and used them occasionally in my research. I can interpret the information easily. But I don't think it will be an easy matter to keep electronic media up-to-date over a century. The librarians to whom I've talked are _very_ worried about this.

    Yes, I know that it may not be difficult in THEORY to copy old materials to new formats and new media every N years; but in practice, it's a royal pain. In an era of shrinking library budgets, it may become fiscally impossible.

    On the other hand, I do very much support the idea of "Open Source" publications. It will enable many more scientists to publish their ideas. In my field, for example, the authors have to pay the journals about $125 PER PAGE for the papers they publish. My last paper cost over $2000, and I had to pay for some of it myself (since I work at a small university that doesn't have a lot of money to support research).

    The tricky thing will be to find a mechanism which keeps the good points of the current scientific journal system, while avoiding the pitfalls (some of which I've mentioned above).
