
Web of Trust For Scientific Publications

An anonymous reader writes "PGP and GnuPG have been using webs of trust to establish authenticity without a centralized certificate authority for a while. Now, a new tool seeks to extend the concept to include scientific publications. The idea is that researchers can review and sign each other's work with varying levels of endorsement, and display the signed reviews with their vitae. This creates a decentralized social network linking researchers, papers, and reviews that, in theory, represents the scientific community. It meshes seamlessly with traditional publication venues. One can publish a paper with an established journal, and still try to get more out of the paper by asking colleagues to review the work. The hope is that this will eventually provide an alternative method for researchers to establish credibility."
This discussion has been archived. No new comments can be posted.

  • Wikipedia (Score:5, Interesting)

    by jgtg32a ( 1173373 ) on Wednesday February 04, 2009 @06:14PM (#26730405)
    This is exactly what Wikipedia needs to implement.

    This will allow it to overcome the credibility problems that it has.
    • Re:Wikipedia (Score:4, Insightful)

      by MoxFulder ( 159829 ) on Wednesday February 04, 2009 @06:45PM (#26730709) Homepage

      Indeed, the implementation of flagged revisions [] is currently being debated for the English Wikipedia, and was the subject of a recent /. article [].

      A lot of the debate centers on exactly what the "signing" process will entail in terms of responsibilities and consequences for the articles subject to it.

      I don't think a one-size-fits-all approach to trust networks is a good idea. Requirements for effective trust in key sharing, peer review, and wiki content may differ and I think it's appropriate for each to develop a fine-tuned approach, while borrowing good ideas from one another.

    • Re: (Score:2, Insightful)

      by mahadiga ( 1346169 )

      This will allow it to overcome the credibility problems that it has.

      TRUTH is more important than TRUST.

      • Re: (Score:1, Offtopic)

        by linhares ( 1241614 )

        Dormitory is important than Dirty Room

        Evangelist is important than Evil's Agent

        Desperation is important than A Rope Ends It

        The Morse Code is important than Here Come Dots

        Slot Machines is important than Cash Lost in 'em

        Animosity is important than Is No Amity

        Mother-in-law is important than Woman Hitler

        Snooze Alarms is important than Alas! No More Zs

        Alec Guinness is important than Genuine Class

        Semolina is important than Is No Meal

        The Public Art Galleries is important than Large Picture Halls, I Bet

    • by puusism ( 136657 )

      Half a year ago I wrote an article about how to implement Wikipedia peer review with digital signatures: []

      • by puusism ( 136657 )

        Replying to myself... here's the Wikipedia Village Pump discussion [] about the proposal. The proposal was rejected (or at least not implemented), since it was thought to produce a disincentive for editing signed articles.

        • Replying to myself... here's the Wikipedia Village Pump discussion about the proposal. The proposal was rejected (or at least not implemented), since it was thought to produce a disincentive for editing signed articles.

          Yup, true, but it's flying NOW! Jimmy Wales just announced it today []. (Probably on /. tomorrow.)

  • Still needs a root (Score:5, Insightful)

    by orclevegam ( 940336 ) on Wednesday February 04, 2009 @06:14PM (#26730409) Journal
    The problem of course is that at some level you still need to have a known good reference for the whole "web" to work. It doesn't help your credibility at all if you've got a paper signed by 100 of your closest crackpot buddies. What this does provide is the ability for someone in addition to established authorities to vet a work, such that a well respected member of the scientific community can easily and in a verifiable fashion signify his approval of a paper.
    • Wouldn't metrics like the Erdős number suffice? Calculate the weighted distance from known experts, such as Nobel laureates, Fields medalists, etc. It isn't that hard to notice a clique that is only weakly connected to the larger network.
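
      The parent's suggestion can be sketched as a multi-source shortest-path computation over the signature graph. A rough illustration follows; the researchers, weights, and graph are invented purely for the example, not taken from any real data:

```python
import heapq

def distance_from_experts(graph, experts):
    """Multi-source Dijkstra: weighted distance from the nearest trusted
    expert to every reachable researcher. Lower weight = stronger tie.
    graph: {name: [(neighbour, weight), ...]}"""
    dist = {e: 0.0 for e in experts}
    heap = [(0.0, e) for e in experts]
    heapq.heapify(heap)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Invented toy data: a tight crackpot clique with no path to the "root".
graph = {
    "laureate": [("alice", 1.0)],
    "alice":    [("laureate", 1.0), ("bob", 1.0)],
    "bob":      [("alice", 1.0)],
    "crank1":   [("crank2", 0.5)],
    "crank2":   [("crank1", 0.5)],
}
d = distance_from_experts(graph, {"laureate"})
# bob ends up two hops from the laureate; the clique never gets a score.
```

      A clique that only signs its own members never acquires a finite distance from the experts, which is exactly the "weakly connected" signal the parent describes.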

      • by PDAllen ( 709106 ) on Wednesday February 04, 2009 @07:01PM (#26730869)

        Journal publications are basically used to tell people who don't work in your area that you're doing decent work. If someone is doing decent work in an area you're a specialist in, you probably know them at least by sight and you probably hear about their results fairly soon after they prove them; the journal paper may well come a year or two later.

        But if you want funding, or you want a job, you have to convince a bunch of people who know very little about your area that you are a valuable person. The easiest way to do that is to point at recent papers in good journals (which, really, isn't so different to the web of trust idea: I have a paper in CPC because someone thought my work was good enough to go there, that kind of thing).

        There are lots of problems with the sort of metric you suggest; you need something relevant to now, you don't want it to discard people who do good work on their own or in tight groups (and there are quite a few of the latter), you don't want it to be distorted by the sort of mathematician who will publish every result they can get in any collaboration (there are quite a few, some of whom are very good and very well-connected but still publish some boring results along with the good ones).

    • by hobbit ( 5915 )

      The root should be established by your opinion on papers, their authors, and their reviewers. If you see eye to eye with an author or reviewer about a paper, you should increase your trust of them. In other words, everyone has a separate view onto the web of trust.

      • Re: (Score:3, Insightful)

        by orclevegam ( 940336 )
        Yes, but for the web of trust to have value to the casual observer certain respected authorities need to be established which is something people tend to do naturally on their own. If something like this is implemented it will most likely never have an official authority, but it will have several de facto ones that people will come to associate as authority figures. Essentially someone not well entrenched in a particular field may not know if Dr. X, whose work is signed by Dr. Y, is any good, but they have h
        • by hobbit ( 5915 )

          Yes, but for the web of trust to have value to the casual observer certain respected authorities need to be established which is something people tend to do naturally on their own.

          Indeed. I'd like to think that if I looked at the web of trust through, e.g., the IEEE website, I'd get something approximating the combined trust of their membership. Rent-an-opinion, if you will!

    • by DancesWithBlowTorch ( 809750 ) on Wednesday February 04, 2009 @06:33PM (#26730607)
      Actually, it doesn't need a root. Quite the contrary: the developing graph could give amazing insight into the structure of research communities. It would be possible to identify researchers forming links between otherwise almost disconnected areas of research, and to find the great minds at the centre of such blocks. There is no "root" to the web of scientists. Even people like Erdős were only ever local subroots.

      I think this project is a great idea. Unfortunately, it currently seems to consist of only a command line tool to sign reviews with GPG. That's nowhere near enough if it is to thrive beyond the CS world. It needs a simple, rock-solid GUI, and most importantly, lots of eye-candy for the graph. It will need to look cool and work well to build up the momentum for this to work at all.
      • Re: (Score:3, Interesting)

        by orclevegam ( 940336 )
        That's true, and would be an interesting use of the data, but you have to consider the primary application this would be applied to. Something like this holds little value for someone in the same field who travels in the same circles, as they're already aware of the reputations and merit of other researchers in their field of study, or, barring a recommendation from someone they know, they should be capable of reviewing the paper for themselves and deciding if it has merit. Where this does provide insight is to the outside observer
        • Re: (Score:3, Insightful)

          Where this does provide insight is to the outside observer

          You mean like grad students?

          • You mean like grad students?

            Who do you think is doing all the work? From my experience as a grad student, the faculty are more "outside observers" than grad students, especially in the student's thesis area. Faculty tend to deal a lot more with finances and administration than research.

            • Re: (Score:3, Funny)

              You must have gone to a top university. In my experience, grad students have no idea what their thesis is about or who's who in their chosen field until about 3/4 of the way through (but luckily there's always panic to smooth over the last 1/4 :)

              Now, postdocs, that's another story.

      • I wonder if subwebs will form around subroot voids? The clear "roots" I can think of for topics in my field would be less interested in using a tool like this because they're already known. It's us fresh-faced grad students and other unknowns who would be more likely to a) want the extra credibility and b) have the time to pursue it.
      • I agree that this is a great idea, and with your criticism of the implementation. I would like to see this as a website, compatible with the existing tool, funded by Google, and written by someone other than me (although I am willing to help). It could be sort of like MySpace for science ;-)
      • That already exists without a web of trust. There is a lot of research already into analysing citation and co-citation graphs for the features that you mention. In network-analysis terms, the links that you describe are called gateway nodes.

    • Re: (Score:3, Interesting)

      by Zerth ( 26112 )

      Much like PageRank, you don't need a "known good". Start with everyone on an even footing, passing their value on to those they sign, receiving value from those who sign them, and then iterate until it reaches a reasonably steady state.

      I don't recall if there is a general "scientist" number, like there is an Erdős number for mathematicians, but on the off chance a crackpot network were to form and become larger than any of the networks of actual scientists, then you might want a "known good", but it wouldn't matter who
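
      The iteration described above can be sketched as a PageRank-style power iteration. In this minimal sketch the damping factor of 0.85 and the toy signature graph are my own assumptions, not part of any actual proposal:

```python
def trust_rank(signatures, damping=0.85, iterations=50):
    """Everyone starts on an even footing; each round, a researcher's
    trust is split among the people they have signed, and they receive
    trust from whoever signed them. signatures: {signer: [signee, ...]}"""
    nodes = set(signatures)
    for signees in signatures.values():
        nodes.update(signees)
    n = len(nodes)
    rank = {name: 1.0 / n for name in nodes}
    for _ in range(iterations):
        incoming = {name: 0.0 for name in nodes}
        for signer, signees in signatures.items():
            if signees:
                share = rank[signer] / len(signees)
                for s in signees:
                    incoming[s] += share
        rank = {name: (1 - damping) / n + damping * incoming[name]
                for name in nodes}
    return rank

# Invented toy graph: three scientists endorsing one another's work,
# plus a two-member crackpot clique signing only each other.
sigs = {
    "alice":  ["bob", "carol"],
    "bob":    ["alice"],
    "carol":  ["alice"],
    "crank1": ["crank2"],
    "crank2": ["crank1"],
}
ranks = trust_rank(sigs)
# alice, endorsed from two directions, ends up with the highest score.
```

      It also illustrates the caveat in the comment above: a closed clique retains whatever trust mass it starts with, so a sufficiently large crackpot network could still score well without an external "known good".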

    • Normal peer review is basically a grouped version of the same thing. A paper being published in a journal tells you that the journal's editors/reviewers thought it was a legitimate contribution. What that tells you depends on what you think of the journal's editors/reviewers. It's basically an endorsement mechanism: publication venue X, meaning set of people Y, says that paper Z is good. This is a somewhat more decentralized version, where individual researchers can say that paper Z is good.

    • by Sloppy ( 14984 )

      Whoever happens to be reading a paper at the time would be the root. If you haven't set your opinion of any of the 100 crackpots to greater than zero, then their signatures don't matter.

      And yes, that means trust is subjective. Welcome to the real world. Anyone who tries (or has tried, and it happens a lot) to set up a trust system that isn't subjective is deluded.

    • You've hit the nail on the head. For example, read the Wegman report into the Mann et al. "hockey stick" paper in the climate science debate. The right way to judge a result in science is not by the first paper on the topic, but by the number of independent follow-up studies, all of which support the result. The idea of independence is something you need to be stringent about: ideally it requires different (unconnected) researchers obtaining new data and performing their own analyses. People from the sa

      • the Wegman report into the Mann et al. "hockey stick" paper...

        Please don't encourage people to waste their time by conflating politics with science. We are talking about trusting scientific papers. Mann's "hockey stick" papers have been published in the journal Science (2nd highest in academic journal rankings). The correct way for McIntyre and McKitrick to attack them is to publish contrary journal articles, not to ask a political committee devoted to 'energy' to act as judge and jury on a particular
        • You should read some statistics. Unlike you, I do recommend that people read key documents from different players in the debate rather than just take the party line from Real Climate (which I believe was started by Mann and his colleagues).

          You might be interested in Wegman's CV [] before suggesting his findings on the Mann et al. hockey stick were constructed politically. There's this interesting entry in his CV, for example: Appointed Chair of the Committee on Applied and Theoretical Statistics, National Ac

          • I did not say Wegman was political; I am saying he was appointed by a political committee to give an OPINION. If you want to appeal to authority then by all means go ahead, but I am kind of fond of the scientific method.

            "Of course, I don't actually expect you to read any of this stuff, but other people following this discussion might be curious."

            I have followed the science for at least 25 years, and I also followed the Wegman thing while it was happening. You don't have a clue; you are grasping at straws to tr
            • You have me confused: are you saying Wegman's testimony on the hockey stick should be ignored not because of what he said or his evident qualifications as an independent expert, but because of who asked him for the report?

              Personally I find it hard to believe you really respect the scientific method when you (a) prefer to accuse people of playing politics rather than addressing the content of their argument and (b) argue against argument by authority (I didn't: I posted a link to Wegman's CV to point out tha

          • NAS testimony to the energy committee [] (pdf).
            NAS climate homepage []
            As you say "other people following this discussion might be curious", even if you are not.
    • If this gets enough traction/critical mass, this problem will be solved: crackpots can mutually sign as being topz 1337, but they will inevitably be signed as being below the mark by most other, serious (and better punctuated) users.
      The site linked describes the process in too little detail to make sense of what they are proposing, but I have thought through a very similar process with some colleagues of mine. This problem is solved when not only each article, but each reviewer gets scored according to both

  • by Anonymous Coward

    Coming this fall. Jean Claude Van Damme in...Web of Trust.

    Two certificates enter, one leaves.

    But what about the new tool? Does Google have the favor?

    Rated R for violence and sexual situations.

  • Weird objection (Score:5, Insightful)

    by nine-times ( 778537 ) on Wednesday February 04, 2009 @06:18PM (#26730443) Homepage

    I'm sometimes bothered by the stress on studies being "verified" by something like a peer-review process. Not that I don't understand why it makes sense. It's a pretty reasonable attempt to sort valid work from crap, but...

    There's still a certain way in which it's just an appeal to authority. It's people saying, "We should accept what this scientist says because other scientists say that he's right." I guess what I'm saying is that I worry that, as a process like this becomes more technical, people will be more likely to confuse a statement like, "This study has been reviewed by other scientists and seems to have merit," with something more like, "This study is correct, infallible, and indisputable."

    And I guess part of the reason I worry about this is that there may be cases where what "everyone thinks" (i.e. the common conception even among experts) is wrong, and some random nutcase is right. It almost never happens, but it happens sometimes. It seems to me that a technical method of assigning trustworthiness of ideas in a web of trust might possibly lead to having all the groundbreaking ideas go into a spam filter somewhere, never to be seen again.

    • Re:Weird objection (Score:4, Insightful)

      by orclevegam ( 940336 ) on Wednesday February 04, 2009 @06:26PM (#26730519) Journal
      Interesting point. Something that occurs to me however is that any paper worth its salt really has two things that can be verified/approved independently of each other. The first, and easier of the two, is the test procedures, and any math/established formula used. Assuming that no flaw can be found with that, you move on to the second part, which is the theory being proposed to explain the results of the tests and/or how any discrepancies between the observed results and the theory are handled. It's entirely possible to have a paper that has excellent test results that raise interesting questions, but a completely nutjob theory attached to it. To ignore the results of the tests because the theory is crazy is to throw the baby out with the bath water. Likewise, just because the theory proposed in a paper is a well established and respected theory is no reason to sign off on flawed test results.
      • Re: (Score:3, Interesting)

        by nine-times ( 778537 )

        Yeah, I acknowledge that my concern shouldn't really be the primary concern. There's a reason I wanted to call it a "weird" objection.

        I just think it is possible to put too much faith in peer review, given that the "peers" reviewing it are also human beings, just as fallible as other human beings. Computers are arguably less fallible in other ways, but of course they can't really make judgements. So I'm just really trying to point out that, in the other cases where we mix fallible human beings with mach

        • Re:Weird objection (Score:5, Insightful)

          by ralphbecket ( 225429 ) on Wednesday February 04, 2009 @09:04PM (#26732065)

          Peer review is actually pretty weak. It's mainly effective at spotting obvious howling errors. Peer review is not the same as replication and, indeed, many reviewers don't bother to check the equations or data presented in a paper unless they are genuinely suspicious of the conclusion. Replication, not peer review, is the gold standard of science.

    • by m50d ( 797211 )

      Everything you say is true, but the comparison to a spam filter is apt. How many people read through and check all their emails carefully, in case they really are the heir of a deceased Nigerian prince? Yes, some non-peer-reviewed papers might be valid; brilliant even. But it's simply not worth the cost of wading through the crap.

    • Re:Weird objection (Score:5, Insightful)

      by evanbd ( 210358 ) on Wednesday February 04, 2009 @06:40PM (#26730659)

      As a third party, there is no way I have the time to follow the chain of logic that results in a modern scientific paper from first principles. At some point, I have to accept some of the preconditions of the paper without verifying them, because doing otherwise implies that I am an expert in the particular field the paper is relevant to. And there are plenty of cases where I want to make use of a result from a field that is related to my work but in which I am not an expert.

      Appeal to authority is the fundamental reasoning technique I apply in such cases. A respected expert says it is so, and so I will trust them until I have reason to believe otherwise. That trust should not be blind -- if I am presented with reason to, I will happily re-evaluate that trust. Perhaps the expert is mistaken. But, in the interest of actually getting something done myself, I will accept as a default position that the experts know what they're talking about.

    • Re: (Score:3, Insightful)

      In science(and generally) the "appeal to authority" is complicated because there are actually several quite different flavors of appeal to authority, which all mean quite different things; but commonly blur together in ordinary use. The following is incomplete; but it hopefully gives a rough outline.

      On the one hand, you have the appeal to authority as an argument in itself. This is the classic medieval "According to the philosopher..." stuff. When this happens in science, it is undesirable, since science
    • by migloo ( 671559 )

      I'm sometimes bothered by the stress on studies being "verified" by something like a peer-review process. (...) there may be cases where what "everyone thinks" (i.e. the common conception even among experts) is wrong, and some random nutcase is right.

      Absolutely, and that is exactly how Science progresses. The modern peer review process only adds viscosity to the mechanism of discovery.

    • Passing a peer review doesn't provide assurance of the accuracy of a scientific paper. All a peer review does is filter out stuff that we are already pretty sure is bogus.

      But what makes a scientific concept meaningful is review by experimentation. Ultimately, it's important for people to drum up experiments that could be used to confirm or reject the theory. And really, your concern is an artifact of the lousy job being done by the public school system in teaching students what the Scientific Method is real

    • It's people saying, "We should accept what this scientist says because other scientists say that he's right." I guess what I'm saying is that I worry that, as a process like this becomes more technical, people will be more likely to confuse a statement like, "This study has been reviewed by other scientists and seems to have merit," with something more like, "This study is correct, infallible, and indisputable."

      Getting a group of scientists to fully agree on something is akin to herding cats, or for a better analogy, getting all of Slashdot to agree on the best *nix text editor.

      "This study is correct, infallible, and indisputable." is best replaced by "This study has made it through one of the most grueling gauntlets that modern civilization has to offer and made it out the other side, therefore it has merit."

    • Appearing in a peer-reviewed journal isn't a stamp of authoritative correctness on a paper, just that a few people thought it was worthy of some space in the journal. Peer-review is (supposed to be) just a rough initial filter to cut down the noise; if a paper is actually good and useful it will be cited often.

      • I know it's not supposed to be a stamp of authoritative correctness. That was my point, that (some) people incorrectly treat it as one.
    • I'm sometimes bothered by the stress on studies being "verified" by something like a peer-review process.

      This is a misunderstanding. The role of peer review is not to verify anything. To the contrary, there are many situations where a reviewer will not be able to verify results with reasonable effort. Think LHC experiments, Mars probes, etc.

      Peer review is really just a spam filter. Reviewers can check whether a publication has novel aspects to it, whether it is relevant to the journal or conference, whet

    • Things like relativity and plate tectonics have taken a while to be accepted because so many people thought they were crazy to start with.

      However, it's better to have papers checked than not checked. The reviewers don't necessarily have to agree with the conclusions, but they do have to make sure the study's methodology is sound.

      Other people will then go and do similar studies to see if they can reproduce or disprove the results.

    • You should read "The Structure of Scientific Revolutions". There is a large body of work associated with the phenomenon you describe, and most scientific communities take a lot of care to prevent the negative effects.

  • by SuperBanana ( 662181 ) on Wednesday February 04, 2009 @06:24PM (#26730501)

    The hash isn't necessary. If the trust relationship between two academic peers includes "worried about him modifying the paper after I review it", there is no trust relationship.

    In fact, the whole thing isn't necessary. Pubmed, anyone? All someone has to do is pick up the phone, call the reference on a CV, and say, "So, what did you think of Dr. X's work on Y?", and they learn more than they will by running a program that says "Hashes verified."

    This system is also never going to fly with researchers. Most (but not all) of the (brilliant) bio people I've worked with are completely helpless when it comes to technical stuff. Even some of the bioinformatics people who can write amazing algorithms aren't clued in on stuff outside of their field.

    • Re: (Score:3, Insightful)

      by evanbd ( 210358 )

      Yes, because the first thing I'd do on seeing a vaguely interesting paper is call up half a dozen random researchers, wait until they weren't busy in the lab to get a comment back, and then eventually have some clue what the consensus among those more directly involved in the field than myself is several hours later. Why not just have them publish their opinions? Then they don't have to answer the same questions repeatedly.

      The question isn't "why should we include the hashes?" but more properly "Is there

    • by Sloppy ( 14984 )

      The hash isn't necessary.

      The hash is harmless (as in: there are no downsides), and a lot faster to sign (and verify) than a megabyte.

      If the trust relationship between two academic peers includes "worried about him modifying the paper after I review it", there is no trust relationship.

      You can trust someone and still have a computer hardware failure, or accidentally hit a key after you accidentally load a document into an editor instead of a viewer. You can trust someone, but acknowledge that they might make
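
      Sloppy's two points — signing a small digest instead of a megabyte, and catching accidental changes — can be illustrated with plain SHA-256. In a real deployment the 32-byte digest would then be signed with a GnuPG key; the manuscript bytes below are made up for the sketch:

```python
import hashlib

def digest(paper_bytes):
    """Fixed-size fingerprint of a paper. A reviewer's signature covers
    these 32 bytes, not the full document, so signing cost is constant
    regardless of how large the paper is."""
    return hashlib.sha256(paper_bytes).hexdigest()

paper = b"We conclude that the results are significant.\n" * 25000  # ~1 MB
signed_digest = digest(paper)

# An accidental one-byte change (stray keystroke, disk glitch) yields a
# completely different digest, so verification fails loudly.
corrupted = paper[:-2] + b"!\n"
assert digest(corrupted) != signed_digest
```

      This is exactly the "trust someone, but acknowledge they might make a mistake" case: the digest doesn't express distrust of the author, it just makes any post-review change, malicious or accidental, detectable.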

    • The hash isn't necessary. If the trust relationship between two academic peers includes "worried about him modifying the paper after I review it", there is no trust relationship.

      No. You could not be more wrong, I'm afraid. When I sign a review of somebody's work I am not recommending / trusting them as a person. I am recommending that single piece of work. People who trust me can then trust that the piece of work is good.

      The hash is completely necessary because the trust relationship is slightly more fine-grained

  • Very poor idea (Score:5, Informative)

    by littleghoti ( 637230 ) on Wednesday February 04, 2009 @06:31PM (#26730577) Journal
    What is important is *anonymous* peer review. There needs to be a mechanism for new scientists to question established researchers without lasting detriment to their careers. On another note, what I thought this article might be about was CiteULike [], which is great. Any academic should check it out.
    • Re: (Score:3, Informative)

      by TheSunborn ( 68004 )

      I might have missed something, but I am pretty sure that most peer reviews are anonymous. (The authors of the paper don't know who the reviewers are.) The publisher does know, but he keeps it secret.

      • Re:Very poor idea (Score:4, Interesting)

        by Anonymous Coward on Wednesday February 04, 2009 @06:54PM (#26730789)

        They are anonymous--the same way student reviews of a professor in a class with 5 students are anonymous. By the time you're doing anything original, innovative, and remotely interesting in a field, you're going to have 20 peers tops--very likely only 5 or so. Some of your reviewers will be unrelated and able to check basic mathematics for correctness but not much more.

        Honestly, I think I haven't yet seen someone receive comments back where they couldn't take a good guess at who the originator was...

        • by godrik ( 1287354 )

          Honestly, I think I haven't yet seen someone receive comments back where they couldn't take a good guess at who the originator was...

          Most of the time, I do not know who read my articles and I do not believe the reviews I wrote tell who I am. There are a bunch of "top-level" researchers in my field (parallel computing) and probably 10 times more "classical" researchers and PhD students. Perhaps it is not true in all fields.

          Even if your identity can be guessed, it is never entirely certain. And being anonymous prevents pressure such as "you rejected my last article, I will reject yours". You won't risk it since you are not sure it is the same guy. A

        • by PDAllen ( 709106 )

          Depends on what area you're in, I think. I suppose there are some areas where really only a few people work and it's so different to anything else that no-one from outside can easily referee a paper - but if you're in that position, I'm hoping that you're working in a very new and growing subject. The alternative, bluntly, is that you're doing something the rest of the world thinks is boring.

          I do combinatorics; if you give me a paper that doesn't use topological or heavy probabilistic methods, I could ref

        • by jmv ( 93421 )

          Honestly, I think I haven't yet seen someone receive comments back where they couldn't take a good guess at who the originator was...

          Especially when the reviewer's comment says: "you should cite this great work by professor Foo..."

        • If peer review is done only at the top of each discipline, you might know who reviewed you. However, people write at all levels of the academic scale. Having a broad peer review process, where even people at a different specialization level from yours can comment (with opinions weighed, of course, according to their respective standing), can further improve this. Sometimes it can even lead you to understand points you didn't originally consider, as they are outside your usual radar.

      • Ideally, though, the reviewers wouldn't know who the authors are. This stops well-known researchers from slipping in bum papers and stops people from unknown institutions getting kicked without regard for their work. This would be the review equivalent of a double blind study, and if the goal of the process was to find valid research, it would already be like this. Unfortunately getting published these days is way more about being known and shaking hands than actually doing good work, because you know th
        • by PDAllen ( 709106 )

          That's true but sadly almost impossible to make work. People routinely make preprints available as soon as they submit to a journal (a few weeks before it goes to a referee, often), and many people will give talks about their work when they're sure of the correctness of the result - which can be a very long time before it's written up. I gave a bunch of talks last summer on a couple of subjects, neither of which is yet written up; one because we think we can do something more, and the other because all of my coau

      • Usually, when you review an article, it is under the double-blind method: you don't know who the author is, either. You must judge based only on the merits of the paper, without regard to whether the author was a student of Einstein himself.

    • Re: (Score:3, Insightful)

      by rlseaman ( 1420667 )

      There have been numerous attempts to redefine peer review to bring it into the 21st century. There will be many more after this effort.

      Peer review is typically anonymous. It represents a trust relationship between the editor and the referee, not directly between the author and the reviewer. If the journal - or rather, the editor - is removed from the equation, then some new mechanism is needed. It isn't obvious that the web of trust as described fits the bill, however.

      An equivalent to a distributed cert

      • by slawekk ( 919270 )

        Peer review is typically anonymous

        Apart from a couple of experiments, peer review is never anonymous. It is typically half-anonymous: the author does not know the identity of the reviewers, but the reviewers do know who the author is. I don't know about other disciplines, but in mathematics this is a necessity from a practical point of view. Active researchers may get more than one paper to review per week. Sometimes one would need a couple of weeks to check all the details in proofs. So the reviewers have to rely on the credibility of the autho

        • Yes, that is a more precise description. The main point I was making, however, is that the web of trust is currently between the referees and the editors on the one hand, and the editors (forwarding the referees' comments) and the authors on the other. Fairly often referees waive anonymity, but that is their choice, not the editor's. As you say, referees have to know the identity of the authors, although this knowledge can be abused.
        • by godrik ( 1287354 )

          In Computer Science, there are several journals and conferences where the reviewing process is double-blind.

          Researchers who get more than one paper per week generally push the review down to a PhD student. Moreover, most of the time reviews are weighted with a confidence index, so that reviewers can tell the editor 'it sounds correct, but I did not check the details'.
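A confidence-weighted aggregation like the one described above can be sketched in a few lines; the 0-10 score and 0-1 confidence scales and field names are invented for illustration:

```python
# Sketch of confidence-weighted review aggregation. The scoring scales
# are made up; real editorial systems vary.
reviews = [
    {"score": 8, "confidence": 0.9},  # "I checked the proofs in detail"
    {"score": 6, "confidence": 0.3},  # "sounds correct, did not verify"
]

total_confidence = sum(r["confidence"] for r in reviews)
weighted_score = (sum(r["score"] * r["confidence"] for r in reviews)
                  / total_confidence)
# The low-confidence review pulls the average down only slightly.
```

A hasty "looks fine" review thus counts for far less than a careful one, without being discarded entirely.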

      • [the number of references] already exists and is widely used as a metric

        Citing a paper does not necessarily mean endorsement of its contents - only that the paper you are referencing was relevant to a part of your discussion. References can be used to provide counter-examples or to denounce bad research, but they are counted as citations anyway. In scientific citation number-crunching, any publicity is good publicity.

        A crazy idea would be to add metadata to references, describing the type and relative importance of the source. That would make 'paper A, main inspiration, very i
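A toy version of that reference-metadata idea, with invented field names and weights, might look like:

```python
# Hypothetical typed citations: each reference carries a type and a
# signed weight, so counter-examples no longer count as endorsements.
references = [
    {"paper": "A", "type": "main-inspiration", "weight": 0.9},
    {"paper": "B", "type": "counter-example",  "weight": -0.5},
    {"paper": "C", "type": "background",       "weight": 0.2},
]

# Summing signed weights instead of raw citation counts means that
# "any publicity" is no longer automatically "good publicity".
endorsement = sum(r["weight"] for r in references)
```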

  • by jellomizer ( 103300 ) on Wednesday February 04, 2009 @06:39PM (#26730645)

    Where popular ideas get modded up and controversial ideas get modded down.
    We still need to find a way to get ego out of science, without having every crackpot idea be seriously considered.

    • Re: (Score:3, Insightful)

      by Lendrick ( 314723 )

      In addition, there will be an effect where more prominent scientists will get tons of links and favorable peer reviews, in exchange for being "friended" in this network.

      Certainly this effect must exist already, and admittedly a bit of it is good (if someone repeatedly submits excellent papers, it stands to reason that their opinions should hold a bit more weight) but this may amplify the effect far past the point of usefulness. Ultimately, science needs to stand on its own merit, and not just the reputatio

    • popular Ideas get modded up and controversial ideas get modded down

      Who would have modded up Galileo? No one.
      Very bad for science.

      • Re: (Score:2, Insightful)

        by iris-n ( 1276146 )

        No one? So tell me, how have you heard about Galileo? Found a dusty tome on a shelf in an old monastery, translated it from Latin, and amazed yourself at how ingenious he was?

        He is only known today because his people 'modded him up'. Some of his ideas were controversial, going against the good ol' Aristotle, but he was a very respected teacher who made brilliant insights into various areas of physics and mathematics, and only later reached his astounding conclusions. Read his biography.

        If some paper is refuse

    • by glwtta ( 532858 )
      We still need to find a way to get Ego out of science. Without having every crackpot idea be seriously considered.

      I love how people want science to be this pure ethereal thing, for some reason. Of course research is ruled by the egos of those involved, just like every single other human endeavor. Why would you expect it to be an exception?
    • by Sloppy ( 14984 )

      No. Slashdot, unlike a reputation system based on a PGP WoT, has anonymous moderation. I can't look at how you have moderated, decide whether you've done a good job or not, and then set in my prefs to ignore or use your moderations. If you moderate something as insightful, then the system is going to present it to me as insightful. That is what causes the popular-goes-up, controversial-goes-down problem.
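A minimal sketch of the reader-scoped moderation being described (all names and trust weights are invented):

```python
# Each reader weighs moderators individually instead of trusting one
# global score. A weight of 0 means "ignore this moderator entirely".
moderations = [
    ("alice", "comment-1", +1),  # alice modded comment-1 up
    ("bob",   "comment-1", -1),  # bob modded it down
]

my_trust = {"alice": 1.0, "bob": 0.0}  # my personal settings

def personal_score(comment_id):
    return sum(my_trust.get(who, 0.0) * vote
               for who, cid, vote in moderations
               if cid == comment_id)

# For me, comment-1 scores +1.0: bob's down-mod is ignored.
```

Because every reader computes their own score, there is no single "popular" verdict for controversial material to fall afoul of.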

    • Well, we used to have such a system, when monasteries did the peer reviews before the universities took over. After the universities took over, they ruined and abused the scientific method, so that any crackpot idea can be seriously considered as long as the scientist with that crackpot idea can get a few of his/her friends to conspire to rubber-stamp the paper, or create aliases to peer review his/her own work. Something about a system of morals and ethics that were not based on a religion and seriously bent to all

  • Already got one (Score:5, Insightful)

    by burris ( 122191 ) on Wednesday February 04, 2009 @06:39PM (#26730651)

    Scientific publications already have a web of trust in the list of cites at the bottom. Publications don't get cited unless they are notable in some way.

    • Indeed. Google's PageRank algorithm started off as citation analysis for academic papers--one could find out which papers were notable in a given field by the quantity and notability of the papers citing it. Then they realized that the same approach could work for the Web, treating links as citations.

      As a sibling post points out, this says nothing about the correctness of the paper, only its notability--but ideally if a paper is shown to be faulty, then the paper exposing the faults will get many citation
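The citation-analysis idea behind PageRank can be sketched with a toy power iteration; the graph and damping factor here are purely illustrative:

```python
# Toy PageRank by power iteration over a tiny citation graph.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each paper to the list of papers it cites."""
    nodes = set(links) | {c for cited in links.values() for c in cited}
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in nodes}
        for paper, cited in links.items():
            # A paper with no citations (dangling) spreads rank evenly.
            targets = cited if cited else list(nodes)
            for c in targets:
                new[c] += damping * rank[paper] / len(targets)
        rank = new
    return rank

ranks = pagerank({"A": ["C"], "B": ["C"], "C": []})
# "C" is cited by both "A" and "B", so it ends up ranked highest.
```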

      • by tucuxi ( 1146347 )

        When building its databases, Google does not treat all links as equal. You have the 'rel=nofollow' link attribute to indicate that you don't want to attribute trust to the destination of the link, and Google can discount link weight based on the outgoing anchor text.

        I would be very interested in seeing that done in scientific publications. As you said, notability can come in many flavors, and they are not equally yummy.

  • arXiv leads the way (Score:3, Informative)

    by MoxFulder ( 159829 ) on Wednesday February 04, 2009 @06:42PM (#26730667) Homepage

    arXiv [], the pioneering online preprint archive, already does something like this, though not as sophisticated. They have an endorsement system [], wherein more established users endorse newer ones. It's fairly rudimentary and ad-hoc, but seems to keep out crackpots and spam fairly well in practice.

    • Re: (Score:3, Insightful)

      by PDAllen ( 709106 )

      Spam yes. Crackpots get piped to math.GM for the amusement of all (e.g. the guy whose 'proof' of the Riemann hypothesis was 20 pages of verbiage boiling down to 'the universe is built on maths and maths is built on primes, so they must behave naturally and therefore the result is true...').

    • Re: (Score:2, Interesting)

      by tehgnome ( 947555 )
      I feel the arXiv system is a bit weak, but when I RTFAed, this is precisely what I thought of. I would love to see this be implemented with arXiv.
  • Endorsements stink. (Score:1, Interesting)

    by Anonymous Coward

    Everyone loves you, but then some nut wants to climb the ladder. They rake through your relationships looking for pot-smokers and communists. Suddenly you lose your security clearance to the knowledge in your head about how to make the atomic bomb that you developed.

  • If this works, it'll hopefully create an industry standard. Education and Research often lead technological standards, like the Internet, and this sounds like exactly what the entire 'net needs.
  • The Web of Trust proposal sounds a lot like the website Faculty of 1000 []. I also would suggest looking at the website for the Web of Science []. If I cite another's work, I've reviewed it and, at least on some level, agree with the findings. I'm certainly not going to cite works that I think are bogus.
  • PGP and GnuPG have been utilizing webs of trust to establish authenticity without a centralized certificate authority for a while.

    A more accurate statement would be "PGP and GnuPG's web of trust system has been pretty much ignored since it was created".

  • Journals still offer a critical service by at least partially blinding the review process. If I know you're reviewing the paper, and you know that I know that you are reviewing my paper, then large social pressures can come into play. For this reason, the idea of decentralized publishing seems less appealing to me than modifying the metric by which scientific impact is measured.
  • The idea of having varying levels of endorsement can be used in other contexts as well. For example, when someone offers indirect information, you would be wise to consider how much you trust the person relaying the information when assessing its validity. The more remote the source is, the more you want to suspect the credibility of the report. That's why hearsay evidence is inadmissible in court. But how would this be implemented in daily life? I took a whack at exploring the idea in a short story called

  • Firstly, I am a scientist; I have a number of papers published in journals significant to my field (plant biology) and have 200+ citations to date. This scheme is pointless. It amazes me how many stories along the lines of 'we can make scientific publishing work better' get onto Slashdot.

    In general the peer review process works extremely well: a journal gets multiple reviews on a submitted manuscript, the authors don't know who they are by, the editor makes a decision based on those reviews, if the auth

    • by tucuxi ( 1146347 )

      it amazes me how many stories along the lines of 'we can make scientific publishing work better' that get on Slashdot.

      So, in your humble opinion, science publishing should continue essentially as it is, perhaps with a shift towards open publishing. And the current 'science accounting' methods (adding up citation sum-totals, relying on journal weights, or combining both) are perfectly adequate. And the pressure from these flawless accounting systems surely does not drive scientists to publish immature, partial results in a frenzy to score more papers than their peers.

      No, I do not have a silver bullet, and yes, the syste

  • Great. So now those new scientists with few or no papers, or unpopular theories, are going to have to fight even harder to establish credibility and get published.

    Every rose has its thorn.

  • sure it will show who signed it.

    What is to stop one scientist from creating an "alias" with a different PGP/GPG key, publishing in a different scientific journal, and then using that "alias" or a series of aliases to peer review his/her own work? Ward Churchill did this with his works, and I have known many who published articles under aliases in the scientific community as well.

    A key sign is the lack of such things as a margin-of-error calculation to show that the data was randomly selected and not "cherry picked" to pro

  • by epine ( 68316 ) on Thursday February 05, 2009 @12:37AM (#26733715)

    Isn't "web of trust" in the same synosphere as Greenspan's failed notion of counterparty surveillance? Wasn't it a "web of trust" that allowed the Catholic church to conceal deeply entrenched violations of trust while delaying its apology to Galileo for 400 years? Wasn't a "web of trust" what allowed Madoff to dig a $50b crater? What percentage of novel endorsements from one genre author to another come equipped with a set of kneepads?

    Why is it that so many people are allured by this concept?

    • by skeeto ( 1138903 )

      From a technical perspective, being decentralized is a major feature for a system. Decentralized systems are generally robust, reliable, and scalable. There is no single point of failure, and the system will be much more resistant to attacks than a centralized system. The cost of the system is also shared by its nodes, so there is no expensive central point, like SSL authorities.

      From a human perspective, a web of trust is very similar to human social behavior. For most people, there are a handful of people

  • From the gpeerreview page: "If the author likes the review, he/she will include it with his/her list of published works."

    I honestly believe that the future of scientific publication is in a system where everyone publishes whatever they like and generate a hash of the article (or register some kind of unique ID). People then review these articles when they read them, with researchers in the field having unique IDs of their own. The score of the paper then works using something like the pagerank eigeinve
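The hash-as-unique-ID part of that proposal is straightforward to sketch; the field names and reviewer ID below are invented for illustration:

```python
# Content-addressed article IDs: the SHA-256 hash of the article text
# serves as a publisher-independent identifier a review can point at.
import hashlib

def article_id(article_bytes):
    return hashlib.sha256(article_bytes).hexdigest()

paper = b"Abstract: we show that ..."
review = {
    "article": article_id(paper),  # ties the review to this exact text
    "reviewer": "researcher-42",   # the reviewer's own unique ID
    "verdict": "endorse",
}
# Any later edit to the paper changes its hash, so stale reviews no
# longer match and cannot be silently carried over.
```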
  • Use GPeerReview to sign the review. (It will add a hash of the paper to your review, then it will use GPG to digitally sign the review.)

    Here's where everything will fall apart. When almost all faculty members I know (except the math and some CS ones) act like this [], I can hardly see how they won't bungle it up.

    Getting back to his Why's:

    Peer reviews give credibility to an author's work.

    We already have it.

    Journals and conferences can use this tool to indicate acceptance of a paper.

    Bad idea. This will easily devolve into a numbers game. Paper X has 20 signatures approving it, with 5 of them at Level Zen. Paper Y has only 10 signatures approving it, with most being at Level Neophyte. We'll take X and reject Y.

    Think I'm exaggerating? Go observe people talk about impact f []
