Web of Trust For Scientific Publications
An anonymous reader writes "PGP and GnuPG have been utilizing webs of trust to establish authenticity without a centralized certificate authority for a while. Now, a new tool seeks to extend the concept to include scientific publications. The idea is that researchers can review and sign each others' works with varying levels of endorsement, and display the signed reviews with their vitas. This creates a decentralized social network linking researchers, papers, and reviews that, in theory, represents the scientific community. It meshes seamlessly with traditional publication venues. One can publish a paper with an established journal, and still try to get more out of the paper by asking colleagues to review the work. The hope is that this will eventually provide an alternative method for researchers to establish credibility."
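The summary doesn't spell out the mechanics, but the basic workflow it describes is simple enough to sketch: hash the exact version of the paper under review, put the hash and an endorsement level into a small review record, and sign that record with an ordinary GnuPG key. Below is a minimal Python sketch of that idea only; the field names and review format are invented for illustration and are not GPeerReview's actual format. It assumes gpg is installed with a signing key configured.

# Minimal sketch: bind a signed review to one specific version of a paper.
# The review format here is made up; only `gpg --clearsign` is standard.
import hashlib
import subprocess

def sha256_of(path):
    """Hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def sign_review(paper_path, endorsement, comment, out_path="review.asc"):
    """Write a review that names the paper by hash, then clearsign it with GPG."""
    review = (
        f"paper-sha256: {sha256_of(paper_path)}\n"
        f"endorsement: {endorsement}\n"
        f"comment: {comment}\n"
    )
    # With no file argument, gpg reads the text from stdin and writes the
    # armored, clearsigned result to stdout.
    signed = subprocess.run(
        ["gpg", "--clearsign"],
        input=review.encode(),
        capture_output=True,
        check=True,
    ).stdout
    with open(out_path, "wb") as f:
        f.write(signed)

if __name__ == "__main__":
    sign_review("paper.pdf", "strong endorsement",
                "Sound methodology; I was able to reproduce the main result.")

Anyone holding the reviewer's public key can later check the signature, and anyone holding the paper can recompute the hash, so the endorsement is tied to one exact version of the work rather than to the author in general.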
Wikipedia (Score:5, Interesting)
This will allow it to overcome the credibility problems that it has.
Re:Wikipedia (Score:4, Insightful)
Indeed, the implementation of flagged revisions [wikipedia.org] is currently being debated for the English Wikipedia, and was the subject of a recent /. article [slashdot.org].
A lot of the debate centers on exactly what the "signing" process will entail in terms of responsibilities and consequences for the articles subject to it.
I don't think a one-size-fits-all approach to trust networks is a good idea. Requirements for effective trust in key sharing, peer review, and wiki content may differ and I think it's appropriate for each to develop a fine-tuned approach, while borrowing good ideas from one another.
Re: (Score:2, Insightful)
TRUTH is more important than TRUST.
Re: (Score:1, Offtopic)
Dormitory is more important than Dirty Room
Evangelist is more important than Evil's Agent
Desperation is more important than A Rope Ends It
The Morse Code is more important than Here Come Dots
Slot Machines is more important than Cash Lost in 'em
Animosity is more important than Is No Amity
Mother-in-law is more important than Woman Hitler
Snooze Alarms is more important than Alas! No More Zs
Alec Guinness is more important than Genuine Class
Semolina is more important than Is No Meal
The Public Art Galleries is more important than Large Picture Halls, I Bet
Re: (Score:1)
Half a year ago I wrote an article about how to implement Wikipedia peer review with digital signatures:
http://cameralovesyou.net/random/wikipedia-digital-signatures.html [cameralovesyou.net]
Re: (Score:1)
Replying to myself... here's the Wikipedia Village Pump discussion [wikipedia.org] about the proposal. The proposal was rejected (or at least not implemented), since it was thought to produce a disincentive for editing signed articles.
Re: (Score:2)
Replying to myself... here's the Wikipedia Village Pump discussion about the proposal. The proposal was rejected (or at least not implemented), since it was thought to produce a disincentive for editing signed articles.
Yup, true, but it's flying NOW! Jimmy Wales just announced it today [youtube.com]. (Probably on /. tomorrow.)
Re: (Score:1)
insightful
Re: (Score:3, Funny)
Great place to start looking for information though.
[citation needed]
Still needs a root (Score:5, Insightful)
Re: (Score:1)
Wouldn't metrics like the Erdos number suffice? Calculate the weighted distance from known experts, such as Nobel laureates, Fields medalists, etc. It isn't that hard to notice a clique that is only weakly connected to the larger network.
Re:Still needs a root (Score:4, Insightful)
Journal publications are basically used to tell people who don't work in your area that you're doing decent work. If someone is doing decent work in an area you're a specialist in, you probably know them at least by sight and you probably hear about their results fairly soon after they prove them; the journal paper may well come a year or two later.
But if you want funding, or you want a job, you have to convince a bunch of people who know very little about your area that you are a valuable person. The easiest way to do that is to point at recent papers in good journals (which, really, isn't so different to the web of trust idea: I have a paper in CPC because someone thought my work was good enough to go there, that kind of thing).
There are lots of problems with the sort of metric you suggest: you need something relevant to now; you don't want it to discard people who do good work on their own or in tight groups (and there are quite a few of the latter); and you don't want it to be distorted by the sort of mathematician who will publish every result they can get in any collaboration (there are quite a few, some of whom are very good and very well-connected but still publish some boring results along with the good ones).
Re: (Score:1)
The root should be established by your opinion on papers, their authors, and their reviewers. If you see eye to eye with an author or reviewer about a paper, you should increase your trust of them. In other words, everyone has a separate view onto the web of trust.
Re: (Score:3, Insightful)
Re: (Score:1)
Yes, but for the web of trust to have value to the casual observer, certain respected authorities need to be established, which is something people tend to do naturally on their own.
Indeed. I'd like to think that if I looked at the web of trust through, e.g., the IEEE website, I'd get something approximating the combined trust of their membership. Rent-an-opinion, if you will!
Re:Still needs a root (Score:5, Insightful)
I think this project is a great idea. Unfortunately, it currently seems to consist of only a command line tool to sign reviews with GPG. That's nowhere near enough if it is to thrive beyond the CS world. It needs a simple, rock-solid GUI, and most importantly, lots of eye-candy for the graph. It will need to look cool and work well to build up the momentum for this to work at all.
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
You mean like grad students?
Re: (Score:2)
You mean like grad students?
Who do you think is doing all the work? From my experience as a grad student, the faculty are more "outside observers" than grad students, especially in the student's thesis area. Faculty tend to deal a lot more with finances and administration than research.
Re: (Score:3, Funny)
Now, postdocs, that's another story.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
That already exists without a web of trust. There is already a lot of research into analysing citation and co-citation graphs for the features that you mention. In network-analysis terms, the links that you describe are called gateway nodes.
Re: (Score:3, Interesting)
Much like PageRank, you don't need a "known good". Start with everyone on even footing, passing their value on to those they sign, receiving value from those who sign them, and then iterate until it reaches a reasonably steady state.
I don't recall if there is a general "scientist" number, like the Erdos number for mathematicians, but on the off chance a crackpot network were to form and become larger than any of the networks of actual scientists, then you might want a "known good", but it wouldn't matter who
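For what it's worth, the iteration the parent describes is just PageRank run over the endorsement graph instead of the web's link graph. A rough Python sketch, with invented names and an arbitrary damping factor, assuming each researcher's signatures are recorded as a set of endorsed colleagues:

# Sketch of PageRank-style scoring over an endorsement graph: everyone
# starts on even footing, passes value to those they sign, and receives
# value from those who sign them, iterated toward a steady state.
def endorsement_rank(signs, damping=0.85, iterations=50):
    """signs maps each researcher to the set of researchers they endorse."""
    people = set(signs) | {p for targets in signs.values() for p in targets}
    rank = {p: 1.0 / len(people) for p in people}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(people) for p in people}
        for signer, targets in signs.items():
            if not targets:
                continue
            share = damping * rank[signer] / len(targets)
            for target in targets:
                new[target] += share
        rank = new
    return rank

if __name__ == "__main__":
    graph = {
        "alice": {"bob", "carol"},
        "bob": {"carol"},
        "carol": {"alice"},
        "mallory": {"mallory2"},   # a pair who only endorse each other
        "mallory2": {"mallory"},
    }
    for name, score in sorted(endorsement_rank(graph).items(), key=lambda x: -x[1]):
        print(f"{name}: {score:.3f}")

The isolated pair still score something, but they receive nothing from the rest of the community, which is the property the parent is after.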
that's true of normal publication too (Score:2)
Normal peer review is basically a grouped version of the same thing. A paper being published in a journal tells you that the journal's editors/reviewers thought it was a legitimate contribution. What that tells you depends on what you think of the journal's editors/reviewers. It's basically an endorsement mechanism: publication venue X, meaning set of people Y, says that paper Z is good. This is a somewhat more decentralized version, where individual researchers can say that paper Z is good.
Re: (Score:1)
Re: (Score:2)
Whoever happens to be reading a paper at the time, would be the root. If you haven't set your opinion of any of the 100 crackpots to greater than zero, then their signatures don't matter.
And yes, that means trust is subjective. Welcome to the real world. Anyone who tries (or has tried, and it happens a lot) to set up a trust system that isn't subjective, is deluded.
Re: (Score:2)
You've hit the nail on the head. For example, read the Wegman report into the Mann et al. "hockey stick" paper in the climate science debate. The right way to judge a result in science is not by the first paper on the topic, but by the number of independent follow-up studies, all of which support the result. The idea of independence is something you need to be stringent about: ideally it requires different (unconnected) researchers obtaining new data and performing their own analyses. People from the sa
Re: (Score:2)
Please don't encourage people to waste their time by conflating politics with science. We are talking about trusting scientific papers. Mann's "hockey stick" papers have been published in the journal Science (2nd highest in academic journal rankings). The correct way for McIntyre and McKitrick to attack them is to publish contrary journal articles; asking a political committee devoted to 'energy' to act as judge and jury on a particular
Re: (Score:2)
You should read some statistics. Unlike you, I do recommend that people read key documents from different players in the debate rather than just take the party line from Real Climate (which I believe was started by Mann and his colleagues).
You might be interested in Wegman's CV [gmu.edu] before suggesting his findings on the Mann et al. hockey stick were constructed politically. There's this interesting entry in his CV, for example: Appointed Chair of the Committee on Applied and Theoretical Statistics, National Ac
Re: (Score:2)
"Of course, I don't actually expect you to read any of this stuff, but other people following this discussion might be curious."
I have followed the science for at least 25yrs, I also followed the Wegman thing while it was happening. You don't have a clue, you are grasping at straws to tr
Re: (Score:2)
You have me confused: are you saying Wegman's testimony on the hockey stick should be ignored not because of what he said or his evident qualifications as an independent expert, but because of who asked him for the report?
Personally I find it hard to believe you really respect the scientific method when you (a) prefer to accuse people of playing politics rather than addressing the content of their argument and (b) argue against argument by authority (I didn't: I posted a link to Wegman's CV to point out tha
NAS Testimony to the energy committee (Score:2)
NAS climate homepage [nas.edu]
As you say "other people following this discussion might be curious", even if you are not.
Either a root, or critical mass (Score:2)
If this gets enough traction/critical mass, this problem will be solved - Crackpots can mutually sign each other as being topz 1337, but they will inevitably be signed as being below the mark by most other, serious (and better punctuated) users.
The linked site describes the process in too little detail to make sense of what they are proposing, but I have thought through a very similar process with some colleagues of mine - This problem is solved when not only each article, but each reviewer, gets scored according to both
A new tool appears! (Score:1, Funny)
Coming this fall. Jean Claude Van Damme in...Web of Trust.
Two certificates enter, one leaves.
But what about the new tool? Does Google have the favor?
Rated R for violence and sexual situations.
Weird objection (Score:5, Insightful)
I'm sometimes bothered by the stress on studies being "verified" by something like a peer-review process. Not that I don't understand why it makes sense. It's a pretty reasonable attempt to sort valid work from crap, but...
There's still a certain way in which it's just an appeal to authority. It's people saying, "We should accept what this scientist says because other scientists say that he's right." I guess what I'm saying is that I worry that, as a process like this becomes more technical, people will be more likely to confuse a statement like, "This study has been reviewed by other scientists and seems to have merit," with something more like, "This study is correct, infallible, and indisputable."
And I guess part of the reason I worry about this is that there may be cases where what "everyone thinks" (i.e. the common conception even among experts) is wrong, and some random nutcase is right. It almost never happens, but it happens sometimes. It seems to me that a technical method of assigning trustworthiness of ideas in a web of trust might possibly lead to having all the groundbreaking ideas go into a spam filter somewhere, never to be seen again.
Re:Weird objection (Score:4, Insightful)
Re: (Score:3, Interesting)
Yeah, I acknowledge that my concern shouldn't really be the primary concern. There's a reason I wanted to call it a "weird" objection.
I just think it is possible to put too much faith in peer review, given that the "peers" reviewing it are also human beings, just as fallible as other human beings. Computers are arguably less fallible in other ways, but of course they can't really make judgements. So I'm just really trying to point out that, in the other cases where we mix fallible human beings with mach
Re:Weird objection (Score:5, Insightful)
Peer review is actually pretty weak. It's mainly effective at spotting obvious howling errors. Peer review is not the same as replication and, indeed, many reviewers don't bother to check the equations or data presented in a paper unless they are genuinely suspicious of the conclusion. Replication, not peer review, is the gold standard of science.
Re: (Score:1)
Everything you say is true, but the comparison to a spam filter is apt. How many people read through and check all their emails carefully, in case they really are the heir of a deceased Nigerian prince? Yes, some non-peer-reviewed papers might be valid; brilliant, even. But it's simply not worth the cost of wading through the crap.
Re:Weird objection (Score:5, Insightful)
As a third party, there is no way I have the time to follow the chain of logic that results in a modern scientific paper from first principles. At some point, I have to accept some of the preconditions of the paper without verifying them, because doing otherwise implies that I am an expert in the particular field the paper is relevant to. And there are plenty of cases where I want to make use of a result from a field that is related to my work but in which I am not an expert.
Appeal to authority is the fundamental reasoning technique I apply in such cases. A respected expert says it is so, and so I will trust them until I have reason to believe otherwise. That trust should not be blind -- if I am presented with reason to, I will happily re-evaluate that trust. Perhaps the expert is mistaken. But, in the interest of actually getting something done myself, I will accept as a default position that the experts know what they're talking about.
Re: (Score:3, Insightful)
On the one hand, you have the appeal to authority as an argument in itself. This is the classic medieval "According to the philosopher..." stuff. When this happens in science, it is undesirable, since science
Re: (Score:1)
I'm sometimes bothered by the stress on studies being "verified" by something like a peer-review process. (...) there may be cases where what "everyone thinks" (i.e. the common conception even among experts) is wrong, and some random nutcase is right.
Absolutely, and that is exactly how Science progresses. The modern peer review process only adds viscosity to the mechanism of discovery.
Gettin' yer hands dirty (Score:2)
Passing a peer review doesn't provide assurance of the accuracy of a scientific paper. All a peer review does is filter out stuff that we are already pretty sure is bogus.
But what makes a scientific concept meaningful is review by experimentation. Ultimately, it's important for people to drum up experiments that could be used to confirm or reject the theory. And really, your concern is an artifact of the lousy job being done by the public school system in teaching students what the Scientific Method is real
Re: (Score:2)
It's people saying, "We should accept what this scientist says because other scientists say that he's right." I guess what I'm saying is that I worry that, as a process like this becomes more technical, people will be more likely to confuse a statement like, "This study has been reviewed by other scientists and seems to have merit," with something more like, "This study is correct, infallible, and indisputable."
Getting a group of scientists to fully agree on something is akin to herding cats, or for a better analogy, getting all of Slashdot to agree on the best *nix text editor.
"This study is correct, infallible, and indisputable." is best replaced by "This study has made it through one of the must grueling gauntlets that modern civilization has to offer and made it out the other side, therefore it has merit."
Re: (Score:2)
Appearing in a peer-reviewed journal isn't a stamp of authoritative correctness on a paper, just that a few people thought it was worthy of some space in the journal. Peer-review is (supposed to be) just a rough initial filter to cut down the noise; if a paper is actually good and useful it will be cited often.
Re: (Score:2)
Re: (Score:2)
This is a misunderstanding. The role of peer review is not to verify anything. To the contrary, there are many situations where a reviewer will not be able to verify results with reasonable effort. Think LHC experiments, Mars probes, etc.
Peer review is really just a spam filter. Reviewers can check whether a publication has novel aspects to it, whether it is relevant to the journal or conference, whet
Re: (Score:2)
Things like relativity and plate tectonics have taken a while to be accepted because so many people thought they were crazy to start with.
However, it's better to have papers checked than not checked. The reviewers don't necessarily have to agree with the conclusions, but they do have to make sure the study's methodology is sound.
Other people will then go and do similar studies to see if they can reproduce or disprove the results.
Re: (Score:2)
You should read "The structure of scientific revolutions". There is a large body of work associated with the phenomena you describe, and most scientific communities take a lot of care to prevent the negative effects.
a cumbersome solution in search of a problem (Score:3, Interesting)
The hash isn't necessary. If the trust relationship between two academic peers includes "worried about him modifying the paper after I review it", there is no trust relationship.
In fact, the whole thing isn't necessary. Pubmed, anyone? All someone has to do is pick up the phone and call the reference on a CV and say, "So, what did you think of Dr. X's work on Y?", and they learn more than they will running a program that says "Hashes verified."
This system is also never going to fly with researchers. Most (but not all) of the (brilliant) bio people I've worked with are completely helpless when it comes to technical stuff. Even some of the bioinformatics people who can write amazing algorithms aren't clued in on stuff outside of their field.
Re: (Score:3, Insightful)
Yes, because the first thing I'd do on seeing a vaguely interesting paper is call up half a dozen random researchers, wait until they weren't busy in the lab to get a comment back, and then eventually have some clue what the consensus among those more directly involved in the field than myself is several hours later. Why not just have them publish their opinions? Then they don't have to answer the same questions repeatedly.
The question isn't "why should we include the hashes?" but more properly "Is there
Re: (Score:2)
The hash is harmless (as in: there are no downsides), and a lot faster to sign (and verify) than a megabyte.
You can trust someone and still have a computer hardware failure, or accidentally hit a key after you accidentally load a document into an editor instead of a viewer. You can trust someone, but acknowledge that they might make
Re: (Score:2)
No. You could not be more wrong, I'm afraid. When I sign a review of somebody's work I am not recommending / trusting them as a person. I am recommending that single piece of work. People who trust me can then trust that the piece of work is good.
The hash is completely necessary because the trust relationship is slightly more fine grain
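The reader's side of that fine-grained trust is cheap to check: verify the signature against a key you already trust, then recompute the hash of the copy of the paper you actually have and compare it to the hash inside the review. A rough Python sketch, reusing the invented "paper-sha256:" review line sketched earlier in the thread and assuming a clearsigned review file:

# Sketch of verifying a signed review: the signature vouches for the review
# text, and the hash inside the review ties it to one exact paper version.
# The "paper-sha256:" field is an invented format; `gpg --verify` is standard.
import hashlib
import re
import subprocess

def check_review(paper_path, signed_review_path):
    # 1. The review must carry a good signature from a key you trust.
    subprocess.run(["gpg", "--verify", signed_review_path], check=True)
    # 2. The hash in the review must match the paper you are holding.
    with open(signed_review_path) as f:
        match = re.search(r"paper-sha256:\s*([0-9a-f]{64})", f.read())
    if match is None:
        raise ValueError("review does not name a paper hash")
    h = hashlib.sha256()
    with open(paper_path, "rb") as f:
        h.update(f.read())
    if h.hexdigest() != match.group(1):
        raise ValueError("paper does not match the version that was reviewed")

if __name__ == "__main__":
    check_review("paper.pdf", "review.asc")
    print("signature good, hash matches")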
Very poor idea (Score:5, Informative)
Re: (Score:3, Informative)
I might have missed something, but I am pretty sure that most peer reviews are anonymous. (The authors of the paper don't know who the reviewers are.) The publisher does know, but keeps it secret.
Re:Very poor idea (Score:4, Interesting)
They are anonymous--the same way student reviews of a professor in a class with 5 students are anonymous. By the time you're doing anything original, innovative, and remotely interesting in a field, you're going to have 20 peers tops--very likely only 5 or so. Some of your reviewers will be unrelated and able to check basic mathematics for correctness but not much more.
Honestly, I think I haven't yet seen someone receive comments back where they couldn't take a good guess at who the originator was...
Re: (Score:1)
Honestly, I think I haven't yet seen someone receive comments back where they couldn't take a good guess at who the originator was...
Most of the time, I do not know who read my articles and I do not believe the review I wrote tells who I am. There is a bunch of "top-level" researchers in my field (parallel computing) and probably 10 times more "classical" researchers and PhD students. Perhaps it is not true in all fields.
Even if your identity can be guessed, it is never entirely sure. And being anonymous prevents pressure such as "you rejected my last article, I will reject yours". You won't risk it since you are not sure it is the same guy. A
Re: (Score:2)
Depends on what area you're in, I think. I suppose there are some areas where really only a few people work and it's so different to anything else that no-one from outside can easily referee a paper - but if you're in that position, I'm hoping that you're working in a very new and growing subject. The alternative, bluntly, is that you're doing something the rest of the world thinks is boring.
I do combinatorics; if you give me a paper that doesn't use topological or heavy probabilistic methods, I could ref
Re: (Score:2)
Honestly, I think I haven't yet seen someone receive comments back where they couldn't take a good guess at who the originator was...
Especially when the reviewer's comment says: "you should cite this great work by professor Foo..."
Which is also a reason to expand this (Score:2)
If peer review is done only at the top of each discipline, you might know who reviewed you. However, people write at all levels of the academic scale. Having a broad peer review process, where even people at a different specialization level than yours can comment (with the opinions weighted, of course, according to their respective ranking), can further improve this. It can sometimes even lead you to understand several points you didn't consider originally, as they are outside your usual radar.
Re: (Score:2)
Re: (Score:2)
That's true but sadly almost impossible to make work. People routinely make preprints available as soon as they submit to a journal (a few weeks before it goes to a referee, often) and many people will give talks about their work when they're sure of the correctness of the result - which can be a very long time before it's written up. I gave a bunch of talks last summer on a couple of subjects, neither of which is yet written up; one because we think we can do something more, and the other because all of my coau
Double-blind means _doubly_ anonymous (Score:2)
Usually, when you review an article, it is under the double-blind method - you don't know who the author is, either. You must judge based only on the merits of the paper, without regard to whether the author was a student of Einstein himself.
Re: (Score:3, Insightful)
There have been numerous attempts to redefine peer review to bring it into the 21st century. There will be many more after this effort.
Peer review is typically anonymous. It represents a trust relationship between the editor and the referee, not directly between the author and the reviewer. If the journal - or rather, the editor - is removed from the equation, then some new mechanism is needed. It isn't obvious that the web of trust as described fits the bill, however.
An equivalent to a distributed cert
Re: (Score:1)
Peer review is typically anonymous
Apart from a couple of experiments, peer review is never fully anonymous. It is typically half-anonymous - the author does not know the identity of the reviewers, but the reviewers do know who the author is. I don't know about other disciplines, but in mathematics this is a necessity from a practical point of view. Active researchers may get more than one paper to review per week. Sometimes one would need a couple of weeks to check all the details in proofs. So the reviewers have to rely on the credibility of the autho
Re: (Score:1)
Re: (Score:1)
In Computer Science, there are several journals and conferences where the reviewing process is double blind.
Researchers who get more than one paper per week generally push the review to a PhD student. Moreover, most of the time reviews are weighted with a confidence index so that reviewers can tell the editor 'it sounds correct but I did not check the details'.
Number of citations received is far from ideal (Score:2)
[the number of references] already exists and is widely used as a metric
Citing a paper does not necessarily mean endorsement of its contents, only that the paper you are referencing was relevant to a part of your discussion. References can be used to provide counter-examples or to denounce bad research, but they are counted as citations anyway. In scientific citation number-crunching, any publicity is good publicity.
A crazy idea would be to add metadata to references, describing the type and relative importance of the source. That would make 'paper A, main inspiration, very i
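For what it's worth, the "typed reference" idea is easy to represent; the hard part is getting authors to supply the metadata. A tiny Python sketch with invented relation names, weights, and made-up DOIs, just to show the shape of the data:

# Sketch of a citation record that says how a source was used, not just
# that it was cited. Relation names, weights, and DOIs are invented.
from dataclasses import dataclass

@dataclass
class Citation:
    target: str        # e.g. a DOI or a paper hash
    relation: str      # "builds-on", "uses-data", "counter-example", "refutes"
    importance: int    # 1 = passing mention ... 5 = main inspiration

references = [
    Citation("10.1000/example.1234", "builds-on", 5),
    Citation("10.1000/example.5678", "refutes", 1),
]

# A citation count that only credits endorsing relations, so a paper cited
# purely as a counter-example earns nothing from this reference list.
endorsing = sum(c.importance for c in references
                if c.relation in {"builds-on", "uses-data"})
print(endorsing)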
So we create a situation like slashdot. (Score:3, Insightful)
Where popular ideas get modded up and controversial ideas get modded down.
We still need to find a way to get ego out of science, without having every crackpot idea be seriously considered.
Re: (Score:3, Insightful)
In addition, there will be an effect where more prominent scientists will get tons of links and favorable peer reviews, in exchange for being "friended" in this network.
Certainly this effect must exist already, and admittedly a bit of it is good (if someone repeatedly submits excellent papers, it stands to reason that their opinions should hold a bit more weight) but this may amplify the effect far past the point of usefulness. Ultimately, science needs to stand on its own merit, and not just the reputatio
Re: (Score:1)
popular Ideas get modded up and controversial ideas get modded down
Who would have modded up Galileo? No one.
Very bad for science.
Re: (Score:2, Insightful)
No one? So, tell me, how have you heard about Galileo? Found a dusty tome on a shelf in an old monastery, translated it from Latin, and amazed yourself at how ingenious he was?
He is only known today because his peers 'modded him up'. Some of his ideas were controversial, going against the good ol' Aristotle, but he was a very respected teacher who made brilliant insights into various aspects of physics and mathematics, and only later reached his astounding conclusions. Read his biography.
If some paper is refuse
Re: (Score:2)
I love how people want science to be this pure ethereal thing, for some reason. Of course research is ruled by the egos of those involved, just like every single other human endeavor. Why would you expect it to be an exception?
Re: (Score:2)
No. Slashdot, unlike a reputation system based on a PGP WoT, has anonymous moderation. I can't look at how you have moderated, decide whether you've done a good job or not, and then set in my prefs to ignore or use your moderations. If you moderate something as insightful, then the system is going to present it to me as insightful. That is what causes the popular-goes-up, controversial-goes-down problem.
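A back-of-the-envelope illustration of that difference, in Python: with per-reader trust weights, two readers looking at the same set of signed reviews can legitimately arrive at different scores, and a reviewer you have never assigned any trust simply drops out. All names, ratings, and weights here are invented.

# Sketch of subjective scoring: each reader weights reviews by how much
# *they* trust each reviewer, instead of one global moderation score.
def my_score(reviews, my_trust):
    """reviews: list of (reviewer, rating); my_trust: reviewer -> weight."""
    weighted = [(my_trust.get(reviewer, 0.0), rating) for reviewer, rating in reviews]
    total_weight = sum(w for w, _ in weighted)
    if total_weight == 0:
        return None   # nobody this reader trusts has reviewed the paper
    return sum(w * r for w, r in weighted) / total_weight

reviews = [("alice", 1.0), ("bob", 0.5), ("crackpot", -1.0)]
print(my_score(reviews, {"alice": 1.0, "bob": 0.5}))   # crackpot is ignored entirely
print(my_score(reviews, {"crackpot": 1.0}))            # a different reader, a different answer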
Re: (Score:2)
Well, we used to have such a system when monasteries did the peer reviews, before the universities took over. After the universities took over, they ruined and abused the scientific method so that any crackpot idea could be seriously considered as long as the scientist with that crackpot idea can get a few of his/her friends to conspire to rubber-stamp the paper or create aliases to peer review his/her own work. Something about a system of morals and ethics that were not based on a religion and seriously bent to all
Already got one (Score:5, Insightful)
Scientific publications already have a web of trust in the list of cites at the bottom. Publications don't get cited unless they are notable in some way.
Re: (Score:2)
Indeed. Google's PageRank algorithm started off as citation analysis for academic papers--one could find out which papers were notable in a given field by the quantity and notability of the papers citing it. Then they realized that the same approach could work for the Web, treating links as citations.
As a sibling post points out, this says nothing about the correctness of the paper, only its notability--but ideally if a paper is shown to be faulty, then the paper exposing the faults will get many citation
Re: (Score:2)
When building its databases, Google does not treat all links as equal. You have the 'rel=nofollow' link attribute to indicate that you don't want to attribute trust to the destination of the link, and Google can discount link weight based on the outgoing anchor text.
I would be very interested in seeing that done in scientific publications. As you said, notability can come in many flavors, and they are not equally yummy.
arXiv leads the way (Score:3, Informative)
arXiv [wikipedia.org], the pioneering online preprint archive, already does something like this, though not as sophisticated. They have an endorsement system [arxiv.org], wherein more established users endorse newer ones. It's fairly rudimentary and ad-hoc, but seems to keep out crackpots and spam fairly well in practice.
Re: (Score:3, Insightful)
Spam yes. Crackpots piped to math.GM for the amusement of all (e.g. the guy whose 'proof' of the Riemann hypothesis was 20 pages of verbiage boiling down to 'the universe is built on maths and maths is built on primes, so they must behave naturally and therefore the result is true..').
Re: (Score:2, Interesting)
Endorsements stink. (Score:1, Interesting)
Everyone loves you, but then some nut wants to climb the ladder. They rake through your relationships looking for pot-smokers and communists. Suddenly you lose your security clearance to the knowledge in your head about how to make the atomic bomb that you developed.
Industry Standard (Score:1)
Faculty of 1000 (Score:2)
A more accurate statement (Score:2)
A more accurate statement would be "PGP and GnuPG's web of trust system has been mostly ignored pretty much since it was created".
what journals still offer (Score:1)
The concept can be applied elsewhere (Score:1)
The idea of having varying levels of endorsement can be used in other contexts as well. For example, when someone offers indirect information, you would be wise to consider how much you trust the person relaying the information when assessing its validity. The more remote the source is, the more you want to suspect the credibility of the report. That's why hearsay evidence is inadmissible in court. But how would this be implemented in daily life? I took a whack at exploring the idea in a short story called
Pointless Application of Social Networking (Score:2)
Firstly, I am a scientist; I have a number of papers published in journals significant to my field (plant biology) and have 200+ citations to date. This scheme is pointless. It amazes me how many stories along the lines of 'we can make scientific publishing work better' get on Slashdot.
In general the peer review process works extremely well: a journal gets multiple reviews on a submitted manuscript, the authors don't know who the reviews are by, the editor makes a decision based on those reviews, if the auth
Re: (Score:2)
it amazes me how many stories along the lines of 'we can make scientific publishing work better' get on Slashdot.
So, in your humble opinion, science publishing should continue essentially as it is, perhaps with a shift towards open publishing. And the current 'science accounting' methods (adding up citation sum-totals, relying on journal weights, or combining both) are perfectly adequate. And the pressure from these flawless accounting systems does not drive scientists to publish immature, partial results in a frenzy to score more papers than their peers.
No, I do not have a silver bullet, and yes, the syste
Credibility and Noobs (Score:2)
Great. So now those new scientists with few or no papers, or unpopular theories, are going to have to fight even harder to establish credibility and get published.
Every rose has its thorn.
This is a fallacy (Score:2)
Sure, it will show who signed it.
What is to stop one scientist from creating an "alias" with a different PGP/GPG key, publishing to a different scientific journal, and then using that "alias" or series of aliases to peer review his/her own work? Ward Churchill did this with his works, and I have known many who published articles under aliases in the scientific community as well.
A key sign is the lack of such things as a margin-of-error calculation to show that the data was randomly selected and not "cherry picked" to pro
Would it be too crass to tag this story circlejerk? (Score:3, Interesting)
Isn't "web of trust" in the same synosphere as Greenspan's failed notion of counterparty surveillance? Wasn't it a "web of trust" which allowed the Catholic church to conceal deeply entrenched violations of trust while delaying its apology to Galileo for 400 years? Wasn't "web of trust" what allowed Madoff to dig a $50B crater? What percentage of novel endorsements from one genre author to another come equipped with a set of kneepads?
Why is it that so many people are allured by this concept?
Re: (Score:2)
From a technical perspective, being decentralized is a major feature for a system. Decentralized systems are generally robust, reliable, and scalable. There is no single point of failure, and the system will be much more resistant to attacks than a centralized system. The cost of the system is also shared by its nodes, so there is no expensive central point, like SSL authorities.
From a human perspective, a web of trust is very similar to human social behavior. For most people, there are a handful of people
The big problem here (Score:2)
I honestly believe that the future of scientific publication is in a system where everyone publishes whatever they like and generates a hash of the article (or registers some kind of unique ID). People then review these articles when they read them, with researchers in the field having unique IDs of their own. The score of the paper then works using something like the PageRank eigenvector
GPG? (Score:2)
Use GPeerReview to sign the review. (It will add a hash of the paper to your review, then it will use GPG to digitally sign the review.)
Here's where everything will fall apart. When almost all faculty members I know (except the math and some CS ones) act like this [phdcomics.com], I can hardly see how they won't bungle it up.
Getting back to his Why's:
Peer reviews give credibility to an author's work.
We already have it.
Journals and conferences can use this tool to indicate acceptance of a paper.
Bad idea. This will easily devolve into a numbers game. Paper X has 20 signatures approving it, with 5 of them at Level Zen. Paper Y has only 10 signatures approving it, with most being at Level Neophyte. We'll take X and reject Y.
Think I'm exaggerating? Go observe people talk about impact factors [wikipedia.org]
Re: (Score:2)
There, fixed that for you.
There, fixed that for you. Grammer nazi's suck!
Re: (Score:2)
There, fixed that for you. Grammer Nazis suck!
There, fixed that for you. You somehow managed to make yourself look like an idiot and invoke Godwin's (note the possessive case) Law simultaneously. I'm impressed.
Re: (Score:2)
Whoosh!
Re: (Score:2)