Science

Scientific Study Finds There Are Too Many Scientific Studies 112

HughPickens.com writes: Chris Matyszczyk reports at Cnet that a new scientific study concludes there are too many scientific studies — scientists simply can't keep track of all the studies in their field. The paper, titled "Attention Decay in Science," looked at all publications (articles and reviews) written in English through the end of 2010 within the Thomson Reuters (TR) Web of Science database. For each publication the authors extracted its year of publication, the subject category of the journal in which it was published, and the citations to that publication. The 'decay' the researchers investigated is how quickly a piece of research is discarded, measured by establishing the initial publication, the peak in its popularity and, ultimately, its disappearance from citations in subsequent publications.

"Nowadays papers are forgotten more quickly. Attention, measured by the number and lifetime of citations, is the main currency of the scientific community, and along with other forms of recognition forms the basis for promotions and the reputation of scientists," says the study. "Typically, the citation rate of a paper increases up to a few years after its publication, reaches a peak and then decreases rapidly. This decay can be described by an exponential or a power law behavior, as in ultradiffusive processes, with exponential fitting better than power law for the majority of cases (PDF). The decay is also becoming faster over the years, signaling that nowadays papers are forgotten more quickly." Matyszczyk says, "If publication has become too easy, there will be more and more of it."
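The exponential-vs-power-law comparison the study describes can be sketched numerically. A minimal Python example with synthetic citation counts (not the study's data): fit both forms as straight lines in log space via least squares and compare residuals.

```python
import numpy as np

def fit_decay(years, cites):
    """Compare exponential vs power-law fits to a post-peak citation curve.

    Exponential: c(t) = A * exp(-b*t)  -> log c is linear in t
    Power law:   c(t) = A * t**(-a)    -> log c is linear in log t
    Returns the residual sum of squares of each log-space line fit;
    the smaller residual indicates the better-fitting model.
    """
    t = np.asarray(years, dtype=float)
    logc = np.log(np.asarray(cites, dtype=float))
    exp_res = np.polyfit(t, logc, 1, full=True)[1][0]          # fit vs t
    pow_res = np.polyfit(np.log(t), logc, 1, full=True)[1][0]  # fit vs log t
    return exp_res, pow_res

# synthetic counts that decay exponentially after the citation peak
t = np.arange(1, 11)
cites = 100 * np.exp(-0.5 * t)
exp_res, pow_res = fit_decay(t, cites)
assert exp_res < pow_res  # exponential wins, as the study reports for most papers
```

This is only an illustration of the model comparison, not a reconstruction of the paper's methodology, which works on real Web of Science citation histories.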
  • by Anonymous Coward on Sunday March 15, 2015 @03:44PM (#49262819)
    Predictable, but sad outcome of the popularity contest that our lives have been converted to. Now mandatory for nearly all lifestyles and incomes.
    • by peragrin ( 659227 ) on Sunday March 15, 2015 @05:55PM (#49263277)

      Or maybe the fact that there are more scientists publishing today than there were 20 years ago.

The article, not the study, completely ignores one fact: the world population has tripled in 70 years. More papers being published is a result of more people existing. While not the sole reason, it is something to remember.

Having so many publishing venues available right now, with (thankfully) more of them available every day under open access licensing schemes, we can access much more research in our field.

        That, however, means that when I start reading on a subject related to my area of study, there are too many documents fighting for my attention. And I will undoubtedly miss many among them, just because of sheer probability.

        Of course, the same will happen to my published works: They will no longer be _so_ unique, they will

      • Or maybe the fact that there are more scientists publishing today than there were 20 years ago.

        Add to that the expansion of science. The more we know the more there is to study which would naturally produce more studies.

        • And I'll add to that science is progressing faster than in the past. Computers help speed up analysis. More researchers help speed the process.

          Regarding cites, as science progresses the people getting the cites now are the ones who cited the earlier paper before. It would be interesting to see a cite tree to see how people who cited you are getting cited and so on. That might be the real measure of the strength of a study.

          (Sorry to reply to myself but I had to add that.)
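The "cite tree" idea above amounts to traversing a reversed citation graph: start from a paper and count everyone who cited it, directly or transitively. A minimal sketch with a purely hypothetical graph, where each paper id maps to the papers that cite it:

```python
from collections import deque

def citation_descendants(graph, paper):
    """Count papers that cite `paper` directly or transitively.

    `graph` maps a paper id to the list of papers citing it
    (a hypothetical citation DAG: newer papers cite older ones).
    Breadth-first traversal with a seen-set avoids double counting
    when two citation paths reach the same paper.
    """
    seen = set()
    queue = deque([paper])
    while queue:
        current = queue.popleft()
        for citer in graph.get(current, []):
            if citer not in seen:
                seen.add(citer)
                queue.append(citer)
    return len(seen)

# toy example: A is cited by B and C; B is cited by D
graph = {"A": ["B", "C"], "B": ["D"]}
assert citation_descendants(graph, "A") == 3  # B, C, D
```

A real "strength" measure would weight this tree somehow (depth, recency, field size); the plain descendant count here is just the simplest version of the idea.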

      • Or maybe the fact that there are more scientists publishing today than there were 20 years ago.

The article, not the study, completely ignores one fact: the world population has tripled in 70 years. More papers being published is a result of more people existing. While not the sole reason, it is something to remember.

It does make me wonder if we're hitting some sort of plateau where it's just too hard to be a generalist in any field anymore. Like, is physics too specialized now that we won't see another Einst

    • by gweihir ( 88907 ) on Sunday March 15, 2015 @07:48PM (#49263739)

And even worse: publish positive results or die. As a consequence, failed experiments get repeated all the time, because nobody else knew they failed, and there are strong incentives to lie or at least overstate success. The root cause, IMO, is the bean-counters that allocate funding. They do not understand that science is exploration, that mostly it will fail, and that well-documented failure is just as important as success and does not in any way reflect negatively on the scientists involved. But the bean-counters only want to see "success", and by that they make it much, much harder to obtain.

      Scientific culture needs to re-invent itself. As it is, it is mostly a problem and does not benefit society much anymore.

A 19th century system in a 21st century world.
      Science today is far more complex than it was a hundred years ago. Back then it was easy to become a superstar scientist: experiment with a few hundred dollars of equipment, find a new principle, publish it, and you are big news.
      Most of the easy stuff has been found. We get some rare finds, such as the discovery of graphene, but most of today's work is done with expensive equipment and needs larger teams of scientists. That publish-or-perish methodology is antiquated.

      • by dj245 ( 732906 )

A 19th century system in a 21st century world. Science today is far more complex than it was a hundred years ago. Back then it was easy to become a superstar scientist: experiment with a few hundred dollars of equipment, find a new principle, publish it, and you are big news. Most of the easy stuff has been found. We get some rare finds, such as the discovery of graphene, but most of today's work is done with expensive equipment and needs larger teams of scientists. That publish-or-perish methodology is antiquated. The better approach would be open and accessible sharing of data and results in real time, where more people can work on your work in progress, and less trying to be the Mr. Know-it-all scientist who will get the Nobel prize for stumbling on the best answer.

There is plenty of easy science still yet to be done in taboo subjects. The possibilities for illegal drugs alone are huge. Can't get funding? Crowdfund it. There are plenty of people who will contribute to good science in these areas, like this one [walacea.com], which essentially is just putting people on LSD in an fMRI machine and looking at the results. I donated some money and it looks like 1279 other people did too. They are currently at 177% of their funding goal with 34 days remaining.

        Right now, there are

  • The solution is simple. Throw out studies that sound "too meta".

    • by ShanghaiBill ( 739463 ) on Sunday March 15, 2015 @04:22PM (#49262971)

      The solution is simple. Throw out studies that sound "too meta".

      Another solution would be to shoot idiotic journalists that misrepresent what studies say. The actual study does not say there are "too many" studies. What it says is that, since there are more studies, individual studies are cited less frequently, and may be read by fewer people. But nobody expects every scientist to read every paper published in their field. I probably read less than 1% of the papers published in my field, but if there is a specific topic I need to research, I often can't find enough papers that focus on what I need. So, from my point of view, there aren't enough studies.

      Also, there are too many books published. Proof: Amazon lists over a million titles, and there is no way that one person can read them all.

      • The actual study does not say there are "too many" studies. What it says is that, since there are more studies, individual studies are cited less frequently, and may be read by fewer people. But nobody expects every scientist to read every paper published in their field.

        If only we had the technology to be able to search the available research for specific items of interest, so we wouldn't have to rely upon poorly-written study titles and could narrow down the available research to the items that apply to

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          If only we had the technology to be able to search the available research for specific items of interest [...]

Actually, the technology is here; the problem is the paywalls. Probably no scientific institution in the world has access to all the journals that cover the relevant fields of the institution.

          Big problem.

          • >the technology is here, the problem is paywalls.

Whilst both indexing and accurate bibliographic citation behind those paywalls have improved, for fields of study that are way off the mainstream, the only way to ensure that all relevant articles from a journal are correctly indexed for that non-mainstream field is to go through each article, in each volume of the journal, doing the appropriate indexing yourself.

            > Probably No scientific institution in the world has access to all the journals t

        • Information overload.

          I feel like this is indicative of the internet age, rather than isolated to academia, and the solution has evolved to become a penchant for sorting through the noise.

          Perhaps true insight is to be realized by a more talented eye toward the sifting through the chaff

        • If only we had the technology to be able to search the available research for specific items of interest, so we wouldn't have to rely upon poorly-written study titles and could narrow down the available research to the items that apply to our own narrow subject of interest.

          Ironically, I used to do research in this very area. It's a little easier in the biomedical field because most papers are tagged in ontologies like MeSH and ICD, but if you're trying to find the latest research on algorithms to solve a problem that you have, you're pretty much screwed.

        • by Anonymous Coward

          "I probably read less than 1% of the papers published in my field, but if there is a specific topic I need to research, I often can't find enough papers that focus on what I need. So, from my point of view, there aren't enough studies."

          Exactly. As a biomedical engineer I don't read (and don't want to read) every single thing that is published in the biomedical engineering field (which can be anything from drug infusion devices, dumb implants, smart implants, prosthetics, clinical devices etc). But when I'm

      • I probably read less than 1% of the papers published in my field, but if there is a specific topic I need to research, I often can't find enough papers that focus on what I need. So, from my point of view, there aren't enough studies.

        I took a writing class where I had to write 'scientific' papers. On my chosen topic, admittedly very narrow(but that's kind of what the teacher wanted), I found myself having to kind of 'circle' my chosen target with tangential studies. I ended up writing into the review paper that here's my hypothesis, it's supported, at least in theory, by the results of these studies(and the same 3 names popped up in quite a few of them), but that the topic itself doesn't appear to be directly studied, so it would be u

    • by gringer ( 252588 )

      There are lots of studies on studies, and in general they are a good idea. Here's my take on that (from SoylentNews [soylentnews.org]), slightly paraphrased to hopefully demonstrate why meta-studies can be good:

      Keeping track of information is difficult, and journals generally don't like people to pepper their articles with too many citations. If the same information gets spread around, then the chance of citation drops for any particular article that contains that information. This is a problem, even with Watson-level recall

    • by gweihir ( 88907 )

      Not at all. One of the few areas where science still works is well done meta-studies.

  • Wow (Score:5, Funny)

    by WGFCrafty ( 1062506 ) on Sunday March 15, 2015 @03:50PM (#49262839)

Where's the study which examines studies about studies and finds that 50% of them are fueled by irony?

    Further study is needed to confirm that number.

  • Does that include this one?

I am going to write up a project proposal to do a scientific study about why scientific studies are exploding at an exponential rate. But calling it exponential before doing the study would be prejudicial, so I am going to have to do a prelim study to determine whether or not it *is* exponential.
  • by Anonymous Coward on Sunday March 15, 2015 @04:03PM (#49262905)

    As Slashdot patrons are eager to point out every time this sort of story gets published, the phenomenon described is not necessarily a bad thing.

Many physical chemists these days are investigating ways to build nanostructures that can demonstrate interesting phenomena. For example, chemists have known for a long time that certain molecules will scatter light in the visible range but decrease its frequency by a molecule-specific constant. This process is called Raman scattering. These molecules are often dissolved in water, and it was recently shown that adding metal nanoparticles to the solution will dramatically increase the amount of observed Raman scattering. Suddenly there's a lot of new research to do: How does the increase depend on the nanoparticles' sizes? On their shapes? On the particular metal of which they're made? On whether their surfaces are smooth or rough? What if the nanoparticles are hollow, or composed of layers of different materials? What are the theoretical explanations for the observed behaviors? And do any phenomena *other* than Raman scattering benefit from the presence of these nanoparticles?

Many papers have been (and are still being!) published on all the clever things people have tried with these nanoparticles. Ten years from now, we'll have a pretty good understanding of all the properties of surface-enhanced Raman scattering and most of these papers will be "forgotten" as researchers consolidate their knowledge into a couple of good textbooks. But that's perfectly fine---in fact, that's the whole point of scientific progress. Science is the process of observing a lot of complicated stuff and finding the most compact explanations for everything that was observed. It's nice that eighty years ago one researcher could sometimes discover a new phenomenon and provide a complete explanation for it before publishing his knowledge to the world. Today we have more researchers exploring a larger space of possible experiments, and the things they're studying are much more complicated. So they publish more papers as "scratch work" to help other researchers who are investigating the same phenomena, and eventually these papers are replaced by books. Again, that's perfectly fine.

People are surprised that there are too many studies when we keep pumping out academics who need to publish in the field lest they perish? That's like saying we're installing too many toilets because most only get used a handful of times.
  • Everyone wants their 15 minutes of fame, even scientists. Perhaps more so than others, because their pay scales and tenure often depend on being published and cited as often as possible.

The sad thing is that even a plethora of citations does not demonstrate the quality of a given paper. It just means it had one or a few quotable paragraphs; not that its methodology or conclusions were necessarily stellar.

    When I worked on some research back in the university days, the prof in charge of publishing the

  • ... we do need some way to separate good scientists that are working really hard and shit ones that are slacking off.

    So... do we have another method besides demanding that they be in various journals at some interval?

    Why do these studies need to be in journals at all? why not just have publications put out by every university where they internally audit every paper and if it is valid... publish it.

    Sure, you're going to have a lot of boring studies but so what? Science doesn't have to be exciting to be usefu

    • by godrik ( 1287354 )

Of course we are publishing more. We are pushed to publish, so obviously papers get forgotten. But it is not clear to me that it is a bad thing. What publish or perish accomplished is that we are communicating more. So clearly we are communicating smaller ideas, smaller experiments, smaller contributions, but we are also communicating earlier in the process.

It is frequent nowadays that one idea is spun into 3 papers: one preliminary workshop, one conference and one journal. Clearly once the journal is publ

      • by prefec2 ( 875483 )

If publish or perish were about communication, we could just post all our preliminary results in our personal blog or in a forum and discuss them there. Instead we write papers which represent a minimal increment of knowledge. We then evaluate that minimal increment with often non-standardized case studies and get through with it. For every topic I have to enter there are tons of publications, but most are rubbish. Some rubbish even gets published at ICSE, ASE or Models. The metric is ruining the scient

      • That's good to hear, laymen like myself only hear this expressed in negative terms.

        At some level we have to take the scientist's word for it. Though of course... they're people and people lie. Just as Dr House said "everybody lies". So people issuing grants or auditing or whatever... they have to rely on metrics and independent reviews and independent reviews of independent reviews.

There are fishy things that go on in any organization. And the health of that organization is dependent on review lest the whole

    • by prefec2 ( 875483 )

Any metric you use which is not directly linked to the quality of the publication will cause side effects, as people optimize towards the metric. The problem with scientific quality is that it is almost impossible to come up with a solid metric. It is even impossible to come up with an ordinal ranking. Many brilliant scientists had their work rejected the first time, only for it to later prove a breakthrough. So obviously the reviewers were in error when they rejected those papers. In some fields people reject papers when

      • There aren't enough academic jobs to accept all applicants. Therefore some sort of selection process has to be in effect.

        I guess I could just break a pool cue in half, throw each one a sharp cue shard, and tell them there is one opening.
        https://www.youtube.com/watch?... [youtube.com]

        eh?

        We have to have something.

  • by Sir Holo ( 531007 ) on Sunday March 15, 2015 @04:34PM (#49262987)

    The crop of PhDs from the last 10 or so years are either unable or unwilling to 'hit the books'. If they can't find it in an electronic database AND easily download a PDF, they will ignore the existence of the work.

    Such work often includes seminal publications, REVIEW articles of a field, and things like conference proceedings before 'everything-PDF' – all of which contain a wealth of information.

    It really bugs me when I see cited references from "whoever did something like that most recently," rather than drilling down to the original source. Unfortunately, there seems little we can do about it, aside from good scientists not referencing lazy scientists.

    • And how do you find the books? Personally, I'd start with the latest papers that I could find online, and then I'd follow the DAG. There might be some room for topical proximity searches of unreferenced works, though.
      • And how do you find the books? Personally, I'd start with the latest papers that I could find online, and then I'd follow the DAG. There might be some room for topical proximity searches of unreferenced works, though.

It helps to get to know the guys who did the original work. They often have books they will share – books that had tiny print runs.

And yes, good scientists trace back via references in articles. Lazy ones don't.

    • The crop of PhDs from the last 10 or so years are either unable or unwilling to 'hit the books'. If they can't find it in an electronic database AND easily download a PDF, they will ignore the existence of the work.

      One of the primary reasons we even have computers is to help organize and locate information. Meanwhile, because computers are so good at it and we now have so much information to process, information that is not available to a computer in 2015 is not useful information.

    • by prefec2 ( 875483 )

Since in CS every invention gets reinvented every 10-20 years, it is important that the old stuff cannot be found anymore. Besides that, for key ideas I start with any publication I can find, trace forward from there to the most recent publications in the field, and then try to find the original contributions again backward in time. However, this is a time-consuming process, and if it is only to document a minor argument, I will stop much earlier, for instance with a survey on that topic.

    • This is extremely and wildly not true. The most basic part of doing literature review is following original sources and everyone I know does this. You have to, because reviewers pick this stuff up. Even when I couldn't find a pdf or physical copy of an original source, I'd still cite it. Also, you're fooling yourself if you think that just because something was done 30 years ago, there's no point in citing more recent sources. A lot of more recent work is nothing more than just repeating old ideas but with

      • I shouldn't feed the trolls, but just for the record:

        This is extremely and wildly not true. The most basic part of doing literature review is following original sources and everyone I know does this. You have to, because reviewers pick this stuff up.

        Actually, what I said is true. As a reviewer I DO pick this stuff up. And manuscripts with inadequate citations are rejected. Many submissions come in lacking any citation to a source (say, from 25 years ago). They will instead cite one of their buddies w

    • by JanneM ( 7445 ) on Sunday March 15, 2015 @06:44PM (#49263469) Homepage

      For citations central to your argument, sure, you need to track down the main papers. It's not that difficult - just look at what papers everybody else is citing. But most citations are just fulfilling the [citation needed] reqs for facts you use in your work. Any one of dozens, sometimes hundreds, of papers would easily fill in for that role.

      You find two references about the same thing. As far as citing the fact you need they're essentially equivalent. One will take three weeks and thirty dollars - and half a day of arguing to make the lab pay those thirty dollars - to get, and half the time your thirty bucks will give you a badly printed paper copy. The other you can download into your paper manager and read right now. Guess which one almost everybody will use?

      • For citations central to your argument, sure, you need to track down the main papers. It's not that difficult - just look at what papers everybody else is citing. But most citations are just fulfilling the [citation needed] reqs for facts you use in your work. Any one of dozens, sometimes hundreds, of papers would easily fill in for that role.

        You find two references about the same thing. As far as citing the fact you need they're essentially equivalent. One will take three weeks and thirty dollars - and half a day of arguing to make the lab pay those thirty dollars - to get, and half the time your thirty bucks will give you a badly printed paper copy. The other you can download into your paper manager and read right now. Guess which one almost everybody will use?

The most frequently cited papers are not great steps forward, but method papers; somebody does some doofy study of fish farts, but it includes a great method for analyzing exhaust gases, so every gas analysis paper ever afterwards references it.
        I read a study that showed that, ironically.

  • by Anonymous Coward on Sunday March 15, 2015 @04:41PM (#49263015)

    It saddens me to see so many sarcastic and cynical posts which fail to demonstrate that the poster has given the issue any thought whatsoever. Does the Slashdot community really consider it self-evident that scientific research is a failed enterprise? And does the Slashdot community really have no idea how scientific research works?

    Scientific papers aren't published for your benefit, you silly Slashdot reader. They're published for the benefit of other researchers. Suppose that some meta-researcher studied email patterns at your place of employment and found that this year a smaller percentage of your emails are replies to other messages [as compared to last year]; that is, a higher percentage of this year's emails are about new subjects. Then this paper gets referenced on Slashdot and someone (the author of the original article, the Slashdot submitter, or the editor) suggests that the lower reply percentage implies that intelligent discussion must be on the decline at your workplace because discussion requires people talking back-and-forth about the same topics. Then imagine that a bunch of people make short sarcastic posts that agree with that interpretation and variously lament about the decline of society as a whole or of your workplace in particular.

    Let us now make the biggest assumption of all and suppose that you have enough self-respect to be offended by this challenge to your intelligence. What would be the most mature contribution you could make to this discussion? I suppose it could be something like, "Your statistical analysis of my company's email habits is interesting, but your interpretation seems a bit misguided; it seems like a pretty big jump to go from 'percentage of emails which are replies to other emails' to 'abundance of intelligent discussion.' "

    So too it is with research papers. A statistical analysis has shown that researchers in various fields are more likely to cite recent papers than older papers, and the "half-life" of the typical paper (in the author's own words) has decreased somewhat over the last couple of decades. What conclusion should we draw from this? If scientists are less likely to cite a ten-year-old paper today than they were a decade or two ago, does that mean that there are "too many papers" and they're just swamped with recent stuff? Or does it mean that they're sufficiently well-organized that problems that used to take fifteen years to work out now only take five, and the investigations are moving on to new things?

    To paraphrase an old joke: I don't go to where you work and statistically analyze all the dicks in your mouth. So stop doing the same to scientists.

  • by excelsior_gr ( 969383 ) on Sunday March 15, 2015 @04:50PM (#49263061)

Although this could be due to the "publish or perish" mentality, which often forces researchers to break down their work into several publications of lesser impact rather than make a single publication of larger impact, the fact that the "lifetime" of publications is getting shorter may also mean that the research is speeding up. Knowledge moves faster from papers, then to books, and then to being "common", and before you know it you don't really have to cite someone every freaking time anymore because everyone knows what you're talking about (I'm talking about things that are considered "common knowledge" here; you surely don't cite Newton every time you mention that white light can be broken up using a prism). More commonly, somebody will sum up the "state of the art" in a book or in a good introductory chapter of a doctoral dissertation, and people will cite that instead of all the papers. Also, books keep getting cited for decades after their publication, so maybe a follow-up study could check whether there is a similar trend in the citation of books?

    While the plurality of journals has made publishing quite easy nowadays, I don't think this is the reason for the observation that papers get forgotten faster. A bad paper will not even get noticed and will probably get cited only by its own authors in subsequent publications. Since we are talking about papers that do get cited here, this means that they have managed to attract some attention, and can therefore not be too crappy.

  • For a change, this is something that appeared on SoylentNews before Slashdot. It has been interesting tracking this article through the social media sites that I frequent:

    Reddit [reddit.com] — Submitted Wed, Mar 11; 211 comments at the time of writing this comment

    SoylentNews [soylentnews.org] — Submitted Sunday, Mar 15; 16 comments at the time of writing this comment

    Slashdot [slashdot.org] — Posted Monday, Mar 16; 30 comments at the time of writing this comment

  • With the "publish or perish" culture thriving, more studies that help to determine what needs to be studied would be great. I fear that this doesn't happen enough because people feel like it gives away ideas that others might complete. But if the researcher who can show the need for a study can actually follow up, then they can get a head start on that work. If not, then why not let someone else do it? In academia, this is supposed to be about the good of the species and not some misguided desire to bec
It seems like this (too many scientific papers) is a problem that could be solved by data mining. I know that concept is considered evil these days, but it does have its practical, non-evil uses.

    It was inevitable, really, that at some point there would be more science going on than could conveniently be published.

Of course there are too many (useless or only marginally useful) scientific studies. Just look at the people working as scientists that you know personally or otherwise vaguely know, how smart they are (not everyone can be an Einstein), your estimate of the quality of knowledge artifacts they would produce, and what research they do, not just limited to your own field of schooling or expertise. And what do your friends and connections who are researchers have to say about the

  • As progress in most scientific fields occurs at an ever faster rate, it is logical that previous research is more quickly supplanted by more relevant recent papers. Are we supposed to be surprised by this phenomenon?
  • This reminds me of my last job. There was a BIG concern (and it was justified) that managers had no time to actually manage their departments or their people because they were doing nothing but sitting in meetings 8-9 hours/day. I didn't see my manager for over a month once because he was SOLIDLY in meetings from the time he got there until the time he left. Upper management's response to this: "Let's have a meeting to discuss that".
  • Eliminate biased studies and the rest can see the light of day.

    'Scientific Studies' today are a creation of a Marketing department in many cases. There is a product to be sold and it needs support and affirmative publicity. A company may do several studies in hope that one or two will be useful in their advertising. The others tend to disappear.

    The US government (and other governments and non-profits) conducted studies for many years with the intention of proving that smoking and second hand smoke were dang

    • >Eliminate biased studies and the rest can see the light of day.

If it wasn't for some researchers fudging data and breaking every rule in the book about research design, there never would have been any pilots from Tuskegee during WW2.

      In this case, the bias of the researchers, and the funders, was a goodness.

    • Seems to me that Einstein was rather big on proving his ideas correct, and therefore was biased.

  • ...we heard you like scientific studies. so we did a scientific study of scientific studies so you can science while you science.
  • If everyone must make their data available, then a paper will be judged on the strength of its:

    a) academic contribution; and
    b) quality/usefulness of the data.

    So you might not be the author of the greatest paper, yet your impact might be the quality of the experimentation and resulting data.

Right now, papers appear and the data is just hearsay. In that environment, anybody can publish anything ... and today, there is a strong incentive to do just that.

  • What they need to do is rank the studies based on the number of vowels in the authors name.
  • "My sister says all the good thesis topics have been taken. She's doing hers on how a dust speck bounces when it hits the table"
    "My brother is doing his on the letter G".
  • That rather than shorter attention spans, or more useless papers, papers are not staying relevant as long simply because the rate of technological progress continues to increase?

    For example, a paper on VHS would have been cited during a longer period than a paper on DVD, which would have been cited more than a paper on Blu-Ray... The rate of innovation has increased, and thus the duration of the usefulness of the discoveries as compared to updated versions of the same has gone down.
