PageRank-Type Algorithm From the 1940s Discovered

KentuckyFC writes "The PageRank algorithm (pdf) behind Google's success was developed by Sergey Brin and Larry Page in 1998. It famously judges a page to be important if it is linked to by other important pages. This circular definition is the basis of an iterative mechanism for ranking pages. Now a paper tracing the history of iterative ranking algorithms describes a number of earlier examples. It discusses the famous HITS algorithm for ranking web pages as hubs and authorities developed by Jon Kleinberg a few years before PageRank. It also discusses various approaches from the 1960s and 70s for ranking individuals and journals based on the importance of those that endorse them. But the real surprise is the discovery of a PageRank-type algorithm for ranking sectors of an economy based on the importance of the sectors that supply them, a technique that was developed by the Harvard economist Wassily Leontief in 1941."
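For readers who want to see the circular definition in action, here is a minimal Python sketch of the iterative scheme; the three-page link graph and the 0.85 damping factor are illustrative assumptions, not Google's production setup.

```python
# Minimal sketch of the iterative ranking idea: a page is important
# if important pages link to it. The tiny graph and the 0.85 damping
# factor are illustrative assumptions, not Google's actual values.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}  # page -> outlinks
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):  # iterate until the circular definition settles
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        for target in outlinks:
            new_rank[target] += damping * rank[page] / len(outlinks)
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
```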
  • by ls671 ( 1122017 ) * on Wednesday February 17, 2010 @06:55PM (#31178378) Homepage

    Well, this is actually pretty good advice for any developer: don't reinvent the wheel. Look around, search for what's been done before, and adapt it to suit your needs. Of course, as a last resort, one can design something new once he has done his homework and made sure that nothing done before can be re-used.

    Throughout my life, I have seen an amazing amount of work done in vain, because it yielded poor results and something that did the same thing better already existed anyway.

    Don't get me wrong here: once you have made sure that nothing already existing suits your needs or can be reused, it is fine to innovate and create really new stuff. Just don't get caught trying to reinvent the wheel unless you reinvent it better ;-)

    Also, an exception to that principle could be allowed for trivial tasks that are really quick to implement, where searching for an existing solution might cost more than implementing it yourself. But be really careful applying that exception rule; it is an open door that sometimes leads to trying to reinvent the wheel ;-))

    • by cyphercell ( 843398 ) on Wednesday February 17, 2010 @07:28PM (#31178670) Homepage Journal

      It would have been a pretty exhaustive search without google.

      • Re: (Score:3, Informative)

        They could have used one of the other search engines in existence in 1997-98, like AltaVista or Lycos or Magellan or WebCrawler or Yahoo or Excite or HotBot.

        • by pipatron ( 966506 ) <pipatron@gmail.com> on Wednesday February 17, 2010 @07:46PM (#31178792) Homepage

          I can see that you must be young enough never to have used the search engines you list, if you suggest that you would have been able to find anything useful.

          • Re: (Score:1, Interesting)

            by Anonymous Coward

            You must be the one who hasn't. While they can't be compared to what we have today, they were useful. I particularly liked Infoseek. Oh, and GOOGLE SUCKED at first. I remember this VERY WELL: I tried it when it started and it wasn't any good. I guess it took a while to gather enough data to really show their algorithms' potential. Now I always use Google, of course...

            • by PCM2 ( 4486 )

              I particularly liked Infoseek.

              Infoseek was my flavor, too. Then Disney bought it, changed the name to Go.com, brought in a terrible new design, and all of a sudden the search results went to shit. Hope it was worth the money, Disney.

          • Re: (Score:3, Insightful)

            I can see that you must be young enough never to have used the search engines you list, if you suggest that you would have been able to find anything useful.

            Well, the results were there, somewhere among all the adverts.

          • Re: (Score:2, Insightful)

            Remember, the internet was a LOT smaller in 1997, so while my preferred engine at that time (AltaVista) may not have been as good as Google is today, there were also a lot fewer websites to search through.

            >>>I can see that you must be young enough never to have used the search engines you list

            I'm familiar with the slashdot practice of not RTFA (reading the frakking article), but when did people stop reading user names? You think I'm young? My first computer was a Commodore and my second an Amiga. I w

            • by rusl ( 1255318 )

              I would use Yahoo primarily and AltaVista occasionally. But I don't know why you claim to be so old for having done that stuff. I was young then. I remember the Commodore 64. What is your definition of old, then? I'm nostalgic about that too, but I think that's because I was (and sorta still am) young and I get nostalgic, whereas an older person would be less sentimental, because it wasn't their first computer and also because it wasn't that great, really.

              • I'm 24; I had my Commodore 64 when I was 4, and I wasn't surfing the web until about 1997 either. I don't consider myself old (sometimes I do; I just don't "get" this modern music nonsense). I can remember using AltaVista and Yahoo search, and sometimes searching through Geocities directories back in the day.

                I've forgotten what my point was.

                Anyway. I'm not old.

              • I can understand C64_love's frustration. When some mods purposely damage your reputation/karma, you lose your ability to post for 1-2 days. You feel that you are being muzzled like a Chinese citizen instead of a free person.

                NOBODY should set out to destroy another slashdotter like that.

          • In that era, search engines were like mini databases. You had to put in exactly the right query to get results.

            I was famous for being able to find anything, and I mean anything, using AltaVista, when others couldn't. It was a lot like programming in a way, which explains why I was good at it.

            I used to be king of Google until they made it much more natural-language tolerant. Now my 10-year-old neighbor can find things when I can't.

            On the plus side, I can now type in "When is the damn stupid bowl muth

          • Re: (Score:3, Funny)

            I can see that you must be young enough never to have used the search engines you list, if you suggest that you would have been able to find anything useful.

            What are you talking about? I found useful results all the time! I just wish my interest in being a porn connoisseur could be turned into a career.

        • NATIONAL DEBT is ~$130,000 per US home and growing $10,000/yr.

          So what? It's not like that makes anyone poorer.

    • by Weirsbaski ( 585954 ) on Wednesday February 17, 2010 @07:56PM (#31178858)
      Well, this is actually pretty good advice for any developer: don't reinvent the wheel. Look around, search for what's been done before, adapt it to suit your needs, and patent it.
    • by Junior J. Junior III ( 192702 ) on Wednesday February 17, 2010 @08:03PM (#31178908) Homepage

      "don't reinvent the wheel" is kindof dumb advice when you think about it.

      If I didn't already have a wheel, it would take me a really long time to traverse the world in search of a wheel to see if it had been invented yet. If it has, and it's got sufficient market penetration that I already know about it, then, sure, it's a no-brainer not to reinvent it. On the other hand, if no one in my immediate vicinity has ever heard of the wheel, then inventing one -- quickly -- is a lot smarter than traversing the known world until I run into a culture that already has wheels. Especially if they might exploit their superior technology to subjugate and enslave my people. Better to have a home-brewed shitty wheel to start off with, upgrade quickly if I discover that there are other friendly cultures that already have better wheels, and at least have something if I don't discover anyone else, or discover hostiles who already have them.

      As long as the cart is loosely integrated with the wheels I have, upgrading to better wheels when they are found to be available should be easy. And I might just learn something about wheels while studying them that applies to other problems, or could even possibly improve the existing state of the art with respect to wheels.

        • Re: (Score:3, Insightful)

          Reinventing the wheel is a phrase that means to duplicate a basic method that has long since been accepted and even taken for granted. [wikipedia.org]

          K, so how is Brin and Page developing PageRank, when an obscure economics paper published at Harvard in 1941 was only re-discovered in 2010, reinventing the wheel?

          Would Page and Brin have gotten there faster if they'd rooted through Harvard's economics library in the hopes that the best-yet algorithm for search results ranking would be there, somewhere? Was the Harvard paper "

          • K, so how is Brin and Page developing PageRank, when an obscure economics paper published at Harvard in 1941 was only re-discovered in 2010, reinventing the wheel?

            Who says it was first re-discovered in 2010, and not in the 90's, by two guys from Stanford?

            There are a million possible ways Brin and Page could have come across that paper. A friend who was studying economics, maybe a magazine like the Economist mentioned it, etc.

            Pretty funny claiming to be the first to re-discover something, though. What

            • It's an obvious idea that they most likely implemented independently. I say this because the only similarity between the models is a trivial feedback loop summed up as 'x is important if it is endorsed by important x'; the methodology is completely different in each case.

            • Re: (Score:3, Informative)

              by TerranFury ( 726743 )

              Dude, it's just Jacobi iteration applied to a particular, reasonably intuitive model. This isn't to knock it -- just to say that it was probably easier to reinvent it than to find some obscure paper, especially one which probably isn't in the electronic databases.
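              For reference, a bare-bones Jacobi iteration in Python; the small diagonally dominant system is a made-up example chosen so the iteration converges.

```python
import numpy as np

# Bare-bones Jacobi iteration for A x = b: update every unknown from
# the previous iterate's values, the kind of scheme the parent means.
# The 3x3 system is a made-up, diagonally dominant example chosen so
# the iteration converges.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(A)           # diagonal entries of A
R = A - np.diagflat(D)   # off-diagonal remainder
x = np.zeros_like(b)

for _ in range(100):
    x = (b - R @ x) / D  # x_{k+1} = D^{-1} (b - R x_k)

print(x)      # approximate solution
print(A @ x)  # should be close to b
```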

            • They didn't reinvent anything; the analogy is poor. If anything, they took the idea and found a new application for it, then created a functional product based on it.

              I was in college around the time Google was just getting up and running, and it was widely known that the quality of a paper could be measured by how many times it was cited by other people. In fact, IIRC, some journal search engines would even show the number of citations when you did a search. So the idea that important documents were "linked

          • Re: (Score:3, Insightful)

            by b4dc0d3r ( 1268512 )

            Wow, +5 Pedantic.

            Reinventing the wheel implies that they set out to find a measure of importance. So they would have had to decide to make a search engine, have it produce relative results, decide that relative importance is the key, and then go searching for ways to find relative importance. There's a major gap there, a leap in thinking that simply wasn't present at the time. How important a page is indicates what order it should be in results. That makes sense, but it wasn't obvious at the time. Prev

          • What I reckon they did was invent a square wheel (i.e. a sub-optimal search algorithm), then googled ranking algorithms, stumbled upon the aforementioned papers, and then didn't re-invent it all over again.
            A side effect of this was the introduction of the term 'dogfooding', which was then re-invented by Microsoft.
            Maybe.
    • by c0lo ( 1497653 )

      Don't reinvent the wheel. Look around, search for what's been done before, and adapt it to suit your needs.

      What if I need a cog or a chassis? Does Scrapheap Challenge [wikipedia.org] ring a bell? On most of those occasions, it's better to build the chassis yourself than to struggle to get one free and work furiously to adapt it.

    • Re: (Score:3, Interesting)

      by Hurricane78 ( 562437 )

      There is a second piece of advice that goes with this, but it unfortunately is usually forgotten:

      Don’t imitate. Innovate!

      Yes, it is a good idea not to reinvent the wheel. But it’s even better to invent an airplane! (You know: thinking outside the box. “Inside the box” would be a square wheel. ;)

    • Re: (Score:3, Informative)

      by twosat ( 1414337 )
      Reminds me of the invention of Turbo Codes in the early Nineties for forward error correction in communication networks. It was later discovered that in the early Sixties, low-density parity-check (LDPC) coding had been developed that performed a similar function, but it was not used back then because of the lack of computer power and memory. The LDPC patents had expired by the time Turbo Codes appeared, so now there are two technologies doing the same thing in different ways, but one is patent-free. In a similar vein, I read some years
    • Re: (Score:2, Funny)

      by Ben1220 ( 1503265 )
      This is exactly why I would rather be a computer scientist working at a university or industrial research lab than a software developer: because I want to create really new stuff.
    • Ha ha, then you won't be able to publish any papers.

      The research world is now filled with researchers who simply don't read others' work and just keep writing. Even if it's been done before (or even if it doesn't make any sense at all), they will eventually find a venue that accepts the paper, because no reviewer can know everything.

      And worry not about citations! Those works will be cited for exactly the same reasons, or even worse, because the party who cites it has performed work th

    • Well, this is actually pretty good advice for any developer: don't reinvent the wheel.

      That is some good folksy wisdom for your intrepid young developer. Also:

      "Don't count your chickens before they're hatched!"
      "A little too late is much too late!"
      "A small leak will sink a great ship!"
      "Lost time is never found!"
      "A chain is as strong as its weakest link!"
      "A bad broom leaves a dirty room!"

      Anyway, my point is not that your advice is bad. But are you seriously suggesting that Brin, when he had this idea, should have gone straight to the library and searched until he found this 1941 economics paper? I wonde

      • by ls671 ( 1122017 ) *

        > But are you seriously suggesting that Brin

        I am not suggesting anything, I even said that it was OK to innovate re-invent in some circumstance. And if one really thinks that he can innovate, he should go for it, that's what we call progress ;-)

        I perceived Google algorithm like something very innovative and I wouldn't have guessed that it was based on such an old concept so I thought that it was appropriate to write about it.

        We all like to re-invent and innovate, it is human nature and I am no exception.

    • Also, an exception to that principle could be allowed for trivial tasks that are really quick to implement, where searching for an existing solution might cost more than implementing it yourself. But be really careful applying that exception rule; it is an open door that sometimes leads to trying to reinvent the wheel

      I think you just reinvented the principle of marginal gains in the utility of information gathering. It's in every Microeconomics 101 textbook, in the paragraph about asymmetry of information.

      And I just reinvented the exploding irony-meter.

      • by ls671 ( 1122017 ) *

        > I think you just reinvented the principle of marginal gains in the utility of information gathering. It's in every Microeconomics 101

        Basic principles are sometimes expressed differently from one field of activity to another. When you encounter the same principle expressed differently, the spirit of the principle becomes a kind of "meta-principle", and the specific definitions made in each field become applications of the "meta-principle". This is usually a sign that the principle is valid! ;-))

        So, I didn't

  • So it could be used as prior art to invalidate Google's patent?
    • Re: (Score:3, Funny)

      by Jorl17 ( 1716772 )
      If you hate Google: Yes. If you don't: No. If you want Bananas: Get them.
      • Re: (Score:1, Insightful)

        CHECKLIST:

        Is Google a megacorp? Check.
        Did Google threaten to move to India if Obama raises corporate taxes? Check.
        Does Google spy on users and collect data? Check.
        Did Google receive monetary assistance from the Taxpayer's Public Treasury? Not sure.

        Okay. I'll let them slide and not try to invalidate their pagerank patent, but that likely won't stop Microsoft from making the attempt.

    • Re:Patent? (Score:5, Informative)

      by Meshach ( 578918 ) on Wednesday February 17, 2010 @07:08PM (#31178512)

      So it could be used as prior art to invalidate Google's patent?

      From my read of the linked article, it seems that Sergey and Larry cited the prior art in their publications. So it looks like there was no plagiarism, just a new idea built with the tools provided by an earlier one.

    • Re:Patent? (Score:4, Informative)

      by nedlohs ( 1335013 ) on Wednesday February 17, 2010 @07:11PM (#31178542)

      No, since the one from 1941 didn't say "on the internet" or "with a computer".

    • Not very likely. But as I have not seen the details of either algorithm, it's possible, just not probable.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Patent?

      I think the way Google maintains its search superiority has more to do with massive banks of machines and keeping its algorithms secret than with sending lawyers after anyone. The PageRank algorithm is little more than a useful application of Markov chains... hardly seems patentable. (Of course, RSA doesn't seem like it should have been patentable either...)

  • Just more proof... (Score:3, Interesting)

    by pizza_milkshake ( 580452 ) on Wednesday February 17, 2010 @07:07PM (#31178492)
    Nil novi sub sole (nothing new under the sun)
  • linearity (Score:4, Interesting)

    by bcrowell ( 177657 ) on Wednesday February 17, 2010 @07:19PM (#31178626) Homepage

    What really shocked me when someone first described page rank to me was that it was linear. I felt that this just had to be wrong, because it didn't seem right for a *million* inbound links to have a *million* times the effect compared to a single inbound link. Maybe this is just the elitist snob in me, but I don't feel that the latest American Idol singer is really a thousand times better than Billie Holiday, just because a thousand times more people listen to him than to her. If it were me, I'd have used some kind of logarithmic scaling. I think people do usually describe page ranks in terms of their logarithms, but that's taking the log of the final outcome. I'm talking about taking logs at each step before going on to the next iteration (a toy sketch of what I mean appears after this comment).

    To me, this has an intuitive connection to the idea that the internet used to be more interesting and quirky, and it was more about individuals expressing themselves, whereas now it's more like another form of TV.

    Of course that's not to say that I want to go back to the days before page rank. God, search engine results were just horrible in those days.

    From an elitist snob point of view, one good thing about page rank is that it doesn't let you just vote in a passive way, as Nielsen ratings do for TV. In order to have a vote, you have to do something active, like making a web page that links to the page you want to vote for.
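    A toy sketch of the log-at-each-step variant described above; the graph, the 0.85/0.15 split, and the exact compression rule are all made-up illustrative choices, not anything Google does.

```python
import math

# Toy sketch of log-scaling *inside* each iteration, as proposed
# above, instead of only taking the log of the final rank. The graph,
# the 0.85/0.15 split, and the log1p compression are all assumptions
# made up for illustration.
links = {"A": ["C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = sorted(set(links) | {q for outs in links.values() for q in outs})
score = {p: 1.0 for p in pages}

for _ in range(50):
    raw = {p: 0.15 for p in pages}
    for page, outlinks in links.items():
        for target in outlinks:
            raw[target] += 0.85 * score[page] / len(outlinks)
    # compress before the next round, so piling on inbound links
    # yields diminishing returns rather than a linear boost
    score = {p: math.log1p(raw[p]) for p in pages}

print(sorted(score.items(), key=lambda kv: -kv[1]))
```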

    • And what difference would it make? log(10^3) is still smaller than log(10^6)...
    • Re:linearity (Score:5, Insightful)

      by Ibiwan ( 763664 ) on Wednesday February 17, 2010 @07:41PM (#31178756) Journal
      From a naive, off-the-cuff armchair analysis, it seems to me that PageRank only serves as a way to provide ordering of search results. Funny thing... sorting on positive values will always yield the same ordering as a sort on those values' logs.
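      A quick check of that claim, which holds because log is strictly increasing on positive values:

```python
import math

# log is strictly increasing on positive values, so sorting by
# score and sorting by log(score) produce the same ordering.
scores = [3.0, 1000.0, 0.5, 42.0]
assert sorted(scores) == sorted(scores, key=math.log)
print("same ordering either way")
```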
      • I think the best search engine is the one that can tailor its algorithms to a specific user's tastes. I'm certain that custom algorithms would be the way to go, for instance.

        When some people search for stuff, they usually have a good idea of the class of information they are looking for. Either way it would be an interesting project/

      • Funny thing... sorting on positive values will always yield the same ordering as a sort on those values' logs.

        Yes, but if you look back at my GP post, I explained that I'm not talking about taking the log of the final result (which would be irrelevant for sorting).

    • Re:linearity (Score:5, Informative)

      by martin-boundary ( 547041 ) on Wednesday February 17, 2010 @07:44PM (#31178770)

      What really shocked me when someone first described page rank to me was that it was linear. I felt that this just had to be wrong, because it didn't seem right for a *million* inbound links to have a *million* times the effect compared to a single inbound link. Maybe this is just the elitist snob in me,

      The algorithm is not at all linear in the effect of inbound links. Two inbound links don't have the same effect; instead, their effects are first weighted by the PageRank of each link's node of origin.

      Now PageRank is approximately power-law distributed among the nodes of the web. Intuitively, this means that when a node has a high number of inbound links, 99% of them have practically no effect on its rank, exactly as you probably thought in the first place. More precisely, you can expect a Pareto (or similar) distribution for the influence of incoming nodes, which is not unexpected, since these sorts of distributions occur a lot in the social sciences.

      That said, the PageRank algo is actually linear, but only in the sense of being a linear operator on weight distributions. If you normalize the weights after each iteration, the algo is actually affine (on normalized distributions) rather than linear.
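      A tiny numeric illustration of that weighting, with made-up ranks and out-degrees:

```python
# Tiny numeric illustration of the point above: an inbound link is
# weighted by the rank of its source divided by the source's number
# of outlinks, so two links are far from equal. Numbers are made up.
rank = {"big_hub": 0.50, "obscure_page": 0.001}
out_degree = {"big_hub": 10, "obscure_page": 2}

for src in rank:
    contribution = rank[src] / out_degree[src]
    print(f"a link from {src} contributes {contribution:.4f}")
# big_hub's link is worth ~100x obscure_page's, even though
# big_hub spreads its rank across five times as many outlinks.
```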

    • Re:linearity (Score:4, Informative)

      by Shaterri ( 253660 ) on Wednesday February 17, 2010 @07:51PM (#31178844)

      The reason why PageRank 'has to be' linear is essentially mathematical: treating importance as a linear function of the importance of inbound links means that the core equations that must be solved to determine importance are linear, and the answer can be found with (essentially) one huge matrix inversion. If you make importance nonlinear, then the equations being solved become computationally infeasible.

      What's interesting to me is how close the connections are between PageRank and the classic light transfer/heat transfer equations that come up in computer graphics' radiosity (see James Kajiya's Rendering equation); I wonder if there's a reasonable equivalent of 'path tracing' (link tracing?) for computing page ranks that avoids the massive matrix inversions of the basic PageRank algorithm.
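      A sketch of that linear-system view; a dense solve like this only works at toy scale, and the three-page graph and damping factor are illustrative assumptions.

```python
import numpy as np

# The "solve one linear system" view described above:
# (I - d*M) r = ((1 - d) / n) * 1, where M is the column-stochastic
# link matrix. A dense solve only works at toy scale; the 3-page
# graph and d = 0.85 are illustrative assumptions.
n, d = 3, 0.85
# M[i, j] = 1/outdegree(j) if page j links to page i (pages A, B, C)
M = np.array([[0.0, 0.0, 1.0],    # C -> A
              [0.5, 0.0, 0.0],    # A -> B
              [0.5, 1.0, 0.0]])   # A -> C, B -> C
r = np.linalg.solve(np.eye(n) - d * M, (1 - d) / n * np.ones(n))
print(r / r.sum())  # normalized ranks; no explicit inversion needed
```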

      • Re:linearity (Score:5, Insightful)

        by Jerry Talton ( 220872 ) on Wednesday February 17, 2010 @09:40PM (#31179622) Homepage
        *sigh*

        You understand neither how the parent post is using the word "linear" nor the PageRank algorithm itself. You can rewrite the eigenproblem at the heart of PageRank as the solution to a linear system, but very few people do. Moreover, this is not the correct intuition to employ to understand what's going on: there are no "massive matrix inversions" here, just a simple iterative algorithm for extracting the dominant eigenvector of a matrix.

        Furthermore, you've got it exactly backwards regarding the "connection" between PageRank and light transfer. Since the Markov process used to model a web surfer in the PageRank paper is operating on a discrete domain with an easily-calculable transition function, the stationary distribution (or ranking) can be determined exactly. In rendering, you have a continuous problem for which Markov chain Monte Carlo techniques provide one of the most efficient ways to approximate the solution...but you have to actually simulate a Markov chain to get it (see, for instance, Veach's seminal paper on Metropolis Light Transport). Computing PageRank is an "easy" problem, by comparison.
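        In that spirit, the closest analogue of "link tracing" would be to simulate the random surfer directly and count visits; here is a Monte Carlo sketch, with the same illustrative graph and damping assumptions as above.

```python
import random

# Monte Carlo "random surfer": estimate ranks by simulating the
# Markov chain itself, roughly the link-tracing analogue asked about
# above. The graph and 0.85 damping are illustrative assumptions.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pages = list(links)
visits = {p: 0 for p in pages}
page = random.choice(pages)

for _ in range(100_000):
    if random.random() < 0.85 and links[page]:
        page = random.choice(links[page])  # follow a random outlink
    else:
        page = random.choice(pages)        # teleport to a random page
    visits[page] += 1

total = sum(visits.values())
print({p: round(visits[p] / total, 3) for p in pages})
```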
        • Re: (Score:1, Redundant)

          by Foolicious ( 895952 )
          Why the *sigh*? From your answer it's clear what you believe (about yourself and the topic). The *sigh* adds no real value, except perhaps as a sort of additional marker of how wrong you think this was.
    • Re: (Score:3, Insightful)

      by phantomfive ( 622387 )

      Maybe this is just the elitist snob in me, but I don't feel that the latest American Idol singer is really a thousand times better than Billie Holiday, just because a thousand times more people listen to him than to her.

      You are measuring the wrong thing. Google isn't measuring who is 'better'; it is trying to measure which page is more interesting to a web surfer, and pages tend to be more popular because they are more interesting to more people. Thus, if you do a search for Brittany, you are more likely to find Brittany Spears' fan club than an academic study of why her beautiful innocence was so popular (and oh yes was it beautiful!). People who are looking for more specific academic things learn to add extra sea

      • by TheLink ( 130905 )
        > The way to solve this problem better is for Google to get to know you and your preferences
        > if Google knows that you are mainly interested in academic sorts of things, then it can automatically return that sort of thing when you do a search for Brittany.

        What would be better is if you could choose an "Aspect" or "Point of View" or "Stereotype" for a search. After all, I could be doing a search on behalf of someone else. Or I could be interested in different things at different times.

        So say I pick the "T
    • Re: (Score:3, Informative)

      by j1m+5n0w ( 749199 )

      it didn't seem right for a *million* inbound links to have a *million* times the effect compared to a single inbound link

      PageRank isn't just a citation count; it's defined recursively, such that a link from a page with a high PageRank is worth more than a link from a page with a low PageRank. Similarly, a link from a page with many outlinks is worth less than a link from a page with the same rank but few outlinks.

      It does turn out to be more of a popularity contest than a quality metric, though. I think yo

    • Maybe this is just the elitist snob in me, but I don't feel that the latest American Idol singer is really a thousand times better than Billie Holiday

      It would be a problem if your search for "Billie Holiday" returned links to American Idol. PageRank is not the WHERE clause; it's the ORDER BY clause.

  • by PPH ( 736903 ) on Wednesday February 17, 2010 @07:25PM (#31178658)
    ... of times one Pharaoh's cartouche was chiseled into the obelisks belonging to other Pharaohs.
  • by Anonymous Coward

    In many different types of jobs, people count the number of times their research papers have been referenced and quoted, with different "points" depending on where the work was quoted. E.g., someone working in medicine who has his work referenced in The Lancet counts for more than a reference in local-hillbilly-news.
    I believe there are sources that collect this information. Sorry for being so vague, but I can't remember the specifics. (But hey, isn't that just what we do in Slashdot comments?)

  • by commodoresloat ( 172735 ) * on Wednesday February 17, 2010 @07:51PM (#31178846)

    allowed pages to be ranked and categorized according to whether they were "insightful," "interesting," "informative," "funny," "flamebait," or "troll."

  • Peer Review (Score:2, Offtopic)

    by Mikkeles ( 698461 )

    is all page rank is.

  • The most amazing computer book ever. It has Doug Engelbart's first description of “augmenting the human intellect” using computers. It describes what we now know as windows (generic) with pointing devices. It has an early linear document-retrieval system using page ranks based on word co-occurrences, and it has an early language-translation system (Russian to English, with examples of translating Soviet missile papers). What a preview of things to come.

    It is worth a read just to get into the h

  • I guess previous work had the idea right, but actually building a system that can handle millions of links and reply in no time is no small feat.

    This reminds me of the discussion we had previously about the gap from research prototype transistors to having factories actually deliver them.

  • Back in the late '90s I created a relevance-ranking system for my employer to rank the output of our legal research system. Similar to hyperlinks, legal documents have unique references. Sometimes they're created by the publisher, such as West (now Thomson) with their Federal Supplement; other times, for unpublished documents, it's a docket number from the court. Long story short, the documents were indexed, and at run time, using a combination of hit density and the number of times a document was referred t

  • Could some /. member who is an IP attorney comment on whether this might constitute prior art that could open up the relevant Google patent(s) to an invalidation attack based on obviousness? Which, in my limited understanding, would go something like: "Well, a person skilled in the art as of the date of Google's patent application would have known of the Leontief work (published, knowledge therefore presumed), and it would have been obvious to implement the Leontief work on a computer." And for extra interes
    • by BZ ( 40346 )

      Two things:

      1) Google does not hold a patent on PageRank. Stanford does.
      2) If you read the patent, it's more than just the ranking system. At least as far as I can tell. I am not a patent lawyer, of course.

      • It is still quite a relevant question, since the Wikipedia article says:

        Google has exclusive license rights on the patent from Stanford University.

  • The mathematics of PageRank go back a century. There have been many different applications since then, including to hypertext and the web. From Wikipedia:

    PageRank has been influenced by citation analysis, early developed by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua (Google's founders cite Garfield's and Marchiori's works in their original paper[5]).

    As such, the algorithm wasn't new, but Google was the firs

  • It's called being a movie star. Your importance is ranked by how many people really like you. And it can be gamed, just like the Google one.
    • Actually, you could think of better ranking systems for movie stars: a ranking factor for each role (lead = 1000, other starring role = 100, minor speaking role = 10, extra = 1), summed over all movies. Maybe even scale by gross receipts for each movie, GNP-deflator-normalized to 1970 dollars.

  • The summaries were intriguing but lame. Here's the real thing (preprint):

    http://arxiv.org/abs/1002.2858 [arxiv.org]

    Author's page is here:

    http://users.dimi.uniud.it/~massimo.franceschet/ [uniud.it]

    Interesting stuff.

  • The concept of rating something based on the weighted reputation of those entities that endorse it ("endorse" being used in the general sense) has been around, well, forever. People do it all the time when they decide who to trust. TFA should have been titled, "Brin and Page Rediscover Leontief-Type Algorithm from the 1940s" with the subtitle, "Journalists Finally Put Two and Two Together."
