Science

Interesting Concepts in Search Engines 231

TheMatt writes "A new type of search algorithm is described at NSU. In a way, it is the next generation beyond Google. It works on the principle that most web pages link to pages about the same topic, forming communities of pages. For academics this would be great, as the engine could find the community of pages related to a certain subject. The article also points out that this could make a genuinely useful content filter, compared with today's text-based ones."
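The NSU piece doesn't spell out the algorithm, but one plausible reading of "communities of pages" (a set of pages with more links among themselves than to the rest of the web) can be sketched in a few lines. The graph, the seed pages, and the acceptance rule below are illustrative assumptions, not the actual NEC method:

# Toy sketch of link-based community growth (illustrative only; not the actual
# NEC algorithm). A "community" here is a set of pages with more links among
# themselves than to the rest of the web.

def grow_community(link_graph, seeds, rounds=3):
    # Undirected view: "A links to B" associates both pages.
    neighbors = {}
    for page, outlinks in link_graph.items():
        neighbors.setdefault(page, set()).update(outlinks)
        for target in outlinks:
            neighbors.setdefault(target, set()).add(page)

    community = set(seeds)
    for _ in range(rounds):
        snapshot = set(community)
        candidates = set().union(*(neighbors[p] for p in snapshot)) - snapshot
        for page in candidates:
            inside = len(neighbors[page] & snapshot)
            outside = len(neighbors[page] - snapshot)
            if inside > outside:      # more ties into the community than out of it
                community.add(page)
        if community == snapshot:     # nothing new joined; stop early
            break
    return community

toy_web = {
    "hawking.example":    {"relativity.example", "cosmology.example"},
    "relativity.example": {"hawking.example", "cosmology.example"},
    "cosmology.example":  {"hawking.example", "relativity.example"},
    "casino.example":     {"hawking.example", "slots.example"},
    "slots.example":      {"casino.example"},
}
print(grow_community(toy_web, {"hawking.example", "relativity.example"}))
# {'hawking.example', 'relativity.example', 'cosmology.example'}

The casino page links in, but has almost no ties back from the community, so it never gets absorbed; that asymmetry is the whole appeal of the approach.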
  • this was suggested a month ago when google announced the contest... looks like someone over there reads /.
  • But.... (Score:3, Interesting)

    by ElDuque ( 267493 ) <adw5@nospAm.lehigh.edu> on Thursday March 07, 2002 @04:24PM (#3126701)

    Where would Slashdot fit in to this? There's links to everywhere!
    • Re:But.... (Score:5, Funny)

      by bourne ( 539955 ) on Thursday March 07, 2002 @04:33PM (#3126787)

      Slashdot must be the Kevin Bacon of the online world...

      • For anyone out there who doesn't quite know why this is +5 worthy, here is the joke:

On Super Bowl Sunday a commercial aired, featuring none other than Kevin Bacon at a retail store, trying to use a check to pay for his goods. The man behind the counter asked to see ID, but Bacon didn't have any on him. What now? Bacon runs around town gathering people (an extra he appeared in a movie with, a doctor, a priest, an attractive girl, and maybe one other guy?), who all had some ties to one another through the others in the group. The attractive girl once dated the sales clerk in the store, so Kevin explains that they are "practically brothers," hence putting to good use the principle of six degrees of separation.

        Therefore, the humor lies within. :) This is, of course, a very pop-culture oriented joke that will probably fade even more quickly than AYB did after its behemoth prime of last year and the December before. Long live the meme.
        • "Six Degrees of Kevin Bacon" has been around, an in pop culture, for many many years.

          Methinks you just didn't know about the inspiration for the commercial, which does surprise me ;)
        • Oracle of Bacon (Score:3, Interesting)

          by harmonica ( 29841 )
          In addition to the other replies, here's the link to the Oracle of Bacon [virginia.edu] that lets you find out the degree of separation between Kevin and any other person who is featured at IMDB [imdb.com].

There is also a generic search [virginia.edu] that lets you combine any actor with any other actor. Unfortunately I have forgotten who the best-connected actor was (the one whose average distance to all other actors is smallest). Anyone?
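Finding a degree of separation is just a shortest-path search over the co-starring graph. A minimal breadth-first sketch, with made-up cast data rather than the IMDB database the Oracle actually uses:

from collections import deque

# Degrees of separation as a breadth-first search over a co-starring graph.
# The cast data below is made up for illustration; the real Oracle of Bacon
# works from the full IMDB database.

def degrees(costars, start, goal):
    """Return the shortest chain of actors linking start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for other in costars.get(path[-1], ()):
            if other not in seen:
                seen.add(other)
                queue.append(path + [other])
    return None

costars = {
    "Kevin Bacon":   {"Tom Cruise"},                    # A Few Good Men
    "Tom Cruise":    {"Kevin Bacon", "Nicole Kidman"},
    "Nicole Kidman": {"Tom Cruise"},
}
print(degrees(costars, "Nicole Kidman", "Kevin Bacon"))
# ['Nicole Kidman', 'Tom Cruise', 'Kevin Bacon']  -> two degrees

The "best-connected" actor asked about above would then simply be the node whose average path length to everyone else is smallest.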
        • by Technodummy ( 204943 ) on Thursday March 07, 2002 @08:19PM (#3127965)
          "Before all the ruckus of living in a "global village" where we are all connected via the internet, there was the idea of "six degrees of separation," or the "small world theory." The theory posits the idea that everyone in the world is separated from everyone else by only an average of six people. That is to say, the only thing which separates you from the President, the Pope, a farmer in China, and Kevin Bacon is six people.

          It's a strange and beautiful concept. It is fascinating to think that we are all in some way interrelated by only six people or that we have some connection to people even in the remotest part of the world.

          The "small world" theory was first proposed by the eminent psychologist, Stanley Milgram. In 1967 he conducted a study where he gave 150 random people from Omaha, Nebraska and Wichita, Kansas a folder which contained a name and some personal data of a target person across the country. They were instructed to pass the document folder on to one friend that they felt would most likely know the target person.

          To his surprise, the number of intermediary steps ranged from 2 to 10, with 5 being the most common number (where 6 came from is anyone's guess). What the study proved was how closely we are connected to seemingly disparate parts of the world. It also provided an explanation for why gossip, jokes, forwards, and even diseases could rapidly spread through a population.

          Of course, the six people that connect you and the President aren't just any six people. The study showed that some people are more connected than others and act as "short cuts," or hubs which connect you to other people.

Take, for example, your connection with a doctor in Africa. Chances are your six childhood friends who you've grown up with aren't going to connect you to someone across the country, much less across the ocean. But let's say you meet someone in college who travels often, or is involved in the military or the Peace Corps. That one person who has traveled and has had contact with a myriad of other people will be your "short cut" to that doctor in Africa.

          Likewise, say that you want to figure out your connection to a favorite Hollywood socialite. If you have a friend who is well connected in the Industry, that person will act as a bridge between your sphere of existence and the Hollywood circuit.

          The Proof

          Mathematicians have created models proving the validity of the "small world" theory.

          First, there is the Regular Network model where people are linked to only their closest neighbors. Imagine growing up in a cave and the only people you have contact with for the rest of your life are in that cave with you.

          Then there is the Random Network model where people are randomly connected to other people regardless of distance, space, etc..

          In the real world, human interconnectedness is a synthesis of these two models. We are intimately connected to the people in our immediate vicinity (Regular Network), but we are also connected to people from distant random places (Random Network) through such means as travel, college, and work. It is by our intermingling with different people that our connections increase.

          You may meet someone in class that is from a different country, or whose father works in Hollywood, or whose mother owns a magazine. By this mingling and constant interaction your potential contact with the rest of the world increases exponentially.

          The Internet

          The Small World theory is interesting in light of recent advances in communication technology--namely, the internet.

You can now instantly make contact with someone across the world through a chat room, email, or ICQ. In all of human history, it has never been easier to get in touch with someone across the globe.

          The great irony, of course, is that although we are making contact with such a vast number of people, the quality of the contact is becoming terribly depersonalized. Our email, chat, and ICQ friends may number in the hundreds, but for the most part we'll only know them as a line of text skittering across the screen and a computer beep.

          That's not to say that there is never a cross over from the virtual world of the internet to the "real" world. But a majority of the time, the closest you'll get to actually meeting your fellow e-buddies in the flesh are the pictures they email you (notice how everyone oddly looks like Pam Lee or Tom Cruise), or a series of smilies (meet my friend Sandra :), Jenny :P, Bill :{, and Chrissy 23).

Never in the history of mankind has there been so much technology to keep us connected, and so little true connection. Everything from cellular phones, pagers, voice mail, and email was designed so that we would never be alone again. Human contact would only be a few convenient buttons away. But what seems to be happening is that the convenient buttons are superseding real people. Despite the appearance of all this technology, we're still pretty much where we started, with the exception of a motley crew of digital displays, flashing lights, and cutesy computer alerts to keep us company.

Don't get me wrong. The Internet Revolution is great and is making our lives easier. But as with ice cream, money, and sex -- too much of a good thing can be bad (money and sex are sometimes exceptions). What good are all the conveniences and promises of instant material gratification if you don't really live? The virtual world is good, but we shouldn't forsake the real world for it. The macabre image in the Matrix, where we are all plugged into computers unbeknownst to us, is a parable of what could be our future: a future where people never leave their homes and are so dependent on computers that we wouldn't be able to walk outside without a pang of separation anxiety.

          As we enter the new millennium, there is no doubt that we will be living increasingly wired existences. Perhaps Milgram's study will be annotated, and perhaps we will find that we're only separated by three degrees of email. But what good is that if the only "handshakes" going on are between our computers??

          Russ [junebug.com]
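The Regular/Random synthesis described in the quoted piece is easy to simulate. The sketch below (in the spirit of the Watts-Strogatz small-world model, with arbitrary parameters) shows how a handful of random long-range shortcuts collapses the average number of intermediaries:

import random
from collections import deque

# Toy illustration of the "regular + random" synthesis quoted above. Parameters
# are arbitrary; the point is that a few random long-range links sharply reduce
# the average separation between people.

def ring_lattice(n, k):
    """Each of n people knows only their k nearest neighbours on each side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0} for i in range(n)}

def add_shortcuts(graph, count, rng):
    nodes = list(graph)
    for _ in range(count):
        a, b = rng.sample(nodes, 2)
        graph[a].add(b)
        graph[b].add(a)

def average_separation(graph):
    total, pairs = 0, 0
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph[node]:
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
world = ring_lattice(300, 3)            # "Regular Network": only close neighbours
print(round(average_separation(world), 1))
add_shortcuts(world, 30, rng)           # a few well-travelled "short cut" people
print(round(average_separation(world), 1))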
    • Re:But.... (Score:2, Interesting)

      by eet23 ( 563082 )
      But /. generally links to things that /.ers find interesting. I imagine that sites like this would be a good way of linking wider subject areas (computers + popular science) together.
    • Re:But.... (Score:3, Interesting)

      by Dudio ( 529949 )
It's not just Slashdot either. Blogs by their very nature link to sites/pages about anything and everything. If they could manage to programmatically identify blogs with high accuracy, maybe they could develop a hybrid with Google's algorithm so that focused sites are crawled for links to related sites while blogs and such are used to cast popularity/usefulness votes.
    • What about doubleclick? Their servers link to anything and everything that nobody finds interesting!

      I think it's a great concept that will make lesser known content accessible to the average user. Instead of spending almost all their online time on a few huge sites (AOL, MSN, CNN, and a few other media giants), we can jump to a page with the same topic but no advertising budget. But how do you rank and order the list of members? Traditional text search? Even if a community has only a few hundred members, few users will go to page five in the list to find a site. Admittedly, it's only a matter of time before you can pay to be listed at the top of the community membership, instead of a random listing.

And like all good ideas, this system wouldn't be free of abusers. People could always spam their page with links to major sites using single-pixel clear gifs, thus making their page a part of any community they wanted. So it becomes a process of "give me sites with links like this page, but not links like the following black-hole-listed pages." Useful for filtering content (for good or bad reasons).
• Hm. That's not the problem I see right off the bat -- yeah, we'll have a bunch of people trying to scam their way to the top, but we have that now as it is. The problem I see is that sites that link to a large number of sites, and that have a large number of sites linking to them, will be considered part of more communities, while sites with a lot of relevant, interesting text but few links and few linkers will be considered part of fewer communities.

The issue of people creating mass pages of links could be resolved by "teaching" the engine to ignore sites that link to too many different threads, thus cutting out search engine directories, blogs, and other non-topic-specific pages, or lumping them together as another category.

        Sort of "If a page has x number of links to y number of topics then it can be considered for category z but if y is higher than the allowed number..."

Or something... Oh God. I need my caffeine.

        -Sara
    • Re:But.... (Score:2, Insightful)

      by bigWebb ( 465683 )
Slashdot fills a different niche, one more similar to that of newspapers than to research sites. The purpose of Slashdot is to provide an overview of many different areas. The new search idea would, as I understand it, work best on very narrow (esoteric, perhaps) fields, such as research into a certain area of a certain discipline. Because Slashdot is more general, it won't have the same community-type setup and will likely not be affected by this new way of searching.
  • Thus, for academics, this would be great as the engine could find the community of pages related to a certain subject.

    And for intelligence services, a great way to more quickly compile open source intelligence [slashdot.org].
  • The idea of it being used as a content filter is interesting. Presumably, you would only be able to get to pages that were part of the 'community' of that information. Of course, there will be problems with this too, but it may end up being better than just content filtering by text strings.
    • As long as we have the option to not use the filter if we don't want it, I think it's probably a good thing. Anything that has the potential to increase the relevance of my search results is good.

      Of course, it could also be used to keep you from seeing things they don't want you to see. Then again, most technologies carry that risk, I think.

  • by Jafa ( 75430 ) <`moc.setnakram' `ta' `afaj'> on Thursday March 07, 2002 @04:25PM (#3126713) Homepage
    This seems pretty cool. The interesting part is that it mimics how people surf anyway. When you find a link from a search engine now, what's your usual routine? Go to the page, look around, find another interesting link, go to that page, maybe go back one and link away again... So this can pre-define that 'island' that you would have manually browsed anyway, but hopefully with better results.

    Jason
• People usually surf in a depth-first manner. I would think that to define the islands better you would need a breadth-first crawler. I guess (in my opinion) it would mimic that to a certain extent.
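A toy illustration of the difference (made-up link graph; the only change between the two runs is which end of the frontier gets popped):

from collections import deque

# A surfer tends to go depth-first (follow one chain of links); a crawler mapping
# out an "island" of pages would go breadth-first (fan out level by level).

links = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": [],
    "E": [],
}

def crawl(start, breadth_first=True):
    frontier = deque([start])
    seen, order = {start}, []
    while frontier:
        page = frontier.popleft() if breadth_first else frontier.pop()
        order.append(page)
        for nxt in links[page]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

print(crawl("A", breadth_first=True))   # ['A', 'B', 'C', 'D', 'E']  - level by level
print(crawl("A", breadth_first=False))  # ['A', 'C', 'E', 'B', 'D']  - one chain at a time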
• Hmm, this just made me realize that I must not follow the "normal" browsing patterns (if there really are such things). A typical search for me starts out by me going to Google and entering my, usually very specific, query. I will then start looking at about the first 10 hits, reading the blurb, and going to the ones that I think will answer my question. Once there I will give the site a really quick look over for the information that I am looking for. If I do not see the information I will quickly hit the back button and go to the next on the list. Unless a site gives me the information I am looking for very quickly I will not stay long, and I very, very rarely even bother to look for links from that site. If a once-through of the top fifty doesn't do any good, I will usually refine my query. I say usually because I am never consistent. I must say that when I do find a good site with good links, I will bookmark that site and use it often. I really don't think this type of searching would interest me that much, because I don't like "prepackaged" search results, mostly because my idea of categories doesn't match someone else's.
    • "Go to the page, look around, find another interesting link, go to that page, maybe go back one and link away again..."

      I rarely do anything remotely resembling that. My usual routine is: go to a link, find nothing interesting, go back to Google, and repeat until I either find what I want or give up and reformulate my search. It is rare that a page that does not have what I want has links that seem likely to.
  • Problem. (Score:3, Interesting)

    by DohDamit ( 549317 ) on Thursday March 07, 2002 @04:27PM (#3126725) Homepage Journal
    So...the engine crawls through, looking at links, goes to those sites, and looks at more links. So on and so on, until it has a web of links defined. The problem with this approach is that they have to have a VALID starting point OR a valid ending point in order for this method to be of use. In other words, either they have to manually start from a good site for physics, such as Stephen Hawking's homepage, or wind up at a good site for physics, such as oh, Stephen Hawking's website, in order to determine what's a good physics site. In the end, the content still has to be managed, or a porn site manager can still get around all this by linking to all kinds of sites, rather than stuffing their text/metatags. In the end, this solves nothing.
• Yer even dumber than your handle suggests. Linking to someone doesn't add as much value as being linked by someone. Linking to someone is not as expensive as being linked by someone. In this way you can establish the "cost" or "value" of each page as you crawl the tree and have it hardly matter where you start.
• From some classwork I did a few years back on the algorithm that runs Google, I recall something about links TO a page (or this may have just been a theoretical idea covered at the same time; my memory is fuzzy). The point is that perhaps it doesn't use links FROM a site, but rather links TO a site, or perhaps a combination of the two. So a good physics site would be one that Hawking's page links to, as do lots of other sites, and that when indexed returns good results for physics. I'm also assuming that the algorithm doesn't rely purely on links, but uses links as well as some traditional indexing methods to achieve better results. I mean, how good can a physics page be if the term physics doesn't appear in the page in any way, shape, or form?
• It's pretty easy to discover a group of porn sites heavily interlinking in order to increase their inbound link count. These parts of the web are abnormally cliquish, by which I mean they approach a fully interconnected set of nodes. Most of the web doesn't work like that.

      It's not hard to find popular sites using this methodology, and in this case "popular" is probably as close as you'll ever come to defining a metric for what makes a website a good website. It all depends on what physics (or whatever) sites people link to, which hopefully will be related to how good those sites are. Note that all of the link counts need to take into account some sense of "community" -- i.e. The magazines Popular Science and Science serve very different communities. So link counts need to be taken relative to other sites "around" them, or some such.

And in the end, this solves a lot of things. For instance, the algorithms will be independent of written human language. They'll also be more robust when classifying pages that use graphics for scientific typesetting (LaTeX) constructs that aren't available in HTML (yet). This is important.

      -Paul Komarek
  • ...the poor bastard that searches for pr0n on this search engine. Holy Jesus, can you imagine the links?!?
• I get the feeling that there's much more to be done with link analysis. If you think about it, it's very similar to sociology. There are elements of population and community as well as migration and economics. Do you think Tim Berners-Lee could have imagined what he was letting the world in for back in 1990?
  • by Ieshan ( 409693 ) <ieshan@g[ ]l.com ['mai' in gap]> on Thursday March 07, 2002 @04:30PM (#3126748) Homepage Journal
    What happens to journal articles relating to specific content? How do I find information for biology class?

Currently, I can search Google and find things on the destruction of balsam fir in Newfoundland by Alces alces (moose); with this type of search engine, the journals wouldn't be listed because they themselves don't have links to anywhere (most of them are straight magazine-to-HTML conversions or PDFs).

    It'd be difficult as hell to find pertinent information above the level of "3y3 4m Johnny, And Dis 1s Mai W3bsite, 4nd H3r3 Ar3 Mai LinkZorz!"
    • Ah, but there might be links. Most research group pages currently have links to their latest research in the journals. For example, my group has links to J.Chem.Phys.Online or the like, directly to the journals. This type of search could lead you to the journals that are in the area you searched (JCP for me, TetLett for an OrgChemist, etc.)

      Plus the fact that groups mainly link to others doing the same work. So, I can start at one page and soon get an idea of the cluster science community, for example.
    • Bad Idea - What Happens to Science?

      As I understand it, the exact opposite would be true.

      If you type "Balsam" into google all the top links are related to balsam products, Balsam Lake and Balsam Beach. Instead imagine you could click on the science community, and then perhaps on the biologist sub-community. Now when you type in "Balsam" you get a list of sites that are the most referenced by biologists.

This can be particularly useful when different groups use a word in very different ways. Inflation means something very different to physicists than it does to the general public.

      -
  • http://www.google.com/search?hl=en&num=10&q=related:slashdot.org/

    http://www.google.com/search?as_lq=www.slashdot.org&btnG=Search
  • This is great! Does this mean that we will finally have a search engine with categories that actually mean something? Until now, categories have mostly been self-proclaimed by the submitter to the search engine or by the meta tags on the site itself.
    This could mean that browsing by category will become more and more useful in the future.
  • It would be nice to have a sidebar for IE or any other browser that could be used as a filter for relevant topics from that site, configurable with N-depth searching. Highlighting of relevant topic links and maybe even a graph view.
  • I've just searched Google for links to this author and his system, and can't find anything other than a citeseer reference to a project called Deadliner.

    Anyway, this would be a much more interesting submission if there had been links to how the algorithm dealt with the computational complexity, or had a site we could Slashdot :-)

    Winton
  • Online bookmarks. (Score:2, Insightful)

    by fogof ( 168191 )
    I remember my first page:
    My fav site on the internet.
    A list of unrelated pages all linked from one spot.
    I wonder if there are any of those left, and how the search engine would cope with them.
    And another point. The article states that new categories can be found. How is the "crawler" going to define the name of the new categories? I feel that the article was too short on details. I mean as a concept it's great. But more information would be cool.

  • As much as I (and all of you) love Google, I wonder whether their moral high ground [google.com] approach to search results would exist if they did not already have the world's traffic searching through their site.
    Search engines come and go. When Google has to struggle for its existence against the Next Big Thing, how many of you really believe they won't sell out in order to keep themselves running, in effect putting the last nail in their own coffin?

    We shall see.

    • Google had this "moral high ground" from the beginning though. It was something that they built everything on top of.

      I'm not saying they wouldn't "sell out"(however a business selling a product could sell out that is), but it seems that their text-based ads work well there, and that they also get a good amount of revenue selling their search tech.
    • Google's competitive advantage is their reputation. At this stage, any attempt at sellout would backfire badly: anyone willing to pay them money for a better listing will want to stop paying when no one visits Google any more.

    • but it's worth repeating.

Google.com is popular because of its high moral ground, which it has had since the beginning.

      I personally switched to Google because:

      * it gave me more accurate results
      * it has a fast loading page
      * it had an honest results policy
      * it's not a parasite site, running on the coat tails of others (eg. metacrawler)

      The reasons I continue to use Google are:
      * as above
      * it has inoffensive (to me) advertising
      * it has a toolbar that saves me time on searching
      * it's as good as a spellchecker
      * it can display pdf files in html
      * it can search pdf files
      * google cache
  • AFAIK, Google uses several criteria:
    It looks, of course, for the words from your search, but also at how close together they are (so if your search string is 3 words and it finds them next to each other, the page gets a higher score than if the words are scattered randomly through the text). It also looks at the links. Pages about the same topic that link to your page give a "vote" for your page. This looks a lot like the "new" search algorithm. Or is the new one the inverse: instead of giving a vote, a page receives votes if it links to pages about the same topic?
    The one thing I'm thinking is that they miss a lot of pages just because they do not contain links.
    Anyways, there isn't a lot I haven't found on Google yet (thanks to all its search modes: regular, open directory, images, news...)
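The "words near each other score higher" part can be shown with a toy scorer; this is purely illustrative and nothing like Google's real scoring:

from itertools import product

# Toy "proximity bonus": documents where all the query terms occur close
# together score higher than documents where they are scattered.

def proximity_score(text, query_terms, window=8):
    words = text.lower().split()
    positions = {t: [i for i, w in enumerate(words) if w == t] for t in query_terms}
    if any(not p for p in positions.values()):
        return 0.0                                   # a term is missing entirely
    # Smallest span containing one occurrence of every term (fine for short queries).
    best_span = min(max(c) - min(c) for c in product(*positions.values()))
    return 1.0 + (window / (best_span + 1) if best_span < window else 0.0)

docs = {
    "paper":  "the physics of balsam fir decline in newfoundland",
    "random": "balsam products for sale and, elsewhere, an amateur physics club",
}
for name, text in docs.items():
    print(name, round(proximity_score(text, ["balsam", "physics"]), 2))
# paper 3.67, random 1.0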
  • I hope he patents this before Google steals his idea!

    Joke
  • I wish! (Score:5, Interesting)

    by dasmegabyte ( 267018 ) <das@OHNOWHATSTHISdasmegabyte.org> on Thursday March 07, 2002 @04:37PM (#3126803) Homepage Journal
    Problem with this: "most" websites do not link to sites with similar content. Most websites link to "partner" sites that have nothing in common with them -- after all, who links to a competitor?

    Good websites link to similar sites -- academic websites link to similar sites and sources. This type of search engine would be killer on Internet 2. But on our wonderful, chaotic, porn and paid link filled Internet 1, it's useless. Spider MSN and you'll get a circular web leading to homestore, ms.com, Freedom to Innovate, ZDNet and Slate. Spider Sun and never find a single page in common with their close competitors like IBM.

    What happens when sites get associated with their ads? Search on Microsoft Windows and grab a lot of casino and porn links...because a "security" site covered in porn banners was spidered and came up with top relevancy.

    Now, combined with a click-to-rate-usefulness engine like Google, this could be an interesting novelty. But it'll never be the simple hands off site hunter the big Goo has become.
    • Re:I wish! (Score:2, Interesting)

      by Anonymous Coward
      "after all, who links to a competitor?"

Well, when you are not dealing with commercial sites, or sometimes even when you are, a lot of people.

      Google (and most other search engines) link to tons of other search engines, Art Galleries link to other Art Galleries, and gaming sites link to other gaming sites.

A lot of other areas are inter-linking too. Sometimes when I am trying to find something I get caught in a loop and have to start over again from a different starting point.

      For instance when finding out information on LED lights.

      I exhausted most of the results that came in for my original search term, so am going to change it to {fibre,fiber} optics LED

      Tada, whole new batch of sites to read though. :)
    • by FreeUser ( 11483 ) on Thursday March 07, 2002 @05:15PM (#3127048)
      You make a very interesting point:

      Problem with this: "most" websites do not link to sites with similar content. Most websites link to "partner" sites that have nothing in common with them -- after all, who links to a competitor?

Good websites link to similar sites -- academic websites link to similar sites and sources.


Combine the algorithm described in this article with Google's approach (or some other contextual approach to determining relevance) and you not only have a way of identifying "communities," you have a way of easily identifying "marketdroid mazes of worthless links" as well.

Since the content of most marketdroid sites is usually next to worthless, the hits for a given search could be ordered accordingly. Sites, and groups of sites, that clearly form communities related to the topic you're interested in at the top, single websites yet to be linked to somewhere in the middle, and marketdroid "partner" sites at the very bottom.

      This would actually produce better, more useful results than either approach alone.
      • A decent implementation of the algorithm would search for and rank pages as it currently does, but then 'communitize' the results. If result 5 is in the same community as result 2, don't put 5 on the page. Instead, change result 2 to point to another page that will list most of the community that 2 and 5 share, and then increase the ranking of 2 or just list the 'communities' first in the result. This will greatly shorten the results that must manually be looked at by categorizing them.
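A rough sketch of that "communitize the results" step, assuming community labels already exist from a link-analysis pass (toy URLs, not a real index):

# Keep the ranking, but collapse lower-ranked hits that fall in the same
# community as something already shown, and fold them into "more like this".

def collapse_by_community(ranked_results, community_of):
    shown, more_from = [], {}
    for url in ranked_results:
        community = community_of.get(url)
        if community is None or community not in {community_of.get(u) for u in shown}:
            shown.append(url)                                 # first hit from this community
        else:
            more_from.setdefault(community, []).append(url)   # folded under the earlier hit
    return shown, more_from

ranked = ["physics.example/a", "casino.example/x", "physics.example/b", "physics.example/c"]
communities = {
    "physics.example/a": "physics",
    "physics.example/b": "physics",
    "physics.example/c": "physics",
    "casino.example/x": "gambling",
}
print(collapse_by_community(ranked, communities))
# (['physics.example/a', 'casino.example/x'],
#  {'physics': ['physics.example/b', 'physics.example/c']})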

    • Your argument holds true for most commercial sites, but not always for small to medium sized sites. Slashdot, for example, links to several dozen related web sites, a lot of them competition.

      In addition, when I search on Google, more often than not I am not looking for huge commercial sites. I am looking for smaller pages, sometimes written and hosted by individuals, that contain information on the subject I am searching for.

These types of pages fall completely outside your argument. They are not big enough to warrant ideas like "competition" and "sponsors." It is just some Joe Public, writing a web page about something he is interested in and housing it in the 5 megs of web space his ISP gives him.
    • Re:I wish! (Score:2, Interesting)

      Spider Sun and never find a single page in common with their close competitors like IBM.

I guess this [sun.com] page doesn't list a bunch of Sun competitors, like IBM, BEA, and CA, then. Even competitors thrive off of partnering with each other.

      -rp
    • I've seen a version of this algorithm before, at a SIAM conference held at WPI while I was an undergraduate. You can get around the "partner" effect by creating a feedback loop between two types of ranked sites:
      group A) Authoritative sites... sites other sites link to but that don't necessarily link to a lot of other sites

      group B) Link sites... sites that cobble together links to useful authoritative sites based on subject matter.

      In your algorithm you keep track of a particular ranking in both groups A and B iteratively.
      Your ranking in group A gets weighted by the quality of the sites in group B that point to you.
      Your ranking in group B gets weighted by how many quality sites you link to in group A.

      Iterate the process...and you know what you have...you have an eigenvector problem....and what you get in the end is an eigenvector of highly ranked group B sites which span subsets of group A based on subject matter.

      The canonical example is the word jaguar.
      Run this algorithm on a search engine and you will get at least 3 very distinct collections...
      the animal, the car, and the game system, primarily.

      The problem is you have to ITERATE for it to be particularly useful...and that costs cpu time....I don't know if a search engine is gonna want to really invest that time.

      Frankly I'm surprised any of this is patentable, since I saw this at an academic talk like 6 years ago.

      -jef
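What jef describes is essentially the hubs-and-authorities (HITS) iteration; a compact sketch with a toy graph and a fixed iteration count:

# Hub/authority iteration as described above (Kleinberg-style HITS). The link
# graph is a toy example; real implementations run this on the neighbourhood
# of a query's result set.

def hits(links, iterations=20):
    pages = set(links) | {t for targets in links.values() for t in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score: sum of hub scores of the pages linking to you.
        auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
        # Hub score: sum of authority scores of the pages you link to.
        hub = {p: sum(auth[t] for t in links.get(p, ())) for p in pages}
        # Normalise so the scores don't blow up (this is the eigenvector flavour).
        for scores in (auth, hub):
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth

toy = {
    "links-page-1": {"hawking", "cern", "arxiv"},
    "links-page-2": {"hawking", "arxiv"},
    "hawking": {"arxiv"},
}
hub, auth = hits(toy)
print(sorted(auth, key=auth.get, reverse=True)[:3])   # best "authorities"
print(sorted(hub, key=hub.get, reverse=True)[:2])     # best "hubs"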

  • The way I see it, this would work extremely well to find the most obscure references and complete information about a subject, but would be pretty bad for general purpose searching (a.k.a. googling). A subject like "cosplay" certainly tends to create a community of pages, but search for something like "distributed computing" or "operating systems".
    If a search engine comes out of this, I think I'll first google for whatever I want, and if I can't find it/come out with too little info, I'd expand my search into this "communi-search"
  • It seems to me that this is why web rings were started. The other thing that concerns me is result speed. First you have to find a "good" page to start from and then follow its links (and we all know they can get off topic really fast). Then if that one doesn't link to good pages, start from another "good" page. Nasty cycle.

    I do believe it is a good idea but the person that thinks that all relevant pages link to more relevant pages has been taking more than harmless smoke breaks.

  • by Restil ( 31903 ) on Thursday March 07, 2002 @04:40PM (#3126825) Homepage
    Google pioneered the use of links to deduce pages' relevance. Its PageRank technology counts a link from site A to site B as a vote for B from A. But it does not take account of all the other sites to which A has links, as NEC's new technique does.

    I won't pretend to know all the inner workings of Google's search engine technology. But I believe that Google DOES care about other links from site A. This falls into the hub and authority model, which is defined recursively. A hub is a site that links to a lot of authority sites. An authority site is a site that is linked to by a lot of hubs. Basically, authorities provide the content, and hubs provide links to the content. In this example, B is an authority site, and A is a hub.

    The way the ranking works, is that if B is linked to by a large number of quality hub sites, then it has a respectively large quality rating. Likewise, if a hub links to a large quantity of high quality authority sites, then its quality will also be ranked highly as a result.

    This also allows Google to provide links to sites even if the search terms don't match the content of that site. A hub that links to a lot of sites about cars will relate cars to ALL the links regardless if the word "car" is included on the site that is provided.

    Of course, I'm not THAT familiar with Google. It's possible I'm full of bunk. But I'm pretty sure it works this way to some extent and that Google does pay attention to the hub-based links.

    -Restil
  • Here are a few papers that better describe the rank technology involved:

    http://www.cindoc.csic.es/cybermetrics/articles/v5i1p1.html

    http://www.scit.wlv.ac.uk/~cm1993/papers/2001_Extracting_macroscopic_information_from_web_links.pdf
  • Here [psu.edu] is the research working paper that goes into detail.
  • by Anonymous Coward on Thursday March 07, 2002 @04:43PM (#3126838)
    Since so many of the Adult sites seem to have their "Please leave now..." links pointed at disney.com or nickelodeon.com or something.... will they end up in the adult communities? :)

    Not that I would know...
  • Clustering (Score:5, Informative)

    by harmonica ( 29841 ) on Thursday March 07, 2002 @04:44PM (#3126840)
    Clustering pages is what other search engines like Teoma [teoma.com] are doing already.

    In a recent interview in c't magazine [heise.de], a Google employee (Urs Hölzle) said, when asked about clustering, that they had tried that a long time ago, but they never got it to work successfully. He mentioned two problems:
    - the algorithms they came up with delivered about 20 percent junk links for almost all topics
    - it's hard to find the right categories and give them correct names, esp. for very generic queries

    Of course, just because Google didn't get it to work properly doesn't mean nobody else can. But it's harder than it looks, and it's been known for quite a while.
  • by gnugnugnu ( 178215 ) on Thursday March 07, 2002 @04:47PM (#3126860) Homepage

    This sounds like a useful idea and could give us better directory systems, but its utility would be limited. I'm sure there will be people poking holes in this algorithm [corante.com] in no time. Slashdot [slashdot.org] has an odd mix of subjects loosely tied together. News for Nerds [slashdot.org] is not a very strict group. Classification and grouping is a hard problem. There is no clear black and white; there are many shades of gray.

    Interesting, but not as interesting as Google Bombing [corante.com]

    --
    Guilty [microsoft.com]

  • Routers (Score:3, Interesting)

    by Perdo ( 151843 ) on Thursday March 07, 2002 @04:53PM (#3126907) Homepage Journal
    The internet is the world's best source of information and while transport of that information is built in, organization of that information is not. We have only half an internet.

    We will really know what is out there on the net when Cisco includes a search function in their routers. Distributed searching. Access to over 90% of the world's data. Anonymous usage statistics. Person X searched for data (a) and spent the most time at www.example.xyz. Cross-reference it all and include hooks into TCP/IP v.x for cataloging search, usage and content statistics.

    A website might contain information about leftover wiffle-waffles. That website sends that same data 1000 times an hour to end users. I want the router to pipe up "1000 unique page views for leftover wiffle-waffles"; somewhere else a router says "500 unique page views for leftover wiffle-waffles". So when I do my search, I get 2 hits, most popular and second most popular.

    Why incorporate it into TCP/IP? What good is moving all that data if it is just a morass of chaos? Let that which transports it also serve to catalog it. Currently, user data's content is transparent to TCP/IP. But if I wanted my data to be found, I could enclose tags that would allow the router to sniff my data, ensuring my data was included in the next real-time search.
  • by Violet Null ( 452694 ) on Thursday March 07, 2002 @04:54PM (#3126915)
    Interesting article here at http://www.operatingthetan.com/google/ [operatingthetan.com] about how the Church of Scientology exploits google's ranking system.

    The basic gist is that Google flags pages as more important (or higher relevance) if they have more links pointing to them... so the CoS makes thousands of spam pages that point at its main pages. Google sees the thousands of links, assigns the main CoS pages a high relevance, and thus they're the first to come up in any Scientology-related search.

    The moral being, for any new cool search technique devised to help fetch more relevant content, there'll be someone out there looking for a way to defeat it.
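A simplified PageRank-style iteration (not Google's actual implementation) makes the effect easy to see: a cloud of spam pages pointing at one target lifts that target well above an honestly linked page:

# Simplified PageRank-style iteration, ignoring dangling-node redistribution.

def pagerank(links, damping=0.85, iterations=50):
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# One honest inbound link vs. a target with 50 spam pages pointing at it.
web = {"honest-fan": {"honest-page"}}
web.update({f"spam-{i}": {"target-page"} for i in range(50)})
ranks = pagerank(web)
print(round(ranks["honest-page"], 4), round(ranks["target-page"], 4))
# The spam-boosted target ends up roughly an order of magnitude higher.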
  • A majority of links may go to related information, depending on the type of page. One of my sites contains links to mostly related information, but there are links to humor (because we all need a laugh) and some links to Amazon. My [barbieslapp.com] other site [sorehands.com] contains my resume, which has links to companies I worked for. I also have some links to articles that I have written. Since this site covers more than one subject area, it may hold more true. Porn sites link to cyberfilters (Surf Patrol, Cyber Patrol, etc.), so you will not find many offsite links related to porn.


    So, this only holds true for focused sites. Using links, but then checking the links based on text, would be useful -- not links alone.

  • I don't really buy into the idea that linked pages will necessarily be related to the same subject. Look at sites like Slashdot or CNN, which link to a variety of pages on totally disparate subjects. If you applied transitivity, you'd end up with every connected page on the web being on the same subject. Page A links to B, so B is probably on the same subject as A. Page B links to page C, and therefore is probably on the same subject as B, and therefore A as well... Oops. The task of categorizing pages, unfortunately, may always have to be done by humans to be done well. This type of software might help in categorizing websites, but it won't replace wetware anytime soon.
  • by Fencepost ( 107992 ) on Thursday March 07, 2002 @04:58PM (#3126945) Journal

    So it's actually working on the basis of webs of related sites - not a novel concept, but useful.

    I suspect that some of the commercial knowledge management tools have been doing something much like this for some time, and TheBrain.com [thebrain.com] has had a product to manually build this kind of network of clusters for some time. The key thing about this is that with web indexing/cataloging the information needed to do the automatic linking is available.

    TheBrain.com seems to have a working demo of using it for the Web at WebBrain.com [webbrain.com] based on the Open Directory Project [dmoz.org]. It's not a great example because of display limitations that don't really let you see more than one cluster of information at the time, but it's one example of the general concept. Once you dig down in an area you can see how it shows links between related categories as well.

    Note: the demo above says it requires Java 1.1 and IE 4.01 or Netscape 4.07+; to bypass that test try here [webbrain.com]. It seems to work fine in Netscape 6.2, and will probably be OK in Mozilla if the JRE is available.

  • by Lumpy ( 12016 ) on Thursday March 07, 2002 @05:02PM (#3126965) Homepage
    How about displaying hits as dots in a 3D image, with the zero point of XYZ being the absolute most significant, and then spreading the hits out from there? That way we can zoom in on what interests us in the search.

    I.E. search for linux apache router

    linux is one axis, apache is another and router is a third. If the pages are relevant in that context then the closer to zero they will show up, while "linux apache donuts" will resolve close to zero on the X and Y but be way out on the Z axis.
    • Ug, a 2D [map.net] web directory is confusing enough.
    • Your idea is really interesting. You would have to specify which three parameters (or group the parameters) to use for the graphing. You would also need a way to visualize and rotate the data.

      One of the nice things about Google and other current search engines is that you can easily look at the context in which the search term occurs and determine if the link is relevant. I think this would be harder to do in 3D. It would be nice if you were able to weight your search terms (scale of 1-10?) on Google. That might accomplish the same goal as what you want without the 3d niftyness.

    • How is the graphical display better than just calculating the distance each hit is from the origin, and using that distance as a ranking metric? That would give you exactly the set of points closest to the origin. If one of your search terms is less important to you (meaning you'd be willing to look at pages further out on that term's axis), then maybe what you need is a way to specify that that particular term should have less effect on the ranking. That approach would be extensible beyond three search terms, as well.

      What I'd really like to have is the ability to specify that terms need to be near one another within the document. Sometimes using quotes to delimit a phrase works, but often the words can appear in various orders and various ways, but if they're all close together there's a very high chance that the text is discussing the stuff I'm interested in.
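The distance-from-origin ranking this reply suggests fits in a few lines; the relevance numbers and weights below are made up:

# Treat each search term as an axis (0.0 = perfect match, 1.0 = term absent),
# give each term a weight, and rank by weighted distance from the origin.

def weighted_distance(relevance_by_term, weights):
    return sum(weights[t] * relevance_by_term[t] ** 2 for t in weights) ** 0.5

weights = {"linux": 1.0, "apache": 1.0, "router": 0.3}   # "router" matters less to me
pages = {
    "howto.example":  {"linux": 0.1, "apache": 0.2, "router": 0.1},
    "donuts.example": {"linux": 0.1, "apache": 0.1, "router": 0.9},
}
for name, rel in sorted(pages.items(), key=lambda kv: weighted_distance(kv[1], weights)):
    print(name, round(weighted_distance(rel, weights), 3))
# howto.example ranks first; nothing here is limited to three terms.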

    • There is quite a bit of research out there suggesting that most people find 3D navigation impossibly hard to understand/use. Here are some relevant links:

      Summary of book on Web Usability [webreference.com]

      Why 3-D navigation is bad (People aren't frogs) [useit.com]

      Not to mention, how do you display such things to a reader who is blind or has any other type of disability... the list goes on. Beautiful idea, but not very good in practice.
    • and we can add another dimension to the fray..

      color to make a 4th,5th and 6th dimensions in it. R,G,and B components can also be added... giving you a 6d search engine... not only are the ones closest to center what you are after, but the ones that are white. (invert it for the politically correct people)
  • This is not new work (Score:5, Interesting)

    by Anonymous Coward on Thursday March 07, 2002 @05:05PM (#3126984)
    This work is not new. In fact I submitted my undergraduate thesis on this topic. The roots of this work are really in citation analysis, where the idea is that references that are highly cited are high-quality references (this is the idea that Google is built around). Extending this to the web, a "reference" can be thought of as a "link" and you can generalize the hypothesis to: "similar works link to each other," and therefore you should be able to find communities of similar documents by following links within documents.

    Intuitively this seems reasonable, and in practice this is often the case when there is no conflict of interest for a document to link to another document (as in the case of researchers linking to other works in their field). Yet often this is not the case when there *are* conflicts of interest (a pro-life site will probably not link to a pro-choice site; BMW will probably not link to Honda or any of its other competitors). Therefore, since the truth of the hypothesis that "similar documents link to each other" is not clear, I worked to test this very idea.

    To do this I used the Fish Search, Shark Search, and other more advanced "targeted crawling algorithms" that take the connectivity of documents into account (as is discussed in the Nature article), but these algorithms often go further than just using the link relationship by taking the contextual text of the link itself, as well as the text surrounding the link, into account when choosing which links are the "best ones" to follow in order to discover a community of related documents in a reasonable amount of time (you'd have to crawl through a lot of documents if documents have as few as, say, 6 links per page on average! Choosing good links to follow is crucial for timely discovery of communities). The conclusion of my thesis was that it is (unfortunately) still not clear whether the hypothesis holds. I only did this work on a small subset of web documents (about 1/4 million pages), so perhaps a better conclusion would be reached by using a larger set of documents (adding more documents can potentially add more links between documents in a collection). What I did discover, however, was that if document communities do exist, you have a statistically good chance of discovering a large subset of the documents in the community by starting from any document within the community and crawling to a depth of no more than 6 links away from the starting point. (This turns out to be useful to know so that your crawler knows a bound on the depth it has to crawl from any starting point.) Moreover, if you have a mechanism for obtaining backlinks (i.e. the documents that link to the current document) you can discover even more of the community...
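The depth-bounded discovery described here is easy to sketch (toy link data; backlinks are treated as just another kind of neighbour, and nothing here implements the Fish/Shark link-scoring heuristics):

from collections import deque

# Starting from one page in a community, fan out along links and backlinks,
# but never more than max_depth hops from the start.

def discover(outlinks, backlinks, start, max_depth=6):
    found = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        depth = found[page]
        if depth == max_depth:
            continue
        neighbours = set(outlinks.get(page, ())) | set(backlinks.get(page, ()))
        for nxt in neighbours:
            if nxt not in found:
                found[nxt] = depth + 1
                queue.append(nxt)
    return found

outlinks = {"paper-a": {"paper-b"}, "paper-b": {"paper-c"}, "paper-d": {"paper-a"}}
backlinks = {"paper-a": {"paper-d"}}   # who links *to* paper-a
print(sorted(discover(outlinks, backlinks, "paper-a", max_depth=2).items()))
# [('paper-a', 0), ('paper-b', 1), ('paper-c', 2), ('paper-d', 1)]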
  • by Anonymous Coward
    ISI [isinet.com] has been doing this for years with their databases. You look at a research paper, and jump around by what it cites and what cites it. It's good stuff, helps you find research that's related to what you're doing that you'd have never thought to actually search for.

    The idea predates Google, it probably predates you. They did it in print, way back when.

  • What I'd really like to see is a search engine that can differentiate (reasonably well) between sites with information and sites that are trying to sell you something. It seems that whenever I'm looking for a good info page about X, all I can find is someone trying to sell me X, and vice versa.

    I would imagine that using modern search engine techniques, one would be able to determine what commercial pages "generally" look like, and what informational pages "generally" look like, and categorize appropriately. If you used a learning neural network, you could even accept user ratings on specific search results and use that to fine-tune the algorithm.
  • I actually came up with a similar idea 3 years ago. Not that anyone will believe me, but kudos to the people who actually put some work into it instead of me sitting on half an idea.
  • Instead of getting a porn link in every search, now you get counter links and online casino links, this is a great improvement.

    I guess when we get bored though we can search for search engines and watch the system grind to a halt.
  • How long before someone claims that organizing and using information which relates various sites this way is some sort of privacy violation?

  • An interesting technique I worked on many years ago (as a hobby, online databases seem to be a habit of mine) was to take a large collection of pages and effectively 're-link' them based on their content, adding a section at the top of the page giving links to perhaps the top 20 other pages that 'seemed' to be similar by sharing a large number of the more unusual words in the page.

    This works surprisingly well after some tuning, and can actually be generated reasonably efficiently; however, I never tried it on more than about 100k pages of not totally dissimilar material.

    This technique, which specifically ignored the links in the page, can often help when you find a page not quite on the subject you are looking for. It is very interesting to see what 'key words' can link pages; often you would never think of using them as search words, but they are very obvious after the fact.

    When using links to cluster pages, you also need some form of supporting analysis of the content to at least try to tell whether the pages are related at all, otherwise the clusters grow too large to be useful.
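A rough sketch of the "re-link by shared unusual words" approach: weight words by their rarity across the collection (IDF-style) and link each page to the pages it shares the rarest words with. The pages here are made up, and this weighting is only one plausible choice, not the poster's exact tuning:

from collections import Counter
from math import log

def rare_word_links(pages, top_n=2):
    docs = {name: set(text.lower().split()) for name, text in pages.items()}
    doc_freq = Counter(word for words in docs.values() for word in words)
    n = len(docs)

    def similarity(a, b):
        shared = docs[a] & docs[b]
        # Rarer shared words count for more; words on every page count for nothing.
        return sum(log(n / doc_freq[w]) for w in shared)

    links = {}
    for name in docs:
        others = sorted((o for o in docs if o != name),
                        key=lambda o: similarity(name, o), reverse=True)
        links[name] = others[:top_n]
    return links

pages = {
    "moose-study":  "balsam fir browsing by moose in newfoundland forests",
    "fir-disease":  "fungal disease of balsam fir stands in newfoundland",
    "casino-promo": "best online casino bonus offers in newfoundland",
}
print(rare_word_links(pages, top_n=1))
# The two fir pages pick each other; the casino page shares only common words.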
  • by augustz ( 18082 ) on Thursday March 07, 2002 @05:27PM (#3127126)
    Why do all the fanboys who swallow stuff that has yet to actually prove worthwhile in the real world mod comments lauding this stuff up?

    It reminds me of all the graphics chip makers, computer chip makers, heck, even zeosync with their incredible breakthroughs. 90% of the time, when anyone takes a hard look at it it turns out to be a waste of time and money.

    So, before proclaiming this the "next generation over Google," why not check to make sure Google hasn't already thought of it and discarded the idea. Or that it won't lead to stupid circular clusters; 90% of the time I'm not interested in partner sites, but competitor sites. Is Slashdot in the Microsoft cluster?

    And above all, stop the judgement calls like "this is the next generation" unless you've got some special insight and qualification to make that call.
  • by John Harrison ( 223649 ) <johnharrison@[ ]il.com ['gma' in gap]> on Thursday March 07, 2002 @05:45PM (#3127210) Homepage Journal
    I will refer you to the Clever project [ibm.com] at IBM. I first read about this years ago when Google was still a project at google.stanford.edu.

    Clever does Google one better by separating the results of searches into "hubs" and content. Hubs are sites with lots of links on a particular subject. Content sites are the highly rated sites linked to by the hubs.

    I thought it was a very interesting concept and I am surprised that it was not commercialized. Of course, IBM is in the business of buying banner ads rather than selling them. They could always do like /. and OSDN and mostly run ads for their own stuff though....

  • This model doesn't favor businesses at all. If there's one thing a commerce-oriented website won't do, it's put a lot of links to its competitors. Depending on your point of view, this is a great thing. Unfortunately, many businesses believe that advertising means getting in your face as much as possible, that there's no such thing as bad press, etc.

    Amazon.com is an example of this: I bought a pair of speakers from them several months ago, and yet every time I go there, they helpfully inform me that they have these great new speakers on sale! Buy now! I suppose it works to recommend similar books and CDs, but when someone buys speakers, they usually stop being in the market for speakers after that.

    Anyway, I don't know why no-one has thought of making an e-commerce-only search engine. I think there's a clear distinction between those two types of searching that warrants a separate engine. Sometimes you want to buy stuff, and sometimes you just want information. When you are doing one, the other just gets in the way, and disguising advertising as content like AOL/MSN/AltaVista do just discourages you from using their services. Obviously, web-based businesses have a long way to go before they actually realize, "Oh, Internet users don't like to be tricked! Maybe if we were straight-forward with consumers they'd be more trusting of us."

  • by Com2Kid ( 142006 ) <com2kidSPAMLESS@gmail.com> on Thursday March 07, 2002 @06:41PM (#3127558) Homepage Journal
    Anybody else here (well, obviously, likely quite a few people) ever browse around foreign (to me at least. :P ) language sites?

    The culture that exists therein is definitely quite different.

    Japanese sites are even MORE self-referencing than American sites. This trend has taken off on American sites though in the form of cliques, which themselves tend to lie outside of the sites that many of us /.'ers typically frequent.

    Seriously though, in Japan it seems that sites actually have others ask permission to link to them! (As an aside, whenever that topic is brought up here on /. people tend to get all freaked out about it. ^_^ )

    This obviously creates a VERY different social structure that heavily alters the dissemination of information, not to mention the way that sites are linked together.

    Here in the states (or any other culture that has pretty much a free linking policy) it is common to say "oh yah, and for more info go to this site over here and also this site here has some good information and and and . . . . "

    Anybody who reads www.dansdata.com knows how he (uh, Dan obviously. :) ) likes to sprinkle relevant to kind of relevant links throughout his articles and reviews. Almost all of the links are VERY interesting and much can be learned from them (he does link to e2.com quite a bit though. :P ) but that many of the links are to further outside resources on the same topic.

    (such as a LED light review having links to the Online LED museum)

    In a culture where linking is not so free, I would think that there would be more of a trend towards keeping a lot of the information in-house, so to speak, and thus at the very least the biases that the search engine uses to judge the relevance of links would have to be altered a bit.

    Links would have to be given a higher individual weight, since there would be a larger chance of them being on topic.
  • by Quixote ( 154172 ) on Thursday March 07, 2002 @07:22PM (#3127743) Homepage Journal
    Prof. Jon Kleinberg [cornell.edu] of Cornell did this work many years ago. IIRC, he was the first to come up with the idea (first published in 1997). Check out his list of publications [cornell.edu] for the work (and related stuff).
    Disclaimer: I happen to know him, but this is not biased.
  • by MbM ( 7065 )
    Does this mean that doubleclick must be a wealth of information? There certainly is a number of sites linking to it...
  • In high school (10 years ago) we had an assignment about how to map cyberspace. My partner and I decided that the best way was a Venn diagram, as opposed to a physical map of where machines actually were. That is, map the information (then mostly on ftp, gopher, and non-internet BBSs) by subject. You would have a "sports" circle and a "medicine" circle, and the overlap would be sports medicine. As a hobby, I've been trying to make such maps ever since.

    Later, in college, I tried to model a website as a physical system with the links acting like springs. You basically make up some formula causing different web pages to repulse each other and then make links give an attractive "force" that grows with the "distance" between web sites.

    This gives you a system of equations that you can solve for an equilibrium point, giving an "information distance" between web sites. This will tend to group websites on similar subjects together because they tend to link to each other... but then again, who cares how close related sites are to each other; it should still be possible to get a cool picture with this data.

    But the stopping point I came up against was how to represent these information distances as a space. I couldn't figure out how to calculate the dimensionality of the space. Was it 2-D, 3-D, or 400-D?

    Here's an example of why this is a problem: take four points that are all 1 meter from each other. In 3-D these points form the corners of a tetrahedron, but you cannot draw these points in 2 dimensions. If I have N equidistant points, I need a space with at least N-1 dimensions.

    So how many web pages can be at the same "information distance" from each other? How many dimensions are needed to map the web this way?

    Maybe this question only interests me, but I find it fascinating.
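One standard answer to the dimensionality question is classical multidimensional scaling: double-centre the squared distance matrix and count the significantly positive eigenvalues. The sketch below (made-up distances) reproduces the four-equidistant-points example, which indeed needs three dimensions:

import numpy as np

# Classical MDS: the number of significantly positive eigenvalues of the
# double-centred matrix is roughly the number of dimensions needed to embed
# the given pairwise distances.

def embedding_dimensions(distances, tolerance=1e-9):
    d = np.asarray(distances, dtype=float)
    n = d.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * centering @ (d ** 2) @ centering       # double-centred Gram matrix
    eigenvalues = np.linalg.eigvalsh(b)
    return int(np.sum(eigenvalues > tolerance)), np.sort(eigenvalues)[::-1]

# Four mutually equidistant points (the tetrahedron example above).
four_equidistant = np.ones((4, 4)) - np.eye(4)
dims, spectrum = embedding_dimensions(four_equidistant)
print(dims)                      # 3
print(np.round(spectrum, 3))     # three equal positive eigenvalues, one zero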

  • ...about 1/2 to 2/3 of the way through a book I am reading called "Emergence: The Connected Lives of Ants, Brains, Cities, and Software" by Steven Johnson.

    At the point where I am reading, Johnson is discussing how emergence is closely tied to feedback, and without feedback, emergence doesn't occur. Thus, cities and businesses tend to be emergent (is that a word?) entities, while the web typically isn't. Because links on the web tend to be "one way", and information isn't communicated back, he argues that emergence can't take place.

    Someone else has made a post here discussing how on Japanese web sites, it is expected that before you link to a site, you ask the operator of the site permission. The poster then says that for American sites, it is more of a "sprinkle willy-nilly" (my words) type reference, without regard for the operators of the sites. However, at one time, netiquette was indeed to ask the operators to "swap links" - I remember doing this quite often. But I think what happened is that when businesses and the "ignorant masses" came online, less link-swapping occurred because many times you would email the admin of the machine, and never get a response. The feedback link was broken.

    Johnson uses this argument to further his statement that because of this, the Web won't be emergent. But will the Japanese web spawn emergence?

    Johnson then goes on to talk about weblogs (though he doesn't use that term), referencing /. specifically - and noting how the whole rating and karma system gives rise to feedback, and may allow such discussion groups to become, over time, emergent. I haven't read anything yet about p2p in the book (the way the book reads, it seems like it was written or originally published longer ago than it seems), but I tend to wonder if emergence will be found there...

    These kinds of search engine technologies might help make the web turn around, and allow it to become emergent. I don't know if such a thing would bode well for humanity, but it would be very interesting to see in practice (I highly recommend the book I referenced above if you are into this kind of thing - it makes an excellent sequel of sorts to "Out of Control")...
