Describing The Web With Physics 133

Fungii writes: "There is a fascinating article over on physicsweb.com about 'The physics of the Web.' It gets a little technical, but it is a really interesting subject, and is well worth a read." And if you missed it a few months ago, the IBM study describing "the bow tie theory" (and a surprisingly disconnected Web) makes a good companion piece. One odd note is the researchers' claim that the Web contains "nearly a billion documents," when one search engine alone claims to index more than a third beyond that, but I guess new and duplicate documents will always make such figures suspect.
This discussion has been archived. No new comments can be posted.

  • Doesn't it make sense that the web would follow patterns just like a living organism? What are the basic motivations? When millions of developers and individuals act in their own interest, a basic pattern evolves. The question is, could an "intelligent" approach to design produce a better result, or does this naturally evolving creature continue to evolve, with inefficient parts dying off until an unimaginable result comes about, far more complex and efficient than we could have ever come up with?
  • According to a recent study by Steve Lawrence of the NEC Research Institute in New Jersey and Lee Giles of Pennsylvania State University, the Web contains nearly a billion documents. The documents represent the nodes of this complex network and they are connected by locators, known as URLs, that allow us to navigate from one Web page to another.
    The Lawrence and Giles study was published in 1999, so stop picking on the 1 billion number... it's quite out of date. Web researchers know this already.

    The important thing from that paper is what it says about the growth of the web; and from Kumar's bowtie-theory paper, we also think that most of the web is growing in places where we can't see it.

  • One odd note is the researchers' claim that the Web contains "nearly a billion documents," when one search engine alone claims to index more than a third beyond that

    I have to apologize for that one. I was VPN'ed in the other day and I opened MS Explorer on "\\internet". I accidentally selected "www" and hit CTRL-C, CTRL-V, CTRL-V. I guess the "Copy of www" and "Copy (2) of www" tripled the document count on some search engines. My bad.
  • Well, it's an interesting read... but most of the technical stuff is kind of glossed over. I'm sure the graph theory behind it makes sense if you know the math.

    The article mentions a 0.05% sample... is that statistically significant? Not to mention the fact that 'web page' is a vaguely defined term (e.g. static versus dynamic) -- this makes me doubt that this report contains any type of 'real' conclusions.

    However, I suspect this type of research must be really juicy for the big search engine companies (e.g. Google). I especially like the idea of giving the user a feeling of spatial orientation when browsing the internet (but what would that mean??)... in the end, I'm afraid that the internet/web/whatever is simply changing too fast -- by the time we analyze it enough to determine its topology and organization, something new will be replacing it. Note that the data in this article is already 2 years old... the web has probably at least doubled in size by now.

    To really understand the internet, statistical mechanics is not going to cut it -- we need better tools, adaptive ones that learn the new rules without being reprogrammed... ;)

    • The required sample size can be determined based on desired precision and accuracy, with the general shape of the probability distribution playing a role as well. It doesn't actually matter how big the population is, as long as we know how it's distributed so that we can factor it into the calculations.

      In this case, don't think of it as a .05% sample, but rather as a sample of 500000 data points. Then consider that national surveys can get accurate results with only a few thousand data points.

      In any case, it is entirely possible for their results to be statistically valid. (There's a rough back-of-the-envelope check of this at the end of the thread.)
    • I especially like the idea of giving the user a feeling of spatial orientation when browsing the internet (but what would that mean??)...

      I'm reluctant to post this without having had more time to revise, but one way of spatializing the data is by making it into more familiar terrain. Again, still early stages, but for an example, how about the terrain described by the hyperlinks surrounding Slashdot in a typical week [washington.edu].
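
      A rough back-of-the-envelope check of the sample-size point above, in Python. This is only a sketch: the 500,000-page / 0.05% figures are taken from the comments, and it assumes a simple random sample, which a web crawl certainly is not (see the sampling-bias comment further down).

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from a
    simple random sample of n points (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 500_000):
    print(f"n = {n:>7}: +/- {margin_of_error(n) * 100:.2f} percentage points")
# n =    1000: +/- 3.10 percentage points
# n =  500000: +/- 0.14 percentage points
```

      So, purely on sample size, 500,000 pages is more than enough; the real question is whether the sample is representative.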

  • Physics or math??? (Score:2, Interesting)

    by Eythor ( 445730 )
    I'm confused. The subject of the article is "Describing the Web with Physics" while, to me, it looks like Describing the Web with Graph Theory or Mathematics. Is there not a distinction between math and physics?
    • It's physics if the researchers are physicists :-) There's a lot of overlap, since physics is inherently a very mathematical subject. But the particular ideas concerning scale-invariant systems (power-law behavior of various sorts), dynamical behavior, and percolation networks come out of statistical mechanics, a branch of physics. Specifically, a lot of this was developed starting with the theory of critical phenomena in the 1970's. The move to self-organized criticality (sandpile, wildfire, earthquake etc. models) and more generally now "complexity theory" has spread to quite a number of fields from biology to economics, but there are still a lot of physicists involved in this kind of research.
  • In the "Outlook" section at the bottom of the article, they express an interest in studying the "dynamic" properties of such graph systems. That is, how they pass messages between nodes, how the messages get routed, which nodes are passing the important/unimportant messages, and whether there is a cyclic pattern to how messages and replies get passed.

    Does anyone know of any studies on this subject?
  • For a proper answer we need a full map of the Web. But, as Lawrence and Giles have shown, even the largest search engines cover only 16% of the Web. This is where the tools of statistical mechanics come in handy: they can be used to infer the properties of the complete network from a finite sample.

    Tread softly here, Grasshopper: the very fact that you can only easily see 16% of the Web means that you must expect your sample to be strongly biased, hence not representative of the Web in its entirety. Just as statistical resampling of the census would require much more care than a political entity can usually bring to bear, so does attempting to extrapolate web characteristics from a random sample.
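
    To make that bias concrete, here is a toy illustration in Python (my own sketch, with a made-up heavy-tailed in-degree distribution, not data from the article): pages you reach by following links are, on average, far better connected than pages picked uniformly at random, so the easily crawled part of the Web is not a fair cross-section of the whole.

```python
import random

random.seed(5)

# Synthetic in-degrees for a million "pages": heavy-tailed, web-like.
pages = [int(random.paretovariate(1.8)) for _ in range(1_000_000)]

# A uniform sample of pages vs. a sample reached by following random links
# (a page turns up in proportion to how many links point at it).
uniform = random.sample(range(len(pages)), 10_000)
by_link = random.choices(range(len(pages)), weights=pages, k=10_000)

avg = lambda idx: sum(pages[i] for i in idx) / len(idx)
print("average in-degree, uniform page sample  :", round(avg(uniform), 1))
print("average in-degree, link-following sample:", round(avg(by_link), 1))
# The link-following sample lands on the well-linked pages, which is
# roughly the bias a crawler-based 16% sample has.
```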

  • hmmm (Score:3, Funny)

    by sewagemaster ( 466124 ) <(moc.liamg) (ta) (retsamegawes)> on Monday August 06, 2001 @03:49AM (#2122573) Homepage
    it's all about the right-hand rule (with F, the normal force, represented by the middle finger), friction, and porn!!

    *grin*

  • Looking over the article, it seems most of their research is about web pages and documents. Now to me that seems like a gross oversight. What about all those hosts out there that don't accept Code Red (or, in a perfect world, use Apache :P ), and aren't part of any traceroute? I'm sure most of the end users on the internet don't have services running that would put them on these graphs, yet we aren't included in the research. Just my 2c
  • There have now been several studies asserting that a concentrated attack on just the top 3% (or some other low percentage) of the major hubs / backbones of the internet, in terms of connectivity, would result in some critical failure scenario such as fragmentation into small isolated clusters (there's a toy simulation of this at the end of the thread). But isn't this type of condition valid for a lot of systems besides the internet?

    Consider this example, though it isn't meant to be analogous to the internet in any way. What if the President of the USA, the Vice President, the entire Cabinet, the entire Senate, the entire House of Representatives, etc. etc. were simultaneously assassinated? Can you even imagine ensuing chaos? You can even throw in all the state Governors, whatever, but that still wouldn't come out to more than the top 0.0004% of the country's population, in terms of "political importance" or some other metric. Is this scenario plausible or worth worrying about? You decide.

    - The One God of Smilies =)
    • If you really want to worry about it, Tom Clancy has written "Executive Orders" for you, which starts with the scenario as described.
    • What if the President of the USA, the Vice President, the entire Cabinet, the entire Senate, the entire House of Representatives, etc. etc. were simultaneously assassinated?

      In networking terms, I don't believe loss of the top officials would have to make a bit of difference to the operations of the country. The strength and stability of the government lies in the well-established and huge bureaucracy. That complex system really needs no supervision (though it could use a kick in the ___).

      It's interesting that the correlation with the internet breaks down because the President et al are not the hubs of communication. They aren't part of the information network of government; they sit atop it, but separate from it. (They're more like ICANN than Google?)

      Unfortunately, humans are not routers, and I wouldn't even try to predict what the emotional effects of mass assassination would be on the functioning of the government or the nation.

    • What if the President of the USA, the Vice President, the entire Cabinet, the entire Senate, the entire House of Representatives, etc. etc. were simultaneously assassinated? Can you even imagine ensuing chaos?

      It wouldn't be that bad, actually. One of the major strengths the US government has is a fairly clear line of succession; it's always obvious who is in charge in a given situation. And really it isn't even important who specifically is in charge, just so long as someone is. I doubt we'd really be too worried about it anyway; we'd be far more concerned about what killed them.

      The point is, though, that this small minority is also under the best protection. You estimate the number to be 0.0004% of our population (a little over a thousand people); invert that to say they have 250,000 times better security than the rest of us. As this applies to the Internet, we just need to make sure that our main routers have the same level of protection. That rule of thumb makes sense to me: if there are 10,000 machines behind a connection, then it should be 10,000 times harder to take down that connection than a single machine. I know it doesn't sound like a good metric, but it's an interesting thought experiment.
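
    Coming back to the hub-attack claim at the top of this thread, here's a toy simulation in Python (my own sketch, not code from any of the studies): grow a preferential-attachment network, then compare knocking out a random 3% of nodes against knocking out the best-connected 3%, measuring the largest connected piece left over.

```python
import random
from collections import defaultdict, deque

random.seed(2)
N = 5_000

# Grow a toy scale-free network: each new node attaches to two existing
# nodes chosen preferentially by degree (Barabasi-Albert style).
edges, endpoint_pool = [(0, 1)], [0, 1]
for node in range(2, N):
    for other in set(random.choice(endpoint_pool) for _ in range(2)):
        edges.append((node, other))
        endpoint_pool += [node, other]

def largest_component(removed):
    """Size of the largest connected component once `removed` nodes are gone."""
    adj = defaultdict(list)
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

k = int(0.03 * N)
hubs = set(sorted(degree, key=degree.get, reverse=True)[:k])
random_picks = set(random.sample(range(N), k))

print("largest component after random failures :", largest_component(random_picks), "of", N)
print("largest component after targeted attack :", largest_component(hubs), "of", N)
# Random failures barely dent the network; taking out the hubs does far
# more damage. Real Internet/web maps have even heavier tails than this
# clean toy, so the studies see correspondingly harsher fragmentation.
```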

  • What about multiple links from a single page?

    For example, almost every slashdot page links to Goatse.cx more than 20 times...

  • I mean, it's like, wherever you look, the same patterns pop up. At the microscopic level, or the macroscopic level. Amazing.

    It's this sort of technical link that keeps me coming back to slashdot, even though it's not as good as it used to be, and it no longer seems to attract the 31337 intelligent posters of the good old days.

    Oh well, nothing lasts forever.

    • I believe that Mob Psychology might model the web better, particularly the growth patterns of the web; however, I don't have any studies to prove it yet.

      It's interesting, though, that every academic out here has tried to comment on completely unrelated fields using the language of his own area of expertise. I've seen studies by mathematicians who claim to be able to model the web, and even industrial design students who claim that the design-to-maturity process of a network of websites (a small subset of the web) is identical to the processes championed by the industrial designers who led the way in Japan in the late 1970s.

      This seems to suggest (to me anyway) that those who engage in this cross-discipline analysis are somehow unsatisfied with their chosen field and are trying to latch onto an area of study that is popular at a particular moment in time.

      --CTH
    • ... and your comment contributes greatly to that ideal.
    • Kind of like a fractal... the complexitivity (sp) goes up the harder you look into it.

      -Tim
  • by Sun Tzu ( 41522 ) on Sunday August 05, 2001 @10:48PM (#2163204) Homepage Journal
    ...In the single Continuum of Chaos [starshiptraders.com] game. Seriously, the game is played in a universe consisting of one million sectors, each of which corresponds to a web page -- with multiple sub-pages. Google and the other engines can't really index it because it requires a log in. Further, even if they did log in they would run out of Antimatter long before they got through even a tiny fraction of the pages.
  • It just seems appropriate that a Physics site called PhysicsWeb would have an article about Physics and the Web, don't you think?
  • 1,000,000,000 urls (Score:4, Insightful)

    by grammar nazi ( 197303 ) on Sunday August 05, 2001 @10:50PM (#2163212) Journal
    The story mentions "nearly 10^9 URLs", so duplicate documents would be counted multiple times.

    Most of their research seems to be on 'static pages'. They state that the typical path across the web is about 19 clicks (similar to the way that people are connected through 5-6 acquaintances); the article's logarithmic formula for this is worked out at the end of the thread. I believe that as the ratio of dynamic to static content on the internet increases, this will increase the total number of clicks it takes to get from one site to the next. For example, I could create a website that dynamically generates pages, where the first 19 pages are all contained within my site and the 20th page generated contains a link to Google.

    The metric functions that they use are good for randomly connected maps, but they don't apply to the internet, where nodes are not randomly connected. Nodes cluster into groups depending on topic or category. For example, one Michael Jackson site links to other Michael Jackson websites.

    • 100 million URLs on the net, 100 million URLs
      Take one down, pass worms around,
      99 million URLs on the net...

      Xix.
      • 100 million URLs on the net, 100 million URLs Take one down, pass worms around, 99 million URLs on the net...

        99 million, 999 thousand, 999 URLs on the net, surely?

    • Do all these wild guesses about the number of pages include such wonders as all of the pages in Yahoo and Google's Web Directory? No doubt these would boost the page counts a little.

      Especially if Google is caching someone else's content pages.

      Ian.
    • The metric functions that they use are good for randomly connected maps, but they don't apply to the internet, where nodes are not randomly connected.

      Actually, the article describes the finding that the connectivity of nodes on the web and Internet follows a power-law distribution instead of the Poisson distribution one would expect from a randomly connected graph. Maybe we should read beyond the introduction of the article before we post?
    • The story mentions "nearly 10^9 urls", so duplicate documents would be counted multiple times.

      <pedant>Actually, it might be more accurate to say that if they're talking about "documents", then they are talking about 10^9 URIs, not 10^9 URLs. A URL merely identifies the "site", while a URI identifies a specific document. There's a difference [w3.org].</pedant>

      Or, then again, who cares?
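
    For the curious, the 19-click figure quoted in this thread comes from a logarithmic fit in the article; as I recall it is roughly d = 0.35 + 2.06 * log10(N) for a Web of N documents, so the click count grows only very slowly as the Web grows. A quick check in Python (just plugging in the quoted fit, not re-deriving it):

```python
import math

def typical_clicks(n_pages):
    # Logarithmic fit for the average shortest path ("clicks") between
    # two pages, as quoted from the physicsweb article for the 1999 Web.
    return 0.35 + 2.06 * math.log10(n_pages)

for n in (8e8, 1e9, 1e10):
    print(f"{n:.0e} pages -> about {typical_clicks(n):.1f} clicks")
# 8e+08 pages -> about 18.7 clicks
# 1e+09 pages -> about 18.9 clicks
# 1e+10 pages -> about 21.0 clicks
```

    So even a tenfold bigger Web only adds a couple of clicks, which is why the billion-versus-1.3-billion quibble above doesn't change the picture much.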

  • ...is that no search engine covers more than 16% of the web. Are the authors confusing the "hidden web" (i.e. web-based databases, such as newspaper articles) with network topology, which is what they are concerned with? I would have thought that Google covers most of the web (in terms of topology), not 16% of it. Michael
    • by Anonymous Coward
      You are confusing the two. The WWW is the documents, etc. The Internet (which is simply the DARPA protocol suite -- the underlying routing protocols and physical connectivity) is simply a transport mechanism. It can carry anything; it just so happens that the web is the most popular thing it carries (along with email).
  • The Code Red thing was interesting in the respect that, if it had worked, it would have revealed just how *evil* homogeneity is. In nature, it leads to plagues and/or similar disasters.

    It turns out that computing may prove similar.

    Different is good!
    • This is certainly an idea that has been around for a while. Consider a 1989 interview with Clifford Stoll ("The Cuckoo's Egg") in The Boston Globe:

      THE BIG PUSH IN THE COMPUTER INDUSTRY IS TOWARD STANDARDS. DO YOU THINK STANDARDS HELP OR HURT SECURITY?

      Viruses spread because of standardization. If a virus gets into one hole in a standard computer operating system, it's everywhere. Diversity in computing causes survival just like its biological counterpart.

      Of course, the solution is to make every system idiosyncratic. And (also) of course, this is not anything like a reasonable solution to the problem of security. Rather, we should view networked computing as a whole system--a system that could not even exist but for standardization--and attack the real problems: exploitable weaknesses in widely used software.

    • Stoll made the same point many years ago in The Cuckoo's Egg, when discussing the Morris Internet Worm.
  • One odd note is the researchers' claim that the Web contains "nearly a billion documents," when one search engine alone claims to index more than a third beyond that, but I guess new and duplicate documents will always make such figures suspect.

    Also remember that most search engines index only HTML pages and are probably only counting said pages in their "pages indexed" figures. The web CAN contain other media that may be considered documents. The obvious one is PDF.
  • A few days ago, /. had a story on how big businesses wanted to get rid of the current *open* Internet for an allegedly better version: The Death of the Open Internet. [slashdot.org]

    The problem with getting rid of the current Internet is that we would probably lose the advantage of having a scale-free topology ... something the PhysicsWeb article discusses at length. Scale-free topology is one of the key factors in keeping the current Internet stable and relatively fault-tolerant even as the number of users has grown exponentially. I doubt that those who want to replace an open Internet would create a replacement that would incorporate this type of scale-free topology.

  • LAIN (Score:1, Insightful)

    by Schezar ( 249629 )
    "The number of nodes in the wired is rapidly approaching the number of cells in the human brain." Or something like that.

    What will happen as the net becomes more and more like a brain? Can it have a soul?

    Or worse, can it comprehend the garbage we use it for? ;^) "Sorry Dave, but I cannot allow you to download that pr0n..."

    • Can it have a soul?

      Would a single cell know whether the whole thing has a soul or not?
    • This is an idea that has certainly been discussed before, but the answer is "almost certainly not".

      First of all, the formation of a scale-free network was driven by measurable "evolutionary" pressures for fault tolerance. In the absence of some similar evolutionary advantage to developing a global consciousness, it doesn't seem likely that it would happen spontaneously.

      On the other hand, if some (possibly unintentional) goal were aligned with that, I wouldn't be totally surprised if, through maintenance and updates, some form of consciousness arose.

      Except: characteristic time scales on the internet are very large compared to connections within the brain. Any large-scale behavior, including consciousness, would be expected to be slower than a human brain by orders of magnitude.
      • The characteristic timescales are remarkably similar when you think about it. The brain has massively parallel transmissions, but each signal is sent with delays measured in the low milliseconds and each "message" contains a few bytes at most. Typical ping times between servers with decent connectivity are going to give you delays around 10ms, and the average packet size is ~1.6kB. The only real difference is that the brain currently has orders of magnitude more nodes than the internet. But at some point in the near future, this may cease to be true.
    • Re:LAIN (Score:4, Interesting)

      by Erasmus Darwin ( 183180 ) on Monday August 06, 2001 @12:21AM (#2163472)
      What will happen as the net becomes more and more like a brain? Can it have a soul?

      Please don't take this the wrong way, but that's honestly the sort of question I'd expect from someone who doesn't understand computers.

      While I believe in the possibility of machine intelligence (along with the moral, ethical, and most importantly philosophical questions that raises), the net is more of a data transfer mechanism than a processing mechanism. Short of very deliberate projects, such as SETI@Home, you just don't have your average machine on the net doing random computation. In that sense, the net really hasn't changed much since its inception. Further, if you did have a distributed consciousness, what would the consequences of lag, network outages, and outright crashes be? In that sense, it would be interesting to see if random/semi-random/genetic algorithms are capable of generating an intelligence capable of coping with such noise. However, I think such issues would rapidly kill off something before it became "evolved" enough to cope. If we do get an intelligence, I think it'll be something that happens on purpose. It may be distributed (maybe as a redundant, non-real-time simulation of a brain), but I doubt it'll be a spontaneous Skynet-like entity.

      • Well, I don't think his thought deserves total dismissal. The question is whether consciousness is something peculiar to human brains (if you want you can assert that all vertebrates have consciousness, but I'm not entirely certain) or a property of any extremely complex system. I don't think the Internet would be a candidate for consciousness since information just gets sent from one end to the other without being acted on. Of course, we can never be certain because we will never have a way of asking the Internet if it is conscious.
      • Further, if you did have a distributed consciousness, what would the consequences of lag, network outages, and outright crashes be? In that sense, it would be interesting to see if random/semi-random/genetic algorithms are capable of generating an intelligence capable of coping with such noise

        Well, in our heads brain cells die left and right, starting from the days when we're still inside the womb, without really noticeable effects... no, wait! Could this explain the presidency of Bush?!

        -
      • (rantness>thought out argument, proceed with caution)
        I think this discussion has been somewhat flawed, in that it has been considering the internet and its human operators as separate entities. However, the Internet is so driven by human action and interaction that it is impossible to view it as the technology alone. Yes, it is just a communications network, but with humans at its nodes, the internet may be able to act as a brain, albeit a primitive one for the moment. Ideas of group consciousness are not new; they can be traced back through Freud and Rousseau in the Western tradition, possibly just as far or further in the East. That humans organize their social groupings the same way biology organizes their brains would seem to lend a bit more credence to this idea. Of course, one could argue that a group consciousness is not as intelligent, self-aware or responsive as individual consciousness, but in the past, group consciousness has been limited in scope to towns, villages, and tribal groups. Cities and nations push the slowness and dumbness of group consciousness to the point that it is probably meaningless (I doubt that they are non-scaling systems). The Internet, however, allows us to form a non-scaled social network of unprecedented speed and size.

        What does this mean? The possibility for consciousness is there, and perhaps the reality is, as well. But a conscious Internet will not go running amok, create a body for itself, or do any of that other sci-fi stuff, because it is us. Unless, of course, the whole net condenses onto AOL or .NET.

        Oh, and btw, Lain was an AI created on the net, not out of the net, at least as far as I understand Lain, which isn't very well.
      • I second this.

        If the internet became 'alive' we would all see a _lot_ of packets going around that we didn't understand. We would all see hits to, say, our webservers from totally random IPs containing code to take over our machines and change them, as any brain changes itself. Such a system could never happen unnoticed, and if it did arise we'd have it all over the news, with the internet slowing down and the 'internet' taking over machines.

        Besides I'd get loads of messages in my apache logs..

        oh wait

        wtf

        HELP!
        • While most individuals on slashdot may not have compromised machines... how many compromised machines are out there? If a primitive intelligence were to form, it would never be allowed to take hold of most machines. But it could certainly exist entirely in the domain of poorly managed boxes, and nobody would ever have to know ;)
      • I agree. A lot of recent science-fiction treats AIs not as spontaneously arising from the Internet but as deliberate projects to create AI. (Usually, something goes horribly wrong, but that is one of the genres of sci-fi.) One of my personal favorites was the story that had an AI that had been 'killed' twice by the NSA, and they weren't even aware that they had done it, giving rise to the theory that it had been at such a non-robust stage that it was easy to 'kill'.

        An Internet-wide AI.... hmm... lag would probably be analogous to senility or Alzheimer's, network outages would be memory loss or brain damage, and crashes would be brain damage as well. However, given that a network will eventually come back up, and no crash lasts forever (although I'm certain MS is working on a 5 9's crash), it wouldn't be permanent brain damage. And theoretically, such an AI could become 'accustomed' to lag and work around it.

        But it's all still speculation...

        Kierthos
  • I try to avoid physics and use words instead. It is sooo much easier than having to remember all those equations. ugh.

    • "Physics" isn't the equations.

      The equations are just our human attempts to understand the physics.

      You are conflating the subject as taught in school with the subject matter itself.

  • by friode ( 79255 ) on Sunday August 05, 2001 @11:18PM (#2163301)
    One odd note is the researchers' claim that the Web contains "nearly a billion documents," when one search engine alone claims to index more than a third beyond that

    Look deeper, grasshopper:

    ...This expression predicts typically that the shortest path between two pages selected at random among the 800 million nodes (i.e. documents) that made up the Web in 1999 is around 19, assuming that such a path exists...

    ...the typical number of clicks between two Web pages is about 19, despite the fact that there are now over one billion pages out there...

    Hey, Timothy, next time try reading the article instead of skimming it.
    • ...the typical number of clicks between two Web pages is about 19, despite the fact that there are now over one billion pages out there...

      So, how many clicks does it take to get to the home page of Kevin Bacon?

      -Joe

  • by beanerspace ( 443710 ) on Sunday August 05, 2001 @11:32PM (#2163346) Homepage
    The article does a good job of pointing out the seemingly chaotic and certainly cell-like nature of the internet. However, unlike the article, I'm not sure that more research and/or more computer science will solve the problem.

    That's not to say that understanding the various layers of complexity, architecture and dynamics won't provide an answer ... and not because I think such disciplines suck, but because we have and will continue to have commercial influences on how networks are established.

    Certainly some, in fact many, businesses will hire well and follow good practice. The problem comes about when some large companies don't. Or worse, when mergers and buyouts occur; e.g. Verizon, CIHost and a few others come to mind.

    Not to sound anti-business, because business has footed much of the bill for Internet expansion ... but rather to voice concern that sometimes there is a big disparity between technical solutions and the shareholders' bottom line.

    • I'm not sure if more research and/or more computer science will solve the problem.

      What problem are you talking about? Their research found that the current structure of the internet is extremely resilient to random attacks. Yes, co-ordinated attacks against key routers could work, but every network has some vulnerability, and the best solution is probably just to make sure the few key routers are well-protected and hidden. As Mark Twain says,
      The fool says, "don't put all your eggs in one basket," whereas the wise man says, "go ahead, but watch that basket!"
      There's no problem that needs to be solved, so I don't know where you're going with this "Not to sound anti-business" rant. The current chaotic approach to building network infrastructure works great, just like many natural systems.
  • The authors claim the threshold is 0, but Bliss [uni-paderborn.de] never made it in the wild.

    The mere existence of that term IMHO shows that the threshold is greater than 0.

    • The virus infection threshold is based on something like this model:

      1) Some set of nodes is infected.
      2) Each of those nodes has a probability X of infecting its nearest neighbors.
      3) Repeat.
      I just made that up, and there are many opportunities for variations (add the ability for nodes to be cleaned and/or vaccinated), but under models like this:

      random networks have a critical threshold for X, above which the infection takes over the whole network and below which it dies out;

      scale-free networks will have a macroscopic fraction of the network infected for any value of X.

      First of all, there are additional features not captured in this model which could be important for "viruses" like Bliss, which have an extremely low probability of infection.

      Second, the internet is not exactly a scale-free network. As mentioned in the article, while the dominant behavior is a power law, if you go high enough you find exponential cutoffs. This could cause some viruses to die out (I am certain Bliss isn't the only one that never made it).
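
      If anyone wants to poke at that, here's a minimal toy version of the model in Python (my own sketch with made-up parameters, not the model from the epidemic papers): an SIS-style process where each infected node infects each neighbour with probability x per round and recovers with a fixed probability, run on a preferential-attachment graph and on a randomly wired graph with the same number of edges.

```python
import random
from collections import defaultdict

random.seed(4)
N, ROUNDS, RECOVERY = 3000, 100, 0.2

def scale_free_edges(n, m=2):
    """Toy preferential-attachment ("rich-get-richer") graph."""
    pool, edges = [0, 1], [(0, 1)]
    for node in range(2, n):
        for other in set(random.choice(pool) for _ in range(m)):
            edges.append((node, other))
            pool += [node, other]
    return edges

def random_edges(n, n_edges):
    """Randomly wired graph with about the same number of edges."""
    return [(random.randrange(n), random.randrange(n)) for _ in range(n_edges)]

def run_sis(edges, n, x):
    """1) seed some infected nodes; 2) each infected node infects each
    neighbour with probability x, then may recover; 3) repeat.
    Returns the infected fraction after ROUNDS rounds."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    infected = set(random.sample(range(n), 30))
    for _ in range(ROUNDS):
        nxt = set(infected)
        for u in infected:
            for v in adj[u]:
                if random.random() < x:
                    nxt.add(v)
            if random.random() < RECOVERY:
                nxt.discard(u)
        infected = nxt
    return len(infected) / n

sf = scale_free_edges(N)
er = random_edges(N, len(sf))
for x in (0.02, 0.05, 0.10):
    print(f"x={x:.2f}  scale-free: {run_sis(sf, N, x):.3f}   random: {run_sis(er, N, x):.3f}")
# Look for the range of x where the infection persists on the heavy-tailed
# graph but fizzles on the randomly wired one; that gap is the finite-size
# version of the "threshold tends to zero" claim. Results are stochastic,
# so run it a few times and vary the parameters.
```
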
  • this was going to show itself sooner or later. im no math guy, nor am i a computer guy. i am in fact a molecular biologist and i play with complex systems every day. i was thinking about how everything is connected to the net these days: cell phones, PDAs, cars, even appliances. with all this stuff, the physical topology of the net is very dynamic, almost to the point that it's evolving on its own. despite the fact it is an entirely man-made thing, it looks very organic. all these complex systems look the same. as chaotic and complex as it is, it seems as though someone (with too much time on their hands) saw how it is behaving like things found in nature. looks like i was thinking along the right lines. i just wonder, with the speed of expansion, what it will be like 5 years from now. i hope it doesn't develop morals...
  • IBM "bow tie" paper (Score:4, Interesting)

    by mgarraha ( 409436 ) on Monday August 06, 2001 @12:00AM (#2163419)

    In "Graph structure in the web [ibm.com]," Kumar et al. divide 200 million web pages into four categories of roughly equal size:

    The first piece is a central core, all of whose pages can reach one another along directed hyperlinks -- this "giant strongly connected component" (SCC) is at the heart of the web. The second and third pieces are called IN and OUT. IN consists of pages that can reach the SCC, but cannot be reached from it - possibly new sites that people have not yet discovered and linked to. OUT consists of pages that are accessible from the SCC, but do not link back to it, such as corporate websites that contain only internal links. Finally, the TENDRILS contain pages that cannot reach the SCC, and cannot be reached from the SCC.

    So is your home page an innie or an outie?
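
    If you want to find out, the classification is easy to compute on whatever link graph you have. A rough sketch in Python (my own code on a tiny made-up graph, not the 200-million-page crawl; it assumes the networkx library is available): find the largest strongly connected component, then bin the remaining pages by whether they can reach it, can be reached from it, or neither.

```python
import networkx as nx

# A tiny made-up "web": a 3-page core that links to itself, pages linking
# into it, pages only reachable from it, and a couple of tendrils.
web = nx.DiGraph([
    ("a", "b"), ("b", "c"), ("c", "a"),    # the strongly connected core
    ("in1", "a"), ("in2", "in1"),          # IN: can reach the core
    ("c", "out1"), ("out1", "out2"),       # OUT: reachable from the core
    ("in1", "t1"), ("t2", "out1"),         # tendrils off IN and OUT
])

scc = max(nx.strongly_connected_components(web), key=len)
rep = next(iter(scc))                       # any core page stands in for the SCC
out_side = nx.descendants(web, rep) - scc   # reachable from the core
in_side = nx.ancestors(web, rep) - scc      # can reach the core
tendrils = set(web) - scc - out_side - in_side

print("SCC     :", sorted(scc))
print("IN      :", sorted(in_side))
print("OUT     :", sorted(out_side))
print("TENDRILS:", sorted(tendrils))
# SCC     : ['a', 'b', 'c']
# IN      : ['in1', 'in2']
# OUT     : ['out1', 'out2']
# TENDRILS: ['t1', 't2']
```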

  • ... that they didn't try to Describe The Web With Gym Class.

    I sucked at that! (As I imagine many /.ers did and do)

    :)
  • by Anonymous Coward
    This simple model illustrates how growth and preferential attachment jointly lead to the appearance of a hierarchy. A node rich in links increases its connectivity faster than the rest of the nodes because incoming nodes link to it with higher probability; this "rich-gets-richer" phenomenon is present in many competitive systems.
    And there's your explanation for how VHS beat out Beta, QWERTY beat out other arrangements, and Microsoft won out in the OS and apps biz. A small initial advantage gets magnified over time. The wingbeats of a butterfly become a hurricane.
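
    The "rich-gets-richer" mechanism is easy to watch in a toy simulation (a Python sketch with made-up numbers, not the authors' model): give every new page one link to an existing page chosen in proportion to the attention it already has, and a few early pages end up hoarding most of the links while the typical page gets almost none.

```python
import random
from collections import Counter

random.seed(6)
N = 100_000

# Each new page links to one existing page, chosen in proportion to the
# number of links that page already has (plus one, so newcomers have a chance).
in_links = [0] * N
pool = [0]                      # sampling from this list is preferential
for page in range(1, N):
    target = random.choice(pool)
    in_links[target] += 1
    pool += [target, page]

dist = Counter(in_links)
print("pages with no incoming links  :", dist[0])
print("pages with one incoming link  :", dist[1])
print("pages with 10+ incoming links :", sum(c for k, c in dist.items() if k >= 10))
print("most-linked page has          :", max(in_links), "incoming links")
# The counts fall off roughly as a power of the link count, so a handful
# of early "hubs" grab a huge share -- the same small-initial-advantage
# story as VHS and QWERTY.
```
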
  • I wonder if the mathematics dealing with the spread of the net and the web can be applied to neural nets. I mean, I know that as it stands a neural net is just a test-and-analyze-results type system, where the strongest signal (weighted by past tests) dominates any later action, but maybe it can be broken down into classes of actions (very connected hubs) that break down into specific action types (somewhat connected hubs) and then into specific actions (singly connected network nodes), going from vague descriptions of the input data down to specific information about it (when there are only 5 real possible actions to check). Anyone out there a Neural Net expert who thinks this might work?
