Describing The Web With Physics 133
Fungii writes: "There is a fascinating article over on physicsweb.com about 'The physics of the Web.' It gets a little technical, but it is a really interesting subject, and is well worth a read." And if you missed it a few months ago, the IBM study describing "the bow tie theory" (and a surprisingly disconnected Web) makes a good companion piece. One odd note is the researchers' claim that the Web contains "nearly a billion documents," when one search engine alone claims to index more than a third beyond that, but I guess new and duplicate documents will always make such figures suspect.
The Lawrence and Giles study was published in 1999... (Score:2, Insightful)
The important point from that paper concerns the growth of the web; and from Kumar's bow-tie paper, we also suspect that most of the web is growing in places where we can't see it.
1,000,000,000 urls (Score:4, Insightful)
Most of their research seems to be on 'static pages'. They state that the entire internet is connected via 16 links (similar to the way that people are connected through 5-6 acquaintances). I believe that as the ratio of dynamic to static content on the internet increases, it will increase the total number of clicks it takes to get from one site to the next. For example, I could create a website that dynamically generates pages: the first 19 pages generated are all contained within my site, and only the 20th page generated contains a link to google.
The metric functions that they use are good for randomly connected maps, but they don't apply to the internet, where nodes are not randomly connected. Nodes cluster into groups by topic or category. For example, one Michael Jackson site links to other Michael Jackson websites.
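The "N clicks between any two sites" claim is just the average shortest-path length of the link graph, which you can measure directly with breadth-first search. A minimal sketch (the tiny four-node graphs here are purely illustrative, not real web data):

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path distance (in clicks) over all reachable
    ordered pairs, computed with one BFS per source node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            node = q.popleft()
            for nb in adj[node]:
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    q.append(nb)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# A bare chain of four pages: 0-1-2-3.
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
# The same chain with one extra "shortcut" link between 0 and 3.
shortcut = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

Even one added shortcut link shrinks the average path (5/3 clicks down to 4/3 here), which is the small-world effect behind the article's figure; conversely, a dynamic site that only exposes its outbound link on the 20th generated page effectively lengthens the chain.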
Describing the web with biology.... (Score:2, Insightful)
It turns out that computing may prove similar.
Different is good!
LAIN (Score:1, Insightful)
What will happen as the net becomes more and more like a brain? Can it have a soul?
Or worse, can it comprehend the garbage we use it for?
Internet is not web (Score:1, Insightful)
Re:critical threshold for virus spreading (Score:3, Insightful)
1) Some set of nodes starts out infected.
2) Each infected node has a probability X of infecting each of its nearest neighbors.
3) Repeat.
I just made that up, and there are many opportunities for variations (add the ability for nodes to be cleaned and/or vaccinated), but under models like this:
random networks have a critical threshold for X, above which they will infect the whole network, below which they will die out.
scale-free networks will have a macroscopic fraction of the network infected for any value of X.
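The toy model above can be simulated directly. This sketch uses an Erdős–Rényi random graph as the "random network" case, with made-up sizes and probabilities; each newly infected node gets one chance to infect each neighbor, so the expected threshold is roughly 1/(mean degree). A preferential-attachment (scale-free) graph would show no such threshold:

```python
import random

def make_random_graph(n, p, rng):
    """Erdős–Rényi random graph: each pair of nodes is linked with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def outbreak_fraction(adj, x, rng):
    """Steps 1-3 above: seed one infection, then each newly infected node
    gets one chance to infect each of its neighbors with probability x."""
    seed = rng.randrange(len(adj))
    infected = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in adj[node]:
                if nb not in infected and rng.random() < x:
                    infected.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(infected) / len(adj)

def avg_outbreak(adj, x, rng, trials=20):
    """Average final infected fraction over several independent outbreaks."""
    return sum(outbreak_fraction(adj, x, rng) for _ in range(trials)) / trials

rng = random.Random(42)
g = make_random_graph(500, 0.02, rng)   # mean degree ~10, so threshold ~0.1
low = avg_outbreak(g, 0.02, rng)        # below threshold: outbreaks die out
high = avg_outbreak(g, 0.50, rng)       # above threshold: most nodes end up infected
```

Below the threshold the outbreak stays microscopic; above it, a macroscopic fraction of the graph is infected, which is the dichotomy the parent comment describes.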
First of all, there are additional features not captured in this model, which could be important for "viruses" like Bliss which have an extremely low probability of infection.
Second, the internet is not exactly a scale-free network. As mentioned in the article, while the dominant behavior is a power law, at high enough degrees you find exponential cutoffs. This could cause some viruses to die out (I am certain Bliss isn't the only one that never made it).
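The cutoff effect is easy to see numerically: a pure power law and one with an exponential cutoff agree at small degrees but diverge sharply in the tail. A sketch, where the exponent gamma and cutoff scale kappa are illustrative numbers, not fitted values:

```python
import math

def powerlaw(k, gamma=2.1):
    """Pure power-law degree distribution (unnormalized): P(k) ~ k^-gamma."""
    return k ** -gamma

def powerlaw_cutoff(k, gamma=2.1, kappa=100.0):
    """Power law with exponential cutoff: hubs with degree >> kappa are suppressed."""
    return k ** -gamma * math.exp(-k / kappa)

ratio_small = powerlaw_cutoff(10) / powerlaw(10)      # e^-0.1: barely suppressed
ratio_large = powerlaw_cutoff(1000) / powerlaw(1000)  # e^-10: strongly suppressed
```

Since epidemic spreading on scale-free networks is driven by the largest hubs, suppressing the tail like this can restore an effective threshold and let marginal viruses die out.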
Microsoft, Betamax, Qwerty, oh my (Score:1, Insightful)