Linked: The New Science of Networks

kurtkilgor writes "One of the most frustrating things about many areas of science and engineering today is that we know the basics but don't know how to put them together. We know a great deal about how atoms interact, but we aren't so sure about how to combine them to make a 'big picture' of matter. We understand how an individual computer works, but how to build large informational networks with computers is another thing entirely. We know how people act individually, and yet we can't extrapolate the behavior of entire societies from this. I've long been interested in these types of complexity problems, but not a whole lot of material has been available. In particular, Malcolm Gladwell's book The Tipping Point left me searching for an explanation for the many curiosities that he presents. Is there a mathematical description of tipping points? Is there a way to find out when and why things tip? How does information spread through society?" Kurtkilgor reviews Albert-László Barabási's Linked: The New Science of Networks, which attempts to answer these questions, below.
Linked: The New Science of Networks
Author: Albert-László Barabási
Pages: 229
Publisher: Perseus Publishing
Rating: 10
Reviewer: kurtkilgor
ISBN: 0738206679
Summary: An introduction to scale-free networks and their broad applications

It turns out that in the past few years, a decent amount of progress has been made on this front, largely thanks to the Internet. The Internet allows scientists to exchange information and speed up research, but more pertinently, it is itself a test subject for these kinds of large-scale interaction problems. Linked: The New Science of Networks presents both the story of how the science has developed and what it means. Unlike the authors of much popular scientific literature, Barabási is himself an active participant in the field.

The biggest surprise and most important lesson of the book is that the Internet, cellular biology, society, matter, and an incredible array of other seemingly unrelated things all form a particular type of structure called a scale-free network. These types of networks have only been described in detail recently, and their study promises to be as fundamental and rewarding as, for instance, waves or diffusion. The presence of the same structure in many unrelated situations suggests that there is a deep physical or mathematical principle which governs them.
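
A quick way to get a feel for what "scale-free" means is to grow a network by preferential attachment, the mechanism Barabási and Albert proposed: new nodes prefer to link to nodes that already have many links. Below is a minimal Python sketch (the node counts and output format are illustrative choices, not anything from the book); the resulting degree distribution has the heavy, power-law-like tail that defines a scale-free network.

import random
from collections import Counter

def preferential_attachment(n, m=2, seed=42):
    """Grow a network: each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'stubs' holds one entry per edge endpoint, so sampling from it
    # is sampling nodes proportionally to their degree.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

edges = preferential_attachment(10_000)
degree = Counter(v for e in edges for v in e)
counts = Counter(degree.values())
print("degree  #nodes")
for k in sorted(counts)[:10]:
    print(f"{k:6d}  {counts[k]}")
print("max degree:", max(degree.values()))
# The distribution has a heavy tail (roughly a power law): most nodes
# keep only a few links, while a handful of hubs collect hundreds.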

The discovery of this principle is the subject of the first half of the book, which is a sort of detective story leading from the most primitive concepts of graphs, as pioneered by Euler, to the state of the art. It is very interesting in itself to see how inconsistencies in mathematical models have led people to develop more and more accurate ideas of how such networks function. A tiny amount of math is available in the footnotes for those who want it, but generally no prior knowledge is required. The author writes with plenty of anecdotes, especially in the beginning, starting out with introductions such as this one of Paul Erdős:

"One afternoon in late 1920s Budapest, a seventeen-year-old youth cantered with a weird gait through the streets and stopped in front of an elegant shoe shop that sold custom-made shoes ... After knocking on the store's door-an act that would have seemed just as odd back then as today-he entered, ignoring the saleswoman at the counter, and went up to a fourteen-year-old boy in the back of the shop.

'Give me a four digit number,' he said.

'2,532,' came the wide-eyed boy's reply . . .

'The square of it is 6,411,024,' he continued. 'Sorry, I am getting old and I cannot tell you the cube.'"

For another example of both the writing style and the unusual content, the author humorously describes the discovery of a similarity between Bose-Einstein condensation and economic monopoly:

"Essentially Microsoft takes it all. As a node, it is not just slightly bigger than its next competitor. In the number of its consumers it simply cannot be compared. We all behave like extremely social Bose particles, convenience condensing us into a faceless mass of Windows users. As we purchase new computers and install Windows, we carefully feed and maintain the condensate developed around Microsoft. The operation systems market carries the basic signatures of a network that has undergone Bose-Einstein condensation, displaying clear winner-takes-all behavior."

The rest of the book devotes a chapter each to particular examples of networks: epidemics, the Internet, economics, and so on. One thing is abundantly clear: the more we know about how these things work, the better we'll be able to curb DDOS attacks, stop disease, and control economic failures. An unlikely example of a scale-free network is the cell. It turns out that the interactions among a cell's proteins can be modeled this way, and if we could only understand it, we would be able to come up with treatments analytically, instead of by trial and error as is done now.

It seems to me that with a greater understanding of networks, we will be able to finally advance in many fields in which progress is currently stalled. From firefly research to AIDS treatment, this is the Next Big Thing.


You can purchase Linked from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

  • Book's site (Score:5, Informative)

    by dietlein ( 191439 ) <(dietlein) (at) (gmail.com)> on Thursday January 23, 2003 @12:22PM (#5143538)
    Here [nd.edu] is the book's official site.

    This [nd.edu] is the photos page, with photos like.. umm... this [nd.edu].
  • More reviews (Score:5, Informative)

    by dietlein ( 191439 ) <(dietlein) (at) (gmail.com)> on Thursday January 23, 2003 @12:26PM (#5143575)
    CS Monitor [csmonitor.com] (thumbs-up)

    Nature [nature.com] (ho-hum)

    Computer User [computeruser.com] (thumbs-way-up)
  • Duhh.... (Score:1, Informative)

    by Anonymous Coward
    Ahhh... Duhhh... the higher the level of abstraction, the more complex the problems become, because we don't really COMPLETELY understand the lower levels of abstraction. The errors in our assumptions manifest themselves in more unpredictable ways as we base higher and higher level concepts on those models.
    • Re:Duhh.... (Score:3, Insightful)

      by wilgamesh ( 308197 )
      This is a somewhat misleading remark, and I think it misses the spirit of the book and the topic presented.

      For instance, in the game of chess, we understand _completely_ what each piece does, but that doesn't mean we can play a perfect game, or even a good game. Although it certainly is a prerequisite in this case.

      Or take, for instance, the branch of physics known as critical phenomena, which tries to explain the behavior of things like water evaporating, or magnets losing their magnetization, and so on. You can construct extremely simple models where there's only one lower level of abstraction to know, and still be unable to answer extremely simple questions about higher levels of abstraction.

      Let me give an example, widely known as the Ising model of magnetism in physics. We can make a very, very simple model of magnetism by saying that all magnetic spins can be UP or DOWN, and the energy is 1 if an adjacent pair of magnetic spins are the same, and -1 if the spins are different. Then we put all these little spins on a lattice, and we call this collection of little spins a _magnet_. OK, this is a very, very simple model, but now we ask: does this thing behave like a magnet? A tough question in 2 and 3 dimensions! Why? It's not because of errors in our assumptions; it's basically because we have very primitive mathematical tools to tackle this type of problem. We are forced to resort to mathematical tools such as infinite transfer matrices and Jordan-Wigner transformations. (A toy simulation of this model is sketched at the end of this thread.)

      Yes, in one sense I agree with your post: round-off errors cause chaos over very long simulations, or models can be inaccurate and make bad predictions. But the spirit of the book is in examining very simple models that seem to make correct predictions, yet are complicated enough that we can't manipulate them with finesse to extract additional information about the system.
      • I agree with all of this, but now I think perhaps we've all missed the point. I haven't read the book, so I could be completely off, but I don't think it's about "we have a grasp at this level, so what happens at another level?" Though we have to use these situations to describe the abstraction, the question is "what is the process of this abstraction?" Not what happens and why, but how these abstractions are related.
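
Since the Ising model came up a couple of comments back, here is a minimal Metropolis Monte Carlo sketch of the 2D version (using the standard ferromagnetic sign convention; the lattice size, temperatures, and sweep counts are arbitrary choices). Below the critical temperature of roughly 2.27 in these units, the spins line up and the lattice behaves like a magnet; above it the magnetization collapses.

import math
import random

def ising_magnetization(L=16, T=1.5, sweeps=400, seed=1):
    """Metropolis simulation of the 2D Ising model on an LxL lattice
    with periodic boundaries; returns the average |magnetization| per spin."""
    rng = random.Random(seed)
    spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Energy change for flipping spin (i, j): dE = 2 * s * sum(neighbors)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
        if sweep >= sweeps // 2:  # measure only after some equilibration
            m = abs(sum(sum(row) for row in spin)) / (L * L)
            mags.append(m)
    return sum(mags) / len(mags)

for T in (1.5, 2.0, 2.27, 3.0, 4.0):
    print(f"T = {T:4.2f}   <|m|> = {ising_magnetization(T=T):.3f}")
# Expected trend: <|m|> close to 1 well below T_c ~ 2.27, dropping toward 0
# above it -- a simple local rule produces a collective phase transition
# that is hard to see from the rule alone.
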
  • by gpinzone ( 531794 ) on Thursday January 23, 2003 @12:32PM (#5143632) Homepage Journal
    One of the required classes for my Engineering degree was a course in the mathematics behind networks. It was without a doubt one of the most difficult courses I have ever experienced. Even with all of the work we performed to create mathematical models of network nodes, etc., they were still unrealistic due to the overall complexity of "real" networks. For example, basic router queuing assumes that packets arrive according to a Poisson process and that service (transmission) times are exponentially distributed. This is just an approximation used to allow us to get our arms around the problem. If you examine this model closely, you will find that it implies the packets that enter aren't necessarily the same size when they leave! Other issues, like "smart" routers that adjust based on traffic patterns (rather than simple probabilistic routing), aren't usually modeled either. (A toy simulation of this basic queuing model appears after this thread.)
    • I remember a computer network class where some mathematical models were presented. We would see at what point a protocol was the most efficient, under the assumption that traffic demand was regular.

      It turns out that all the mathematical deductions weren't valid, since observed network traffic consists of bursts of emission instead of regular streams. So the author suggested that a whole different theory needed to be developed to understand networks better.

      So I think assuming a Poisson distribution falls in the 'theoretical' branch... or does it?
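
To make the Poisson/exponential queuing assumption mentioned above concrete, here is a toy single-server (M/M/1) queue simulation compared against the textbook result that the mean number of customers in the system is rho/(1-rho), where rho = lambda/mu. The arrival and service rates are made-up numbers; replacing the Poisson arrivals with bursty ones is exactly the change that breaks the formula, as the reply above points out.

import random

def mm1_mean_in_system(lam=0.8, mu=1.0, n=200_000, seed=7):
    """Simulate a FIFO single-server queue with Poisson arrivals (rate lam)
    and exponential service times (rate mu); estimate the mean number of
    customers in the system via Little's law: L = lam * (mean sojourn time)."""
    rng = random.Random(seed)
    t_arrive = 0.0
    last_departure = 0.0
    total_sojourn = 0.0
    for _ in range(n):
        t_arrive += rng.expovariate(lam)            # Poisson arrival process
        service = rng.expovariate(mu)               # exponential service time
        start = max(t_arrive, last_departure)       # wait if the server is busy
        last_departure = start + service
        total_sojourn += last_departure - t_arrive  # time spent in the system
    return lam * (total_sojourn / n)

lam, mu = 0.8, 1.0
rho = lam / mu
print("simulated L        =", round(mm1_mean_in_system(lam, mu), 2))
print("theory rho/(1-rho) =", round(rho / (1 - rho), 2))
# With Poisson arrivals the two agree closely; feed in bursty arrivals
# instead and the simulated queue grows much longer than the formula predicts.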

  • by fygment ( 444210 ) on Thursday January 23, 2003 @12:33PM (#5143635)
    ... Wolfram's "A New Kind of Science" and Fritjof Capra's "The Web of Life" to get a tremendous sense of the convergence of many fields and principles. The incredible interconnectedness of things makes you wonder how anyone can claim to have " ... found the gene for ..." or dare to think that their actions have only local repercussions. You listening, George?
    • Wolfram's "new kind of science" is exactly in this vein. It basically states science has been approaching complex issues back-asswards.....that very simple rules can produce very complex behavior. He gives many examples of this thru visual patterns, and then applies this same principals to almost every aspect of science, from chemistry to physics.....Very much worth a read (if you can can get past his own lack of modesty... ;-) )
    • While the fields of psychology & sociology (or atomic physics & astronomy, or biology & psychology) model the individual level and the network level independently, what people are searching for now is a common model, or bridge, as to how the two levels of granularity relate to each other. 'The Tipping Point' is about trying to identify the point at which a phenomenon goes from the individual level to the social level, and about the forces that make it happen. It's very anecdotal, not mathematical (disappointing to me), but well worth a read for the ideas it presents. It looks like 'Linked: The New Science of Networks' explores similar ideas, but in far more diverse fields. Wolfram's book takes things a step further and suggests that the mathematics to describe these phenomena will come from cellular automata, but still provides mostly speculation and a bunch of observed consistencies in nature. Another read I suggest is 'Gödel, Escher, Bach' by Douglas R. Hofstadter for a dense but lighthearted examination of how different levels of granularity interact to produce art that is appealing, in addition to the classic examples of mathematical/computational systems, society, etc.
    • I haven't read Barabasi's latest book (well, it's just out), but I've studied his previous book "Fractal Concepts in Surface Growth" (with H.E. Stanley) in some detail. This book made an enormous contribution to the field of statistical physics, or at least to its popularity. I am approached quite regularly by colleagues who want to borrow the book, etc., although they are in completely different fields of physics. It is good for undergraduate students and grads alike, and serves as a very good introduction to the field.

      Barabasi and co-author Albert are literally inventing a new field of physics/math; I'm not even quite sure what to call it. However, they are very much in touch with current research in the field, and their work is very timely (who else could tell you that the "degree of separation" on the web is 19 and not 6?)

      As for Wolfram, however, I cannot say the same. I've seen Wolfram present his book in a special seminar (but haven't read it), and my impression is this: he is an exceptionally bright guy, but not in touch with current research. Wolfram is able to explain a wide variety of fields within physics and mathematics with great confidence, and I would be the last to call him uneducated (no two-week crash course in particle physics on his part! Actually, I think he was the only grad student that Richard Feynman supervised!). I realize that when you "invent paradigm-changing science", you will necessarily meet some opposition from other researchers, but Wolfram's problem is this: he had a good idea some 20 years ago (cellular automata), secluded himself in a room developing his idea ever since (as well as various sales pitches for Mathematica), and forgot to consult with the rest of the scientific community. I understand very well why he's being criticized by his peers.

      • Barabasi and co-author Albert are literally inventing a new field of physics/math; I'm not even quite sure what to call it. However, they are very much in touch with current research in the field, and their work is very timely (who else could tell you that the "degree of separation" on the web is 19 and not 6?)
        Not here, they're not. Search your local university library for "graph theory." See also my comment below regarding the tremendous body of extant literature in the field. Claiming that Linked reveals a "new science" is rather like claiming that Al Gore created the Internet....

        -Carter

    • Try also Yaneer Bar-Yam's "Dynamics of Complex Systems [necsi.org]". (it's freely available)
    • Hardly anyone ever actually claims to have "found the gene for" anything, because biologists know that it's usually much more complex than that. Journalists use phrases like that because they're catchy. There are a couple of exceptions to this rule -- e.g., cystic fibrosis, and I believe sickle-cell anemia as well -- but everybody who actually works in the field knows that most genetic diseases involve a lot of genes and/or regulatory mechanisms for those genes which are, in most cases, still not well understood.

      This is a bit of a pet peeve of mine: people making assumptions about the behavior and motivation of scientists that have nothing to do with the way real scientists actually do their jobs, and a lot more to do with catchy headlines and fear-mongering. Sorry if I'm overreacting.
  • The New Science (Score:4, Insightful)

    by Madcapjack ( 635982 ) on Thursday January 23, 2003 @12:36PM (#5143652)
    the science of networks is really just one branch of the emerging science of complexity. What is really interesting is that game theorists will borrow from network theorists, network theorists from game theorists, game theorists from evolutionary theorists, evolutionary theorists from game theorists, network theorists from evolutionary theorists, evolutionary theorists from AI theorists, and all of them from linguistics, philosophy, cognitive sciences, economics, and the other social sciences, computer modeling, agent-based modeling, etc., and vice versa. This is the future, and the future is bright.

    The science of networks is not so new, but it is gaining importance rapidly. I'm interested in the application of network theory to the flow of information in structured populations. Network theory would be part of this, but so would other social theories (kinship, information, psychology, etc.)

    for interesting papers on networks go to:

    http://www.santafe.edu

    the center for the science of complexity

    • Re:The New Science (Score:3, Insightful)

      by Just_Tom ( 565768 )

      "... What is really interesting is that game theorists will borrow from network theorists, network theorists from game theorists, game theorists from evolutionary theorists, evolutionary theorists from game theorists, network theorists from evolutionary theorists, evolutionary theorists from AI theorists, and all of them from linguistics, philosophy, cognitive sciences, economics, and the other social sciences, computer modeling, agent-based modeling, etc. and visa versa..."

      True, very true. Not only that, but the author of this book is a physicist!

      95% of this book was familiar and/or easy to understand, coming from an AI and maths background. Where it occasionally lost me was in the sudden jumps and links between seemingly unrelated fields.

      The Bose-Einstein condensation analogy with Microsoft's OS monopoly is one example of this. In terms of the models Barabási et al. used, the discussion around this makes perfect sense*. It's only in the atmosphere of a physics department that such a connection would have been made, but the non-physicists reading the book could have done with a little more explanation. Most of the book, however, was extremely thorough and accessible.

      *If I understand it correctly, the scale-free model predicts (and accounts for) the formation of hubs, but it is possible to modify the model such that the hubs can all converge to one level. This is governed by equations Einstein formulated in the 1920s. Nice to know, but not very clearly explained.
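
For what it's worth, the footnote above appears to describe the "fitness" extension of preferential attachment (the Bianconi-Barabási model), in which a newcomer attracts links with probability proportional to its degree times an intrinsic fitness. Here is a minimal sketch with made-up fitness values, showing how a single unusually fit latecomer can end up dominating; this is the winner-takes-most regime that the Bose-Einstein analogy refers to.

import random

def fitness_network(n=3000, m=2, hot_node=50, hot_fitness=5.0, seed=3):
    """Grow a network where node i is chosen as a link target with
    probability proportional to degree[i] * fitness[i]. One node
    (hot_node) gets a much higher fitness than everyone else."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    fitness[hot_node] = hot_fitness           # the unusually fit latecomer
    degree = [0] * n
    # small fully connected seed network
    for i in range(m + 1):
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n):
        weights = [degree[i] * fitness[i] for i in range(new)]
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(range(new), weights=weights, k=1)[0])
        for t in targets:
            degree[t] += 1
            degree[new] += 1
    return degree, fitness

degree, fitness = fitness_network()
top = sorted(range(len(degree)), key=lambda i: degree[i], reverse=True)[:5]
for i in top:
    print(f"node {i:4d}  degree {degree[i]:4d}  fitness {fitness[i]:.2f}")
# Despite arriving after 50 other nodes, the high-fitness node typically ends
# up far ahead of every early node: fitter gets richer, not just older.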

    • Re:The New Science (Score:3, Informative)

      by buswolley ( 591500 )
      Mod this one up!

      and visit http://www.santafe.edu. It is very interesting. The Santa Fe Institute gathers researchers from a wide range of fields in a shared environment, so that they can share ideas across fields of study.

    • I think people are trying to get too much credit for this new area of research. This is not new; it has existed for years. And the "Old Kind of Science" has a lot of merit too. The difference between the two is not a mathematical discovery or a physical theory, it is computing power. Scientists have known for a long time that things can be simulated by cellular automata. I find Wolfram arrogant for trying to get credit for this way of thinking. The passage from algebraic approximations to cellular automata comes from the computers.

      A few years ago it would have been practically impossible to use these kinds of theories. We are just now getting the computing speed and memory (at an affordable price) needed to simulate millions of molecules in a cell, or millions of cells in a body, or millions of neurons in a brain, or millions of atoms in a stream of water, etc...

      Of course, now that we have the computers, we can advance our knowledge in the field of cellular automata, being able to test the theories and refine them.

      And I am of the same opinion as Barabási or Wolfram: I think this kind of approach is incredibly powerful, enabling us to predict things and create artificial intelligence in a way never possible before.

      This, however, does not make algebraic solutions or approximations less useful. Math is a tool whether it is in algebraic form or in the form of cellular automata. BOTH methods are approximations of reality. Unless we find the grand unified theory of everything and simulate things down to the smallest, most elementary particle, we will always make approximations. And even if we find the basic rules of the universe, it's going to be a long time before we get the computing power to simulate large systems from "infinitely" small particles.

      Of course, maybe the cellular automata model is a little closer to the real world. But the approximation we have to make is worse. In every iteration of these simulations we make a small error. These errors add up with time, so that if we simulate something for more than a few seconds we may get a result that is far from reality. Algebraic solutions don't have that cumulative error problem.

      P.S. I have to admit I haven't read Wolfram's book or Barabási's book because I was lacking free time. So I hope I am getting the idea of these theories right. Tell me if I don't make sense.
      • I think people are trying to get too much credit for this new area of research. This is not new; it has existed for years. And the "Old Kind of Science" has a lot of merit too. The difference between the two is not a mathematical discovery or a physical theory, it is computing power.

        I think you are basically right. The big difference is computing power. But then again, I think more things than that have changed, and they have encouraged these approaches.

      • Re:The New Science (Score:3, Interesting)

        by andr0meda ( 167375 )


        It seems to me that most of the dynamics and mechanics of multi-agent networking behaviour are closely related to the structure they are confined to - and by structure I mean the physical implementation constraints of the working model - more so than to what the agents themselves do, associated with a certain probability density function.

        I've done some research in Neural Networks and I was amazed by the importance of the dimensionality of the network. There is a subfield in NNs that tries to generate appropriate networks for appropriate computing tasks. Still, the difference between real neurons and neural networks is that the first has an analog clock, while the second has at least a discretized clock per node, if not per layer or for the whole network. Also, the presence of feedback or recurrence can make all the difference in the right / wrong places.

        I have the feeling - but could not prove this yet - that a dynamic combination of local optima searching and global optima searching leads to self-modifications of the structure in which the agents live, in such a way that the structure suits the needs of the original fitness function, which describes the problem that we are trying to solve. Since the fitness function itself varies in time in most problems, it is logical to assume that the networks are never in a static state, so global optima searching will modify the network constantly, while local optima searching will try to exploit the network's capabilities best.

        Seems like interesting material, I'll have to check out this book!

    • Firstly:

      of the emerging science of complexity
      Damn funny, that...

      Secondly, it was noted in New Scientist, years ago ( paper edition ), that each dimension of complexity is equivalent to about an order-of-magnitude increase in detail: given neurons ( in a synthetic neural net ) that only stimulate other neurons in one system, and neurons-that-stimulate plus neurons-that-suppress in a second system, the first system would require 10x~100x ( IIRC, it actually may have been a factor of 10 000 ) as many neurons to get the job done as the second.

      Interestingly, our neurons communicate by stimulative synapses, by suppressive synapses, by pattern-of-signal, and by nitric-oxide diffusion ( as well as possibly electrical conduction ), so the mere 100 000 000 000 neurons we're born with
      ( with ~100 synapses, and ~1000 dendrites on each one, +/- an order-of-magnitude, in short a stunning quantity of connections )
      can accomplish as much as many more of a less-dimensionally-deep neural net...

      And that is boggling, but the principle - that each extra ( necessary ) dimension removes the need for a quantum-order-of-magnitude of detail-level things - holds reasonably well, is elegant, and is very useful ( try simplifying your business/processes/anything this way... 'The 7 Habits..' does this, unconsciously... )

      PS, here's the link from above, clickable, for your inconvenience:
      http://www.santafe.edu/ [santafe.edu]

      "The Santa Fe Institute is a private, non-profit, multidisciplinary research and education center, founded in 1984. Since its founding SFI has devoted itself to creating a new kind of scientific research community, pursuing emerging science."

      "Operating as a small, visiting institution, SFI seeks to catalyze new collaborative, multidisciplinary projects that break down the barriers between the traditional disciplines, to spread its ideas and methodologies to other individuals and encourage the practical applications of its results."

      Worthy intention, 't seems...

    • Most of the "tipping point" theory (which goes back at least to Erdos' 1960 random-networks paper) looks at how gradual accumulation can lead to sudden shifts in system properties. Good stuff, and relevant to situations from Darwinian evolution to traffic-jam analysis, but not really new.

      However, the work of Santa Fe researcher Stuart Kauffman (The Origins of Order, etc.) gives a whole new direction, showing how complex, interlocked systems can arise in some circumstances by winnowing a more complex chaotic system that arises naturally. It sounds circular until you look at it carefully, but Kauffman backs up his analysis with extensive computer simulation as well as a deep analysis of genetic control processes (Kauffman's original specialty).

      These ideas can be used far beyond the biological settings for which they were first developed. Examples range from the crystallization of activity patterns in a new organization or cultural area to the process of learning itself, where the "aha" experience marks the emergence of a set of coherent concepts from the overflowing cloud of ideas that sets the stage for it.

      Adding Kauffman's ideas to your set of explanatory tools will permit you to resolve many complex-systems questions that are otherwise intractable. And computer types are particularly well-situated to understand and use his arguments.
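
The "sudden shift" in Erdős-style random networks mentioned above is easy to see numerically: the largest connected cluster jumps from negligible to a giant component once the average number of links per node passes 1. A short sketch, assuming the networkx package is available (graph size and probabilities are arbitrary choices).

import networkx as nx

n = 2000
print("avg degree   largest component fraction")
for avg_deg in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0):
    p = avg_deg / (n - 1)                      # edge probability for G(n, p)
    G = nx.erdos_renyi_graph(n, p, seed=0)
    giant = max(nx.connected_components(G), key=len)
    print(f"{avg_deg:10.1f}   {len(giant) / n:.3f}")
# Below an average degree of 1 the biggest cluster stays tiny; just above 1
# a single component suddenly swallows a large fraction of all nodes --
# the same kind of threshold behavior behind "tipping point" talk.
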
  • by DaBj ( 168491 ) <dabj@noSpaM.dabj.net> on Thursday January 23, 2003 @12:37PM (#5143655) Homepage Journal
    We know how people act individually, and yet we can't extrapolate the behavior of entire societies from this.
    ...Asimov argues (yes, I know it's "just" SciFi) that you need an overwhelmingly large number of "individuals" to extrapolate the behaviour of "societies", and you don't even have to know how the individuals act individually.

    I agree with him; we knew how the solar system (society) worked long before we knew how atoms (individuals) worked.

    You cannot use the knowledge of individuals to analyze society, just as you cannot use the knowledge of society to analyze individuals.
    If you want to know how society works, study society, not individuals.

    These are just my opinions though.

    (Don't call me redundant if somebody else wrote something similar while I wrote this =) )
    • Your suggestion is interesting:

      You cannot use the knowledge of individuals to analyze society, just as you cannot use the knowledge of society to analyze individuals.

      It's also -- forgive me for saying -- a little old-fashioned. It implies that there's a complete break between the individual and the society of which that individual is a part. One must have nothing to do with the other. Given that every action by every individual has an impact on the society (however small), this seems unlikely. Like air in a room, the overall behaviour of the system will not be based on the behaviour of one gas molecule, but the system is still just an aggregation of gas molecules.

      Might well be that it's of more practical value to study the two separately. Going from mass and electrical charge to "how to build a nice car" might be a long and twisty road. It's probably easier to model it with math that might be a little crass, but gets the job done. But it doesn't mean mass and charge have nothing to do with what the car is made of.

      • Might well be that it's of more practical value to study the two separately.

        Maybe I should have been clearer in my statement saying "You cannot use...".

        The complexity that arises from trying to analyze a "macro situation" with a "micro theory" becomes so unwieldy and overwhelming that it is more practical to pretend that there is an uncrossable barrier between the two.
        • The complexity that arises from trying to analyze a "macro situation" with a "micro theory" becomes so unwieldy and overwhelming that it is more practical to pretend that there is an uncrossable barrier between the two.

          But what if we discovered general principles which would allow us to easily extrapolate macro behavior from known micro behavior? It would cause a revolution in all branches of science.

          My personal opinion is that this particular line of research will be like fractals, NP-completeness, complexity, etc in that it will make a big splash, but in the end will only make a small contribution to our total knowledge of the universe. But you never know.

          TTFN
      • Funnily enough, there's still this unresolved break between quantum theory and relativity... so who knows? He might have a valid point :)
    • ...Asimov argues (yes I know it's "just" SciFi) that you need an overwhelmingly large amount of "individuals" to extrapolate the behaviour of "societies", and you don't even have to know how the individuals act individually. [...] I agree with him,

      With all due respect to Asimov (who I don't think believed it himself), the theory is a load of crap and really is just fantasy. History proves over and over that single individuals can make a world-changing difference. Would the Mongols have taken over Asia without Genghis Khan? Doubtful. Would Europe have been carved up the way it was without Hitler? Again, doubtful.

      And hell, what if Lincoln had not been elected President? We might have TWO "United States of Americas" occupying our current continent. I can't even imagine what the world would be like with a divided US. And how long would it have taken to free the slaves in the Southern US?

      I mean, you can go on and on. What if Lee Harvey Oswald had missed? What if what's-his-name hadn't been assassinated, setting off WWI? What if Gandhi hadn't been born?

      • Asimov suspected (in the introduction to one of his books?) that it was chaos that would probably be the undoing of 'psychohistory', i.e. the individuals you mentioned would have a 'butterfly effect' on history. I have just ordered the book, and one of the things I am interested in is whether the science of networks offers some insights that allow you to say something about these kinds of systems without always being subject to "but chaos means we can't say."
      • I totally agree with you when you say, History proves over and over that single individuals can make a world-changing difference. But --

        > And hell, what if Lincoln had not been elected President? ..
        > What if Ghandi hadn't been born?

        The truth is, no one knows. Just as psychohistory was largely statistical, human societies are non-linear. Individual humans ("heroes") do come in, do act as inflection points -- but it is not as if other inflection points could not have existed.

        Lincoln's opponent could have risen to the occasion as well. Many historians argue that India would have become free, Gandhi or not, because Britain was much too weak after WWII to deal with the "restive natives" (not all Indians were non-violent, a good many that were sentenced were called "seditionists" then, and almost certainly would be called freedom-fighters^W terrorists today).

        Social behavior in the 1900s Middle East was pretty predictable: who in the middle of it would have predicted Kemal Atatürk? Yet the really interesting thing is, given the almost-repeating patterns common to non-linear systems, what will Turkey evolve into a hundred years from now? (e.g. Now a pro-Islamic party has been voted in there. Is this a major inflection or something that'll be damped out in no time? Again, no one knows...)

        > With all due respect to Asimov (who I don't think believed it himself

        You are right, Asimov used it simply as one of the building blocks of a good yarn. Like the 3 laws of robotics. (In fact, Forward the Foundation, written in the late 80s (or early 90s?), contains references to 'achaotic equations' he had to dream up because he could not bring himself to ignore the growing body of evidence that the future is essentially non-linear.)

      • History proves over and over that single individuals can make a world-changing difference.

        If you were to read carefully, Hari makes the point that an individual makes no difference in the history of the Empire. The role that the individual plays in history can be predicted, but the individual who fills it (and when it's filled) doesn't really matter.

        Even Hari was just filling a role. In Hari's case, he was selectively bred for over a millennium by the Robots. His role was important, but exactly why and what effect he would have, Daneel couldn't fathom. They bred him because long-term planning for humanity was just beyond the grasp of Robots.

        Had Hitler never arrived, maybe Stalin would have gone rampaging through Europe. If Lincoln had not been president, another Unionist might have fallen into his place. If Lee Harvey Oswald had missed, maybe JFK would have died of drug overdoses. If Caesar hadn't been born, perhaps another with his ambition would have eventually become Dictator and Emperor. And so on.

        Hari's larger point being that Stalin, Hitler, Lincoln, and Gandhi would all have been unimportant anyway. Even a figure whose impact was as dramatic as The Mule didn't really throw off the predictions of psychohistory by much. The Plan compensated even for him -- and he was completely unforeseen.

        100 or 200 years is a myopic view of history. Larger factors like nationalism, resource pressures, population expansion, steady trends in technology, industrialism, and so on drive history -- not individuals. Taking the longer view, psychohistory (acting in hindsight) may have predicted that near 100 BCE a large seafaring empire covering the entire Mediterranean, with effective military technology, efficient government structure, rigid social classes, and a strong military influence over government would have arisen. It might not have been Roman, but it should have happened anyway. It might have predicted factors which would cause the Empire to fall, and the feudalism which took hold shortly thereafter; inevitably the nation-states that arose out of the feudalism would colonize to relieve resource pressures; and so on...
        • Asimov speculated that it took more than a suitable person to fill roles like Lincoln and Caesar. In his books, it took entire societies being ready for and needful of such figureheads.

          Interestingly enough, Seldon's own plan became hypocritical after the 2nd Foundationers took over "management" of history in Foundation and Empire. According to Seldon's published theories, there needed to be no management for history to conclude the formation of a 2nd Empire. After the Plan was re-established (the Mule and aftermath), the 2nd Foundationers would still meddle in the affairs of mortals.

          The Mule DID throw the plan completely off, and Seldon's plan and even the 2nd Foundation would have been unable to survive the Mule intact if it weren't for the planet consciousness of Gaia stepping into the picture in Foundation's Edge (the fourth book of the "Trilogy"), much like the 2nd Foundationers were trying to do themselves.

          Actually, I think Foundation's Edge was Asimov's revision of his view of the world of Foundation. In the trilogy, his books pointed toward a class-based rulership of the masses by selected elite mind-bending super-men, not totally unlike George Lucas's universe, actually. Foundation's Edge revised that to become a collective consciousness where everyone shares in the control through their needs and their connection with said "collective consciousness". To wit: Gaia for a single planet and Galaxia as the entire galaxy (not just the people) being psychically linked.

          And all in all, Asimov's entire message from all his Robot books and Foundation books and everything else was: "Can't we all just get along?"

        • 100 or 200 years is a myopic view of history. Larger factors like nationalism, resource pressures, population expansion, steady trends in technology, industrialism, and so on drive history -- not individuals. Taking the longer view, psychohistory (acting in hindsight) may have predicted that near 100 BCE a large seafaring empire covering the entire Mediterranean, with effective military technology, efficient government structure, rigid social classes, and a strong military influence over government would have arisen.

          Well, we seem to agree that on the time scales of hundreds if not thousands of years the history of man is essentially chaotic and thus impossible to predict.

          Personally I think this is true for larger time scales as well. Events like nuclear war, or Earth colliding with a big comet could change everything and cannot be predicted.

          For anyone who thinks differently, the burden of proof is on you to make correct predictions for tens of thousands of years into the future. Saying that in hindsight something was inevitable will not do.

          Tor
          • I predict that population will increase, technology will become more refined, space travel will become more of a focus as the result of both of those. Oh yeah, and I firmly predict an ice age in the next 10K years.

            Okay. Predictions are in. It's *your* job to prove me wrong. Good luck. :)
            • I predict that population will increase, technology will become more refined, space travel will become more of a focus as the result of both of those. Oh yeah, and I firmly predict an ice age in the next 10K years

              Seems reasonable. But not inevitable; populations are decreasing in industrial countries and could start doing so elsewhere as well. And then we have my earlier examples of comet collisions and nuclear war; they could happen and put an end to it all.

              Okay. Predictions are in. It's *your* job to prove me wrong. Good luck. :)

              I think you missed my point; I don't claim to be able to make these predictions - I think it is impossible. The only way to tell is to wait a couple of thousand years and see what happens. :)

              If you consistently make correct and useful predictions then I will start to listen. Call me a skeptic, but getting it right once in a very general prediction is not enough for me.

              Tor
      • History proves over and over that single individuals can make a world-changing difference.

        It wasn't the individuals who made the difference, per se. It was the society reacting to those individuals.

        A basic tenet of sociology is that humans seek leaders, heroes, and villains of various kinds -- people who cause a large segment of the society to engage in a common social behavior.

        What Asimov was saying is that we don't have to understand Oswald's specifics to understand that under given conditions, individuals attack their leaders; and, in fact, we can better predict social behavior by ignoring Oswald's specifics and just looking for other patterns in society.

        If you don't believe this premise, ask any advertiser who puts billions of dollars into researching aggregate behavior and demographics. That research always pays for itself....

    • This is true from a practical standpoint, but not necessarily true from a theoretical standpoint. It is possible, given enough knowledge, and barring too much non-deterministic misbehavior from quantum mechanics, to derive the behaviors of larger systems from their constituent parts. However, it takes a lot of very careful calculations, and thus is usually infeasible, and is often prone to error. Thus, it's only used where it's both really absolutely necessary and where there's a lot of money available to fund the simulations. Thus only a few isolated things -- explosions of nuclear weapons, for example -- are simulated by individually modeling the constituent microscopic parts.

      Now with societies, this might be the only way to go. We don't really have enough examples of societies to be able to glean at a macroscopic level the abstract features of societies while not being tripped up by merely accidental and inconsequential features. Thus modeling individual behavior might give more insights. However, constructing such a model accurately is likely to be even more difficult than constructing an accurate model of a nuclear explosion, since people tend to behave in less predictable ways than atoms and electrons do.

    • I agree with him; we knew how the solar system (society) worked long before we knew how atoms (individuals) worked.

      You cannot use the knowledge of individuals to analyze society, just as you cannot use the knowledge of society to analyze individuals.
      If you want to know how society works, study society, not individuals.


      Ugh. Please intelligently explain to me how a system of components following well understood rules cannot be studied based on the behavior of the components. If you care to mention how cases like the 3-body problem introduce chaos into the behavior of the system and make it impossible to predict based on observation, please tell me why this difficulty makes it not intellectually worthwhile to study an observed cross-discipline phenomenon. Also, what should be done in cases like physics where the extremely accurate but incompatible theories of quantum mechanics and general relativity describe the macro and micro scale?
    • I agree that from an empirical perspective it makes sense to study large systems as they are to understand them on a basic level. I disagree that ignoring the individual constituents is a wise course of action.

      In a thermodynamics course I am taking we don't use the conventional approach - starting from large empirical observations and generalizing empirically. Instead, we start with basic assumptions from quantum mechanics and build upwards to show that thermal physics has a mathematical foundation. This is an example of a well-understood system.

      It is important to look at general principles to get preliminary clues of its behavior, but if you can't connect it to the individuals I don't think you can truly say you understand it.

      While many systems exhibit complex 'emergent behavior', I think the key to understanding large systems is building them up from smaller ones.

    • "We know how people act individually,"...

      Do "we"? Seems that psychologists and psychiatrists would be out of work.
      ..."and yet we can't extrapolate the behavior of entire societies from this."

      We can't? Crowd behavior may be better understood than individual behavior. Example: an individual's response to "fire!" is less predictable than a crowd in a theater.
  • by Dr. Sinistaar ( 600393 ) on Thursday January 23, 2003 @12:41PM (#5143686)
    "We know how people act individually, and yet we can't extrapolate the behavior of entire societies from this."

    There just happens to be an entire discipline dedicated to exploring the behavior of entire societies. It's called sociology.

    Within sociology, there's an entire subfield that's been studying social networks for years. Things like how information is spread, how people get jobs, and how diseases like AIDS spread have all been explored using social network analysis.

    If you want a mathematical description of "tipping points", take a look at Mark Granovetter's work on threshold models of collective behavior. Gladwell's book is based on his work (though he only references Granovetter's work on how people get jobs).
      • If you want a mathematical description of "tipping points", take a look at Mark Granovetter's work on threshold models of collective behavior. Gladwell's book is based on his work (though he only references Granovetter's work on how people get jobs).

      could you give a more thorough description of sociological threshold models? thanks
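
Not the parent poster, but here is a minimal sketch of the kind of threshold model Granovetter described, using his classic 100-person riot illustration: each person joins once the number already participating reaches their personal threshold, and a tiny change in the threshold distribution can flip the outcome from a lone instigator to a full cascade. (The numbers are the standard textbook example, not anything from Gladwell.)

def cascade_size(thresholds):
    """Granovetter-style threshold model: a person participates once the
    number already participating is >= their threshold. Iterate to a fixed point."""
    participating = 0
    while True:
        new_count = sum(1 for t in thresholds if t <= participating)
        if new_count == participating:
            return participating
        participating = new_count

# Classic example: 100 people with thresholds 0, 1, 2, ..., 99.
uniform = list(range(100))
print("thresholds 0..99:     ", cascade_size(uniform), "people join")

# Change one person: replace the threshold-1 person with a threshold-2 person.
tweaked = [0, 2] + list(range(2, 100))
print("one threshold changed:", cascade_size(tweaked), "people join")
# With 0,1,2,...: the instigator (threshold 0) starts, which triggers the
# threshold-1 person, which triggers threshold 2, and so on -- everyone joins.
# Remove the threshold-1 person and the chain stops at 1: two almost identical
# populations produce completely different aggregate outcomes.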

    • Yes but...

      I took two Sociology courses at Humber College and York University and both times my teachers and advanced students were ardent Marxists.

      There was no math to speak of either. I think Sociology is a bogus discipline designed to get communists into our school system.
      • There was no math to speak of either. I think Sociology is a bogus discipline designed to get communists into our school system.
        Might I suggest perusing the Journal of Mathematical Sociology or Social Networks for a different view of the field? While some self-described "sociologists" neatly fit your description, those of us doing actual social science would appreciate not being lumped in with the rest....

        -Carter

    • There just happens to be an entire discipline dedicated to exploring the behavior of entire societies. It's called sociology.

      And you've just agreed with the author. Sociology doesn't study individuals. It studies flocks and swarms. Sociology does not study large numbers of individuals and then try to predict how those individuals will react socially. Instead, it looks for trends in societal behavior without much weight being given to the individual units.

    • "We know how people act individually, and yet we can't extrapolate the behavior of entire societies from this."

      There just happens to be an entire discipline dedicated to exploring the behavior of entire societies. It's called sociology.
      ... and the problem with sociology is that, in fact, we don't know how people act individually. Sure, we can quite accurately predict the behaviour of a single human being in a given environment, but that is mostly based on previous experience. In an arbitrary setting, it gets worse. IMO, the big problem lies with the fact that there is very little reliable research on the separation of properties acquired through environment vs. those acquired through heredity (and other natural factors, such as sex and even race). This opens up heavy speculation, and leads to the possibility of constructing most elaborate theories claiming just about anything. Examples are, amongst others, "queer" theory and radical feminist theory (OK, this is gonna get me flamed, but what the heck (disclaimer: I am neither a sexist nor a homophobe; in fact I have a good friend who is openly gay, and as a liberal I support large parts of the feminist struggle)), both relying IMO on very little actual empirical science when it comes to the (therein) claimed properties of the individual, and thus the value of these theories could be debated.

      Sociology is full of these things. There is simply no one view of how we humans work by design, not to mention to what extent we are affected by external circumstances. One of the most significant splits lies in the idea of the "natural" behaviour of man. There are still a great many people, mainly in the social sciences, who believe in the ideas mainly formulated by Rousseau: that all the moral faults of man, all vice and egoism, can be blamed on society itself (an idea that I believe helped form the ideologies of the anarchist movements). Experiments (quite heartless experiments by today's standards, btw) conducted as early as the 1700s failed ignominiously to support this idea. (Example: a young native child was taken from his mother in a colony (I believe it must have been the island of Saint Barthélemy, the only real colony ever held by the Swedish crown) and taken to Stockholm to be brought up at the court without any moral guidance or rules of any kind. The idea was that if Rousseau's ideas were correct, he would grow up something of a saint with an unquenchable thirst for knowledge, moral enlightenment, etc. As the common sense of most people would dictate, the kid grew up a total pest, pulling evil little pranks on everybody in his surroundings, and eventually had to be sent home.)

      Gee. That was a long post. But this kind of topic really gets me going.
  • How? (Score:4, Funny)

    by jhouserizer ( 616566 ) on Thursday January 23, 2003 @12:42PM (#5143694) Homepage

    How does information spread through society?

    Rumors.

    • memes.
    • "How does information spread through society?"

      through Slashdot, of course.
    • Actually, when I said "rumors" was the way in which information spreads through society, it was both a joke and a real comment.

      I think if you want to understand patterns in a society, you need to understand how rumors work - especially the way "truth" changes (loss of info, gain of false info, exaggeration of certain points of view, etc.) as it is passed from person to person.

      Even in non-human groups, this is true - think about a herd of elk. If one elk standing on the outside thinks that it smells the presence of another animal in the area, it will snort and/or move into the herd a bit. This creates a ripple effect through the herd that may or may not trigger a stampede, based on the way the other elk interpret the actions of the first.

      I'm not a psychologist or anthropologist or behaviorist, but I think it's obvious that there can be no reasonable study of how a society acts without an understanding of how perceptions of the same information vary from person to person, and of how it is therefore 'retransmitted' to others.

      Anyway, I'd have to say that most if not all of what I know is complete hearsay, including what school teachers have taught me.

  • We know a great deal about how atoms interact, but we aren't so sure about how to combine them to make a 'big picture' of matter.

    Uh... you mean... uh... chemistry?

    'Cause if you're talking about some other big picture of matter, fill us in.
    • The validity of a scientific theory is measured by its ability to predict the behavior of the system under observation. The family of scientific theories that describe chemistry has excellent predictive ability over a wide range of behaviors; however, these theories offer very little predictive power when it comes to systems that spontaneously increase in organization (such as biological systems). 'Chemistry' can explain a great deal about the behavior of biological systems, but it doesn't offer much for describing that peculiar ability of biological systems to grow and increase in complexity.

      I assume that is the 'big picture' referred to in the post.
  • similarity between Bose-Einstein condensation and economic monopoly

    Please stop drawing analogies between socioeconomic politics and physics.

    Wasn't it enough that Darwinism was used to promote fascism and ultraliberal capitalism, and Einstein's relativity was used to promote moral relativism? All out of context, of course, but still bought by the people and - even worse - the politicians.

  • We don't know how to build large informational networks? I'd say the internet is a smashing success....
  • Snow Crash... (Score:4, Informative)

    by airrage ( 514164 ) on Thursday January 23, 2003 @12:53PM (#5143773) Homepage Journal
    I liked Stephenson's idea of information as a virus. The "tipping point" was when the virus had reached a critical mass and became part of the basic store of information. Some info-virii, like "Ford is better than Chevy", don't infect enough people to tip society one way or the other. Other virii, like "the Earth revolves around the Sun", have infected basically the entire planet, and as such are passed from generation to generation.

    I really don't think there is much complexity that can't be explained by the mere fact that we are all actually living on top of a Giant's head
  • by pVoid ( 607584 ) on Thursday January 23, 2003 @12:54PM (#5143781)
    Bose and Einstein are added to the black list of the OSS/linux zealot guild... </humour alt="for the impaired">
  • by Pig Hogger ( 10379 ) <(moc.liamg) (ta) (reggoh.gip)> on Thursday January 23, 2003 @12:54PM (#5143786) Journal
    We know how people act individually, and yet we can't extrapolate the behavior of entire societies from this.
    I guess it's time to invent psychohistory [psychohistory.com]... Where's Hari Seldon [google.com] when you need him?
  • The author seems to make a claim, then dwell on it for entirely too long -- boring you to tears. If the book were rewritten, I imagine it'd be half as long, if not shorter. Honestly, how long does it take to explain the Kevin Bacon theory? He seems to think it's profound that the "degrees of separation" are getting smaller as we become more interconnected. I appreciate the effort, but it's common friggin' sense. All in all? An interesting read if all your other books are finished and there's nothing on TV. Even in that scenario, I'd still probably end up reading (okay okay, looking at the pictures) in Wolfram's book instead ;)
    • The author seems to make a claim, then dwell on it for entirely too long -- boring you to tears. If the book was rewritten, I imagine it'd be half as long, if not shorter.

      I'm with you. This book should have been a 10 page paper. I kept thinking there was going to be a big revelation. But there wasn't. He just kept rehashing the same thing over:

      If you want to bring down the air traffic system, it would be more effective to take out a couple of big airports rather than a couple of small ones.

      If you want to take down the internet, it would be easier to do so by removing the more heavily trafficked nodes than the undertrafficked, small nodes.

      And on and on...

      The first couple of chapters were interesting, but even those were inflated with historical digressions.

      --t

      • If you want to bring down the air traffic system, it would be more effective to take out a couple of big airports rather than a couple of small ones.

        If you want to take down the internet, it would be easier to do so by removing the more heavily trafficked nodes than the undertrafficked, small nodes.

        If you want to take out the movie industry, it would be more effective to take out Kevin Bacon.
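
Joking aside, the hub-removal claim quoted above is cheap to check numerically: a scale-free network shrugs off random failures but degrades badly when its biggest hubs are removed, while a random (Erdős-Rényi) network of similar size and density treats the two kinds of removal about the same. A sketch assuming the networkx package is available; the sizes and the 5% removal fraction are arbitrary.

import random
import networkx as nx

def giant_fraction_after_removal(G, nodes_to_remove):
    """Fraction of the original nodes left in the largest connected component
    after deleting the given nodes."""
    H = G.copy()
    H.remove_nodes_from(nodes_to_remove)
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

n, frac = 3000, 0.05
k = int(n * frac)
rng = random.Random(0)
scale_free = nx.barabasi_albert_graph(n, 2, seed=1)        # scale-free, hubs and all
random_net = nx.erdos_renyi_graph(n, 4 / (n - 1), seed=1)  # similar average degree

for name, G in (("scale-free", scale_free), ("random", random_net)):
    hubs = [v for v, _ in sorted(G.degree, key=lambda x: x[1], reverse=True)[:k]]
    randoms = rng.sample(list(G.nodes), k)
    print(f"{name:11s}  remove top {k} hubs -> giant "
          f"{giant_fraction_after_removal(G, hubs):.2f}   "
          f"remove {k} random nodes -> giant "
          f"{giant_fraction_after_removal(G, randoms):.2f}")
# Typical outcome: the scale-free network loses far more of its giant
# component to the targeted attack than to random failure, while the
# random network barely distinguishes between the two.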

  • Related topics (Score:4, Interesting)

    by notfancy ( 113542 ) <matias@k-BALDWINbell.com minus author> on Thursday January 23, 2003 @12:59PM (#5143811) Homepage

    I won't do the research for you, but if you're interested, the preprints archive at LANL [lanl.gov] has a lot of relevant theory [lanl.gov]. Basically, the current research is trying to come up with a unified framework for so-called "phase transitions" in stochastic discrete processes. One of the most studied problems is the transition between "easy" and "hard" problems in 3-SAT [wikipedia.org] (three-satisfiability). Brian Hayes has a very readable article [sigmaxi.org] about this phenomenon, with references. The authority in this field seems to be Gabriel Istrate [lanl.gov].

    The emergence of the giant component in random networks is a mature field of research, of course pioneered by Erdös, and with players of the likes of Don Knuth [stanford.edu] and Doron Zeilberger.

    From a mathematical standpoint, Graph Theory per se is not really complicated; what actually is complicated is the asymptotic analysis of stochastic processes.

    HTH,
    Matías
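
For the curious, the easy/hard transition in random 3-SAT mentioned above is cheap to observe on toy instances: generate random 3-CNF formulas at different clause-to-variable ratios and watch the fraction that are satisfiable drop sharply somewhere near a ratio of 4.3 (the crossover is heavily smeared at these tiny sizes). A brute-force sketch; all sizes are deliberately small so it runs in seconds.

import itertools
import random

def random_3cnf(n_vars, n_clauses, rng):
    """Each clause picks 3 distinct variables, each negated with probability 1/2.
    Literals are encoded as +v / -v for variable v in 1..n_vars."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Brute force: try every assignment (fine for ~10 variables)."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
n_vars, trials = 10, 40
print("clauses/vars   fraction satisfiable")
for ratio in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
    n_clauses = int(ratio * n_vars)
    sat = sum(satisfiable(n_vars, random_3cnf(n_vars, n_clauses, rng))
              for _ in range(trials))
    print(f"{ratio:12.1f}   {sat / trials:.2f}")
# Low ratios: almost always satisfiable. High ratios: almost never.
# The sharpening of this crossover as n grows is the "phase transition"
# studied in the preprints mentioned above.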

    • Just off the top of my head here:

      If you had a "scale-free" network -- where, AFAIK, you have an unlimited number of independent computational nodes -- most NP problems would be trivial to solve in polynomial time. Imagine: just spawn a new node for each possible solution to the problem. Each node can check whether its solution is valid in polynomial time.

      This is similar to how genetic algorithms should be executed. Each packet of genetic information in such an algorithm is a point in a very large state space you're searching for optimal solutions in. Each mutation is a random jump around that state space, and the fitness function tries to seek a locally optimal point (a toy sketch follows at the end of this comment).

      Except in the 'real world' this algorithm is not executed linearly, but concurrently. Each animal is a concurrent node able to spawn an arbitrary number of new nodes.

      Your computational power then grows exponentially, not the problem. (Sounds like in mother Russia... eh, not worth it).

      How to factor large numbers into primes really quickly:
      advance nanotechnology to the point where your nano-PC is able to construct duplicates of itself when you issue a fork() command.

      Like I said, talking out of my ass.
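
      A toy version of the genetic-algorithm picture above, in Python; the bit-counting fitness function, population size, and mutation rate are illustrative choices, not anything from the book:

        import random

        GENOME_LEN, POP_SIZE = 40, 50
        MUTATION_RATE, GENERATIONS = 0.02, 100

        def fitness(genome):
            return sum(genome)                      # toy objective: count of 1-bits

        def mutate(genome):
            return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                    for bit in genome]

        def crossover(a, b):
            cut = random.randrange(1, GENOME_LEN)   # single-point crossover
            return a[:cut] + b[cut:]

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]

        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            parents = population[:POP_SIZE // 2]    # keep the fitter half
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        print("best fitness:", max(map(fitness, population)), "of", GENOME_LEN)
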
  • Excellent summary (Score:3, Insightful)

    by stratjakt ( 596332 ) on Thursday January 23, 2003 @12:59PM (#5143814) Journal
    BTW, what's this book about?
  • by Anonymous Coward
    If "Linked" sounds interesting, check out "Six Degrees; the Science of a Connected Age" by Duncan J Watts. Watts covers the nuts-&-bolts of fractal networks much better than Barabasi, plus he's a lot less conceited & a better read to boot.

    S/N:R
  • I found a terrific book on the general ideas of network connectivity and graph theory -- very creative, written for smart readers who are comfortable with some math. He's got interesting ideas that can be relevant to many fields: biology, P2P apps, distributed trust systems, DNS, and more. Highly recommended.

    Amazon link [amazon.com]

    From the Amazon reviews:

    Duncan Watts uses this intriguing phenomenon--colloquially called "six degrees of separation"--as a prelude to a more general exploration: under what conditions can a small world arise in any kind of network?

    The networks of this story are everywhere: the brain is a network of neurons; organisations are people networks; the global economy is a network of national economies, which are networks of markets, which are in turn networks of interacting producers and consumers.

    Food webs, ecosystems, and the Internet can all be represented as networks, as can strategies for solving a problem, topics in a conversation, and even words in a language. Many of these networks, the author claims, will turn out to be small worlds.

  • If there were to be a big breakthrough in this field, then while much of it would be positive (and much of it is), there is also a downside. If some of the sciences that evolved from this were useful in understanding how large groups of people behave, then this extra information would be used to control us.
    • Many scale-free distributions in user patterns have already been discovered (i.e. web pages against user visits -- a few popular sites like Amazon [amazon.com] and Ebay [ebay.com], but lots of mediocre web sites like mine [youdzone.com]). You generally get a scale-free distribution of transactions any time people interact with one another in a way that they feel is advantageous (preferential). Even more interesting is when web usage becomes content, which becomes web usage, which becomes... etc. -- such as Amazon's "Customers who bought this book also bought", or when Google's PageRank becomes self-reinforcing over time.
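
      A toy "rich get richer" sketch of how such a heavy-tailed distribution can emerge (roughly in the spirit of preferential attachment; the node count is an arbitrary choice): each new node links to an existing node with probability proportional to that node's current degree.

        import random
        from collections import Counter

        N = 100_000
        targets = [0, 1]                 # endpoints of every edge so far; picking
        degree = Counter({0: 1, 1: 1})   # uniformly from it samples by degree

        for new_node in range(2, N):
            old = random.choice(targets)         # preferential attachment
            targets += [new_node, old]
            degree[new_node] += 1
            degree[old] += 1

        # Expect a heavy (power-law-like) tail rather than the tight spread
        # of a purely random graph.
        dist = Counter(degree.values())
        for k in sorted(dist)[:10]:
            print(f"degree {k}: {dist[k]} nodes")
        print("max degree:", max(degree.values()))
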
  • Great Analogy (Score:1, Interesting)

    by Anonymous Coward
    Kind of amusing, the individual-and-society analogy to computers and large networks. If you ask a person about something when no one is around, you will get one answer, but put him around a group of his peers and his answer changes because of peer pressure. Kind of amusing how this can be placed onto computers as well. You have dominant members of society which form and shape it, just like you do in the network community (servers).
  • I'm reading this at the moment. I'm a bit of a junkie for universality and complexity.

    Most of these books are journalistic endeavours indulging in overcooked analogies. They all drivel on -- it's like revelatory religious evangelism, as each book includes the phrase "and suddenly I looked at X, understanding its full Y for the first time" -- be it chaos, complexity, self-organisation, Wolfram, Barabasi...

    Curious that these books, including the damn Wolfram tome, typically just rephrase computer science. It should have been called information science, and then peeps might realise that it's fundamental.

    Read 'em by all means but keep your scepticism until they actually say something useful.
    Zu
  • Is there a mathematical description of tipping points? Easy:

    - bill x .15 for good service
    - bill x .20 for great service
    - $.01 for crappy service

  • The square of 2,532 is actually 6,411,024. The cube is 16,232,712,768.
  • Is there a mathematical description of tipping points?

    Yep, catastrophe theory

  • The book is good, but it doesn't get into the really hard stuff - how does something get from point A to point B in a loosely coupled network, or any network?

    His description of the neuron network in the brain, for example, talks about how some neurons link some parts of the brain with others, and that random links help the brain (and networks) function. But nowhere does he say how a signal actually gets from point A to point B - just that the loose coupling and random connections between brain areas make everything closer together.

    Maybe he doesn't know? Maybe nobody knows? But the whole point of the book is "connect tightly at the micro level, connect the micro groups with their immediate neighbors, and connect each micro grouping randomly with other non-local micro groupings for better connectivity."
  • We understand how an individual computer works, but how to build large informational networks with computers is another thing entirely.

    Have you heard of this "internet" thing yet? Al Gore created it and all your friends are doing it.

    We know how people act individually, and yet we can't extrapolate the behavior of entire societies from this.

    It's called "demographics". Yep, you're part of it.

    How does information spread through society?

    People have been reading and writing for centuries now.

    One thing is abundantly clear: the more we know about how these things work, the better we'll be able to curb DDOS attacks, stop disease, and control economic failures

    Alright! Now that's my idea of a good time... I'd hate to come down with the flu, lose my stocks, and suffer a DDOS attack all on the same day. Oh, AIDS too? Even better.

  • by briancnorton ( 586947 ) on Thursday January 23, 2003 @03:09PM (#5144934) Homepage
    Calling it a "science" is sort of a misnomer. Sure, there are people studying networks scientifically, but Linked is a lot of metaphors and comparisons rather than a quantitative or modelling approach. It runs in the same vein as Wolfram's ANKOS, in that they are both missing critical intermediaries needed to qualify them (to me) as a science. My studies have been in geography, which presents a whole different level of network behavior and construction. The book is good, but a little light on science.
  • by Anonymous Coward
    We do? I don't think so. In order to extrapolate societal behaviour, one needs to completely subscribe to human behaviouralism. Networks of humans are less deterministic than their particles precisely because human behaviour isn't predictable. Our current error in describing human behaviour quickly compounds when describing several or many interacting humans.

    Our lack of progress in sociology is a testament to our lack of understanding of the individual.
  • by Anonymous Coward on Thursday January 23, 2003 @04:23PM (#5145507)

    Here's a couple of examples of networks that exhibit a scale-free topology.

    1. WikiWiki.

      This [reseaucitoyen.be] shows that Wiki sites are characterized by the Pareto distribution (a.k.a. power law distribution).

    2. RPM dependency graphs [slashdot.org].

      Out of curiosity, I wrote a quick script to compute the distribution of the number of links in the RPM dependency graph. It does seem to follow the Pareto distribution.

    3. Slashdot

      Although I have no easy way of verifying this, my gut feeling is that the network of Slashdot users is also scale-free, if we define the notion of a link between two users as follows. User bobdc is linked to user bugbear if bobdc has replied to any of bugbear's posts (or submissions) at least once.

      This definition allows us to introduce the notion of a CmdrTaco number, similar to the Kevin Bacon number [virginia.edu]. Specifically, user Joe Schmoe has a CmdrTaco number of 1 if CmdrTaco has replied to any of Joe's comments. If Joe responded to wuliao's post, then wuliao has a CmdrTaco number of no greater than 2, and so on (a toy sketch follows at the end of this comment).

    Pareto distributions are pretty common. For example, the number of downloads on SourceForge follows the Pareto distribution [cam.ac.uk].

    This page [innovationwatch.com] provides a fairly comprehensive list of further reading on the subject.
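
    A toy sketch of that CmdrTaco-number idea -- the reply data below is entirely made up for illustration; real numbers would need the actual Slashdot comment database. Build the undirected link set and breadth-first search outward from CmdrTaco:

      from collections import deque

      # Hypothetical "who replied to whom" data, for illustration only.
      replies = {
          "CmdrTaco": {"bobdc", "stratjakt"},
          "bobdc": {"bugbear"},
          "bugbear": {"wuliao"},
          "wuliao": set(),
          "stratjakt": {"bugbear"},
      }

      # A link exists if either user has ever replied to the other.
      links = {u: set() for u in replies}
      for u, vs in replies.items():
          for v in vs:
              links[u].add(v)
              links[v].add(u)

      # Breadth-first search from CmdrTaco gives each user's CmdrTaco number.
      dist = {"CmdrTaco": 0}
      queue = deque(["CmdrTaco"])
      while queue:
          u = queue.popleft()
          for v in links[u]:
              if v not in dist:
                  dist[v] = dist[u] + 1
                  queue.append(v)

      for user, d in sorted(dist.items(), key=lambda kv: kv[1]):
          print(f"{user}: CmdrTaco number {d}")
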

  • This sounds fascinating. Might have to add it to my shelf.

    A closely related field, where there is probably lots of overlap, is Chaos Theory.
    For a good starter on that I recommend "Chaos" by James Gleick, a most excellent book. It both describes chaos theory extremely well and is engaging and readable.
    Gleick's site is here:
    http://www.around.com/ [around.com]
    His page on the book is here:
    http://www.around.com/chaos.html [around.com]
    And here is an Amazon.com link:
    http://www.amazon.com/exec/obidos/tg/detail/-/0140092501/qid=1043352869 [amazon.com]

    Happy reading and thinking.

  • by Carter Butts ( 245607 ) on Thursday January 23, 2003 @05:35PM (#5146201)
    Contrary to what the author of Linked would have you believe, the scientific study of social networks has been around since the late 1920s/early 1930s. (Some of the very early work was a bit loopy -- check out Jacob Moreno's Who Shall Survive? for an example -- but the field rapidly progressed beyond this stage.) The first real network journal, Sociometry, has been around since the late 1930s (longer than Barabasi has been alive, I expect), and today its mantle is held by Social Networks; that's where you should look for current research in the field. Empirical, theoretical, and methodological work on social networks is also regularly published in the Journal of Mathematical Sociology, the Journal of Mathematical Psychology, the American Journal of Sociology, Sociological Methods and Research, Sociological Methodology, and Social Forces (among others). It turns out that we know quite a lot more about networks than Barabasi suggests in his book, and indeed the hub/connectivity issues on which his book focuses are only a very limited part of the overall picture.

    If you're interested in learning more about the large body of literature in this area, be sure to visit the INSNA [www.sfu.ca] web site. I think you'll find it much more informative than reading popular books on the subject.

    -Carter

  • Try to understand the theory of the cosmic gravity compression wave, and it explains the dynamics of the many absolutely perfectly. Unfortunately it is still rare...

    regards
    Gerald
  • There's another book on the subject of networking... Nexus: The Science of Small Worlds. This book basically talks about the architecture of efficiently connected networks and topics such as the "six degrees of separation" phenomenon -- the idea that we're all connected by about six degrees (you know somebody who knows somebody...).

    The simple answer is that networks that are randomly connected tend to have small degree separation. But in reality, most networks (social etc.) have an organizing principle (you tend to know more people in your area or interest group than totally random people). Any attempt to mathematically model a network with an organizing principle quickly reveals that there are many points with huge numbers of degrees between them. As it turns out, introducing just a small number of random connections (small enough that the network is still statistically ordered by the principle and not considered "random" overall) all of a sudden shrinks the number of degrees.

    What's cool about this is that examples of such networks turn up in brains, swarms of lightning bugs that synch their flashes... etc. I'm sure that we are reaching a tipping point in the science of tipping points!
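
    A small sketch of that rewiring effect (the network size and number of shortcuts are arbitrary choices, not taken from the book): compare the average path length of an ordered ring lattice before and after adding a handful of random shortcuts.

      import random
      from collections import deque

      N, K = 1000, 4    # 1000 nodes on a ring, each tied to its K nearest neighbours

      def ring_lattice(n, k):
          nbrs = {i: set() for i in range(n)}
          for i in range(n):
              for d in range(1, k // 2 + 1):
                  nbrs[i].add((i + d) % n)
                  nbrs[(i + d) % n].add(i)
          return nbrs

      def avg_path_length(nbrs, samples=50):
          # Average BFS distance from a sample of source nodes.
          total, count = 0, 0
          for src in random.sample(list(nbrs), samples):
              dist, q = {src: 0}, deque([src])
              while q:
                  u = q.popleft()
                  for v in nbrs[u]:
                      if v not in dist:
                          dist[v] = dist[u] + 1
                          q.append(v)
              total += sum(dist.values())
              count += len(dist) - 1
          return total / count

      g = ring_lattice(N, K)
      print("ordered lattice:", round(avg_path_length(g), 1))

      for _ in range(20):                # just 20 random shortcuts
          a, b = random.sample(range(N), 2)
          g[a].add(b)
          g[b].add(a)
      print("plus 20 shortcuts:", round(avg_path_length(g), 1))
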
  • 'Give me a four digit number,' he said.

    '2,532,' came the wide-eyed boy's reply . . .

    'The square of it is 6,441,024,' he continued. 'Sorry, I am getting old and I cannot tell you the cube.'"


    Actually the square of it is 6,411,024 :)
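
    A quick interpreter check, for the doubly pedantic:

      >>> 2532 ** 2
      6411024
      >>> 2532 ** 3
      16232712768
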
  • It seems that this book is trying to show people what the government has known for years and what big business is becoming experts on.

    If you know how the network works, you can make very high level decisions based on calculated cause and effect. For example, what might seem like a bad decision at first may eventually give you the outcome you desire.

    In a world where all avenues seem tapped out and it's hard to get ahead, I believe networks are one of the keys to breaking through.
