Too Many Connections Weaken Networks (48 comments)

itwbennett writes "Conventional wisdom holds that more connections make networks more resilient, but a team of mathematicians at UC Davis has found that this is only true up to a point. The team built a model to determine the ideal number of cross-network connections. 'There are some benefits to opening connections to another network. When your network is under stress, the neighboring network can help you out. But in some cases, the neighboring network can be volatile and make your problems worse. There is a trade-off,' said researcher Charles Brummitt. 'We are trying to measure this trade-off and find what amount of interdependence among different networks would minimize the risk of large, spreading failures.' Brummitt's team published its work (PDF) in the Proceedings of the National Academy of Sciences."
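
For the curious, here is a minimal toy sketch, in Python, of the kind of experiment the summary describes. It is emphatically not the authors' model: the two random clusters, the capacity and load numbers, and the load-redistribution rule are made-up illustrations. It only shows how one might measure cascade size as a function of the number of cross-network links.

    # Toy sketch: two clusters of nodes, each node with a fixed capacity.
    # When a node fails, its load is pushed onto its live neighbours; any
    # neighbour pushed past capacity fails too, and the cascade spreads.
    # We sweep the number of links bridging the clusters and record how
    # large the cascades get. All parameter values are arbitrary.
    import random
    from statistics import mean

    def make_clusters(n, p_internal, n_bridges, rng):
        """Two clusters of n nodes each: random internal edges plus bridge edges."""
        edges = set()
        for offset in (0, n):                  # cluster A: 0..n-1, cluster B: n..2n-1
            for i in range(n):
                for j in range(i + 1, n):
                    if rng.random() < p_internal:
                        edges.add((offset + i, offset + j))
        for _ in range(n_bridges):             # cross-network connections
            edges.add((rng.randrange(n), n + rng.randrange(n)))
        nbrs = {v: set() for v in range(2 * n)}
        for a, b in edges:
            nbrs[a].add(b)
            nbrs[b].add(a)
        return nbrs

    def cascade_size(nbrs, capacity, init_load, rng):
        """Fail one random node, redistribute its load, count how many nodes fail."""
        load = {v: init_load for v in nbrs}
        failed = set()
        frontier = [rng.choice(sorted(nbrs))]
        while frontier:
            v = frontier.pop()
            if v in failed:
                continue
            failed.add(v)
            alive = [u for u in nbrs[v] if u not in failed]
            for u in alive:                    # shed the failed node's load onto live neighbours
                load[u] += load[v] / len(alive)
                if load[u] > capacity:
                    frontier.append(u)
        return len(failed)

    rng = random.Random(0)
    for bridges in (0, 2, 8, 32, 128):
        sizes = [cascade_size(make_clusters(50, 0.08, bridges, rng),
                              capacity=1.6, init_load=1.0, rng=rng)
                 for _ in range(200)]
        print(f"bridge links: {bridges:3d}   mean cascade size: {mean(sizes):5.1f}")
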
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • HAHA (Score:3, Funny)

    by Anonymous Coward on Saturday February 25, 2012 @02:35PM (#39159767)

    pnas... lol.

  • If the neighboring connections use your connection more than you use theirs, it is weakening your connection.
    • If the neighboring connections use your connection more than you use theirs, it is weakening your connection.

      No, not necessarily. A use may be relatively costless to me, but the fact that I am a node that generates new connections may increase my value to others, for example.

  • The Goldilocks network:
    "As a first theoretical step, it's very nice work," said Cris Moore, a professor in the computer science department at the University of New Mexico. Moore was not involved in the project. "They found a sweet spot in the middle," between too much connectivity and not enough, he said. "If you have some interconnection between clusters but not too much, then [the clusters] can help each other bear a load, without causing avalanches [of work] sloshing back and forth."
  • Primitive (Score:5, Interesting)

    by gilgongo ( 57446 ) on Saturday February 25, 2012 @02:52PM (#39159859) Homepage Journal

    I'm sure that in 100 years' time, people will look back on our understanding of networks, information and culture in the same way as we look back on people's understanding of the body's nervous or endocrine systems 100 years before now. This study hints at our lack of knowledge about what the hell is happening.

    • It would be interesting to apply what they learned here to the power grid.
      • Re:Primitive (Score:5, Informative)

        by gl4ss ( 559668 ) on Saturday February 25, 2012 @03:04PM (#39159921) Homepage Journal

        The study was about power grids, where it makes a bit more sense. Of course the same applies in data networks, though with data it actually matters where a certain data packet goes: data consumers don't just want _any_ data, they need specific data, whereas with power you don't much care where it actually came from.

        Still, you gotta wonder: in a real-world context you'd need to think about what kind of real mechanisms are used for making the new connections, and what safeties are supposed to stop cascades from spreading.

        • Re:Primitive (Score:5, Interesting)

          by Anonymous Coward on Saturday February 25, 2012 @03:39PM (#39160091)

          Cascades on power networks happen when you suddenly lose a source without rejecting the drain. I.e., the load remains high, but suddenly the flow required to supply that load has to shift because a link went down due to failure or overload.

          There is a protection against this: it is called selective load rejection. You shut off large groups of customers, plain and simple. And you do it very, very fast. Then you reroute the network to make sure power is going to be able to flow over links that will not overload, and do a staggered reconnect of the load you rejected.

          That costs BIG $$$$ (in fines, lost revenue, and indirect damage due to brown-outs), and there is a silent war to try to get someone ELSE to reject load instead of your network. The only things that balance it are extremely steep fines among the power networks themselves and, in countries that are not nasty jokes, the government regulatory body.

          I am not exactly sure how to move that to a BGP4, IS-IS or OSPFv2/v3 network, where instead of a sink pressure, you have a source pressure.
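
          For illustration only, here is a minimal sketch of that shed-then-reconnect sequence in Python, on a made-up grid model. The Grid class, the greedy "largest group first" shedding rule, and all the MW figures are assumptions for the example, not how any real operator or protection scheme works.

            # Sketch of selective load rejection: when a link trips, shed whole
            # customer groups fast until the remaining load fits on the surviving
            # links, then reconnect the shed groups in stages once capacity returns.
            from dataclasses import dataclass, field

            @dataclass
            class Grid:
                link_capacity_mw: float                     # capacity of the surviving links
                groups: dict = field(default_factory=dict)  # customer group -> load in MW
                shed: list = field(default_factory=list)    # groups currently disconnected

                def total_load(self):
                    return sum(self.groups[g] for g in self.groups if g not in self.shed)

                def handle_link_loss(self, lost_capacity_mw):
                    """A link tripped: shed load until what's left fits on the remaining links."""
                    self.link_capacity_mw -= lost_capacity_mw
                    # Greedy rule: drop the largest still-connected groups first.
                    for g in sorted(self.groups, key=self.groups.get, reverse=True):
                        if self.total_load() <= self.link_capacity_mw:
                            break
                        if g not in self.shed:
                            self.shed.append(g)
                            print(f"shed {g} ({self.groups[g]:.0f} MW)")

                def staggered_reconnect(self, restored_capacity_mw):
                    """After rerouting/repair, bring shed groups back one at a time."""
                    self.link_capacity_mw += restored_capacity_mw
                    for g in list(self.shed):
                        if self.total_load() + self.groups[g] <= self.link_capacity_mw:
                            self.shed.remove(g)
                            print(f"reconnected {g}")

            grid = Grid(link_capacity_mw=1500,
                        groups={"industrial_park": 400, "suburb_east": 250,
                                "suburb_west": 250, "downtown": 300})
            grid.handle_link_loss(lost_capacity_mw=400)     # a 400 MW link trips
            grid.staggered_reconnect(restored_capacity_mw=400)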

          • by Tacvek ( 948259 )

            If there is a clear distinction between sources and sinks, then surely the same thing applies. If your current routing patterns cannot handle the incoming packets, you drop some of the sources while you reconfigure to handle routing to a destination (if possible), then start accepting data from the sources again.

            The problem, though, is that in data networks the very same connections are both. In power networks that cannot be the case, since opposite flows cancel, but incoming packets do not cancel out outgoing packets.

        • by Ihmhi ( 1206036 )

          Waitasec, cascades are a real thing? I thought it was just a bunch of technobabble when reversing the polarity of the warp core went badly.

    • by Anonymous Coward

      You mean that in 100 years our understanding of a scientific topic will be better than it is now, instead of the same? How shocking! I was certain that there was a negative correlation between the passage of time and our understanding of topics!

    • I'm sure that in 100 years' time, people will look back on our understanding of networks, information and culture in the same way as we look back on people's understanding of the body's nervous or endocrine systems 100 years before now.

      Perhaps one day we'll even figure out how car engines work.

  • Too many cooks.... (Score:5, Interesting)

    by bwohlgemuth ( 182897 ) <bwohlgemuth&gmail,com> on Saturday February 25, 2012 @03:29PM (#39160049) Homepage
    As a telecom geek, I see many people create these vast, incredibly complex networks that end up being more difficult to troubleshoot and manage because they invoke non-standard designs which fail when people wander in and make mundane changes. And then when these links fail or go down for maintenance... surprise, there's no 100% network availability.

    Three simple rules to networks...

    Simple enough to explain to your grandmother.
    Robust enough to handle an idiot walking in and disconnecting something.
    Reasonable enough to be maintained by Tier I staffing.
    • How are you going to explain VLANs, STP, and ACLs to your grandmother? Has it occurred to you that there are, in fact, situations where all of those technologies are useful?

      Has it also occurred to you that if you are properly securing your network with port security, you can't just walk in and plug something in and have it work?

      • How are you going to explain VLANs, STP, and ACLs to your grandmother?

        I have no idea what VLAN or STP is, but ACLs are dead simple to explain to your grandmother: "There's a list giving detailed information about who may do what with the file, and the computer enforces that list rigorously."
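
        That one-sentence description maps almost directly onto code. Below is a minimal sketch assuming a toy ACL format of (user, action) entries per file with default deny; it is not any particular operating system's ACL implementation.

          # Toy ACL: each file has a list of (who, what) entries, and every
          # operation is checked against that list before it is allowed.
          acl = {
              "report.pdf": [("alice", "read"), ("alice", "write"), ("bob", "read")],
          }

          def allowed(user: str, action: str, filename: str) -> bool:
              """Default deny: permitted only if an entry explicitly grants it."""
              return (user, action) in acl.get(filename, [])

          print(allowed("bob", "read",  "report.pdf"))   # True
          print(allowed("bob", "write", "report.pdf"))   # False: no matching entry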

  • I'll admit I don't understand the reasoning why more is not always better, even after reading the article. Sure, the calculation overhead for topology changes increases with more choices, but such costs are trivial compared to the overall cost of most systems. Whether the system is more or less resilient would seem, to me anyway, to have everything to do with the intelligence the system is able to bring to bear to plan a stable topology based on changing conditions. You can invent and constrain a dumb network with ease

    • Imagine a waterbed with baffles: the baffles prevent the water from sloshing around the moment the mattress bears a new load. Without these baffles, when a person lies down on the bed there will be waves going everywhere, and depending on how full the mattress is, the person may even bump on the platform beneath the mattress before the water evens out and the load is supported. On the other hand, if the baffles prevented water from moving at all, there would be limited compensation for any load - the water

  • by dgatwood ( 11270 ) on Saturday February 25, 2012 @03:39PM (#39160099) Homepage Journal

    Forty-two.

    Now we finally know the question.

  • by dietdew7 ( 1171613 ) on Saturday February 25, 2012 @03:44PM (#39160131)
    Could these types of models be applied to government or corporate hierarchies? I've often heard about the efficiencies of scale, but my experience with large organizations is that they have too much overhead and inertia. I wonder if mathematicians could come up with a most efficient organization size and structure.
    • by Attila Dimedici ( 1036002 ) on Saturday February 25, 2012 @04:19PM (#39160305)
      One of the things that technological changes since the mid-70s have taught us is that the most efficient organization size and structure change as technology changes. There are two factors in a dynamic relationship, and as the balance between them shifts, the efficiency point of organizations changes. One of those factors is speed of communication. As we become able to communicate faster over long distances, the most efficient organization tends towards a more centralized, larger organization. However, as we become able to process information faster and more efficiently, a smaller, more distributed organization becomes more efficient. There are probably other factors that affect this dynamic as well.
        • Luis von Ahn says that past collaborations like putting men on the moon and the A-Bomb project were all done by about 100,000 people, because tech constraints limited effective organization to groups of that size. But projects like reCAPTCHA, DuoLingo and Wikipedia use massive human computation and organize millions of people. We'll see many more massively coordinated projects, and the ideal size of organizations for each kind of project will become clearer as the number of these projects increases.
          Putting men on the moon and the A-bomb project were also limited by a need to keep a significant amount of the information secret. The more modern projects you listed had no such constraint. That does not mean that a project done at the time of the Manhattan Project or the Apollo Program could have harnessed more people without the constraints of secrecy those projects had, just that even if those two projects were initiated today they would be unlikely to use more people.
          That being said, it will be interesting to see how
    • by Lehk228 ( 705449 )
      the ideal size for an organization is just large enough to accomplish its one task.
  • by fluffy99 ( 870997 ) on Saturday February 25, 2012 @04:09PM (#39160241)

    "The study, also available in draft form at ArXiv, primarily studied interlocked power grids but could apply to computer networks and interconnected computer systems as well, the authors note. The work could influence thinking on issues such as how to best deal with DDOS (distributed denial-of-service) attacks, which can take down individual servers and nearby routers, causing traffic to be rerouted to nearby networks. Balancing workloads across multiple cloud computing services could be another area where the work would apply."

    The study was about the stability of power systems, which is a completely different animal. Power systems, as demonstrated by a few widespread outages, are at the mercy of their control systems, which can over- or under-react. Computer networks might have some similarities, but trying to draw any firm conclusions from this study would be pure speculation.

    I would agree, though, that at some point you move beyond providing redundant paths and start opening up additional areas of exposure and risk.

  • by Anonymous Coward

    It's interesting that this story appeared on the same day I read an article [medicalxpress.com] on brain hyper-connectivity being linked to depression.

  • Conclusions in TFA (The Fine Article) make sense.

    I remember this from college, back in the Dark Ages of the 1970s: communications theory. This has been known since the late 1940s.

    For any system, at the bottleneck points there is a saturation level. When you try to get higher throughput, it saturates the bottleneck, and the throughput either breaks down or slows substantially.

    Ever been stuck in traffic? If so, then you have experienced this effect.

    Ever tried to run a large Database on a Windows box? Slowdown and
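
    The saturation effect described above has a simple closed form in the most basic queueing model (M/M/1: one bottleneck, random arrivals): the mean time a job spends in the system is 1/(mu - lambda), which blows up as the arrival rate lambda approaches the service rate mu. A quick numeric illustration in Python; the rates are arbitrary:

      # As utilization climbs toward 100%, time spent waiting at the bottleneck
      # explodes - the "slows substantially" regime the parent comment describes.
      service_rate = 100.0                  # jobs/sec the bottleneck can handle

      for utilization in (0.50, 0.80, 0.90, 0.99, 0.999):
          arrival_rate = utilization * service_rate
          mean_time_in_system = 1.0 / (service_rate - arrival_rate)   # M/M/1 result
          print(f"utilization {utilization:6.1%} -> mean time in system "
                f"{mean_time_in_system * 1000:8.1f} ms")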

  • ...at the low number of comments on this paper. I read it over the weekend, and it offers some insights into network theory as applied "to the everyday world" that engineers have to deal with, insights that are not all trivial or unimportant. Is it because the article takes a "pure" mathematics approach, in spite of using a model of the US power grid for illustration purposes?
