
An Application For 10-Gigabit Networking 160

Chip Smith sent us a short excerpt from a news article on Supercomputing Online: "Just yesterday Lawrence Berkeley National Laboratory and several key partners put together a demonstration system running a real-world scientific application to produce data on one cluster, and then send the resulting data across a 10 Gigabit Ethernet connection to another cluster, where it is then rendered for visualization." Here's the link to follow if you'd like to read more on this experiment.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Amazing! (Score:4, Funny)

    by molrak ( 541582 ) on Sunday July 07, 2002 @01:18AM (#3835807) Homepage
    Their first project is to create a virtual dictionary and spell checker so that the Slashdot editors make sure that their posts to the front page are spelled properly. As an added bonus, it'll even check grammar! Unfortunately, the scientists aren't sure if there's enough bandwidth available yet to correct all the mistakes.
  • I can get by just fine without 10-gigabit networking. All I haveta do is wait longer for the data!

    [/typical_karmawhoring_kneejerk_reaction]

  • A real use (Score:4, Funny)

    by skydude_20 ( 307538 ) on Sunday July 07, 2002 @01:21AM (#3835815) Journal
    This experiment will shortly be followed by another showing how fast they can move MP3 collections.
  • nice (Score:2, Funny)

    by zoloto ( 586738 )
    Can you imagine the pings on game servers for Quake3 or counterstrike? 0 to 1 tops!
  • by godoto ( 585752 )
    BEOWULF! Say it with me now...
  • "On June 17th it was announced that the final milestone in the IEEE standards approval process was reached last week when the IEEE 802.3ae specification for 10 Gigabit Ethernet was approved as an IEEE standard by the IEEE Standards Association (IEEE-SA) Standards Board."

    Is it just me, or does the abbreviation 'IEEE' appear a lil' bit too much in that paragraph?
    • "On June 17th it was announced that the final milestone in the IEEE standards approval process was reached last week when the IEEE 802.3ae specification for 10 Gigabit Ethernet was approved as an IEEE standard by the IEEE Standards Association (IEEE-SA) Standards Board."


      Is it just me, or does the abbreviation 'IEEE' appear a lil' bit too much in that paragraph?

      That and the word Standards.
      the IEEE Standards Association (IEEE-SA) Standards Board, which Standardises the Standardisation of Standards.

  • Who would use 10 Gbit ethernet apart from routers and labs? I know about Moore's law but I also know that 10 Gbit throughput on my hard disk is not coming soon.
    • I know about Moore's law but I also know that 10 Gbit throughput on my hard disk is not coming soon.

      True, but drives which are faster than 1 Gbps will be around a few years from now (current speeds are around 400-500 Mbps), and RAID arrays can already put out several Gbps.

      It will indeed be a while before people commonly saturate 10Gbps ethernet, but when the technology progresses by factor-of-ten steps, that's rather inevitable.
    • 10Gbit ethernet is useful for large backbones, although I still haven't seen any.

      For a normal office LAN, 1Gb ethernet up to the switches, then fast ethernet to the boxes is enough.

      But I wouldn't refuse a 10Gbit fiber up to my box, tho' :)
    • by alienmole ( 15522 ) on Sunday July 07, 2002 @08:48AM (#3836425)
      Places that deal with a lot of multimedia can use this bandwidth - easily. Medical applications, such as high-resolution digital xrays and CAT scans, can generate enormous volumes of data which can take ages to push around - sneakernet is still sometimes the easiest/quickest way. Then there's video - anyone doing rendering and editing of video can use this.

      The throughput from a single hard disk is not that important: in these environments, RAID arrays (typically fiber channel) optimized for read performance allow overall disk performance at e.g. 1.6Gbps. If you have multiple such servers on your network, with many workstations trying to get at the data, 1Gbps Ethernet starts looking a little slow, especially for the backbones.

      • Another group of users that need this bandwidth are network operators/backbone carriers. As expensive as 10 Gbps Ethernet is, it's still a lot cheaper than a Sonet or ATM solution of the same speed, and much more suited to connecting routers and switches together in a colo / switch center.

        Besides, it's not that far from regular data centers either. Servers typically come with gigabit cards these days, and you need to be able to aggregate all these gigabit connections somehow. Clustering, backups, database duplication etc. all benefit from big pipes.
  • Think of the porn I can download now! er, [notices significant other in room] think of the amount of data I could research and the time I'd save....
  • It's like Intel coming out and saying they're going to release a 3.0GHz chip next year some time. It's the next logical step.

    Now of course this is better than what we have now, but look at the experiment they did. They sent gigabytes of data across the wire so it could be processed and rendered on another machine. Wouldn't it be just as easy to process the data on the originating machine and just send the bitmap over? You can see the image in the background of the photos in the article. It's mostly blue with a spot of green in the middle.

    I know somebody's going to chime in about how the data they sent isn't as important as how fast they sent it. But just think about these guys being some of the smartest in the world and they can't come up with a decent test other than what amounts to a poorly designed client/server setup. It would be like having a server send the whole accounting database down to the client just so the CFO can pull up the details of a single transaction. That's not a practical application for that bandwidth.

    I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. 8 seconds I'd be more impressed.
    • If you're creating a gigabyte of data every seven seconds, and sending it to a rendering farm to create an image based on that data, it would be very important.
    • You can see the image in the background of the photos in the article. It's mostly blue with a spot of green in the middle.

      Yeah! Right! I don't need no 10 Gigabit connection to create green spots on a blue background! MS Paint could do that years ago!

      Imagine two Boeing engineers that have to set up a simulation of their newest model. "Hey, if you start that simulation, it will decrease my FPS in Quake!" "Don't worry, Gimp filters are great at these thingies. I wonder who ever looks at these silly images..."

      Sorry if this joke was silly. I haven't slept much, and I'm going to Switzerland tomorrow. Let's hope the controllers don't take breaks.

    • I know those guys - they're not the smartest guys in the world but they do get some neat toys to play with.
    • by treat ( 84622 )
      I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. 8 seconds I'd be more impressed.

      It is logically the next step in the evolution of networking, but it currently outstrips even the memory bandwidth of most systems. Even gigabit today is rarely necessary. In both cases, even if you don't use anywhere near your full bandwidth, you still win if you are using more than the 10 times slower generation provides. There are still, however, some compelling applications that immediately present themselves.

      As an uplink between network switches, more bandwidth is always needed. This is what was done in the experiment. If you can't imagine a network that busy, consider switches with 256 100Mb Ethernet ports, or only a few gigabit ports. It is always better, if at all possible, to be able to guarantee that the switch uplink will never become a bottleneck. If they need to do this with many gigabit ports today, be it Ethernet or Fibre Channel, it is done by using multiple gigabit uplinks.

      In any cluster where there is substantial communication between the nodes, more bandwidth and less latency is always useful. Sufficiently low latency and high bandwidth makes many types of computation feasible that would not otherwise be feasible. Shared memory within a cluster can always benefit from decreased latency. A cluster is always better (meaning VASTLY cheaper and VASTLY more reliable) than a many-many CPU system. Anything that makes the cluster nodes seem closer together will make it easier to move existing software onto cluster-based systems.

    • "I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. 8 seconds I'd be more impressed."

      Movies like Final Fantasy would have greatly benefitted from 10 gigabit networking. They had over 1,000 machines rendering at a time. The resolution was enormous. (I apologize, I don't remember the exact resolution.) The animation was heavily layered, each frame having a minimum of 10 layers and some hit as high as 100.

      I can imagine that whenever they did the compositing, they didn't do it over the network. Rather, they copied alllll of the frames of each layer of the section they were working on to the compositing machine. I betcha anything they found it was quicker to carry around hot-swappable firewire drives in order to ferry data around faster than the network could move it.

      If they had 10-gigabit capability, it would likely have drastically altered their workflow. They would probably have a central, heavily RAIDed server holding all the source images. Then the compositing artists control the compositing from their machines, but the data stays on the server via the network.

      This would definitely have saved them ooodles of time. Even if their workflow was exactly the same, the huge increase in bandwidth would have greatly reduced their networking overhead.

      A studio like that'd happily pay big $$$ for a network like that. The investment would have paid off right away.
    • I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. 8 seconds I'd be more impressed.

      iSCSI? Yeah, I would want my stack of JBODs for the database talking to it over 10Gbit Ethernet instead of simply Gbit; heck, Ultra320 would be faster than 1Gbit but slower than 10Gbit, so this would actually be the current fastest data storage bus =) Also, what about sending the results of particle physics experiments? From what I understand that creates terabytes of data per second, so needing fewer ports and wires would definitely be appreciated. I am sure there are people with needs that this fills, or else it likely wouldn't be on the market; it would still be in the labs.
    • by alienmole ( 15522 ) on Sunday July 07, 2002 @08:56AM (#3836445)
      I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. 8 seconds I'd be more impressed.

      You're thinking point-to-point, but that's not what networks are for. Imagine the backbone at a hospital with CAT scanners, MRIs, xrays all generating digital images, and doctors around the hospital accessing a database of those images. 10Gbps isn't enough for applications like this. My local dentist's office uses digital xrays, and they complain about the 1Gbps on their little LAN - and they probably don't have more than about 15 workstations.

      And as someone else mentioned, rendering and editing of digital video uses up even more bandwidth. You don't have to be Pixar to need to do stuff like this - many companies in the media business can use this.

    • 10gbit may sound excessive to you with ata133 drives barely able to push more than 1gbit in burst mode, but you would have no trouble saturating this bandwidth in a high-traffic LAN environment. Even faster if they're using higher-capacity memory and disks. PC2700, scsi, fibre channel...
    • I know somebody's going to chime in about how the data they sent isn't as important as how fast they sent it. But just think about these guys being some of the smartest in the world and they can't come up with a decent test other than what amounts to a poorly designed client/server setup. It would be like having a server send the whole accounting database down to the client just so the CFO can pull up the details of a single transaction. That's not a practical application for that bandwidth.

      The whole point of a cluster is to have each node do part of the work, then communicate its results elsewhere for more processing. Unlike image processing, where the data can be divided and processed at a pixel-level granularity, many apps will only divide up so far. From there, you can either pipeline the processing over a really fat pipe, or you can accept that you have hit the limits of scalability and buy massively expensive n-way nodes (where n>4).

      I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. 8 seconds I'd be more impressed.

      If all you're moving is a GB, it's not that important; if you need to move 10TB, the difference adds up fast. Consider: why would you need to go a mile in one minute rather than in 10? If you live 40 miles from work it's the difference between spending 1:20 or 13:20 a day commuting to/from work.

      In supercomputing terms, whenever it would take longer to send the data elsewhere for further crunching than it would to crunch it locally, you have hit a limit of scalability.
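      The commute analogy maps straight onto transfer times. A quick sketch (idealized line rates, ignoring protocol overhead and congestion, which real links never escape):

```python
def transfer_seconds(bytes_to_move, link_bits_per_sec):
    """Idealized transfer time: payload bits divided by line rate."""
    return bytes_to_move * 8 / link_bits_per_sec

GB = 10**9
TB = 10**12

# Moving a single gigabyte: the difference is seconds.
print(transfer_seconds(1 * GB, 1e9))   # 8.0 s at 1 Gbps
print(transfer_seconds(1 * GB, 10e9))  # 0.8 s at 10 Gbps

# Moving 10 TB: the difference is hours.
print(transfer_seconds(10 * TB, 1e9) / 3600)   # ~22.2 hours at 1 Gbps
print(transfer_seconds(10 * TB, 10e9) / 3600)  # ~2.2 hours at 10 Gbps
```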

  • If I get ahold of this 10.6 Gigabit connection, I'll be able to get all my pr0n downloaded in a matter of days instead of months!
    • Re:This is good... (Score:3, Informative)

      by treat ( 84622 )
      If I get ahold of this 10.6 Gigabit connection

      It is 10 gigabit, not 10.6 gigabit. 10.6Gb/s was the speed they got over a PAIR of 10 gigabit ethernet connections.

      • Re:This is good... (Score:3, Insightful)

        by plumby ( 179557 )
        It is 10 gigabit, not 10.6 gigabit. 10.6Gb/s was the speed they got over a PAIR of 10 gigabit ethernet connections.

        No. It's a 10.6 gigabit connection over 2 pairs of 10 gigabit Ethernet interfaces. It doesn't matter how it's made up, the connection speed is the overall speed of the connection.

  • would be running a phat VNC connection in your home... wireless 10Gb would be even better, wander around with what amounts to a dummy terminal VNCed to some massive tower. I'd like a cluster of Mac towers serving to a TiBook with lots of RAM and video RAM and most of the other stuff like the HD stripped away, to keep the price down-ish.
  • by barfy ( 256323 ) on Sunday July 07, 2002 @02:40AM (#3835982)
    What I really want to know is how many LOC's (Library of Congress) this is per second....
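    For what it's worth, the joke unit is computable. A sketch using the commonly cited back-of-the-envelope figure of roughly 10 TB for the Library's print collection (an estimate, not an official number):

```python
LOC_BYTES = 10 * 10**12  # rough, commonly cited estimate: ~10 TB of text
LINK_BPS = 10 * 10**9    # 10 Gigabit Ethernet line rate

loc_per_second = LINK_BPS / (LOC_BYTES * 8)
print(loc_per_second)             # 0.000125 LoC/s
print(1 / loc_per_second / 3600)  # ~2.2 hours per Library of Congress
```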
  • And still, the vast majority of computer users don't even saturate a 10mbps link, because all most of them do is browse the web and read email.

    Network traffic doesn't increase at the same rate over time as do RAM requirements, processor requirements, video card requirements, etc.

    • Actually, network bandwidth growth has stayed at 3x Moore's law. Also, the gap between your "typical" user and the needs of HPC and other "power users" is considerable. I mean, how many "typical" users really need a 2Gigahertz Intel processor? How many wouldn't be able to tell if it was actually a 400MHz cheapo Cyrix processor???
  • 10 gig Ethernet is a big problem for all the ATM folks out there. IPv6/10Gig Ethernet is a big big problem for ATM. Now carriers have the option of going straight Ethernet throughout the backbone. You say "what about QOS?" well.. IPv6 has those bases covered.

    Rest In Peace ATM
    • Carriers use SONET and POS. Really, ATM ceased to be relevant quite a while ago. SONET will continue as it offers quite a lot of features that are interesting to telcos and that IP people seem to underestimate. So there is a WAN-PHY std. for 10GigE which will be interesting for people lighting up their own dark fiber, but I don't expect telcos to be migrating from POS/SONET solutions any time soon. 10GigE is going to be a very competitive LAN solution primarily and a WAN solution for some gung-ho dark fiber regional loops.
    • ATM dead? Where will I go to get money?

      At 10 Gig, Windows may actually be able to download patches fast enough to keep it from being hacked. Sorry, that is just a pipe dream, they first need to start with a re-write of open source...

  • 10.6 gigabits is the expected evolution in networking technology. Take a look at the engineers' picture: how about an evolution in fashion and not wearing socks with sandals? That would be impressive.
  • by bruthasj ( 175228 ) <bruthasj@@@yahoo...com> on Sunday July 07, 2002 @03:03AM (#3836009) Homepage Journal
    This reminds me of reading about Neural Nets in the various texts on Artificial Intelligence. They always quote Shepherd and Koch's "The Synaptic Organisation of the Brain": "The brain incorporates 10 billion neurons and 60 trillion connections."

    When I think about these new network technologies I can't help but think it's our connections that we lack these days. Hopefully with more and more advanced technology we can utilize these connections to create things more intelligent. This appears to be on the right track.

    Maybe "Jane" will finally come out of the closet. Well, actually we've got to have instantaneous communication before that happens... Doh, well... we're making slight progress.
  • Imagine a (Score:1, Redundant)

    by Moosifer ( 168884 )
    beowulf cluster of these. Dear god - that's almost relevant. Strike that.
  • This sounds nice. The jump from 10 megabits to 100 megabits in the home office was a very substantial improvement. Moving 60 - 100 megabyte scans between machines is much more tolerable.

    And all from a 10 times improvement. This 10 gigabit/sec network is 100 times faster.

    It will come down to the disk storage. My 100 megabit network already has me waiting on hard disk reads and writes, so I'm not sure if I'd see any more benefits if this were available now.
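    The jump the parent describes is easy to put numbers on. A rough sketch using an 80 MB scan as a representative size (ignoring protocol overhead and the disk bottleneck just mentioned):

```python
def seconds_to_move(megabytes, link_mbps):
    # idealized: payload bits divided by line rate
    return megabytes * 8 / link_mbps

scan_mb = 80  # somewhere in the 60-100 megabyte range mentioned above
for link_mbps in (10, 100, 10_000):  # 10 Mbps, 100 Mbps, 10 Gbps
    print(f"{link_mbps} Mbps: {seconds_to_move(scan_mb, link_mbps):.2f} s")
```

    At the top end the wire stops being the bottleneck entirely, which is exactly the parent's point about the disks.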
  • by panurge ( 573432 ) on Sunday July 07, 2002 @04:08AM (#3836124)
    A big area of technological growth is digital printing. Currently A3 color digital page printers run up to about 2000 sheets per hour. A full 2-page image for one of these babies is potentially over 300Mbit, which means gigabit/s networking is already needed between the data source and the rip to get full performance. Before digital presses can reach the sizes reached by plate technology presses, 10Gb/s will surely be needed.

    And, if that's boring, think about the military applications. In order to try and cut costs and save on code duplication, the labs are building systems in which part of the application (the secret part) runs on secure systems, whereas non-secret parts run on machines using commercial code. Having a single physical pipe between the areas rather than, as at present, multiple pipes could make the security setup a lot easier, and make the design of the machine considerably cheaper. We will all sleep a lot more securely knowing that LL is able to design lots of new exciting kinds of nuclear weapon at minimal cost.

    By the way, if they use any GPLed software, does that mean they have to release the entire source code for the application? Just a question.

    • By the way, if they use any GPLed software, does that mean they have to release the entire source code for the application? Just a question.

      I work for a big defense contractor. I won't say which one, and I do not speak for them. We always release all of the source code to the government anyway - including GPL code does not pose a special problem.
    • By the way, if they use any GPLed software, does that mean they have to release the entire source code for the application? Just a question.

      Even if they release the binaries, classification takes precedence over licensing terms.

    • Having a single physical pipe between the areas rather than, as at present, multiple pipes could make the security setup a lot easier, and make the design of the machine considerably cheaper

      uh... and the military would want a single point of failure? Redundancy is an integrated part of most large networks, especially military ones (it's the way the Internet was devised back when it was DARPA's ARPANET - the D is for Defense).

  • So is this fast enough to send me a 1280x1024x32bpp image sixty times a second?

    Wallhack THIS.
    • Comment removed based on user account deletion
    • by allanj ( 151784 )

      Let's do the math:
      1280x1024 is 1.25 MPixels (that's real Mega as in 2^20); at 32 bpp that's about 42 million bits (40 binary MBit) per frame. Multiply that by 60 frames/second and you get roughly 2.5 GBit/second. That sounds well within the limits, but consider this: Assuming that in a real network this 10 Gbps network will be relatively as efficient as 10 or 100 Mbps networks are right now (some assumption, I know), you'll get between 30% and 50% of theoretical throughput as the sustained data rate, leaving you with a very narrow margin of bandwidth. And this assumes that you're the ONLY user on the network, which pretty much zeroes out the use for 1280x1024x32bpp, since this is bound to be needed for networked games, which tend to be less than funny when you're the only player.
      But ignoring my assumptions and assuming 100% throughput sustained, you'll get 3 players max - still far less than you're likely to really want.
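      The arithmetic is worth double-checking with a quick script (using the decimal 10 Gbit/s line rate that the spec advertises):

```python
# Raw bandwidth to ship uncompressed 1280x1024, 32 bpp frames at 60 fps.
width, height, bpp, fps = 1280, 1024, 32, 60
line_rate = 10e9  # 10 Gbit/s, decimal

bits_per_frame = width * height * bpp  # 41,943,040 bits
bits_per_sec = bits_per_frame * fps    # ~2.52 Gbit/s per player

print(bits_per_frame / 1e6)            # ~41.9 Mbit per frame
print(bits_per_sec / 1e9)              # ~2.52 Gbit/s
print(int(line_rate // bits_per_sec))  # 3 players even at 100% throughput
```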

      • ... which all assumes your proposed game architecture sends the frames of video out to the clients.. which would be nice if you're running 386's for client machines.. it's much easier to get the client to render the images.. :)

        However.. you do raise the point about sending other (ie non-game) video information over a network.. like, for instance, for a movie production studio or the like.. but full-res PAL video is 720x576 in size, so adjust the math accordingly..
        • ... which all assumes your proposed game architecture sends the frames of video out to the clients.. which would be nice if you're running 386's for client machines.. it's much easier to get the client to render the images.. :)

          You're right, of course - no disagreement whatsoever. I just answered the question (as I understood it, anyway).

          [snip] but full-res PAL video is 720x576 in size, so adjust the math accordingly

          I don't know about you, but I'd like better resolution on my monitor than 720x576. Even 720x576 requires a LOT of bandwidth (too lazy to do the math right now - the sun is shining :-) though, so I guess streaming/compression will remain necessary for a LONG time yet...

  • On the surface it would seem that if the two clusters were approximately the same, you could have them working on projects in parallel rather than in serial. That is, have each cluster render its own data and let the other do the same on another project; you wouldn't have to waste time sending/receiving data either (however much that's been reduced).
    • In a typical application scenario, the clusters would not be of the same size (this was only so due to a limited number of machines for the testbed). Indeed, LBL/NERSC's primary supercomputing resource is a 3,000-processor, 5-Teraflop IBM SP2. The Power3 chip that forms the heart of the SP2 is a floating point powerhouse, but is terrible at the integer calculations that dominate visualization processing. The "visualization server/cluster" will likely be a much smaller cluster of machines with much less expensive processors for integer-intensive work.

      Also, the comment presupposes that pipelined parallelism is somehow less valuable than time-slicing. An important aspect of the application is that the visualization component of the calculation does *not* run in lockstep with the simulation code. Timeslicing this on the same set of processors in order to maintain responsiveness is far less efficient than feeding the latest data to an independent resource that can react asynchronously to user requests for interactive graphics.

  • by po8 ( 187055 ) on Sunday July 07, 2002 @04:43AM (#3836158)

    Must be the middle of the night, because I'm struck by the beautiful improbability of the phrase "real world scientific [computing] application". Looking at the article, I note that the phrase is stolen from it, and that the article also mentions that the app is "Cactus". Looking at the pages for that project, I note that Cactus is a computational framework, not an actual application. Further, there seems to be no indication which app was actually run.

    However, most of the Cactus apps seem to be in astrophysics. Sigh. Maybe it's "real-world" astrophysics. Or maybe it's just bedtime.

  • ...produce data on one cluster, and then send the resulting data across a 10 Gigabit Ethernet connection to another cluster, where it is then rendered for visualization.

    Wonder why they didn't go for a SAN solution, so first cluster writes to a volume on the SAN, and the second cluster just reads the data from the SAN, rather than squirting huge files across an IP network?

    • Um, what speed do you think SANs are accessed at?? They would still need the 10Gbit Ethernet to connect to the SAN, otherwise they'd be limited to a lower speed. Anyway, disks are a huge bottleneck in this sort of thing - a fancy fiber channel RAID array will do 1.6Gbps, which would have seriously slowed down this test.
      • Hrm, obviously you have never looked at a real SAN. They have multiple FC-AL connections that generally connect up to large RAID controller(s) with gobs of memory (especially EMC gear, where 8 gigs of cache is pretty standard), and depending on the ability of the cluster nodes to make data you can further expand those connections.

        Now by the same token, as FC-AL pretty much takes the physical layer from Ethernet as it is, it would be fairly trivial to get a new line of gear out.

        But what this all comes down to is fewer cables: you can run 10GigE connections point to point as a backbone, or one 10GigE connection; no single piece of commodity hardware would fill that with useful data.
        • I'm not sure whether you're talking about using aggregated FCAL connections as the communications medium, or actually writing to and reading from disk/cache (which is what the original poster seemed to be suggesting.)

          Even with a big cache (and the 8GB you mentioned isn't much, for this sort of application), if the communications go through such a cache, then the caching box would need 20Gbps total bandwidth to support reading and writing the 10Gbps simultaneously. Even if a bundle of FCAL connections (e.g. 5 x 2Gbps to each host) is capable of that bandwidth in theory, you're going to need a pretty good box in the middle. I've seen SANs that can sustain 5Gbps, but not 10Gbps or 20Gbps. I don't doubt that some might exist, but most real SANs can't do this.

          If you're simply talking about bundling FCAL connections to reach the desired bandwidth between two machines or clusters, well duh. But Ethernet has a bunch of advantages over FCAL - in fact, the SAN industry is licking its lips waiting for this tech.

          no single piece of commodity hardware would fill that with useful data.

          On the one hand you're talking about what you call "real" SANs that can sustain 20Gbps bandwidth, and on the other hand you're worrying about whether "commodity" hardware can saturate it? Give me a break.

          The bottom line is that communicating through a SAN doesn't really make sense today if your goal is 10Gbps point-to-point streaming communication.

    • They'd love to use a SAN. Show me a SAN that can sustain 10Gigabits (even with Bonded 1-GigE) and we'd be happy to use it.
  • by Sycraft-fu ( 314770 ) on Sunday July 07, 2002 @08:52AM (#3836436)
    The main one is for large networks. If you have a large network, gig or dual gig may just not be enough for connections to your major layer-3 switches. Well, till now gig was all you could do on the Ethernet spec; you had to go with something like packet-over-SONET to get more bandwidth. 10GigE is nice because you can keep the whole network Ethernet but get more bandwidth.

    At the university where I work I'm sure we'll start using this sooner rather than later. Right now all our distribution routers have dual gig connections to the two backbone routers. Fine, except that each is feeding 20 to 50 buildings at 100mb each, and the redundant set we are going to add will probably be gig. Those gig links to the backbone will fill up fast if each building has a gig to play with. Hence, 10gig Ethernet is great since it works with our existing switches and setup, only faster.

    Really, desktop or even server use is not the main target of this at least not for a few more years. The main target is removing bottlenecks from the network that supplies those servers.
  • Nobody else has mentioned it so I will... the article mentions that it's running two Linux clusters!

  • Quake3/Unreal/Counterstrike

    Ultimate geek frag fest! :o
  • there are numerous applications for this.

    first is the transmission of audio/video specifically hdtv. an uncompressed hdtv 1080i transmission will use up 1.5gbps (a single ge is not enough to transmit.) so it takes only 6 transmissions to saturate the bandwidth.

    second, the main function of 10ge right now is for long haul transmission. 10ge is a very cheap way of increasing the data transfer in a single fiber without the expensive sonet/sdh multiplexers. you can simply rent a dark fiber from your telco, buy a 10ge kit and voila! create your own 10ge man. note that a typical oc192/stm64 9.6gbps is very expensive to deploy.

    third, intel has already released a 10ge server adapter!!! (i don't remember the url, just search it through their website.) with the advent of pci-x 64bit/133mhz, infiniband 4x, you can plug it in and deliver 10ge from the server. (that is you must have a really fast hdd farm.)

    fourth, 10ge will fall to the desktop in probably 8 years time. so probably right now, they are working on 40ge or 100ge (if they keep the 10x multiplier thing.)

    and fifth, 10ge will enable you to download all the porn and mp3 in a jiffy! :)
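    The 1080i figure in the first point lines up with SMPTE 292M (HD-SDI), which carries uncompressed HD at 1.485 Gbit/s. A quick sketch of how many such streams fit in a 10GigE pipe, ignoring Ethernet framing overhead:

```python
HDSDI_BPS = 1.485e9  # SMPTE 292M serial rate for uncompressed 1080i
LINK_BPS = 10e9      # 10GigE line rate

streams = int(LINK_BPS // HDSDI_BPS)
print(streams)                                 # 6 full streams
print((LINK_BPS - streams * HDSDI_BPS) / 1e9)  # ~1.09 Gbit/s left over
```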
  • 10gbit stuff (Score:3, Informative)

    by mindstrm ( 20013 ) on Sunday July 07, 2002 @10:25AM (#3836639)
    okay.. I haven't read the 10Gbit standard, so I'm sort of talking out my ass.. but...
    if it follows the way 10 & 100 worked..

    This is another one of those "Why you can't transmit at 10Gbs from a single computer over 10Gbps ethernet" posts.

    First, realize that the 'speed' of ethernet reflects the maximum use of the broadcast medium at maximum capacity. A medium designed for multiple conversations going on at once.

    Example: There is a mandatory 9.6 microsecond gap between attempts to transmit frames on the ethernet (this is a minimum limit). In 10Mbps, this is 96 bits worth of time. In 100, it's 960.. if they do the same in 1000, that's 9600 bits.

    So think of that as a per-frame penalty.

    If the frame size stays the same.. the mandatory inter-frame gap is about 6 times the size of the data payload now. (okay, so maybe it's different now..). If this IS the case... what's the point of 10Gbps? More hosts, no switches. That's right.

    Bandwidth is the cure for switching.
    • You need to go the other way with your decimal points. The gap is a constant 96 bit times, so the wall-clock time shrinks as the speed goes up:

      10 Mb: 9.6 microseconds
      100 Mb: 0.96 microseconds
      GigE: 0.096 microseconds
      10 GigE: 0.0096 microseconds

      Also do not forget that anything not half duplex does not use CSMA/CD so it is not a shared medium.

      This could include all of the speeds above.

      There is a thing called jumbo frames that takes into account that faster speeds such as GigE and 10 GigE links will never be used in a shared network (other parts down the line may be, but not the links that they are connected to), so they allow for much larger frame sizes. The upper limit for 10Base-T and 100Base-TX, because of backwards compatibility, is 1518 bytes long. If you make the frame size larger, then you get a higher goodput. Goodput is the percentage of actual data sent in relation to bandwidth taken up by headers. Right now different vendors implement jumbo frames in their own proprietary manner because the IEEE did not make a spec for it in GigE and now 10 GigE. I think that Extreme uses something like 9,000 bytes and Cisco uses 25,000 byte long frames. I am probably wrong on the exact numbers but just giving you an idea.

      So basically 10 GigE is very fast and does not have any extra burdens because of IPG (interpacket gap).

      AndyMcL
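      The goodput argument can be made concrete. A sketch using the standard Ethernet per-frame overhead (8-byte preamble, 14-byte header, 4-byte FCS, 12-byte interframe gap); the 9000-byte jumbo size is a common vendor figure, not an IEEE standard:

```python
OVERHEAD = 8 + 14 + 4 + 12  # preamble + header + FCS + interframe gap, in bytes

def goodput_fraction(payload_bytes):
    """Fraction of the line rate that carries actual payload."""
    return payload_bytes / (payload_bytes + OVERHEAD)

print(round(goodput_fraction(1500), 4))  # 0.9753 for standard 1500-byte payloads
print(round(goodput_fraction(9000), 4))  # 0.9958 for 9000-byte jumbo frames

# The 96-bit interframe gap in wall-clock time at each speed:
for mbps in (10, 100, 1000, 10000):
    print(mbps, "Mbps:", 96 / (mbps * 1e6) * 1e6, "microseconds")
```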
    • Example: There is a mandatory 9.6 microsecond gap between attemps to transmit frames on the ethernet (this is a minimum limit). In 10Mbps, this is 96 bits worth of time. In 100, it's 960..

      Wrong. It's constant in bit times, so it's 96 bits whatever the speed. At 100Mb/s it's 0.96 microseconds etc.

  • Just as you can logically bundle 8 fastethernet ports (Fast EtherChannel in Cisco) and make 8 Gig ports into one big pipe (Gigachannel), I wonder when we will be able to put together eight 10GigE ports to make 80 Gig! Now that is a backbone! That is double the speed of the not-yet-released OC-768.

    Sorry I am just one of those guys who likes Big Pipes.

    AndyMcL (speed junkie)
    • Indeed, for this demo, two 10GigE channels were bonded together to form a 20Gigabit link. This made it easier to use the line monitoring equipment to determine if the application had really achieved 10Gigabits+ performance. In the lab, Force10 has demonstrated 280Gigabits/sec throughput, so you can expect by Sept. this demonstration will resurface with 28-bonded 10Gigabit links.

