Network | The Internet | Science

Caltech and UVic Set 339Gbps Internet Speed Record

MrSeb writes "Engineers at Caltech and the University of Victoria in Canada have smashed their own internet speed records, achieving a memory-to-memory transfer rate of 339 gigabits per second (53GB/s), 187Gbps (29GB/s) over a single duplex 100-gigabit connection, and a maximum disk-to-disk transfer speed of 96Gbps (15GB/s). At a sustained rate of 339Gbps, such a network could transfer four million gigabytes (4PB) of data per day — or around 200,000 Blu-ray movie rips. These speed records are all very impressive, but what's the point? Put simply, the scientific world deals with vast amounts of data — and that data needs to be moved around the world quickly. The most obvious example of this is CERN's Large Hadron Collider; in the past year, the high-speed academic networks connecting CERN to the outside world have transferred more than 100 petabytes of data. It is because of these networks that we can discover new particles, such as the Higgs boson. In essence, Caltech and the University of Victoria have taken it upon themselves to ride the bleeding edge of high-speed networks so that science can continue to prosper."
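As a quick sanity check on the headline volume figures, here is a back-of-the-envelope calculation in Python; the 25GB-per-disc size is an assumption for illustration, not a number from the submission:

    # Rough check of the daily-volume claim for a sustained 339Gbps link.
    GBPS = 339                        # sustained rate, gigabits per second
    SECONDS_PER_DAY = 86_400

    bits_per_day = GBPS * 1e9 * SECONDS_PER_DAY
    bytes_per_day = bits_per_day / 8              # 8 bits per byte
    petabytes_per_day = bytes_per_day / 1e15

    BLU_RAY_GB = 25                   # assumed size of one Blu-ray rip
    rips_per_day = bytes_per_day / (BLU_RAY_GB * 1e9)

    print(f"{petabytes_per_day:.1f} PB/day")   # ~3.7 PB/day, close to the quoted 4PB
    print(f"{rips_per_day:,.0f} rips/day")     # ~146,000 at 25GB each; the quoted
                                               # 200,000 implies rips of roughly 18GB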
  • by nevermindme ( 912672 ) on Wednesday November 28, 2012 @09:27PM (#42125447)
    At this point it is just a matter of how much you are willing to spend, with 10Gig Ethernet becoming the standard handoff from the telco demarc. We just installed an L3 endpoint with well over 50Gig/sec of capacity, with each gig/sec of CIR costing less than it takes my company to write the check and split it out onto the customers' invoices. Storage and power/cooling are the last expensive items in the datacenter.
  • That math is wonky (Score:2, Interesting)

    by Anonymous Coward on Wednesday November 28, 2012 @10:05PM (#42125719)

    How is 339 gigabits per second equal to 53 gigabytes per second?
    How is 187 gigabits per second equal to 29 gigabytes per second?
    How is 96 gigabits per second equal to 15 gigabytes per second?

    Glad to hear they're using 6.4 bits per byte. Next week they'll drop down to 2 bits per byte and push out another press release.
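    Assuming the usual 8 bits per byte, the conversions work out like this:

        # Convert the article's gigabit figures to gigabytes per second.
        for gbps in (339, 187, 96):
            print(f"{gbps} Gbps = {gbps / 8:.1f} GB/s")
        # 339 Gbps = 42.4 GB/s  (the article said 53)
        # 187 Gbps = 23.4 GB/s  (the article said 29)
        #  96 Gbps = 12.0 GB/s  (the article said 15)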

  • Re:The point (Score:4, Interesting)

    by symbolset ( 646467 ) * on Thursday November 29, 2012 @02:46AM (#42127057) Journal

    I was taking "what's the point?" in a more general sense. The first connection on what would become the Internet was made between UCLA and SRI in Menlo Park, CA, after all. That was a big deal for them, but a bigger deal for us. What the point of that was is rather subjective.

    100PB seems like a lot of data today - over 30,000 times the 3TB of storage available in a standard PC. But I am so old I wear an onion on my belt, as was the fashion in my day. 1/3000th of that 3TB is 1GB. I can remember when having 1GB of storage in your PC was an undreamt-of wealth of riches: a bottomless well that might never be filled. Hell, I can remember a day when 3TB of digital storage was more storage than existed everywhere on Earth combined. In fact, in my early days there was talk of a terabyte being the sum of human knowledge (silly, I know). It's reasonable to expect that when as much time has passed again, 100PB will not be a big deal either.

    So now we carry around a 1TB 2.5" USB drive in our shirt pocket like it's no big deal. And when guys like this do things like this we talk about what it means to them - and that's fine. But there is a larger story, like there was a larger story at UCLA - and that is "what does this mean to the rest of us?"

    Now 339Gbps isn't such a big deal. NEC wizards have already passed 101Tbps - 300 times as much - over a single fiber [wikipedia.org], though not over this kind of distance. That's enough bandwidth to move that 100PB in a bit over two hours, over a single strand of glass fiber.
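    Working that out, taking 100PB as 10^17 bytes:

        # Time for a 101 Tbps link to move 100 PB.
        data_bits = 100e15 * 8        # 100 PB expressed in bits
        link_bps = 101e12             # 101 terabits per second
        minutes = data_bits / link_bps / 60
        print(f"{minutes:.0f} minutes")   # ~132 minutes, a bit over two hours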

    The LA metro area is about 15 million people, or roughly 3 million homes. To deliver 1Gbps to each of those 3 million homes and mesh it out for global distribution is going to require a lot of these links. The aggregate demand would probably be under 1% of the 3,000Tbps peak potential - about 30Tbps, or roughly 100 times the bandwidth of this link. Using CDNs(*) - particularly for YouTube, cable TV, the usual porn suspects and BitTorrent - you could diminish the need for wide-area bandwidth considerably, but you would still need a wide pipe to the outside world. And all the Internet servers in the world would need to be upgraded to handle the crushing demand, with higher performance, SSD storage and the like. And that's great for the economy - and it's just LA.
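    The arithmetic behind that estimate, as a sketch (the per-home rate and the 1% aggregate-utilization figure are the assumptions above):

        # Rough aggregate-demand estimate for the LA metro area.
        homes = 3_000_000
        per_home_gbps = 1                              # assumed access rate per home
        peak_tbps = homes * per_home_gbps / 1000       # 3,000 Tbps if everyone maxed out
        utilization = 0.01                             # assume ~1% aggregate utilization
        aggregate_tbps = peak_tbps * utilization       # ~30 Tbps
        links_needed = aggregate_tbps * 1000 / 339     # links at this record's speed
        print(f"peak {peak_tbps:.0f} Tbps, aggregate ~{aggregate_tbps:.0f} Tbps, "
              f"~{links_needed:.0f} links of 339Gbps")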

    These innovations are neat, but they're neater still when they come home to all of us.

    TL;DR: Get off my lawn.

    (*) CDN: A CDN, or Content Delivery Network [wikipedia.org], is a facility for moving high-bandwidth, high-demand or high-transaction content closer to a nexus of consumers. An example would be Netflix, which delivers streaming video content to 21 million subscribers, comprising by some estimates a full third of Internet traffic. Netflix provides Internet providers, free of charge, with caching appliances [gigaom.com] (built along the lines of Backblaze storage pods) that move Netflix content closer to the end user, reducing backbone usage. Similar boxes are provided by advertising networks and other content providers.
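    To see why such caching matters for the backbone, here is an illustrative sketch; the stream bitrate, concurrency, and cache hit rate are assumptions, not Netflix figures:

        # Backbone traffic avoided by an ISP-local cache, illustrative numbers only.
        subscribers = 21_000_000
        avg_stream_mbps = 5            # assumed average stream bitrate
        concurrent_share = 0.10        # assume 10% of subscribers streaming at peak
        cache_hit_rate = 0.90          # assume 90% of requests served from the local box

        peak_tbps = subscribers * concurrent_share * avg_stream_mbps / 1e6
        backbone_tbps = peak_tbps * (1 - cache_hit_rate)
        print(f"peak demand ~{peak_tbps:.1f} Tbps, backbone after caching ~{backbone_tbps:.2f} Tbps")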

  • Re:The point (Score:3, Interesting)

    by Anonymous Coward on Thursday November 29, 2012 @06:50AM (#42128049)

    (same AC as above)

    That's true, but that ideal will break sooner than you'd like unless you're going to custom-design and build all your equipment yourself, which at that level is pretty tough, even for a company like Google. That equipment is not comparable to what you can get in your local computer shop; only a very few companies worldwide can produce all the devices you need from end to end.
      - After 160km or so at most you're going to need amplification (a rough span count is sketched after this comment), which limits the wavelengths you can use (there are only a couple of bands commonly used in fiber communication).
      - After a few amplification points, you're going to need signal-reconditioning equipment, matched to the signal you're sending.
      - Eventually you're going to have to convert back to Ethernet, POS or ATM to connect anything useful to it, and you'll probably want to run IP over it so you can talk to other systems on the Internet. You can of course make your own NICs, but then you'll still have to interface to some PCI standard, unless you're going to design your complete end node yourself too - in which case you'll also have to write a custom OS to drive your custom hardware.

    Existing systems that can do this on a large scale (i.e. high bandwidth) are _much_ more affordable (a rough guess is some tens of millions of dollars for a very simple small-diameter network with a couple of routers) than completely designing it just for yourself, so you'll have to live with the standards everyone has agreed on unless you really have significant disposable income as a company.
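    As a rough feel for the scale involved, here is a sketch of the span count for a long-haul route; the route length and regeneration interval are assumptions, and only the ~160km amplifier spacing comes from the list above:

        # Rough count of amplifier and regeneration sites on a long-haul fiber route.
        route_km = 4000                 # assumed cross-continent route length
        amp_spacing_km = 160            # amplifier spacing from the comment above
        regen_every_n_amps = 5          # assumption: recondition after a few amplified spans

        amplifiers = route_km // amp_spacing_km
        regenerators = amplifiers // regen_every_n_amps
        print(f"{amplifiers} amplifier sites, ~{regenerators} regeneration sites")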

  • by Anonymous Coward on Thursday November 29, 2012 @10:21AM (#42129383)

    That limit doesn't really have to do with offsite transfer speed; it's about the local speed of getting the data onto a disk a few meters away from the machine. Faster computers and storage technology are needed for that, rather than faster long-distance communication (unless a particular advance in the latter also helps with dedicated lines over relatively short distances). What fast long-distance links do improve is the rate at which researchers elsewhere can query and run tests against the large data set.

    Also, they do have some idea of what they are throwing away. The same filter that decides whether something is interesting will, at a predetermined frequency, let some collisions' data through without any filtering, so they can check the effectiveness of the filtering.
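    A minimal sketch of that "let some events through unfiltered" idea; the prescale value is an assumption for illustration, and this is not CERN's actual trigger code:

        # Every Nth event bypasses the interest filter, giving an unbiased sample
        # that can be used to audit what the filter is throwing away.
        PRESCALE = 1000   # assumed: keep 1 in every 1000 events regardless of the filter

        def keep_event(event_number, looks_interesting):
            if event_number % PRESCALE == 0:
                return True               # unbiased pass-through sample
            return looks_interesting      # normal filtered path

        kept = sum(keep_event(i, False) for i in range(10_000))
        print(kept)  # 10 events kept even though the filter rejected everything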
