An Application For 10-Gigabit Networking
Chip Smith sent us a short excerpt from a news article on Supercomputing Online:
"Just yesterday Lawrence Berkeley National Laboratory and several key partners put together a demonstration system running a real-world
scientific application to produce data on one cluster, and then send
the resulting data across a 10 Gigabit Ethernet connection to another
cluster, where it is then rendered for visualization." Here's the link to follow if you'd like to read more on this experiment.
Amazing! (Score:4, Funny)
Nobody will ever need 10-gigabit networking... (Score:1, Redundant)
[/typical_karmawhoring_kneejerk_reaction]
Re:Nobody will ever need 10-gigabit networking... (Score:1)
Re:Nobody will ever need 10-gigabit networking... (Score:1)
I also remember that 12MHz was enough for anyone, and you'd never need a computer faster than a 486DX4/100.
Re:Nobody will ever need 10-gigabit networking... (Score:2)
My company has a real use for this, unfortunately NDA prevents me from talking about it. Let's just say that if you ever want to do anything in real time, there's no such thing as 'too much bandwidth'.
Re:Nobody will ever need 10-gigabit networking... (Score:2)
Re:Nobody will ever need 10-gigabit networking... (Score:2)
Re:Nobody will ever need 10-gigabit networking... (Score:1)
Re:Nobody will ever need 10-gigabit networking... (Score:1)
Re:Nobody will ever need 10-gigabit networking... (Score:1)
Re:Nobody will ever need 10-gigabit networking... (Score:1)
A real use (Score:4, Funny)
Re:A real use (Score:2)
How long would 20GB take? It takes around 25-30 minutes on my 100Mbps network. Let's just say there are a few computers with my 3000 mp3 collection on them.
In practice, you will be bottlenecked by the speed of the media and the bandwidth of your PCI bus. Even a PCI-X bus can only do 8.5Gbps. Ultra SCSI-320 = 2.56Gbps.
In practice, reaching those limits is unlikely due to latency in DMA setup unless you use a specialized driver.
The big benefit of 10Gig Ethernet is for switch uplinks at the moment (though that's a significant benefit for some applications).
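The numbers in this subthread are easy to sanity-check. A quick Python sketch, using the figures quoted above (PCI-X at roughly 8.5Gbps, Ultra320 SCSI at 2.56Gbps; treat them as rough, not gospel): whatever the wire speed, the transfer is capped by the slowest link in the path.

```python
# Sanity-checking the numbers in this thread. Figures are the ones quoted
# above (PCI-X ~8.5 Gb/s, Ultra320 SCSI = 2.56 Gb/s); treat them as rough.
def transfer_seconds(size_gigabytes, link_gbps_list):
    """Time to move data when the path is capped by its slowest link."""
    bottleneck = min(link_gbps_list)
    return size_gigabytes * 8 / bottleneck  # GB -> gigabits

# 20 GB over plain 100 Mb/s Ethernet - the ~25-30 minute copy above:
print(round(transfer_seconds(20, [0.1]) / 60, 1))  # ~26.7 minutes

# Same 20 GB over 10 GigE, but read through PCI-X off an Ultra320 chain:
print(round(transfer_seconds(20, [10.0, 8.5, 2.56]), 1))  # ~62.5 s, not 16 s
```

So even with a 10Gbps wire, the disk bus keeps you to about a quarter of it.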
nice (Score:2, Funny)
Re:nice (Score:2, Informative)
Re:nice (Score:2)
Re:nice (Score:2)
Re:nice (Score:2)
Last I looked, Quake didn't use multicast?
Can I get a... (Score:2, Funny)
Re:Can I get a... (Score:1)
Re:Can I get a... (Score:1, Flamebait)
Re:10 gigabit PER SECOND please (Score:1)
IEEE this n' that (Score:1)
Is it just me, or does the abbreviation 'IEEE' appear a lil' bit too much in that paragraph?
Re:IEEE this n' that (Score:1)
That and the word Standards.
the IEEE Standards Association (IEEE-SA) Standards Board, which Standardises the Standardisation of Standards.
The question remains... (Score:2, Insightful)
Re:The question remains... (Score:2)
True, but drives faster than 1 Gbps will be around a few years from now (current speeds are around 400-500 Mbps), and RAID arrays can already put out several Gbps.
It will indeed be a while before people commonly saturate 10Gbps ethernet, but when the technology progresses by factor-of-ten steps, that's rather inevitable.
Re:The question remains... (Score:1)
For a normal office LAN, 1Gb ethernet up to the switches, then fast ethernet to the boxes is enough.
But I wouldn't refuse a 10Gbit fiber up to my box, tho'
Multimedia: video, large scans (Score:4, Interesting)
The throughput from a single hard disk is not that important: in these environments, RAID arrays (typically fiber channel) optimized for read performance allow overall disk performance at e.g. 1.6Gbps. If you have multiple such servers on your network, with many workstations trying to get at the data, 1Gbps Ethernet starts looking a little slow, especially for the backbones.
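A rough sketch of that oversubscription argument, using the 1.6Gbps aggregate figure from the post (server counts and backbone speeds here are illustrative assumptions, not anyone's real network):

```python
# A rough sketch of the post's point, assuming (as above) read-optimized RAID
# servers sustaining ~1.6 Gb/s each; server counts here are illustrative.
SERVER_GBPS = 1.6

def backbone_utilization(n_servers, backbone_gbps):
    """Fraction of the backbone consumed if every server streams at full rate."""
    return n_servers * SERVER_GBPS / backbone_gbps

# One busy server already oversubscribes a 1 Gb/s backbone...
print(backbone_utilization(1, 1) > 1)   # True
# ...while four of them fit comfortably on 10 GigE.
print(backbone_utilization(4, 10) < 1)  # True
```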
ISP/Carrier POPs (Score:1)
Besides, it's not that far from regular data centers either. Servers typically come with gigabit cards these days, and you need to be able to aggregate all these gigabit connections somehow. Clustering, backups, database duplication etc. all benefit from big pipes.
The possibilities are endless (Score:2, Funny)
Re:The possibilities are endless (Score:1)
Re:The possibilities are endless (Score:2, Interesting)
You could have a 10Gbit/s pipe to your ISP, although I'm not sure if they would be able to handle that
I'm not that impressed (Score:1)
Now of course this is better than what we have now, but look at the experiment they did. They sent gigabytes of data across the wire so it could be processed and rendered on another machine. Wouldn't it be just as easy to process the data on the originating machine and just send the bitmap over? You can see the image in the background of the photos in the article. It's mostly blue with a spot of green in the middle.
I know somebody's going to chime in about how the data they sent isn't as important as how fast they sent it. But just think: these guys are some of the smartest in the world, and they couldn't come up with a better test than what amounts to a poorly designed client/server setup. It would be like having a server send the whole accounting database down to the client just so the CFO can pull up the details of a single transaction. That's not a practical application for that bandwidth.
I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. eight seconds, I'd be more impressed.
Re:I'm not that impressed (Score:1)
Re:I'm not that impressed (Score:1)
Re:I'm not that impressed (Score:2)
Re:I'm not that impressed (Score:1)
Yeah! Right! I don't need no 10 Gigabit connection to create green spots on a blue background! MS Paint could do that years ago!
Imagine two Boeing engineers that have to set up a simulation of their newest model. "Hey, if you start that simulation, it will decrease my FPS in Quake!" "Don't worry, Gimp filters are great at these thingies. I wonder who ever looks at these silly images..."
Sorry if this joke was silly. I haven't slept much, and I'm going to Switzerland tomorrow. Let's hope the controllers don't take breaks.
Re:I'm not that impressed (Score:1)
Re:I'm not that impressed (Score:3, Informative)
It is logically the next step in the evolution of networking, but it currently outstrips even the memory bandwidth of most systems. Even gigabit today is rarely necessary. In both cases, even if you don't use anywhere near your full bandwidth, you still win if you are using more than the 10 times slower generation provides. There are still, however, some compelling applications that immediately present themselves.
As an uplink between network switches, more bandwidth is always needed. This is what was done in the experiment. If you can't imagine a network that busy, consider switches with 256 100Mb Ethernet ports, or only a few gigabit ports. It is always better, if at all possible, to be able to guarantee that the switch uplink will never become a bottleneck. Where this needs to be done with many gigabit ports today, be it Ethernet or Fibre Channel, it is done by using multiple gigabit uplinks.
In any cluster where there is substantial communication between the nodes, more bandwidth and less latency is always useful. Sufficiently low latency and high bandwidth make many types of computation feasible that would not otherwise be feasible. Shared memory within a cluster can always benefit from decreased latency. A cluster is always better (meaning VASTLY cheaper and VASTLY more reliable) than a machine with many, many CPUs. Anything that makes the cluster nodes seem closer together will make it easier to move existing software onto cluster-based systems.
Re:I'm not that impressed (Score:1)
Movies like Final Fantasy would have greatly benefited from 10-gigabit networking. They had over 1,000 machines rendering at a time. The resolution was enormous. (I apologize, I don't remember the exact resolution.) The animation was heavily layered, each frame having a minimum of 10 layers, and some hit as high as 100.
I can imagine that whenever they did the compositing, they didn't do it over the network. Rather, they copied all of the frames of each layer of the section they were working on to the compositing machine. I betcha anything they found it quicker to carry around hot-swappable FireWire drives to ferry data around than to move it over the network.
If they had 10-gigabit capability, it would likely have drastically altered their workflow. They would probably have a central, heavily RAIDed server holding all the source images. The compositing artists would then control the compositing from their machines, while the data stays on the server, accessed over the network.
This would definitely have saved them oodles of time. Even if their workflow were exactly the same, the huge increase in bandwidth would have greatly reduced their networking overhead.
A studio like that would happily pay big $$$ for a network like this. The investment would have paid off right away.
Re:I'm not that impressed (Score:1)
iSCSI? Yeah, I would want my stack of JBODs for the database talking to it over 10Gbit Ethernet instead of just gigabit. Heck, Ultra320 would be faster than 1Gbit but slower than 10Gbit, so this would actually be the fastest current data storage bus. =) Also, what about sending the results of particle physics experiments? From what I understand, those create terabytes of data per second, so needing fewer ports and wires would definitely be appreciated. I'm sure there are people with needs this fills, or else it likely wouldn't be on the market - it would still be in the labs.
Re:I'm not that impressed (Score:4, Informative)
You're thinking point-to-point, but that's not what networks are for. Imagine the backbone at a hospital with CAT scanners, MRIs, and X-ray machines all generating digital images, and doctors around the hospital accessing a database of those images. 10Gbps isn't enough for applications like this. My local dentist's office uses digital X-rays, and they complain about the 1Gbps on their little LAN - and they probably don't have more than about 15 workstations.
And as someone else mentioned, rendering and editing of digital video uses up even more bandwidth. You don't have to be Pixar to need to do stuff like this - many companies in the media business can use this.
Re:I'm not that impressed (Score:1)
Re:I'm not that impressed (Score:2)
I know somebody's going to chime in about how the data they sent isn't as important as how fast they sent it. But just think: these guys are some of the smartest in the world, and they couldn't come up with a better test than what amounts to a poorly designed client/server setup. It would be like having a server send the whole accounting database down to the client just so the CFO can pull up the details of a single transaction. That's not a practical application for that bandwidth.
The whole point of a cluster is to have each node do part of the work, then communicate its results elsewhere for more processing. Unlike image processing, where the data can be divided and processed at pixel-level granularity, many apps will only divide up so far. From there, you can either pipeline the processing over a really fat pipe, or accept that you have hit the limits of scalability and buy massively expensive n-way nodes (where n>4).
I can't think of a practical situation, but if somebody could explain why you would need to send a gigabyte of data in one second vs. eight seconds, I'd be more impressed.
If all you're moving is a GB, it's not that important; if you need to move 10TB, the difference adds up fast. Consider: why would you need to travel a mile in one minute rather than in ten? If you live 40 miles from work, it's the difference between spending 1:20 or over 13 hours a day commuting to and from work.
In supercomputing terms, whenever it would take longer to send the data elsewhere for further crunching than it would to crunch it locally, you have hit a limit of scalability.
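The commute analogy scales exactly the way the data does. A minimal sketch of the arithmetic (decimal units, ignoring protocol overhead):

```python
# The parent's arithmetic, in decimal units and ignoring protocol overhead.
def seconds_to_move(gigabytes, gbps):
    """Transfer time for `gigabytes` of data over a `gbps` link."""
    return gigabytes * 8 / gbps

# One GB: 8 s at 1 Gb/s vs 0.8 s at 10 Gb/s - barely worth noticing.
print(seconds_to_move(1, 1), seconds_to_move(1, 10))

# Ten TB: roughly a full day vs a couple of hours.
print(round(seconds_to_move(10_000, 1) / 3600, 1))   # ~22.2 hours
print(round(seconds_to_move(10_000, 10) / 3600, 1))  # ~2.2 hours
```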
This is good... (Score:1)
Re:This is good... (Score:3, Informative)
It is 10 gigabit, not 10.6 gigabit. 10.6Gb/s was the speed they got over a PAIR of 10 gigabit ethernet connections.
Re:This is good... (Score:3, Insightful)
No. It's a 10.6 gigabit connection over 2 pairs of 10 gigabit Ethernet interfaces. It doesn't matter how it's made up, the connection speed is the overall speed of the connection.
another great use (Score:2)
Re:another great use (Score:4, Funny)
Great, then someone driving by could steal all of your MP3s in under a minute.
Re:another great use (Score:1)
Re:another great use (Score:1)
P2C? (Score:2)
Brilliant! Better patent it now. = )
Re:another great use (Score:1)
They didn't steal your stuff if you still have it, did they?
Re:another great use (Score:1)
Um, do you realize how much power you are going to have to pump out for that?
What a useless speed measurement (Score:3, Funny)
Re: (Score:1)
Wow, that's nice (Score:1)
Network traffic doesn't increase at the same rate over time as do RAM requirements, processor requirements, video card requirements, etc.
Re:Wow, that's nice (Score:1)
I've said it before (Score:2, Insightful)
Rest In Peace ATM
Re:I've said it before (Score:1)
Re:I've said it before (Score:1)
At 10 Gig, Windows may actually be able to download patches fast enough to keep it from being hacked. Sorry, that is just a pipe dream, they first need to start with a re-write of open source...
Fashion Evolution (Score:2, Funny)
On a more serious note... (Score:4, Interesting)
When I think about these new network technologies, I can't help thinking that connections are what we lack these days. Hopefully, with more and more advanced technology, we can use these connections to create more intelligent things. This appears to be on the right track.
Maybe "Jane" will finally come out of the closet. Well, actually we've got to have instantaneous communication before that happens... Doh, well... we're making slight progress.
Imagine a (Score:1, Redundant)
You got FP! (Score:1)
Bring It On (Score:2)
And all that from a mere 10x improvement. This 10 gigabit/sec network is 100 times faster than what most of us run now.
It will come down to the disk storage. My 100 megabit network already has me waiting on hard disk reads and writes, so I'm not sure if I'd see any more benefits if this were available now.
There are real world applications... (Score:5, Interesting)
And, if that's boring, think about the military applications. In order to try and cut costs and save on code duplication, the labs are building systems in which part of the application (the secret part) runs on secure systems, whereas non-secret parts run on machines using commercial code. Having a single physical pipe between the areas rather than, as at present, multiple pipes could make the security setup a lot easier, and make the design of the machine considerably cheaper. We will all sleep a lot more securely knowing that LL is able to design lots of new exciting kinds of nuclear weapon at minimal cost.
By the way, if they use any GPLed software, does that mean they have to release the entire source code for the application? Just a question.
Re:There are real world applications... (Score:2)
I work for a big defense contractor. I won't say which one, and I do not speak for them. We always release all of the source code to the government anyway - including GPL code does not pose a special problem.
Re:There are real world applications... (Score:1)
By the way, if they use any GPLed software, does that mean they have to release the entire source code for the application? Just a question.
Even if they release the binaries, classification takes precedence over licensing terms.
Re:There are real world applications... (Score:1)
uh... and the military would want a single point of failure? Redundancy is an integral part of most large networks, especially military ones (it's how the internet was devised back when it was still DARPA's project - the D is for Defense).
Game architecture (Score:1)
Wallhack THIS.
Re: (Score:1)
Re:Game architecture (Score:3, Insightful)
Let's do the math:
1280x1024 is 1.25 MPixels (that's real Mega, as in 2^20); at 32 bpp, that's 40 MBit/frame. Multiply that by 60 frames/second and you get 2400 MBit/second, or about 2.4 GBit/second. That sounds well within the limits, but consider this: assuming that in a real network this 10 Gbps network will be relatively as efficient as 10 or 100 Mbps networks are right now (some assumption, I know), you'll get between 30% and 50% of theoretical throughput as the sustained data rate, leaving you with a very narrow margin of bandwidth. And this assumes that you're the ONLY user on the network, which pretty much zeroes out the use for 1280x1024x32bpp, since this is bound to be needed for networked games, which tend to be less than funny when you're the only player.
But ignoring my assumptions and assuming 100% sustained throughput, you'll get 4 players max - still far less than you're likely to really want.
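For anyone who wants to redo the math, here it is as a few lines of Python (Mega and Giga as powers of two, matching the post; resolution, depth, and frame rate are the same assumptions as above):

```python
# Redoing the parent's math, with Mega/Giga as powers of two as in the post.
def framebuffer_gbps(width, height, bpp, fps):
    """Bandwidth needed to stream raw frames, in (binary) gigabits per second."""
    bits_per_frame = width * height * bpp  # 1280*1024*32 = 40 Mbit/frame
    return bits_per_frame * fps / 2**30

rate = framebuffer_gbps(1280, 1024, 32, 60)
print(round(rate, 2))  # ~2.34 Gbit/s per player

# Even at an (unrealistic) 100% sustained throughput, a 10 Gbit/s wire
# carries only a handful of such streams:
print(int(10 / rate))  # 4 players
```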
Re:Game architecture (Score:1)
However.. you do raise a point about sending other (i.e. non-game) video information over a network.. like, for instance, for a movie production studio or the like.. but full-res PAL video is only 720x576 in size, so adjust the math accordingly..
Re:Game architecture (Score:2)
You're right, of course - no disagreement what so ever. I just answered the question (as I understood it, anyway).
[snip] but full-res PAL video is 720x576 is in size, so adjust the math accordingly
I don't know about you, but I'd like better resolution on my monitor than 720x576. Even 720x576 requires a LOT of bandwidth (too lazy to do the math right now - the sun is shining :-) though, so I guess streaming/compression will remain necessary for a LONG time yet...
Real Application? (Score:1)
Re:Real Application? (Score:1)
Also, the comment presupposes that pipelined parallelism is somehow less valuable than time-slicing. An important aspect of the application is that the visualization component of the calculation does *not* run in lockstep with the simulation code. Timeslicing this on the same set of processors in order to maintain responsiveness is far less efficient than feeding the latest data to an independent resource that can react asynchronously to user requests for interactive graphics.
"Real-world scientific application" (Score:3, Interesting)
Must be the middle of the night, because I'm struck by the beautiful improbability of the phrase "real world scientific [computing] application". Looking at the article, I note that the phrase is stolen from it, and that the article also mentions that the app is "Cactus". Looking at the pages for that project, I note that Cactus is a computational framework, not an actual application. Further, there seems to be no indication which app was actually run.
However, most of the Cactus apps seem to be in astrophysics. Sigh. Maybe it's "real-world" astrophysics. Or maybe it's just bedtime.
Re:"Real-world scientific application" (Score:1)
Re:"Real-world scientific application" (Score:1)
Thanks, I missed this in the article. I guess this is real-world binary black-hole coalescence in action. :-)
Why not use a SAN? (Score:1)
Wonder why they didn't go for a SAN solution, so first cluster writes to a volume on the SAN, and the second cluster just reads the data from the SAN, rather than squirting huge files across an IP network?
Re:Why not use a SAN? (Score:1)
Re:Why not use a SAN? (Score:1)
By the same token, since FCAL pretty much takes the physical layer for Ethernet as it is, it would be fairly trivial to get a new line of gear out.
But what this all comes down to is fewer cables: you can run 10GigE connections point to point as a backbone, or one 10GigE connection - no single piece of commodity hardware would fill that with useful data.
Re:Why not use a SAN? (Score:2)
Even with a big cache (and the 8GB you mentioned isn't much, for this sort of application), if the communications go through such a cache, then the caching box would need 20Gbps total bandwidth to support reading and writing the 10Gbps simultaneously. Even if a bundle of FCAL connections (e.g. 5 x 2Gbps to each host) is capable of that bandwidth in theory, you're going to need a pretty good box in the middle. I've seen SANs that can sustain 5Gbps, but not 10Gbps or 20Gbps. I don't doubt that some might exist, but most real SANs can't do this.
If you're simply talking about bundling FCAL connections to reach the desired bandwidth between two machines or clusters, well duh. But Ethernet has a bunch of advantages over FCAL - in fact, the SAN industry is licking its lips waiting for this tech.
no single piece of commodity hardware would fill that with useful data.
On the one hand you're talking about what you call "real" SANs that can sustain 20Gbps bandwidth, and on the other hand you're worrying about whether "commodity" hardware can saturate it? Give me a break.
The bottom line is that communicating through a SAN doesn't really make sense today if your goal is 10Gbps point-to-point streaming communication.
Re:Why not use a SAN? (Score:1)
There are plenty of real world applications (Score:4, Interesting)
At the university where I work, I'm sure we'll start using this sooner rather than later. Right now all our distribution routers have dual gig connections to the two backbone routers. Fine, except that each is feeding 20 to 50 buildings at 100Mb each, and the redundant set we are going to add will probably be gig. Those gig links to the backbone will fill up fast if each building has a gig to play with. Hence, 10gig Ethernet is great, since it works with our existing switches and setup, only faster.
Really, desktop or even server use is not the main target of this at least not for a few more years. The main target is removing bottlenecks from the network that supplies those servers.
Running Linux (Score:1)
The application is obvious... (Score:1)
Quake3/Unreal/Counterstrike
Ultimate geek frag fest!
Re:The application is obvious... (Score:1)
applications for 10ge. (Score:1)
First is the transmission of audio/video, specifically HDTV. An uncompressed HDTV 1080i transmission uses up 1.5Gbps (a single GE is not enough to carry it), so it takes only 6 transmissions to saturate the bandwidth.
Second, the main function of 10GE right now is long-haul transmission. 10GE is a very cheap way of increasing the data transfer over a single fiber without expensive SONET/SDH multiplexers. You can simply rent dark fiber from your telco, buy a 10GE kit, and voila! Create your own 10GE MAN. Note that a typical OC192/STM64 at 9.6Gbps is much more expensive to deploy.
Third, Intel has already released a 10GE server adapter!!! (I don't remember the URL; just search their website.) With the advent of PCI-X 64bit/133MHz and InfiniBand 4x, you can plug it in and deliver 10GE from the server. (That is, you must have a really fast HDD farm.)
Fourth, 10GE will fall to the desktop in probably 8 years' time. So right now, they are probably working on 40GE or 100GE (if they keep the 10x multiplier thing).
And fifth, 10GE will enable you to download all the porn and MP3s in a jiffy!
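The first point is simple enough to check in a couple of lines (the 1.5Gbps per-stream figure is the one quoted above; treat it as approximate):

```python
# Checking the parent's first point: uncompressed 1080i at ~1.5 Gb/s per
# stream (the figure quoted above; treat it as approximate).
STREAM_GBPS = 1.5

def streams_that_fit(link_gbps):
    """Whole uncompressed streams a link can carry."""
    return int(link_gbps // STREAM_GBPS)

print(streams_that_fit(1))   # 0 - a single GigE can't carry even one
print(streams_that_fit(10))  # 6 - six streams saturate a 10 GigE link
```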
10gbit stuff (Score:3, Informative)
if it follows the way 10 & 100 worked..
This is another one of those "Why you can't transmit at 10Gbs from a single computer over 10Gbps ethernet" posts.
First, realize that the 'speed' of ethernet reflects the maximum use of the broadcast medium at maximum capacity. A medium designed for multiple conversations going on at once.
Example: there is a mandatory 9.6 microsecond gap between attempts to transmit frames on the Ethernet (this is a minimum limit). At 10Mbps, this is 96 bits' worth of time. At 100, it's 960.. if they do the same at 1000, that's 9600 bits.
So think of that as a per-frame penalty.
If the frame size stays the same.. the mandatory inter-frame gap is about 6 times the size of the data payload now. (okay, so maybe it's different now..). If this IS the case... what's the point of 10Gbps? More hosts, no switches. That's right.
Bandwidth is the cure for switching.
Decimal points wrong (Score:1)
10 Mb: 96 bit times = 9.6 microseconds
100 Mb: 96 bit times = 0.96 microseconds
GigE: 96 bit times = 96 nanoseconds
10 GigE: 96 bit times = 9.6 nanoseconds
Also, do not forget that anything not half-duplex does not use CSMA/CD, so it is not a shared medium.
This could include all of the speeds above.
There is a thing called jumbo frames. It takes into account that faster links such as GigE and 10 GigE will never be used on a shared network (other parts down the line may be, but not the links they are connected to), and allows for much larger frame sizes. The upper limit for 10Base-T and 100Base-TX, because of backwards compatibility, is 1518 bytes. If you make the frame size larger, you get higher goodput - the percentage of actual data sent relative to the bandwidth taken up by headers. Right now different vendors implement jumbo frames in their own proprietary manner, because the IEEE did not make a spec for it in GigE, and now 10 GigE. I think Extreme uses something like 9,000 bytes and Cisco uses 25,000-byte frames. I am probably wrong on the exact numbers, but that gives you an idea.
So basically 10 GigE is very fast and does not carry any extra burden because of the IPG (inter-packet gap).
AndyMcL
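The goodput claim is easy to put numbers on. A sketch using standard per-frame costs (14-byte Ethernet header and 4-byte FCS inside the frame, plus an 8-byte preamble and 12-byte inter-frame gap on the wire; the payload sizes are the usual 1500-byte MTU and a typical ~9000-byte jumbo frame - the exact jumbo size varies by vendor, as noted above):

```python
# Goodput sketch. Assumed per-frame costs: 14 B Ethernet header + 4 B FCS
# inside the frame, plus 8 B preamble and a 12 B inter-frame gap on the wire.
OVERHEAD_BYTES = 14 + 4 + 8 + 12  # 38 bytes per frame

def goodput(payload_bytes):
    """Fraction of wire bandwidth that is actual payload."""
    return payload_bytes / (payload_bytes + OVERHEAD_BYTES)

print(round(goodput(1500), 4))  # standard frames: 0.9753
print(round(goodput(9000), 4))  # ~9000-byte jumbo frames: 0.9958
```

So jumbo frames claw back a couple of percent of the wire - more if your payloads are small.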
Re:10gbit stuff (Score:1)
Wrong. It's constant in bit times, so it's 96 bits whatever the speed. At 100Mb/s it's 0.96 microseconds, etc.
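Put differently: the gap is fixed at 96 bit times, so its wall-clock duration shrinks tenfold with each speed grade. A two-line sketch:

```python
# The gap is fixed at 96 bit times; only its wall-clock duration changes.
def ifg_microseconds(mbps):
    """Inter-frame gap duration at a given link speed (Mb/s)."""
    return 96 / mbps  # 96 bit times / (bits per microsecond)

for speed in (10, 100, 1000, 10_000):
    print(speed, "Mb/s:", ifg_microseconds(speed), "us")
```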
80 Gigabits?? (Score:1)
Sorry I am just one of those guys who likes Big Pipes.
AndyMcL (speed junkie)
Re:80 Gigabits?? (Score:1)
Re:This isnt new. (Score:1)
I don't see how what you just wrote is at all relevant. Just like anything you talk about, even in school