Data Sorting World Record — 1 Terabyte, 1 Minute
An anonymous reader writes "Computer scientists from the University of California, San Diego have broken the 'terabyte barrier' — and a world record — by sorting more than a trillion bytes of data in 60 seconds. During the 2010 'Sort Benchmark' competition, a sort of 'World Cup of data sorting,' the UCSD team also tied a world record for fastest data sorting rate, sifting through one trillion data records in 172 minutes — and did so using just a quarter of the computing resources of the other record holder."
Re:1-byte records? (Score:5, Informative)
Ah, my mistake - the "trillion data records" refers to a different benchmark. In the 1TB benchmark, records are 100 bytes long.
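For scale, some quick arithmetic from those figures (the per-node number assumes the 52-node cluster discussed elsewhere in the thread):

1 TB / 100 bytes per record = 10 billion records in the one-minute run
1 TB / 60 s ~ 16.7 GB/s aggregate, or roughly 320 MB/s per node across 52 nodes
1 trillion records x 100 bytes = 100 TB; doing that in 172 minutes works out to about 9.7 GB/s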
Re:Only 52 nodes (Score:4, Informative)
I don't know much about them, but perhaps most supercomputers break this rule: "The hardware used should be commercially available (off-the-shelf), and unmodified (e.g. no processor over or under clocking)"
Re:1-byte records? (Score:1, Informative)
Doesn't seem like that requires any real computation - you just go through the data maintaining a count of each of the 256 possible values (an embarrassingly parallel problem), then write it back out with 256 memsets (likewise trivially parallelisable).
That technique is called bucket sort, BTW.
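A minimal sketch of that approach in C, assuming hypothetical 1-byte records (single-threaded here; a parallel version would have each worker count its own slice and then sum the tables):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sort an array of 1-byte records in place: count how many times each
 * of the 256 possible values occurs, then rewrite the buffer with one
 * memset per value. */
void sort_bytes(uint8_t *buf, size_t n)
{
    size_t counts[256] = {0};

    /* Counting pass: embarrassingly parallel (per-worker tables would
     * simply be summed). */
    for (size_t i = 0; i < n; i++)
        counts[buf[i]]++;

    /* Output pass: 256 memsets, likewise trivially parallelisable. */
    size_t offset = 0;
    for (int v = 0; v < 256; v++) {
        memset(buf + offset, v, counts[v]);
        offset += counts[v];
    }
}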
Why is sorting 1TB so hard?
I can just instruct my tape library to put the tapes in alphabetical order in the slots... y'know, AA0000, AA0001, AA0002... That moves a hell of a lot more than 1TB.
That's not sorting 1TB. That's sorting n records where n = the number of tapes.
((Triton)|(Gray)|(Minute))Sort (Score:5, Informative)
A paper describing the test is here [sortbenchmark.org]. TritonSort is the abstract method; GraySort and MinuteSort are concrete/benchmarked variations of TritonSort.
As TFA states, this is more about balancing system resources than anything else. The actual sort method used was "a variant of radix sort" (page two of the linked PDF). Everything else was an exercise in how to break the data up into manageable chunks (850 MB), distribute the chunks to workers, and then merge the results. That was the real breakthrough, I think. But I wonder how much of it is truly a breakthrough and how much is just taking advantage of modern hardware, since one of the major changes in MinuteSort is simply avoiding a couple of disk writes by keeping everything in RAM (a feat possible because that much RAM is now readily available to a single worker).
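To make the shape of that concrete, here is a toy sketch of the partition step in a distributed radix-style sort. It is not the TritonSort code; the single-byte partitioning key, in-memory buckets, and qsort finishing pass are my own simplifications of the basic idea that each record gets routed by a prefix of its 10-byte key, so every bucket can be sorted independently by whichever worker owns that key range.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sort Benchmark records: a 10-byte key followed by 90 bytes of payload. */
#define REC_SIZE 100
#define KEY_SIZE 10
#define NBUCKETS 256            /* partition on the first key byte */

struct bucket {
    uint8_t *data;
    size_t   count, cap;        /* records stored / records allocated */
};

/* Route each record to a bucket by its leading key byte. In a real
 * distributed sort each bucket would be streamed to the node that owns
 * that key range; here the buckets just live in local memory. */
static void partition(const uint8_t *recs, size_t nrecs, struct bucket *b)
{
    for (size_t i = 0; i < nrecs; i++) {
        const uint8_t *r = recs + i * REC_SIZE;
        struct bucket *dst = &b[r[0]];
        if (dst->count == dst->cap) {
            dst->cap  = dst->cap ? dst->cap * 2 : 64;
            dst->data = realloc(dst->data, dst->cap * REC_SIZE);
        }
        memcpy(dst->data + dst->count * REC_SIZE, r, REC_SIZE);
        dst->count++;
    }
}

/* Each bucket is finished with an ordinary comparison sort on the key;
 * concatenating buckets 0..255 then yields globally sorted output. */
static int cmp_rec(const void *a, const void *b)
{
    return memcmp(a, b, KEY_SIZE);
}

int main(void)
{
    enum { N = 1000 };
    uint8_t *recs = malloc((size_t)N * REC_SIZE);
    srand(1);
    for (size_t i = 0; i < (size_t)N * REC_SIZE; i++)
        recs[i] = (uint8_t)rand();

    struct bucket buckets[NBUCKETS] = { { 0 } };
    partition(recs, N, buckets);
    for (int v = 0; v < NBUCKETS; v++)
        if (buckets[v].count > 0)
            qsort(buckets[v].data, buckets[v].count, REC_SIZE, cmp_rec);

    /* Writing out buckets 0..255 in order would produce the sorted file. */
    free(recs);
    return 0;
}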
Regardless, this kind of work is very cool. If web framework developers (for example) paid greater attention to balancing data flow against the work performed, we could probably get away with half the hardware footprint in our data centers. As it stands now, too many COTS -- and hence often-used -- frameworks are out of balance: they read too much data for the task, don't allow processing to start until it has all been read in, and so on. This pushes implementors to add hardware to scale up.