Data Sorting World Record — 1 Terabyte, 1 Minute

An anonymous reader writes "Computer scientists from the University of California, San Diego have broken the 'terabyte barrier' — and a world record — when they sorted more than a trillion bytes of data in 60 seconds. During this 2010 'Sort Benchmark' competition, a sort of 'World Cup of data sorting,' the UCSD team also tied a world record for fastest data sorting rate, sifting through one trillion data records in 172 minutes — and did so using just a quarter of the computing resources of the other record holder."

  • by blai ( 1380673 )
    I have one record of size one terabyte.

    Sort by size.
    It is already sorted.
    Sort finished. time: 0s! WTH!
  • by Minwee ( 522556 ) <dcr@neverwhen.org> on Tuesday July 27, 2010 @10:36PM (#33053352) Homepage
    As long as you use Intelligent Design Sort [dangermouse.net].
  • by MichaelSmith ( 789609 ) on Tuesday July 27, 2010 @10:36PM (#33053356) Homepage Journal

    I had a 6502 system with BASIC in ROM and a machine code monitor. The idea is to copy a page (256 bytes) from the BASIC ROM to the video card address space. This puts random characters into one quarter of the screen. Then bubble sort the 256 bytes. It took about one second.

    For extra difficulty do it again with the full 1K of video. That's harder with the 6502 because you have to use vectors in RAM for the addresses. So reads and writes are a two-step operation, as is incrementing the address. You have to test for carry. But the result was spectacular.
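A minimal Python sketch of the same trick, for reference. The 6502 and video-RAM details obviously can't be reproduced here, so the 256 "screen bytes" are simulated with random values rather than a copy of the BASIC ROM:

```python
import random

def bubble_sort(buf):
    """Plain bubble sort: repeatedly sweep the buffer, swapping adjacent
    out-of-order bytes, until a sweep makes no swaps."""
    n = len(buf)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if buf[j] > buf[j + 1]:
                buf[j], buf[j + 1] = buf[j + 1], buf[j]
                swapped = True
        if not swapped:          # early exit once the page is sorted
            break
    return buf

# Simulate "copy a 256-byte page into video RAM", then sort it in place.
screen_page = [random.randrange(256) for _ in range(256)]
bubble_sort(screen_page)
assert screen_page == sorted(screen_page)
```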

    • Re: (Score:2, Insightful)

      by DerekLyons ( 302214 )

      The idea is to copy a page (256 bytes) from the BASIC ROM to the video card address space. This puts random characters into one quarter of the screen.

      Well, no. It puts the decidedly non-random contents of the BASIC ROM on the screen.

      Then bubble sort the 256 bytes. It took about one second.

      Which is roughly as surprising and unexpected as the Sun coming up in the East in the morning. Since much of the (non-random, limited-character-set) data is repeated, many of the bytes don't have to move very

    • That must have been in BASIC, yes?

      A 6502, in assembly language, can sort 256 adjacent bytes faster than that, even with a trivial bubble sort. They are not fast,

      • Re: (Score:2, Informative)

        A 256-byte bubble sort requires about 33,000 operations. At 1 MHz, that means every operation (compare + possible swap) took about 30 cycles. While somewhat slow, this is definitely much faster than what BASIC could have achieved.
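That estimate checks out as simple arithmetic (a sketch only; the one-second figure is the parent poster's recollection, not a measurement):

```python
# Back-of-the-envelope check: a classic bubble sort does n*(n-1)/2
# compare-and-maybe-swap steps, and the whole sort reportedly took
# about one second on a 1 MHz 6502.
n = 256
operations = n * (n - 1) // 2          # 32,640 -- "about 33,000"
clock_hz = 1_000_000                   # 1 MHz
cycles_per_op = clock_hz * 1.0 / operations
print(f"{operations} operations, ~{cycles_per_op:.0f} cycles each")   # ~31 cycles
```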
      • No, it was in hand-assembled machine code. The CPU ran at 1 MHz, but I think it was the bubble sort algorithm which made it slow. For each iteration it looped through the full 256 bytes.

    • Yeah, vectors to RAM addresses, having to use Y for indexing. The cycle count was one greater than normal indexing, pretty good. Wait, what were we talking about again?
    • Re: (Score:3, Insightful)

      by hey! ( 33014 )

      I always preferred Shell Sort to Bubble Sort. It's just as easy to code as Bubble Sort. Although it's not asymptotically better than Bubble Sort, it makes fewer exchanges and so tends to run faster in the average case. In my experience, it can be dramatically faster.

      Basically Shell Sort is Insertion Sort, but it starts by comparing keys that are far apart. The heuristic is that in an unsorted array, keys usually have pretty far to go before they are in their final position. The algorithm removes large
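A minimal Python sketch of the Shell Sort idea described above, using the simple halving gap sequence for illustration (better gap sequences give the sub-quadratic behavior mentioned in the replies):

```python
def shell_sort(a):
    """Insertion sort applied to elements a fixed gap apart, with the gap
    shrinking each pass; far-apart exchanges move keys most of the way
    toward their final positions early on."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            key = a[i]
            j = i
            # Gapped insertion sort: shift larger elements right by `gap`.
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 2
    return a

print(shell_sort([5, 2, 9, 1, 5, 6, 3]))  # [1, 2, 3, 5, 5, 6, 9]
```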

      • Re: (Score:3, Insightful)

        Hell, just about anything is better than Bubble Sort. I'm not sure why it's even taught. Whenever someone is left to come up with a sorting algorithm of their own for the first time, they'll usually re-create Selection Sort, and that's better than Bubble Sort.

        This might surprise you (I know it did me), but Shell Sort can do better than O(n^2) when implemented the correct way, making it asymptotically better than Bubble Sort as well.

        • Hell, just about anything is better than Bubble Sort. I'm not sure why it's even taught.

          I once read that bubble sort is the optimal sort on a Turing machine. Too bad it wasn't named tapeworm sort; then every student would understand immediately. It would be analyzed alongside dining-philosopher chopstick sort, where the dining philosopher won't eat (finicky like Tesla) unless holding a matched pair, with the option of swapping hands if holding a pair of sticks before release.

          If the purpose is to teach an

  • 1-byte records? (Score:2, Insightful)

    by mike260 ( 224212 )

    TFA variously refers to 1 trillion records and 1TB of data. So each record is 1 byte?
    Doesn't seem like that requires any real computation - you just go through the data maintaining a count of each of the 256 possible values (an embarrassingly parallel problem), then write it back out with 256 memsets (likewise trivially parallelisable).
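If the records really were single bytes, the counting approach described above fits in a few lines. A sketch of the idea only, not anything the UCSD team did:

```python
def counting_sort_bytes(data: bytes) -> bytes:
    """Sort a byte string by counting occurrences of each of the 256
    possible values, then writing each value back out `count` times."""
    counts = [0] * 256
    for b in data:                               # one pass to build the histogram
        counts[b] += 1
    out = bytearray()
    for value, count in enumerate(counts):
        out.extend(bytes([value]) * count)       # the "256 memsets"
    return bytes(out)

assert counting_sort_bytes(b"\x03\x01\x02\x01") == b"\x01\x01\x02\x03"
```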

    • Re:1-byte records? (Score:5, Informative)

      by mike260 ( 224212 ) on Tuesday July 27, 2010 @10:45PM (#33053396)

      Ah, my mistake - the "trillion data records" refers to a different benchmark. In the 1TB benchmark, records are 100 bytes long.

      • by cffrost ( 885375 )

        In the 1TB benchmark, records are 100 bytes long.

        "100 bytes?" Why this arbitrary, ugly, sorry-ass excuse of a number? An elegant, round number like 64, 128... hell, even 96 would have been a sensible and far superior choice.

        • "100 bytes?" Why this arbitrary, ugly, sorry-ass excuse of a number? An elegant, round number like 64, 128... hell, even 96 would have been a sensible and far superior choice.

          Because those are only round(ish) numbers in binary. Might make sense if you were sorting 1TiB of data, but they were not. Per TFA:

          one terabyte of data (1,000 gigabytes or 1 million megabytes)

          This clarifies that they were working with the SI prefix "tera", meaning that powers of ten divide much more evenly. To wit, 10^2-byte records in a 10^12-byte dataset = exactly 10^10 (ten billion) records. 64- or 128-byte records would divide evenly, but into a non-round number of records; 96-byte records would not divide evenly at all.

          This of course also assumes precisely 1TB of d
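The divisibility claim above is easy to verify (plain arithmetic, nothing from the paper):

```python
# Quick check of the divisibility argument, using an SI terabyte (10**12 bytes).
total_bytes = 10**12
for record_size in (100, 64, 128, 96):
    records, remainder = divmod(total_bytes, record_size)
    status = "evenly" if remainder == 0 else f"remainder {remainder}"
    print(f"{record_size}-byte records: {records:,} records ({status})")
# 100 -> 10,000,000,000   64 -> 15,625,000,000   128 -> 7,812,500,000
# 96  -> 10,416,666,666 with remainder 64
```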

        • by hawkfish ( 8978 )

          In the 1TB benchmark, records are 100 bytes long.

          "100 bytes?" Why this arbitrary, ugly, sorry-ass excuse of a number? An elegant, round number like 64, 128... hell, even 96 would have been a sensible and far superior choice.

          Because then you can't use all the elegant tricks involving cache lines that you were contemplating?

          A prime number might have been even better (100 is divisible by 4).

    • Re: (Score:1, Informative)

      by topcoder ( 1662257 )

      Doesn't seem like that requires any real computation - you just go through the data maintaining a count of each of the 256 possible values (an embarrassingly parallel problem), then write it back out with 256 memsets (likewise trivially parallelisable).

      That technique is called counting sort, BTW.

  • by SuperKendall ( 25149 ) on Tuesday July 27, 2010 @10:43PM (#33053388)

    You'd almost think at this point that, with super-fast systems, attention to algorithmic complexity hardly matters. It's good to see research still advancing, and that there are areas where carefully designed algorithms make a huge difference. I look forward to seeing how they fare against the Daytona set.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I work in the OLAP realm. Trust me, it matters. Being able to run an ad hoc query across terabytes of data with near real-time results is the holy grail of what we do. The industry has known for a while that parallel computing is the way to go, but only recently has the technology become cheap enough to consider deploying on a large scale. (Though Oracle will still happily take millions from you for Exadata if you want the expensive solution.)

      • One other area... (Score:2, Interesting)

        by SuperKendall ( 25149 )

        Come to think of it, one area where it also matters currently is mobile development. If you aren't considering memory or processor usage, you can quickly lead yourself into some really bad performance; thinking hard about how to make use of what little you have really matters in that space too.

        So only desktop or smallish backend development can generally remain unconcerned with algorithmic performance these days...

        I had to work with large datasets in my previous life as a backend IT guy, but nothing at

        • by raddan ( 519638 ) *
          Algorithmic efficiency also matters in the mobile space as designers get closer to building low-power systems that harvest energy directly from their environments. Just yesterday I was speaking with a researcher who had worked on animal tracking devices (e.g., for turtles) that collected power from small solar arrays. Any efficiencies that they could take advantage of, they used, from smart scheduling of their GPS units (don't madly run the unit if you don't have the power) to a delay-tolerant networking
  • by the_other_one ( 178565 ) on Tuesday July 27, 2010 @10:44PM (#33053394) Homepage
    How many parallel predicting octopuses were required to predict their victory?
    • Re: (Score:3, Funny)

      by Thanshin ( 1188877 )

      How many parallel predicting octopuses were required to predict their victory?

      Infinite, but one of them wrote some pretty good stories about a ghost.

    • f = n / x

      where n is the number of records, and x the number of tentacles per octopus.

      So 1000 records only need 125 parallel 8-legged octopi. This is the optimal number; adding any more will invoke Brooks's Law [wikipedia.org].

  • Only 52 nodes (Score:5, Interesting)

    by Gr8Apes ( 679165 ) on Tuesday July 27, 2010 @10:45PM (#33053406)

    You've got to be kidding me. Each node was only two quad-core processors with 16 500GB drives (big potential disk I/O per node), but this system doesn't even begin to scratch the very bottom of the top 500 list.

    I just can't imagine that if even the bottom rung of the top 500 were even slightly interested in this record, they wouldn't blow this team out of the water.

    • Re:Only 52 nodes (Score:4, Informative)

      by rm999 ( 775449 ) on Tuesday July 27, 2010 @10:49PM (#33053426)

      I don't know much about them, but perhaps most supercomputers break this rule: "The hardware used should be commercially available (off-the-shelf), and unmodified (e.g. no processor over or under clocking)"

      • Re: (Score:3, Informative)

        by afidel ( 530433 )
        Many of the systems on the TOP500 list are simple COTS clusters with perhaps an interesting interconnect. Remember the Appleseed cluster that was ranked in the top 10 a few years ago? It was nothing more than a bunch of Apple desktops with an interesting gigabit mesh network. Regardless, they hardly used optimal hardware; hex-core Xeons with 96GB of RAM per node and a better I/O subsystem could have crushed this result.
        • Re: (Score:3, Insightful)

          by Anonymous Coward
          You say that in a way where someone could misinterpret the interconnect as not being a big deal. Poor interconnects are why EC2 wasn't useful for HPC (their new option sounds somewhat better, but is limited to 16 nodes, IIRC) and why iPod Touches or similar devices have very few possible applications for clustering. If good interconnects weren't important, then clustering portable systems would be inevitable -- e.g., if you wanted more computing power for your laptop, you could just plug in your phone or mp
      • Re:Only 52 nodes (Score:5, Informative)

        by straponego ( 521991 ) on Tuesday July 27, 2010 @11:24PM (#33053572)
        No, most Top500 machines are composed of commercially available, unmodified parts. More than half use GigE interconnects, though Infiniband has nearly caught up. I'm not sure if you'd count Infiniband as COTS-- at least a couple of years ago, it was fairly involved to configure, and it's not cheap. But anybody who has the money can buy it.
      • Re: (Score:3, Insightful)

        Doubtful. The biggest bottlenecks for contests like this are I/O, network, and memory latency. While faster CPUs are always welcome, increases in CPU speed rarely make a HUGE difference, especially when you are talking about the relatively small gains you get from overclocking. Considering the drawbacks of overclocking, namely increased heat production and decreased reliability, it doesn't really seem all that compelling for supercomputer operators to employ it.
        • Re: (Score:3, Insightful)

          by arth1 ( 260657 )

          Doubtful. The biggest bottlenecks for contests like this are I/O, network, and memory latency.

          No, the biggest bottleneck is the algorithm, which is seldom machine-specific (there are exceptions, of course); thus what's fastest on these low-cost machines will also contribute to higher speeds on the really hot iron, once the best algorithms have been adopted.

          Quite frankly, I would like to see even lower max specs for competitions like this, which would allow competitors who can't afford the equipment to

    • when they sorted more than one terabyte of data (1,000 gigabytes or 1 million megabytes) in just 60 seconds.

      Each node is a commodity server with two quad-core processors, 24 gigabytes (GB) memory and sixteen 500 GB disks

      OK, let's do the math: 52 computers × 24 GB RAM each = 1,248 GB. Setting aside the RAM taken by the OS, that's roughly 1 TB of RAM to dedicate to the sorting.
      Lemme guess: the main difficulty was partitioning the data and then merging the in-memory sorted partitions?

      (Of course I'm posting before reading the details of the method they used; it is /. after all. I guess I even merit a prize for the weirdness of RTFA before posting.)
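That guess is essentially the classic external-sort pattern. A toy single-process sketch of the shape, which says nothing about what the UCSD system actually does:

```python
import heapq

def external_sort(records, chunk_size):
    """Toy external sort: sort fixed-size chunks independently (as if each
    chunk were one node's RAM-resident partition), then k-way merge the
    sorted runs in a single streaming pass."""
    runs = []
    for start in range(0, len(records), chunk_size):
        runs.append(sorted(records[start:start + chunk_size]))  # in-memory sort
    return list(heapq.merge(*runs))                             # merge the runs

data = [9, 3, 7, 1, 8, 2, 6, 5, 4, 0]
assert external_sort(data, chunk_size=3) == sorted(data)
```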

    • This would be a lot more interesting if they all ran on the same hardware?
  • by Lord_of_the_nerf ( 895604 ) on Tuesday July 27, 2010 @10:46PM (#33053416)
    LARPers > Fan-fiction writers > Professional Data Sorting Competitors > Furries
  • "a sort of 'World Cup of data sorting"

    It ended in a 0-0 tie?

  • by HockeyPuck ( 141947 ) on Tuesday July 27, 2010 @11:13PM (#33053532)

    Why is sorting 1TB so hard?

    I can just instruct my tape library to put the tapes in the library in alphabetical order in the slots... Y'know AA0000 AA0001 AA0002... moves a hell of a lot more than 1TB.

    • Re: (Score:2, Informative)

      by Carthag ( 643047 )

      Why is sorting 1TB so hard?

      I can just instruct my tape library to put the tapes in the library in alphabetical order in the slots... Y'know AA0000 AA0001 AA0002... moves a hell of a lot more than 1TB.

      That's not sorting 1TB. That's sorting n records where n = the number of tapes.

  • Did they sort the best-case input and claim the record? Or is it the average or worst case? Or is the algorithm's best-case time == worst-case time?
  • by amirulbahr ( 1216502 ) on Tuesday July 27, 2010 @11:36PM (#33053620)
    When I read the summary I thought, what's the big deal if the 1 TB of data only contained two records of 0.5 TB each? Then I saw that kdawson wrote the summary. So I browsed over here [sortbenchmark.org] and saw that the impressive thing is that they sorted 10,000,000,000 records of 100 bytes each, with 10-byte keys.
  • Considering that the previous terabyte sort record was held by Hadoop, it's not surprising that another system can do the same job in a fraction of the time with a fraction of the resources.

    Hadoop is a useful framework, but it's a horribly inefficient resource hog. Who was the f*ing genius who thought it was a good idea to write a high-performance computing application in Java?!?

    • Java was slow 15 years ago, not now!
    • Who was the f*ing genius who thought it was a good idea to write a high-performance computing application in Java?!?

      My Boss: "I agree. I personally would have copy-pasted the records into this slick little Excel sheet I made, and clicked the 'sort A-Z' button. Done and done. This programming stuff is easy!"

  • by pdxp ( 1213906 ) on Wednesday July 28, 2010 @12:58AM (#33053776)
    ... to make it clear that you won't be doing it (TritonSort, thanks for leaving that out, kdawson) on your desktop at home:
    • 10Gbps networking
    • 52 servers × 8 cores each = 416 cores
    • 24 GB RAM per server = 1,248 GB
    • ext4 filesystem on each, presumably with hardware RAID

    I think this is cool, but... how fast is it in a more practical situation?

    source [sortbenchmark.org]

  • by Patch86 ( 1465427 ) on Wednesday July 28, 2010 @01:34AM (#33053846)

    Impressive and all, but I take umbrage at calling it a "1 TB barrier". Is it disproportionately more difficult than sorting 0.99 TB?

    Breaking "the sound barrier" was hard because of the inherent difficulty of going faster that sound in an atmosphere (sonic booms and whatnot). It was harder than simply travelling that fast would have been.

    If this is just further evolution of sorting speed, it makes it a milestone, not a barrier.

    • by maxwell demon ( 590494 ) on Wednesday July 28, 2010 @02:38AM (#33054024) Journal

      Impressive and all, but I take umbrage at calling it a "1 TB barrier". Is it disproportionately more difficult than sorting 0.99 TB?

      At 1TB the attraction of the 1-bits gets so large that if you are not careful, your data collapses into a black hole.

    • It is psychological. It adds another digit to the size of the number.

      Otherwise it is as much of a barrier as "the stock market has broken through the 20,000-point barrier" - that kind of idea.

      Breaking the light-speed barrier, now that would be an achievement.

    • kdawson thinks in an esoteric language where numbers go "1, 2, 3... 999,999,999, many."

    • People were obviously scared of a whole terror byte.
    • Impressive and all, but I take umbrage at calling it a "1 TB barrier". Is it disproportionately more difficult than sorting 0.99 TB?

      Breaking "the sound barrier" was hard because of the inherent difficulty of going faster that sound in an atmosphere (sonic booms and whatnot). It was harder than simply travelling that fast would have been.

      If this is just further evolution of sorting speed, it makes it a milestone, not a barrier.

      Just think of it like the 4 minute mile. Nothing particularly sacred about runni

  • I have to admit that I'm genuinely impressed. This rate of data processing is almost as efficient as my girlfriend when she hears, evaluates and demolishes my alibis.

  • by melted ( 227442 ) on Wednesday July 28, 2010 @03:14AM (#33054128) Homepage

    Let's consider the 100TB-in-172-minutes thing they also did. 52 nodes at 16 spindles per node is 832 spindles total and 120GB of data per spindle. 120GB of data can be read in 20 minutes and transferred in another 15 to the target spindles (assuming uniform distribution of keys). You can then break it down into 2GB chunks locally (again by key) as you reduce. Then you spend another hour and a half reading individual chunks, sorting them in memory, concatenating and writing.

    Of course this only works well if the keys are uniformly distributed (which they often are) and if data is already on the spindles (which it often isn't).
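Those numbers are easy to re-derive. The ~100 MB/s per-disk sequential throughput below is an assumption for the sketch, not a figure from the benchmark entry:

```python
# Rough re-derivation of the spindle math above.
total_bytes = 100e12                      # 100 TB
spindles = 52 * 16                        # 832 disks
per_spindle = total_bytes / spindles      # ~120 GB per disk
throughput = 100e6                        # bytes/second, assumed
read_minutes = per_spindle / throughput / 60
print(f"{spindles} spindles, {per_spindle / 1e9:.0f} GB each, "
      f"~{read_minutes:.0f} min per full read")   # 832, 120 GB, ~20 min
```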

  • In third place, with a time of 01:02.305 is Sort United.

    In second place, with a time of 00:58.112 is Team Sorted.

    And in first place, with a time of 00:59.836 is UCSD!
    • In third place, with a time of 01:02.305 is Sort United.

      In second place, with a time of 00:58.112 is Team Sorted.

      And in first place, with a time of 00:59.836 is UCSD! ... which also designed the winning sorting algorithm.

      Fix'd

  • Assuming UCSD gives the algorithm to SETI, anyway, and doesn't instead take it to Wall Street to be used in detecting market trends and major share movements, as part of the never-ending quest to separate small investors from their money faster.
  • by jackd ( 64557 )
    Wow, I realize I am seriously a nerd, getting genuinely excited about someone sorting large amounts of data. Please don't tell my girlfriend, she'll realize I've been faking being a semi-normal cool guy for years, and leave me.
    • by Dunbal ( 464142 ) * on Wednesday July 28, 2010 @07:17AM (#33054856)

      Please don't tell my girlfriend, she'll realize I've been faking being a semi-normal cool guy for years, and leave me.

            Nah, she'll realize that you're more turned on by a new processor than by a pair of tits, and stay with you forever knowing you'll never cheat on her (except when you go to that site to download algorithms). You might have to give in and move out of the basement though. But not to worry - a good set of blackout curtains and it's almost the same. Plus as you get older the lack of dampness is easier on your rheumatism...

  • When are we going to have access to this technology? It is always nice to hear of new breakthroughs, but governments always control access to these new technologies and make it 5 to 10 years before we see any of it, while somewhere like Japan already has it available...

  • by Lord Grey ( 463613 ) * on Wednesday July 28, 2010 @07:35AM (#33054976)

    A paper describing the test is here [sortbenchmark.org]. TritonSort is the sorting system; GraySort and MinuteSort are the benchmark categories it was entered in.

    As TFA states, this is more about balancing system resources than anything else. The actual sort method used was "a variant of radix sort" (page two of the linked PDF). Everything else was an exercise in how to break up the data into manageable chunks (850MB), distribute the chunks to workers, then merge the results. That was the real breakthrough, I think. But I wonder how much is truly a breakthrough and how much was just taking advantage of modern hardware, since one of the major changes in MinuteSort is simply avoiding a couple of disk writes by keeping everything in RAM (a feat possible because that much RAM is readily available to a single worker).

    Regardless, this kind of work is very cool. If web framework developers (for example) paid greater attention to balancing data flow and work performed, we could probably get away with half the hardware footprint in our data centers. As it stands now, too many COTS -- and hence often-used -- frameworks are out of balance. They read too much data for the task, don't allow processing until it's all read in, etc. This causes implementors to add hardware to scale up.
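A toy sketch of the partition/distribute/sort-locally shape described above. The "workers" are just in-memory buckets here; the real TritonSort pipeline described in the paper is far more involved:

```python
from collections import defaultdict

def partition_then_sort(records, prefix_len=1):
    """Route each record to a bucket by its most significant key byte(s),
    sort each bucket independently, then concatenate buckets in bucket
    order. Every record in a lower bucket compares less than every record
    in a higher bucket, so the concatenation is globally sorted."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[:prefix_len]].append(rec)    # radix-style routing
    result = []
    for prefix in sorted(buckets):
        result.extend(sorted(buckets[prefix]))   # local in-memory sort
    return result

recs = [b"zeta", b"alpha", b"apple", b"mango", b"zen"]
assert partition_then_sort(recs) == sorted(recs)
```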

  • Who cares about the time it took?

    Show me the big-O notation for the algorithm or I am simply not interested.

    • by JustNiz ( 692889 )

      Totally agree.
      The unspoken implication here is that they have invented a great sorting algorithm; however, by just giving timings of bytes processed per second, it could also just mean they have a bubble sort running on some big iron.

  • It's not the number of bytes that's important, it's the number of records and the size of the sort keys.

    I can take two records of 0.5TB each and sort them quickly - very quickly if their sort keys are short.

    The summary and article headline said "a terabyte" but the article says "They sorted one trillion data records." This is impressive with today's hardware. It will be child's play with hardware from "the day after tomorrow."

  • abeertty There. 15 seconds.
