
GPU Supercomputer Could Crunch Exabyte of Data Daily For Square Kilometer Array

Posted by Soulskill
from the maybe-they-should-process-it-instead dept.
An anonymous reader writes "Researchers on the Square Kilometer Array project to build the world's largest radio telescope believe that a GPU cluster could be suited to stitching together the more than an exabyte of data that will be gathered by the telescope each day after its completion in 2024. One of the project heads said that graphics cards could be cut out for the job because of their high I/O bandwidth and core count, adding that a conventional CPU-based supercomputer doesn't have the necessary I/O bandwidth to do the work."
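For scale, a quick back-of-the-envelope on what "an exabyte per day" means in sustained bandwidth (the exabyte figure comes from the summary; everything else is plain arithmetic):

```python
# Sustained I/O bandwidth needed to ingest one exabyte of data per day.
# Figures are illustrative, using the SI definition of an exabyte.
EXABYTE = 10**18          # bytes
SECONDS_PER_DAY = 86_400

bytes_per_second = EXABYTE / SECONDS_PER_DAY
terabytes_per_second = bytes_per_second / 10**12
print(f"{terabytes_per_second:.1f} TB/s sustained")  # 11.6 TB/s sustained
```

That is roughly 11.6 TB/s around the clock, which is why I/O bandwidth, not raw compute, dominates the design discussion.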
Comments Filter:
  • Re:Computations (Score:3, Interesting)

    by girlintrainingpants (1954872) on Saturday August 04, 2012 @10:59AM (#40877585)

    Lots of 3D (fast) Fourier transforms

    Again, interesting; we do a lot of this at work: complex-valued 3D FFTs. I write my plan and processing code using CUFFT. I'm curious whether they'd be using fully custom code for such a large machine. We're only using 8x Tesla cards at work.
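For readers unfamiliar with the workflow the commenter describes: cuFFT follows a plan/execute pattern (e.g. `cufftPlan3d` plus `cufftExecC2C`). A minimal CPU stand-in for the same complex-to-complex 3D transform, sketched here with NumPy purely for illustration:

```python
# CPU analogue of the cuFFT workflow mentioned above: a complex-to-complex
# 3D FFT and its inverse. (cuFFT itself would use cufftPlan3d/cufftExecC2C;
# numpy.fft.fftn is the stand-in used here for illustration.)
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64, 64)
volume = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

spectrum = np.fft.fftn(volume)        # forward 3D FFT
roundtrip = np.fft.ifftn(spectrum)    # inverse should recover the input

print(np.allclose(volume, roundtrip))  # True
```

On a GPU, the same plan-then-execute structure applies, but the transform runs over device memory, which is where the I/O questions in the other comments come in.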

  • by DishpanMan (2487234) on Saturday August 04, 2012 @11:12AM (#40877647)
    We use GPU cards for computed tomography, and large reconstructions went from taking days, to hours, to minutes. OpenCL should be mature in 12 years, so they can go with that instead of CUDA, and by then GPGPU computing will probably be using the hybrid APU chips that AMD is starting to market. The bandwidth on the Tesla cards right now is the bottleneck, as the PCI bus transfer speeds can cause huge wait times for large data sets. Plus, even the biggest Tesla cards only have 4GB of on-board memory, which is not enough. I'd rather have the chips be on-board and have direct access to 512GB of RAM for large data sets. Still, I can't wait for the Kepler chips to come out; they'll probably reduce computation times by another factor of 3 for our image processing problems.
  • by loufoque (1400831) on Saturday August 04, 2012 @05:44PM (#40880529)

    Yes, that's why every industrial and medical CT system comes with GPU reconstruction routines unlike 5 years ago.

    I didn't say GPUs were not faster, I said they were not as much faster as people claimed.
    The OP said that his code went from taking days to taking minutes. That's an acceleration on the order of several thousand times. A GPU is simply not that much faster than a CPU. If the code became that much faster, it's simply because it was rewritten by competent people who knew how to make it fast, while the original CPU version was not.

    your ignorant post

    alas, you have no clue or experience about this topic

    Just so you know, I am the CEO of a company that develops compilers and libraries for parallel computing. We mostly work in two industries: image/video/multimedia/computer vision (a bit of medical imaging too) and banking/insurance/financial. Our people are seasoned computer architecture experts, many of whom also hold a PhD in various fields, including mathematics, robotics, and computer science. We have strong partnerships not only with NVIDIA, but also with AMD and Intel, which give us access to future products for evaluation. I myself contribute to the evolution of parallel programming in C++ as an HPC expert on the C++ standards committee.
    If you feel like engaging us for some consulting to help you improve the performance of your filtered backprojection (I myself have no knowledge of that field, but I assume it's similar to the tomography work for which we already have good results deployed in the industry), I'm sure our team would be delighted to help you.
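The claim in this thread that a GPU is "simply not that much faster than a CPU" can be sanity-checked against peak hardware figures. The numbers below are assumed, era-appropriate double-precision peaks, not benchmarks:

```python
# Rough sanity check: compare plausible peak double-precision throughput of a
# 2012-era GPU and CPU. Both figures are assumptions for illustration.
gpu_peak_gflops = 665.0    # e.g. a Tesla M2090-class card (assumed)
cpu_peak_gflops = 100.0    # e.g. an 8-core Xeon with AVX (assumed)

hardware_ratio = gpu_peak_gflops / cpu_peak_gflops
print(f"peak ratio: roughly {hardware_ratio:.0f}x")  # roughly 7x
```

A hardware-limited ratio in the single digits supports the argument that a thousands-fold speedup says more about the quality of the original CPU code than about the GPU.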
