GPU Supercomputer Could Crunch Exabyte of Data Daily For Square Kilometer Array 40
An anonymous reader writes "Researchers on the Square Kilometer Array project to build the world's largest radio telescope believe that a GPU cluster could be suited to stitching together the more than one exabyte of data that the telescope will gather each day after its completion in 2024. One of the project heads said that graphics cards could be cut out for the job because of their high I/O and core count, adding that a conventional CPU-based supercomputer doesn't have the necessary I/O bandwidth to do the work."
Computations (Score:1)
Re: (Score:1)
Re: (Score:1)
Re: (Score:3, Interesting)
Lots of 3D (fast) Fourier transforms
Again, interesting; we do a lot of this at work. Complex 3D FFTs. I write my plan and processing code using cuFFT. I'm curious whether they'd be using fully custom code for such a large computer. We're only using 8x Tesla cards at work.
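For readers who haven't used cuFFT: a minimal CPU-side sketch of the kind of complex 3D transform it performs, with NumPy standing in for the GPU library (the volume size here is made up; a cuFFT plan plays roughly the role of fixing the transform shape up front):

```python
import numpy as np

# Hypothetical volume size; a cuFFT plan (cufftPlan3d) would fix this
# transform shape once, then be reused for many executions.
nx, ny, nz = 64, 64, 64

# Complex single-precision input, as a cuFFT C2C transform would use.
vol = (np.random.rand(nx, ny, nz)
       + 1j * np.random.rand(nx, ny, nz)).astype(np.complex64)

# Forward 3D FFT, then inverse (cuFFT: CUFFT_FORWARD / CUFFT_INVERSE).
spec = np.fft.fftn(vol)
back = np.fft.ifftn(spec)

# Round trip should recover the input to within float precision.
print(np.allclose(back, vol, atol=1e-4))  # True
```

On a GPU the same shape of computation runs across thousands of threads, which is why FFT-heavy pipelines map to it so well.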
Re: (Score:2)
There are better options than cuFFT, especially when multiple GPUs are involved.
Re: (Score:2)
If you are just doing FFT, a single FPGA is even better than many GPUs.
Re:Computations (Score:5, Informative)
The SKA will have digitised signals coming from one or more receiving heads and radio receivers mounted on each of the 3000 radio telescopes that form the array. There's a massive amount of data that needs to be time-correlated to within a nanosecond or so (over transmission distances > 1000km), corrected for known system distortions, subject to beam forming, corrected for rotation and atmospheric effects, passed through Fourier analysis, analysed for polarisation, filtered, binned, summarised and stored in useful ways. Some of the tasks need to be done in real time; others can wait. Some of those tasks are heavy on the floating-point work and easy to parallelise. Much can be done with dedicated hardware, but that is much less flexible over the longer term than a programmable device.
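As a toy illustration of the time-correlation step (not the actual SKA correlator, which works on channelised voltage streams): the relative delay between two antenna signals can be recovered from the peak of an FFT-based cross-correlation. All sizes and the delay below are invented:

```python
import numpy as np

# Toy model: the same noise-like signal arrives at two "antennas"
# with a relative delay of true_delay samples.
rng = np.random.default_rng(0)
n, true_delay = 4096, 37
a = rng.standard_normal(n)
b = np.roll(a, true_delay)  # delayed copy of the same signal

# Circular cross-correlation via FFT: corr = IFFT(FFT(a) * conj(FFT(b))).
corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# The correlation peak sits at lag (n - true_delay), so the delay
# (modulo n) is recovered as:
est = (-np.argmax(corr)) % n
print(est)  # 37
```

The real system does this pairwise across thousands of baselines and frequency channels, which is where the floating-point load comes from.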
2024 (Score:2, Insightful)
Re: (Score:3)
Read the paper. They make rather optimistic assumptions about Moore's law.
Re:2024 (Score:5, Informative)
And for good measure, now the actual paper:
http://www.skatelescope.org/uploaded/31235_139_Memo_Ford.pdf [skatelescope.org]
Funny thing, I was reading this last night.
Not well explained (Score:5, Informative)
I guess they didn't get anyone particularly technical to write that article or the summary.
By I/O I guess they mean memory bandwidth. GPUs have a LOT of bandwidth to their on-board memory; the problem is that they sit at the end of a PCIe bus from the CPU, and the CPU has to handle most of the bookkeeping (and the actual I/O, i.e. taking data from an external source).
So what is important is the compute density i.e. how much computation you do for each piece of data. Getting stuff into the GPU is slow, getting stuff out is slow, but doing stuff on the data is very very fast (because you have so many compute units and so much memory bandwidth).
That is also the way they are programmed: the main code runs on the CPU, and kernels get launched on the GPU with explicit or implicit transfer of data from CPU memory to GPU memory and back again.
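A back-of-envelope model of that trade-off (all throughput numbers below are assumed round figures, not measurements): offloading only pays off when the compute saved exceeds the PCIe transfer cost.

```python
# Assumed round numbers, not measurements.
PCIE_GBS   = 6.0    # effective PCIe bandwidth, GB/s
CPU_GFLOPS = 100.0  # sustained CPU throughput
GPU_GFLOPS = 500.0  # sustained GPU throughput

def offload_wins(bytes_moved, flops):
    """True if copy-to-GPU + GPU compute + copy-back beats staying on the CPU."""
    t_gpu = 2 * bytes_moved / (PCIE_GBS * 1e9) + flops / (GPU_GFLOPS * 1e9)
    t_cpu = flops / (CPU_GFLOPS * 1e9)
    return t_gpu < t_cpu

# Low arithmetic intensity (1 flop/byte): transfers dominate, stay on the CPU.
print(offload_wins(1e9, 1e9))   # False
# High arithmetic intensity (1000 flops/byte): GPU wins despite the copies.
print(offload_wins(1e9, 1e12))  # True
```

That is exactly why "compute density" is the figure of merit: the PCIe term is fixed per byte, so only flops-per-byte decides the winner.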
I do have high hopes for things like Fusion ( http://en.wikipedia.org/wiki/AMD_Fusion [wikipedia.org] ), which gets rid of the PCIe bus and makes it a lot easier to get data to the GPU cores and back again.
And if you are going to mention GPU machines, why not mention Titan? ( http://www.olcf.ornl.gov/computing-resources/titan/ [ornl.gov] )
GPU perfect for image analysis (Score:4, Interesting)
Re: (Score:2)
A gpu is at most 20 times faster than a cpu while costing 4 times as much. If your code is so much faster on gpu, it's just because your cpu version was crap and not optimized.
Re: (Score:3)
You have no idea what you are talking about.
Re: (Score:3)
Re:GPU perfect for image analysis (Score:5, Informative)
You are the one without a clue what you are talking about. Let's look at the fastest shipping devices from Intel and Nvidia.
An Intel Sandy Bridge CPU (8 cores, 2.6GHz) has a peak compute of 166 DP GFLOPS and a peak memory bandwidth of 51.2GB/s.
A GF110-based Tesla has a peak compute of 666 DP GFLOPS and a peak memory bandwidth of 177.4GB/s.
That is 4 times the raw DP compute and 3.5x the raw memory bandwidth.
Now this is best case for the GPU. In reality, they tend to have far lower efficiency (actual versus peak) numbers for a few reasons. Firstly, they are harder to program and need to be driven by a CPU. Secondly, they require far higher parallelism, which can fall afoul of Amdahl's law. Thirdly, they have limited memory capacities and a relatively slow, high-latency PCIe connection to main memory, over which input data must be copied in and results copied back. Fourthly, the Sandy Bridge CPU has far greater capabilities to extract performance: it is aggressively out-of-order and has several levels of large, fast caches.
Look at the numbers on top500 supercomputers. Linpack (which is very easy and incredibly parallel, i.e., a great case for GPUs). The top Xeon result achieves 91% efficiency. The top NVIDIA result got 54.5%.
So in a *real* workload when comparing a properly optimized CPU implementation with an optimized GPU implementation, you would be very lucky to see a 4x increase with the GPU. Very lucky indeed. Somewhere around 2-3x would be more typical. No matter how much you stick your head in the sand, you can't get away from the reality of these numbers.
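Plugging the quoted peaks and Linpack efficiencies into a quick calculation shows where that 2-3x figure comes from:

```python
# Peak DP GFLOPS from above, scaled by the quoted top500 Linpack
# efficiencies to estimate sustained throughput.
cpu_peak, cpu_eff = 166.0, 0.91   # Sandy Bridge Xeon
gpu_peak, gpu_eff = 666.0, 0.545  # GF110-based Tesla

cpu_sustained = cpu_peak * cpu_eff  # ~151 GFLOPS
gpu_sustained = gpu_peak * gpu_eff  # ~363 GFLOPS

print(round(cpu_sustained, 1), round(gpu_sustained, 1),
      round(gpu_sustained / cpu_sustained, 2))
# 151.1 363.0 2.4
```

So even on Linpack, about the friendliest possible workload for a GPU, the sustained advantage is roughly 2.4x, not the 20x-plus figures often quoted.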
Now there are some other cases where fixed-function units on the GPU have been used to provide a larger speedup. That's all well and good, but it tends to be rather limited. It is akin to comparing against a CPU workload that uses the CPU's encryption or random-number acceleration instructions.
Here is some further reading if you're interested.
www.cs.utexas.edu/users/ckkim/papers/isca10_ckkim.pdf
www.realworldtech.com/compute-efficiency-2012/
top500.org
Re: (Score:2)
This is a very insightful post, too bad it is not rated higher.
Re: (Score:1)
Re: (Score:1)
A gpu is at most 20 times faster than a cpu while costing 4 times as much. If your code is so much faster on gpu, it's just because your cpu version was crap and not optimized.
Yes, that's why every industrial and medical CT system comes with GPU reconstruction routines unlike 5 years ago. But don't let that little fact stop you from your ignorant post. Please do write your efficient CPU based reconstruction code for your custom hardware, and sell it for cheaper than others in the industry. I would buy it and you would make money. But alas, you have no clue or experience about this topic because you have no idea what a filtered backprojection is, or how to write CUDA code, nor
Re:GPU perfect for image analysis (Score:4, Interesting)
I didn't say GPUs were not faster, I said they were not as much faster as people claimed.
The OP said that his code went from taking days to taking minutes. That's an acceleration of the order of several thousand times. A GPU is simply not that much faster than a CPU. If it became so much faster, it's simply that it was rewritten by competent people who knew how to make it fast, while the original CPU version was not.
Just so you know, I am the CEO of a company that develops compilers and libraries for parallel computing. We mostly work in two industries: image/video/multimedia/computer vision (a bit of medical imaging too) and banking/insurance/finance. Our people are seasoned computer-architecture experts, many of whom also hold a PhD in fields including mathematics, robotics, and computer science. We have strong partnerships not only with NVIDIA but also with AMD and Intel, which give us future products for evaluation. I myself contribute to the evolution of parallel programming in C++ as an HPC expert on the C++ standards committee.
If you feel like applying for some consulting to have us help improve the performance of your filtered backprojection (I myself have no knowledge of that field, but I assume it's similar to tomography, for which we have good results already deployed in the industry), I'm sure our team would be delighted to help you.
Re: (Score:3)
Yawn. GPUs are good for CT because you have a medium amount of data and a lot of processing to compensate for crap detectors and low ionizing-radiation levels. You're staking the future of your CT on AMD? Good luck with that. I had people from one of the big 3 CT vendors evangelizing to me about the GPU and AMD. My group chose not to work with them because they were being stubborn on their religious choices of product and not on solving the problems. That and stealing some of our IP.
For an array, and
Re: (Score:2)
And I'll have 32,000 cores to spare. (Score:1)
> believe that a GPU cluster could be suited to stitching together the more than
> an exabyte of data that will be gathered by the telescope each day after its
> completion in 2024
Nah, I'll let you use an app on my Galaxy S12
Faux Supercomputers (Score:1)
GPU I/O bandwidth? (Score:3, Informative)
a conventional CPU-based supercomputer doesn't have the necessary I/O bandwidth to do the work.
I work in HPC, and the trend is towards heterogeneous architectures (CPU + accelerators). Moore's law, power requirements and economics are dictating that trend. It's definitely a stretch to claim that you get better I/O bandwidth with GPUs. Even with PCIe Gen 3, the effective bandwidth you get per CPU core is greater than that of an 'equivalent' GPU core.
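A rough per-core comparison under assumed hardware figures (a Kepler-class core count and quad-channel DDR3; both are assumptions picked for illustration) makes the point:

```python
# Assumed figures for illustration, not measurements.
PCIE3_X16_GBS = 15.75  # PCIe 3.0 x16 usable bandwidth, GB/s
GPU_CORES     = 2688   # e.g. a Kepler-class part
CPU_MEM_GBS   = 51.2   # quad-channel DDR3-1600
CPU_CORES     = 8

print(round(PCIE3_X16_GBS / GPU_CORES * 1000, 2),
      "MB/s per GPU core over PCIe")   # 5.86 MB/s per GPU core over PCIe
print(round(CPU_MEM_GBS / CPU_CORES, 2),
      "GB/s per CPU core from main memory")  # 6.4 GB/s per CPU core from main memory
```

Roughly three orders of magnitude apart: each GPU core is starved the moment its data has to cross the PCIe bus.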
Re: (Score:1)
Re: (Score:1)
Probably not (Score:1)
GPUs have small memory footprints. SKA will be processing HUGE images and data sets, and the image creation cannot be broken up into discrete independent chunks, so the I/O between GPUs is a real problem. Obviously CPUs have the same problem, as the on-chip memory is (relatively) tiny, but they are designed to pull from the much larger system memory, which should be adequate.
Image analysis may well be a different kettle of fish.
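The chunking problem can be sketched as follows. The tile size and the "device" here are stand-ins, and the example deliberately uses an independent per-tile reduction, which is exactly the luxury SKA image creation does not always have:

```python
import numpy as np

# When the working set exceeds device memory, the host must stream it
# through in tiles; any coupling between tiles then turns into extra
# host<->device traffic. Sizes here are toy values.
data = np.arange(1_000_000, dtype=np.float64)
tile = 100_000  # pretend this is all that fits on the accelerator

partials = []
for start in range(0, data.size, tile):
    chunk = data[start:start + tile]  # "copy to device"
    partials.append(chunk.sum())      # "kernel": independent per tile
total = sum(partials)                 # final reduction on the host

print(total == data.sum())  # True
```

A sum splits cleanly like this; gridding and deconvolution steps in imaging do not, and that is where the inter-GPU I/O bill arrives.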