GPUs Helping To Lower CT Scan Radiation
Gwmaw writes with news out of the University of California, San Diego, on the use of GPUs to process CT scan data. Faster processing of noisy data allows doctors to lower the total radiation dose needed for a scan. "A new approach to processing X-ray data could lower by a factor of ten or more the amount of radiation patients receive during cone beam CT scans... With only 20 to 40 total number of X-ray projections and 0.1 mAs per projection, the team achieved images clear enough for image-guided radiation therapy. The reconstruction time ranged from 77 to 130 seconds on an NVIDIA Tesla C1060 GPU card, depending on the number of projections — an estimated 100 times faster than similar iterative reconstruction approaches... Compared to the currently widely used scanning protocol of about 360 projections with 0.4 mAs per projection, [the researcher] says the new processing method resulted in 36 to 72 times less radiation exposure for patients."
Re:CPU speed determines req. radiation amount? (Score:4, Interesting)
Pretty much.
So in essence they have built a parallel optimised calculation system rather than an iterative one, and we all know the one thing CUDA and OpenCL do VERY well is parallel processing.
It seems the real win here is the new code: it could run on a TI-82 calculator and still need only that lower radiation level; it's just that the algorithm is very well suited to a GPU's number-crunching.
Re:CPU speed determines req. radiation amount? (Score:2, Interesting)
The real breakthrough is the development of Compressed Sensing/Compressive Sampling algorithms; this is just an application.
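To make the compressed-sensing point concrete, here is a minimal sketch of sparse recovery from far fewer measurements than unknowns, using iterative soft-thresholding (ISTA). Everything here (the random sensing matrix, the sizes, the threshold) is an illustrative toy, not the actual reconstruction used in the paper:

```python
# Toy compressed-sensing recovery via ISTA (iterative soft-thresholding).
# A sparse signal of length n is recovered from only m < n measurements.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                     # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix (assumption)
y = A @ x_true                                  # the few measurements we get

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from spectral norm
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                    # gradient of the data-fit term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The sparsity prior is what lets 60 measurements pin down 200 unknowns; fewer projections in the CT setting plays the same role as fewer rows of `A` here.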
Re:CPU speed determines req. radiation amount? (Score:3, Interesting)
What's going on is that instead of taking a clear picture, they take a crappy picture and have the ludicrously fast GPU clean it up for them. While you could have done that by just putting 50 CPUs in parallel, the GPU makes it quite simple.
The speed is important because their imaging is iterative: with the GPU they're apparently waiting 1-2 minutes; without it, 2-3 hours, which is a rather long time to wait between scans.
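The iterative structure this comment describes can be sketched as a Landweber/SIRT-style loop, where each pass refines the image by back-projecting the mismatch between measured and predicted projections. The random stand-in matrix below is an assumption for illustration, not a real CT forward model:

```python
# Toy iterative-reconstruction loop (Landweber/SIRT-style).
# Each iteration is two dense matrix-vector products -- exactly the
# embarrassingly parallel work a GPU (or 50 CPUs) chews through.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_rays = 100, 400                 # toy sizes (assumption)
A = rng.standard_normal((n_rays, n_pix)) # stand-in projection operator
x_true = rng.random(n_pix)
y = A @ x_true                           # simulated projection data

step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n_pix)
for _ in range(2000):
    x += step * (A.T @ (y - A @ x))      # back-project the residual

print(np.linalg.norm(A @ x - y))         # data mismatch shrinks per iteration
```

In a real scanner the matrix-vector products become forward- and back-projection kernels over a 3D volume, which is why moving them to a GPU turns hours into minutes.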
Re:Funny what drives the HPC market... (Score:3, Interesting)
You and a couple of others in this sub-thread are defining the problem backwards. As near as I can tell, your approach is to look at computer A and computer B and then say "B is five times faster than A, therefore I need B." The correct way is to lay out your requirements: technical, financial, and SLAs for delivery of your "product." Then identify the system you need.
While it's nice to be able to cache gigabytes of data, the reality is that 2 GB is a fuckload of memory. Say you have a 21 MP camera (a 5D Mark II, for example) and want to do some imaging work. Give up 1 GB of your RAM to your OS and apps. The remaining 1 GB can hold more than six complete copies of your images at 16 bits per channel + 16 bits of alpha. If you've got 8 bits per channel, then you can have twelve copies. For a 10-megapixel, 8-bits-per-channel image (sufficient for most commercial work), 1 GB is enough space for roughly twenty-five images in RAM simultaneously. For the vast majority of users that's enough. Yes, it's possible for that not to be enough, but that says more about the user than the system.
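The counts above are easy to sanity-check. A back-of-envelope sketch, assuming 4 channels (RGB + alpha) and treating 1 GB as 2^30 bytes, so the results land near the comment's rounded figures:

```python
# Back-of-envelope: how many uncompressed images fit in RAM.
# Assumes 4 channels (RGB + alpha); figures are approximate.
def images_in_ram(megapixels, bits_per_channel, channels=4, ram_gb=1):
    bytes_per_image = megapixels * 1e6 * channels * bits_per_channel / 8
    return (ram_gb * 2**30) / bytes_per_image

print(images_in_ram(21, 16))   # 21 MP at 16 bpc + alpha: a bit over 6
print(images_in_ram(21, 8))    # 21 MP at 8 bpc: about 12-13
print(images_in_ram(10, 8))    # 10 MP at 8 bpc: about 25-27
```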