Medicine

GPUs Helping To Lower CT Scan Radiation

Gwmaw writes with news out of the University of California, San Diego, on the use of GPUs to process CT scan data. Faster processing of noisy data allows doctors to lower the total radiation dose needed for a scan. "A new approach to processing X-ray data could lower by a factor of ten or more the amount of radiation patients receive during cone beam CT scans... With only 20 to 40 X-ray projections in total and 0.1 mAs per projection, the team achieved images clear enough for image-guided radiation therapy. The reconstruction time ranged from 77 to 130 seconds on an NVIDIA Tesla C1060 GPU card, depending on the number of projections — an estimated 100 times faster than similar iterative reconstruction approaches... Compared to the currently widely used scanning protocol of about 360 projections with 0.4 mAs per projection, [the researcher] says the new processing method resulted in 36 to 72 times less radiation exposure for patients."
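
A quick sanity check of the quoted dose figures (treating total exposure as projections times mAs per projection, which appears to be how the article's comparison is computed):

    # Rough check of the quoted 36-72x dose-reduction claim.
    # Total exposure approximated as (number of projections) * (mAs per projection).
    old = 360 * 0.4                          # conventional protocol: 144 mAs
    new_low, new_high = 20 * 0.1, 40 * 0.1   # new protocol: 2 to 4 mAs
    print(old / new_high, old / new_low)     # -> 36.0 72.0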

Comments Filter:
  • by Barny ( 103770 ) on Friday July 23, 2010 @11:25AM (#33003468) Journal

    Pretty much.

    The reconstruction time ranged from 77 to 130 seconds on an NVIDIA Tesla C1060 GPU card, depending on the number of projections, an estimated 100 times faster than similar iterative reconstruction approaches, says Jia.

    So in essence they have built a parallel optimised calculation system rather than an iterative one, and we all know the one thing CUDA and OpenCL do VERY well is parallel processing.

    It seems the real win here is the new code: it could run on a TI-82 calculator and still only require that level of radiation; it's just that it's very well suited to having a GPU crunch it.
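
    A toy illustration of the point (a generic least-squares iteration, not the paper's actual algorithm): within each pass, every voxel's update can be computed independently from the projection data, which is exactly the shape of work a GPU handles well. NumPy vectorisation stands in for a CUDA kernel here.

        # Toy iterative reconstruction: minimise ||A x - b||^2 by gradient descent.
        # A is a stand-in "projection" matrix, not a real cone-beam CT geometry.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((800, 400))
        x_true = rng.standard_normal(400)
        b = A @ x_true                          # simulated, noise-free projection data

        x = np.zeros(400)
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size from the spectral norm
        for _ in range(500):
            x += step * (A.T @ (b - A @ x))     # every element updates independently: embarrassingly parallel
        print(np.linalg.norm(x - x_true))       # error shrinks toward zero as iterations proceed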

  • by Anonymous Coward on Friday July 23, 2010 @11:26AM (#33003492)

    The real breakthrough is the development of Compressed Sensing/Compressive Sampling algorithms; this is just an application.
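
    A minimal sketch of that idea, for the curious: compressed sensing recovers a signal from far fewer measurements than unknowns by exploiting sparsity, shown here with plain iterative soft-thresholding (ISTA) on a synthetic problem. The paper's regulariser and solver may well differ, so treat this purely as a toy.

        # Toy compressed sensing: recover a sparse x from m << n measurements
        # by minimising ||A x - b||^2 + lam * ||x||_1 with ISTA.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m, k = 400, 100, 10                        # 400 unknowns, 100 measurements, 10 nonzeros
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        b = A @ x_true

        lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        for _ in range(2000):
            z = x + step * (A.T @ (b - A @ x))                        # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold: the sparsity prior
        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))    # relative error should be small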

  • by jandrese ( 485 ) <kensama@vt.edu> on Friday July 23, 2010 @11:28AM (#33003514) Homepage Journal
    My guess is that each scan requires a considerable amount of processing to render into something we can read on the screen. Probably billions of FFTs or something. You can make a tradeoff between more radiation (cleaner signal) and more math, but previously you would have needed a million dollar supercomputer to do what you can do with $10k worth of GPUs these days, which is how they're saving on radiation.
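
    One way to see that trade-off: X-ray detection is photon counting, so the relative noise in each detector reading falls only as the square root of the dose (basic Poisson statistics, not a figure from the article). Cut the dose and the measurements get noisier, which the reconstruction then has to work harder to suppress.

        # Poisson photon statistics: relative noise per reading ~ 1/sqrt(mean counts),
        # so a 10x dose cut costs roughly a 3.2x increase in per-reading noise.
        import numpy as np

        rng = np.random.default_rng(2)
        for mean_counts in (10_000, 1_000):                  # "high dose" vs "low dose" reading
            counts = rng.poisson(mean_counts, 100_000)
            print(mean_counts, counts.std() / counts.mean()) # ~0.010 vs ~0.032
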
  • by Zironic ( 1112127 ) on Friday July 23, 2010 @11:28AM (#33003520)

    What's going on is that instead of taking a clear picture they take a crappy picture and have the ludicrously fast GPU clean it up for them. While you could have done that by just putting 50 CPUs in parallel, the GPU makes it quite simple.

    The speed is important because their imaging is iterative: with the GPU they're apparently waiting 1-2 minutes, while without it the reconstruction takes 2-3 hours, which is a rather long time to wait between scans.
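
    Those two numbers are consistent with each other: the summary's 77-130 second GPU times, scaled by the claimed ~100x speed-up over comparable CPU-based iterative reconstruction, land right around the 2-3 hour range.

        # Sanity check: quoted GPU reconstruction times scaled by the claimed ~100x speed-up.
        for gpu_seconds in (77, 130):
            print(gpu_seconds, "s on GPU ->", round(gpu_seconds * 100 / 3600, 1), "hours without")  # ~2.1 to ~3.6 h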

  • by Score Whore ( 32328 ) on Friday July 23, 2010 @01:04PM (#33004666)

    You and a couple of others in this sub-thread are defining the problem backwards. As near as I can tell, your approach is to look at computer A and computer B and then say "B is five times faster than A, therefore I need B." The correct way is to lay out your requirements (technical, financial, and SLAs for delivery of your "product") and then identify the system you need.

    While it's nice to be able to cache gigabytes of data, the reality is that 2 GB is a fuckload of memory. Say you have a 21 MP camera (a 5D Mark II, for example) and want to do some imaging work. Give up 1 GB of your RAM to your OS and apps. The remaining 1 GB can hold more than six complete copies of your images at 16 bits per channel plus 16 bits of alpha. If you've got 8 bits per channel then you can have twelve copies. With 10 megapixel, 8-bits-per-channel images (sufficient for most commercial work), 1 GB is enough space for twenty-five images in RAM simultaneously. For the vast majority of users that's enough. Yes, it's possible for that not to be enough, but that says more about the user than the system.
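
    For what it's worth, those round numbers line up with uncompressed buffers of pixels x 4 channels x bytes per channel (assuming the alpha channel is stored at the same depth as the colour channels):

        # Checking the comment's arithmetic: uncompressed image buffer sizes vs a 1 GB budget.
        GB = 1024 ** 3
        for megapixels, bits_per_channel in ((21, 16), (21, 8), (10, 8)):
            size = megapixels * 1e6 * 4 * bits_per_channel / 8       # 4 channels incl. alpha
            print(f"{megapixels} MP @ {bits_per_channel}-bit: "
                  f"{size / 2**20:.0f} MiB/image, {GB / size:.1f} copies per GB")
        # -> roughly 160 MiB (6.4 copies), 80 MiB (12.8 copies), 38 MiB (26.8 copies)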

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...