New Algorithm Could Substantially Speed Up MRI Scans
An anonymous reader writes "In a paper to be published in the journal Magnetic Resonance in Medicine, researchers detail an algorithm they have developed to dramatically speed up the process of producing MRI scans. The algorithm uses information gained from the first contrast scan to help it produce the subsequent images. In this way, the scanner does not have to start from scratch each time it produces a different image from the raw data, but already has a basic outline to work from, considerably shortening the time it takes to acquire each later scan."
Re:Why wait? (Score:5, Informative)
I work at an MRI research lab at UPenn and I can comment on this article (surprised it is on the front page, BTW). During a normal clinical scan, the imaging protocol typically requires several different image types to be acquired (T1-weighted, T2-weighted, PD-weighted, T1rho maps, ASL, DTI, etc.); Wikipedia can explain what those mean. With a slew of different image types, the clinician can make a diagnosis based on the different contrast each image type provides. The contrasts are based on the inherent T1, T2, T2*, T1rho, diffusion, etc. of the tissue.

When creating the image, the scanner plays out a series of RF pulses (between roughly 64 and 300 MHz depending on MRI field strength - look up Larmor frequency). The image reconstruction is essentially a 2D or 3D Fourier transform of a series of data points (which are actually electrical signals picked up by the MRI equipment). The raw data (k-space in MRI terms, or Fourier space everywhere else) consists of high-frequency and low-frequency components. The high-frequency data ends up creating the edge borders in the images, and the low-frequency data generates the actual gray-scale contrast between tissues.

Without reading the article, I assume that the researchers are copying the high-frequency data from one set to another, thereby eliminating the need to reacquire the edges. Without the edges, the images would be a big blur. The issue with this is that if there is patient movement from one data set to the next, it is difficult to accurately guess the borders (you can potentially use navigators to correct for this). And the more data you copy, the more the contrast stays fixed from one set to the next, so there is a trade-off between speed and accuracy.
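The k-space idea above can be sketched in a few lines of NumPy. This is purely illustrative (a toy phantom, not the paper's actual algorithm): two synthetic "contrasts" of the same anatomy share their high-frequency k-space (edges) while each keeps its own low-frequency center (gray-scale contrast), and a hybrid image is reconstructed from the mix.

```python
import numpy as np

def kspace(img):
    """Centered 2D Fourier transform of an image (image space -> k-space)."""
    return np.fft.fftshift(np.fft.fft2(img))

def image(k):
    """Inverse transform back to image space (magnitude image)."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

def center_mask(shape, radius):
    """Boolean mask selecting the low-frequency center of centered k-space."""
    ny, nx = shape
    y, x = np.ogrid[:ny, :nx]
    return (y - ny // 2) ** 2 + (x - nx // 2) ** 2 <= radius ** 2

# Two synthetic "contrasts" of the same anatomy: identical edges,
# different gray levels (crudely mimicking T1- vs T2-weighting).
n = 64
phantom = np.zeros((n, n))
phantom[16:48, 16:48] = 1.0          # a square "organ"
scan_a = 0.8 * phantom + 0.1         # contrast of the first scan
scan_b = 0.3 * phantom + 0.5         # contrast of a later scan

mask = center_mask((n, n), radius=8)

# Hybrid reconstruction: low-frequency contrast comes from scan B,
# high-frequency edge information is reused from scan A.
k_hybrid = np.where(mask, kspace(scan_b), kspace(scan_a))
hybrid = image(k_hybrid)
```

Because the low-frequency center carries the DC term, the hybrid's overall gray-scale level follows scan B, while the edges come "for free" from scan A's data - which is exactly why copying high-frequency data across contrasts could shorten acquisition, at the cost of being wrong if the patient moved between scans.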
As for doing the data reconstruction offline: a 2D or 3D Fourier transform takes the computer about a second, so reconstruction speed is not the issue.
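The point about reconstruction speed is easy to check. This sketch (matrix size and hardware are my assumptions, not from the post) times a 2D inverse FFT of a typical 256x256 raw-data matrix, which completes in a small fraction of a second on any modern machine:

```python
import time
import numpy as np

# Simulated raw k-space data: complex-valued, 256x256 (a common matrix size).
rng = np.random.default_rng(0)
raw = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))

# Time a single-slice reconstruction (2D inverse FFT + magnitude).
t0 = time.perf_counter()
img = np.abs(np.fft.ifft2(raw))
elapsed = time.perf_counter() - t0
print(f"2D FFT reconstruction took {elapsed * 1000:.2f} ms")
```

The acquisition, not the Fourier transform, is what takes minutes - which is why the speedup in the article comes from acquiring less data, not from reconstructing it faster.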
Sorry for the length.
Re:Patents etc. (Score:5, Informative)
Not to say the patent system isn't unfair/broken, but it's often been said that we shouldn't patent algorithms, and that they should be open source, etc. But what if it's someone's livelihood, and years of research went into the algorithm, as I bet happened here? Should programmers be given less due just because they're not working with real materials or making physical inventions?
I've worked as a "programmer" (at least, in part) for 19 of the 22 years of my career, and I have over a dozen patents issued. I got a thousand-dollar bonus each for maybe three of them; every employment contract I have ever signed immediately transfers ownership of any of "my" inventions to the company, including future royalties, etc. I think my working conditions, with respect to IP ownership, are representative of >90% of programmers out there.
Of those rare programmers who might invent something patentable while not under contract to their employer to transfer ownership of the IP, probably >90% of them can't afford the time and expense of prosecuting a patent application.
Of those especially rare programmers who might successfully get a patent or two of their own, most could not afford to do anything about it if a corporation of any size infringed it. At best they might hope to sell their IP to a megacorp, but they wouldn't be in much of a position to negotiate its value. An exception is working for ultra-small startups, but 19 of my 22 years have been at some six different startups of 6-25 people in size; any smaller would be very hard from an income-security standpoint. The one thing I have never done is sign on at the moment of inception, when company shares are being handed out like toilet paper (hint: you'd usually get better value out of the toilet paper than out of shares in startups that green).
Of those exceedingly rare programmers who might have the means to negotiate fair value for their own issued patents, most of them probably don't need the money anyway and have better things to do with their time.
Physicians seem to have just enough personal financial juice to get something out of the patent system, but even they are kind of in a commodity market - I've heard it said among investors "these medical device startups, if they have all their IP in order and something worth something to someone somewhere, they pretty much have a standard value of about $3M if you can find an exit buyer." The physicians and their friends tend to sink $500K to $1M into the company "building it up" to a point where it is interesting to the bigger players, getting their FDA clearances, etc. Some get lucky and get bought, many just fizzle out after the early round investors get tired of pumping money in.
Oh, for what it's worth, 80%+ of my patents have nothing to do with software / algorithms, and none of the ones that got me the bonuses did. Sadly, several of them are crap, mostly the ones I got the bonus money for. In the non-bonus environment, if an idea was crap, I'd have little enthusiasm for it, which would tend to turn off everyone else and it would die in development. However, in the bonus driven environment, you'd tend to "read" the rest of the team (including upper management who were rarely present at meetings), and if they liked it, hell yeah, I like it too, let's get this application done and get that bonus!