AI Medicine Science

Nvidia Researchers Generate Synthetic Brain MRI Images For AI Research (zdnet.com) 48

AI holds a great deal of promise for medical professionals who want to get the most out of medical imaging. However, when it comes to studying brain tumors, there's an inherent problem with the data: abnormal brain images are, by definition, uncommon. New research from Nvidia aims to solve that. From a report: A group of researchers from Nvidia, the Mayo Clinic, and the MGH & BWH Center for Clinical Data Science this weekend are presenting a paper on their work using generative adversarial networks (GANs) to create synthetic brain MRI images. GANs are effectively two AI systems that are pitted against each other -- one that creates synthetic results within a category, and one that identifies the fake results. Working against each other, they both improve. GANs could help expand the data sets that doctors and researchers have to work with, especially when it comes to particularly rare brain diseases.
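[Editor's note] The adversarial setup the summary describes can be sketched in a toy form: a one-parameter "generator" shifts noise toward the real data distribution while a logistic "discriminator" learns to tell real from fake. This is purely illustrative, with hand-derived gradients on 1-D data; it is not from the Nvidia paper, which trains deep convolutional networks on full MRI volumes.

```python
# Toy GAN: "real" data ~ N(4, 1); generator G(z) = z + theta;
# discriminator D(x) = sigmoid(w*x + b). All gradients derived by hand.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

theta = 0.0          # generator parameter (shift applied to noise)
w, b = 0.0, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for step in range(4000):
    real = rng.normal(4.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step (non-saturating loss): gradient ascent on log D(fake)
    fake = rng.normal(0.0, 1.0, batch) + theta
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

print(f"generator shift theta = {theta:.2f} (real mean is 4.0)")
```

As the two players push against each other, the generator's shift drifts toward the real data's mean, which is the "working against each other, they both improve" dynamic in miniature.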
  • Link to Paper (Score:2, Interesting)

    by Anonymous Coward

    On GANs generally, since no actual research is linked, here is the original GAN paper: https://arxiv.org/pdf/1406.2661.pdf

  • by tempmpi ( 233132 ) on Monday September 17, 2018 @12:54AM (#57326038)

    I wonder if this can actually work. If you don't have enough images to train a classifier, why would training a GAN work? And even if training a GAN works, those images won't contain any information about tumors that were not already contained in the original images.

    • Using a GAN to generate fakes won't produce novel data that wasn't in the originals, but what it can do is pull out features common to the original dataset and recombine them into new synthetic pseudo-scans that can be used for further training or study. It seems a bit backwards, but this is a job GANs have proven adept at in other fields.

      Think of it this way:

      Algorithm 1 says: "Hey, I'm having trouble understanding these diagrams. I don't have enough examples to study."

      Algorithm 2 says: "No worry lit

      • Yes, and a GAN can easily combine features in unnatural ways that never existed in nature.

        Unless the natural probability distribution of each clinically important feature can be accurately modeled by the deep net, combinatoric perversity is all too likely: feature combinations will arise synthetically that do not occur together in nature, and these images won't be excluded, due to the lack of counterexamples in the training set.

        I work in medical image analysis professionally, and I'm certain no physician or bio
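[Editor's note] The "combinatoric perversity" worry above can be made concrete without any neural net at all: a generator that models each feature's marginal distribution but not the joint dependence will emit combinations that never co-occur in real data. A minimal numpy sketch, using hypothetical binary "features" as stand-ins (not real MRI markers):

```python
# Two binary features that are perfectly coupled in the "real" data:
# feature B occurs if and only if feature A does. Purely hypothetical.
import numpy as np

rng = np.random.default_rng(1)
a = rng.random(10_000) < 0.3
real = np.stack([a, a], axis=1)          # B == A always, by construction

# Naive generator: sample each feature from its marginal, independently,
# ignoring the dependence between them.
fake = np.stack([rng.random(10_000) < 0.3,
                 rng.random(10_000) < 0.3], axis=1)

# Combinations like (A=1, B=0) never occur in `real` but do in `fake`.
impossible = np.mean(fake[:, 0] != fake[:, 1])
print(f"fraction of synthetic samples with impossible combos: {impossible:.2f}")
```

Whether a GAN's discriminator reliably penalizes such combinations in a small-data regime is exactly the point of contention in the replies.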

        • by mhesd ( 698429 )

          but these images won't be excluded due to the lack of counterexamples in the training set.

          They will be excluded because the discriminator network of the GAN will learn to distinguish between them and real images.

          • by tempmpi ( 233132 )

            They will be excluded because the discriminator network of the GAN will learn to distinguish between them and real images.

            The discriminator can't do magic. It can only decide based on the information contained in the training images. If the set of training images is rather small, the performance of the discriminator won't be very good. The discriminator will either overfit the training set, so the GAN generates only images very similar to ones already in the training set, or it won't be very accurate, and the GAN will generate images that are not plausible.

            Potentially a GAN could be a way of extracting hard to describe information
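[Editor's note] The exchange above (can the discriminator filter out implausible fakes?) can be illustrated with a throwaway sketch: train a logistic "discriminator" on plausible versus implausible 1-D samples, then keep only candidates it scores as likely-real. This is rejection-by-discriminator in miniature, not anything from the paper.

```python
# Train a logistic discriminator to score 1-D samples as real-looking,
# then use it to filter a mixed candidate pool. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

real = rng.normal(4.0, 1.0, 500)          # "plausible" samples
junk = rng.normal(-2.0, 1.0, 500)         # "implausible" samples
x = np.concatenate([real, junk])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Plain logistic regression by gradient ascent on the log-likelihood.
w, b = 0.0, 0.0
for _ in range(2000):
    p = sigmoid(w * x + b)
    w += 0.01 * np.mean((y - p) * x)
    b += 0.01 * np.mean(y - p)

# Filter a fresh candidate pool: keep only samples scored as likely-real.
pool = np.concatenate([rng.normal(4.0, 1.0, 200), rng.normal(-2.0, 1.0, 200)])
kept = pool[sigmoid(w * pool + b) > 0.5]
print(f"kept {kept.size} of {pool.size}; kept mean = {kept.mean():.2f}")
```

The catch, per the parent comment, is that with few training images the learned boundary is only as good as the information those images contain.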

      • This seems similar to, though more complex than, boosted decision trees, where training samples the algorithm initially misclassifies are fed back through with a higher weight to make the algorithm pay more attention to them.
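[Editor's note] The boosting analogy can be made concrete: AdaBoost with decision stumps re-weights the samples each round so the next stump focuses on what was previously misclassified. A minimal hand-rolled version on a toy 1-D problem (nothing to do with MRI data) where no single stump suffices:

```python
# Minimal AdaBoost with threshold stumps: y = +1 only on the middle
# interval, which a single stump cannot represent.
import numpy as np

x = np.arange(10, dtype=float)
y = np.where((x >= 3) & (x <= 6), 1.0, -1.0)

def best_stump(w):
    """Return (threshold, polarity, weighted_error) of the best stump."""
    best = (None, None, np.inf)
    for t in np.arange(-0.5, 10.5, 1.0):
        for pol in (1.0, -1.0):
            pred = pol * np.where(x > t, 1.0, -1.0)
            err = np.sum(w[pred != y])
            if err < best[2]:
                best = (t, pol, err)
    return best

weights = np.full(10, 0.1)
ensemble = []                     # list of (alpha, threshold, polarity)
for _ in range(3):
    t, pol, err = best_stump(weights)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = pol * np.where(x > t, 1.0, -1.0)
    # Up-weight the samples this stump got wrong, down-weight the rest.
    weights *= np.exp(-alpha * y * pred)
    weights /= weights.sum()
    ensemble.append((alpha, t, pol))

f = sum(a * p * np.where(x > t, 1.0, -1.0) for a, t, p in ensemble)
accuracy = np.mean(np.sign(f) == y)
print(f"ensemble training accuracy: {accuracy}")
```

The best single stump gets 7 of 10 right; three reweighted rounds classify the interval perfectly, which is the "pay more attention" mechanism the comment describes.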
    • by Hodr ( 219920 )

      My thought is similar. This is fine if you want to train an AI to be "as good" as a classically trained person, or to produce lots of images to help someone trained in the normal process do so more easily.

      What it doesn't do is take advantage of the primary benefit of applying AI to these sorts of problems: recognizing patterns that have not been recognized before (and that therefore wouldn't appear in the "rules for identifying tumors", or likely be included in producing synthetic imagery).

    • by ceoyoyo ( 59147 )

      It doesn't really. GANs are cool, and having something that can generate fake MRIs is cool, but how do you use it? You can't write a paper anymore saying "look at this cool thing we did", you have to invent some reason why it might be useful.

      The "we can supplement datasets for rare conditions using synthetic data" idea pops up quite a bit. It doesn't really make any sense.
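[Editor's note] One way to see this objection: a generative model fitted to a small sample can only recycle that sample's statistics, including its sampling error relative to the true population. A toy numpy sketch, with a fitted Gaussian standing in for the GAN:

```python
# Fit a "generator" (here just a Gaussian, standing in for a GAN) to a
# small sample, then draw a large synthetic dataset from it. The
# synthetic data echoes the small sample's estimates, not the truth.
import numpy as np

rng = np.random.default_rng(3)
population_mean = 10.0
small_sample = rng.normal(population_mean, 2.0, 20)   # only 20 "patients"

# "Train" the generator on the small sample.
mu_hat, sigma_hat = small_sample.mean(), small_sample.std()

# Generate a large synthetic dataset from the fitted model.
synthetic = rng.normal(mu_hat, sigma_hat, 100_000)

print(f"sample mean    = {small_sample.mean():.3f}")
print(f"synthetic mean = {synthetic.mean():.3f}")
```

However many synthetic samples are drawn, their statistics track the 20-patient estimate, so the effective information about the population has not grown.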

    • by mikael ( 484 )

      If you look at diffusion tensor MRI volume scans, you get a set of voxel cubes with direction gradients in them. With all those bundles of nerve connections, things can go wrong: disconnections, tumours, abnormal blood vessels. You could get an artist and some whizzy volume-cube editing software to cut and paste an abnormality from one image to another, Photoshop-style. But it's quicker to use a GAN to generate these images.

      https://www.alamy.com/stock-ph... [alamy.com]

  • You'll just have convergence to what was already known about the characteristics of any abnormality, with near zero possibility of providing new insight.

    It seems the right way to characterize this is that the networks are being used to generate plausible fakes.

    • by religionofpeas ( 4511805 ) on Monday September 17, 2018 @04:04AM (#57326532)

      I'm glad that for anything that scientists have thought of, there's always a Slashdot expert who knows better.

      • Some of us actually know about neural networks and the dangers of doing meta-analysis on top of meta-analysis, without more original data.

        • I'm very glad that "knowing about neural networks" allows some of you to dismiss the results of senior research scientists who actually did the work. Have any of you actually read the paper?

          • Pardon me, but what results? The analyses using the forged MRI images have not yet been done, as best I can tell from the original article. There are no results yet to analyze or dismiss.

      • by ceoyoyo ( 59147 )

        Hi, I'm an actual scientist who does medical AI research, in actual hospitals. The OP is correct. The claim that you can usefully supplement datasets with synthetic data is either journalistic hyperbole, a throwaway "look this is practical and useful" line from a paper or a misunderstanding by some researchers who don't have much experience in the medical field.

      • I am always glad that my fellow humans never cease to astound me.

        You engage in magical thinking while worshiping science. Do you use an atom as your religious symbol?

        Anyway, the essence of science is validation against reality, so tell me: how do these networks link back to reality, or expand our knowledge of the disease?

  • On the nose (Score:4, Insightful)

    by nospam007 ( 722110 ) * on Monday September 17, 2018 @02:33AM (#57326338)

    Artificial Intelligence finds artificial brain damage.

    • Artificial Intelligence finds artificial brain damage.

      This is a synthetic brain, not an artificial brain, Jim.

      So the really interesting question, for the IgNobel, is . . . "Do synthetic brains dream of polyester sheep . . . ?"

      • syn-thet-ic (adjective):
        - made by chemical synthesis, especially to imitate a natural product.

        So, no, it is artificial and not synthetic.
