AI Medicine

AI Algorithms Uncannily Good At Spotting Your Race From Medical Scans (theregister.com)

An anonymous reader quotes a report from The Register: Neural networks can correctly guess a person's race just by looking at their bodily x-rays, and researchers have no idea how they can tell. There are biological features that can give clues to a person's ethnicity, like the color of their eyes or skin, but beneath all that, it's difficult for humans to tell. That's not the case for AI algorithms, according to a study that has not yet been peer reviewed. A team of researchers trained five different models on x-rays of different parts of the body, including the chest and hands, and then labelled each image according to the patient's race. The machine learning systems were then tested on how well they could predict someone's race given just their medical scans. They were surprisingly accurate. The worst-performing model predicted the right answer 80 percent of the time, and the best did so 99 percent of the time, according to the paper.

"We demonstrate that medical AI systems can easily learn to recognize racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate," the team warns [PDF]. "We strongly recommend that all developers, regulators, and users who are involved with medical image analysis consider the use of deep learning models with extreme caution. In the setting of x-ray and CT imaging data, patient racial identity is readily learnable from the image data alone, generalizes to new settings, and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice."


Comments:
  • This would be a good forensics tool.
  • by Anonymous Coward on Monday August 09, 2021 @06:07PM (#61673999)
    Can it tell men from women? Apparently that's impossible without being told their personal pronouns.
  • Things that have no bearing on the person and aren't even perceptible to a regular human, the little machine can pick up on because there's statistical significance there.

    It's also entirely possible this wouldn't be replicable outside of their specific sample. E.g. this might work as long as your sample is limited to one state or country and completely break down when applied to a data set from another region.

    In any case I don't think it's going to have much of an impact on anything. You're a thousand times more likely to suffer a negative outcome due to your race when someone looks at you than when they look at your x-rays.
    • It's also entirely possible this wouldn't be replicable outside of their specific sample. E.g. this might work as long as your sample is limited to one state or country and completely break down when applied to a data set from another region.

      They used multiple different large datasets.

      • They used data from mostly one site from a few different studies. Some of the studies were quite small, others fairly large.
      • It's also entirely possible this wouldn't be replicable outside of their specific sample. E.g. this might work as long as your sample is limited to one state or country and completely break down when applied to a data set from another region.

        They used multiple different large datasets.

        That doesn't really matter that much. What matters is, once you've trained it on that data, when you present other data to it, can it classify that data?

        How much replication is in these datasets? We don't know. The database of hand x-rays might contain mostly the same people as the chest x-rays, for example. Patient privacy prevents us from easily validating the data. We'll have to wait for other researchers whose reputations aren't tied to the outcome to attempt to replicate this study on other data.
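
        For what it's worth, the patient-overlap problem has a standard guard: split train and test by patient ID rather than by image, so nobody appears on both sides. A minimal sketch with scikit-learn, assuming hypothetical per-image arrays:

            from sklearn.model_selection import GroupShuffleSplit
            import numpy as np

            # Hypothetical arrays, one row per image.
            images = np.load("images.npy")      # (n_images, H, W)
            labels = np.load("labels.npy")      # (n_images,)
            patients = np.load("patients.npy")  # (n_images,) patient ID per image

            # Group-aware split: all of a patient's images land entirely in
            # train or entirely in test, so the same person can never leak across.
            splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
            train_idx, test_idx = next(splitter.split(images, labels, groups=patients))
            assert set(patients[train_idx]).isdisjoint(patients[test_idx])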

        • I am somewhat familiar with the datasets in question. While possible to some degree, I doubt it's large enough to have an effect. As someone pointed out, if there is something missed here, it is something else, perhaps outside the image itself (e.g., metadata) that it is picking up on.
          • if there is something missed here, it is something else, perhaps outside the image itself (e.g., metadata) that it is picking up on.

            Yeah, that is exactly what I was thinking, based on the highpass filter results.
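
            One way to test the metadata hypothesis directly: check whether acquisition tags alone (scanner vendor, institution) already predict the race label, without ever reading pixels. A sketch with pydicom and scikit-learn; the file list and labels are hypothetical:

                import pydicom
                import pandas as pd
                from sklearn.linear_model import LogisticRegression
                from sklearn.model_selection import cross_val_score

                rows = []
                for path, race in dicom_files_with_labels:  # hypothetical (path, label) pairs
                    ds = pydicom.dcmread(path, stop_before_pixels=True)  # tags only
                    rows.append({"vendor": getattr(ds, "Manufacturer", "?"),
                                 "site": getattr(ds, "InstitutionName", "?"),
                                 "race": race})

                df = pd.DataFrame(rows)
                X = pd.get_dummies(df[["vendor", "site"]])
                # Above-chance accuracy here would flag a scanner/site confound.
                print(cross_val_score(LogisticRegression(max_iter=1000),
                                      X, df["race"], cv=5).mean())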

    • by Aighearach ( 97333 ) on Monday August 09, 2021 @07:01PM (#61674171)

      That's addressed right from the beginning, and clipping the image intensity at 60% to remove bone density information didn't have much effect.

      The problem I have with this study is Figure 6, where it still detects race even when the high-pass filter has removed all obvious physical structures and all a human sees is a gray field. Let's wait for peer review on this one, because there is likely some mistake, some data leakage into the AI.

      When all you can see is a gray field, and then you remove half of the data that is still there, this still works. So if it works, what is it actually comparing? And if there are physical differences it can still distinguish, why does the ability to predict degrade at the same rate across races when you muddy the data like that? I've dealt with statistical analysis of low-quality data before, and some of the groupings should fail first, as soon as you can no longer see the feature in the group where it is smaller.
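
      For reference, both degradations under discussion take only a few lines each to reproduce, which makes this experiment easy for other groups to replicate. A sketch with NumPy/SciPy; the paper's exact cutoffs may differ:

          import numpy as np
          from scipy.ndimage import gaussian_filter

          def clip_intensity(img, fraction=0.6):
              """Clip pixel values at a fraction of the maximum, crushing the
              bone-density information, roughly as in the clipping experiment."""
              return np.minimum(img, fraction * img.max())

          def high_pass(img, sigma=10):
              """Subtract a heavy Gaussian blur, keeping only fine detail; to a
              human the result looks like a near-uniform gray field."""
              img = img.astype(float)
              return img - gaussian_filter(img, sigma=sigma)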

      • Submitted to the Lancet, so should be a fairly rigorous review.
        • "Should be," sure. If the reviewers are good at data science, yeah. If the reviewers are good at medical science, maybe.

          I doubt it will survive review, but even if it does, I'd really wait for replication before taking it seriously. If it turns out to work in the real world this is going to be a huge, Nobel-Prize-Level discovery, that will likely be followed by massive improvements in AI disease diagnosis. But more likely it will be a nothingburger, like AI disease diagnosis generally.

          It could also lead to

          • That last line is what I worry about. Even if a lot of these models that spot race or homosexuality turn out to be as inaccurate and bullshit as measuring people's skulls was in the 30s, it's not going to stop some autocracy somewhere from misusing them. The problem with machine learning models, especially deep models, is they give us the illusion of certainty and scientificness about things we really have no business being that certain about.

            What's worse is because of how the tech works at the moment they're

            • That's a silly worry. If you live in one of those places, these bad things are already happening without a dubious AI model. And this isn't the 1930s; with modern technology they'll figure out pretty quickly, from their nefariously collected real-world data, whether it is a real test. So never fear, if it works they'll target their horrible activities as they intend!

              I think it is hilarious when people worry that they might accidentally persecute somebody unintended. You just assume they'll consider you Virtuous? Ma

            • by quenda ( 644621 )

              inaccurate and bullshit as measuring people's skulls was in the 30s

              While phrenology was indeed pseudo-scientific bullshit, you seem to be throwing the baby out with the bathwater. One can tell a lot about the race, age and gender of a cadaver by "measuring the skull". It just tells you nothing about personality. The whole area of "psychology" in those days was about as scientific as sociology is today :-)

    • The problem is deep learning models are still basically black boxes and they're very prone to overfitting or spurious correlations in some really weird ways. You could wind up with a model that decides every black person whose ring finger is longer than their index finger has cancer and never know how or why.

      Obviously the race fearmongering angle is pure media sensationalism, but the problem of black box models latching onto things that aren't relevant or are deeply concerning in other ways IS a very real issue.
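
      A cheap sanity check for this failure mode is a label-permutation test: retrain on shuffled labels and see whether held-out accuracy stays above chance. If it does, something is leaking besides the supposed signal. A sketch, where train_and_evaluate and num_classes are hypothetical stand-ins for whatever training loop and label set are in use:

          import numpy as np

          rng = np.random.default_rng(0)
          true_acc = train_and_evaluate(images, labels)                   # real labels
          perm_acc = train_and_evaluate(images, rng.permutation(labels))  # shuffled

          chance = 1.0 / num_classes
          # perm_acc well above chance => leakage or an evaluation bug, not signal.
          print(f"real: {true_acc:.2f}  permuted: {perm_acc:.2f}  chance: {chance:.2f}")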

    • by quenda ( 644621 )

      Things that have no bearing on the person and aren't even perceptible to a regular human, the little machine can pick up on because there's statistical significance there.

      The differences are very substantial, especially between sub-Saharan Africans and other races.
      They are as great as the gender difference: an average black female in the US has about the same bone density as a white male, until menopause.
      https://www.sciencedirect.com/... [sciencedirect.com]

      This has important "bearing on the person" for things such as rates of bone fracture, and sporting ability.
      If you watched the Olympics, you may have noticed a varied representation of races (not just nations) in different sports.

      I'm not sure why you

    • In any case I don't think it's going to have much of an impact on anything. You're a thousand times more likely to suffer a negative outcome due to your race when someone looks at you than when they look at your x-rays.

      There are other things at stake than merely unjust discrimination. Multiple branches of the healthcare world have a strong undercurrent school of thought opining that race doesn't exist(TM), i.e. that medically, all humans are so overwhelmingly the same that differences between races are minuscule compared to the differences within the same race.

      There are real world implications if this turns out to be false. E.g. some countries don't note the race of the patient in pharmacovigilance reports - the US

  • by presidenteloco ( 659168 ) on Monday August 09, 2021 @06:12PM (#61674015)
    Race is not biological. It is a social construct.

    How would that be consistent with AI detecting it from x-rays?
    • by superdave80 ( 1226592 ) on Monday August 09, 2021 @06:22PM (#61674055)
      The use of the word race is a social construct, but there are biological differences between certain groups. Just not enough to justify using a term like 'race', biologically speaking.
      • by Entrope ( 68843 )

        What level of biological differences would be sufficient to use "race", "subspecies", or "species"? (Keep in mind that ability to interbreed is no longer determinative for species.) Where do the population differences for humans fall short of the differences that justify using "race"?

        • It was self-identified race.
        • by tragedy ( 27079 )

          Among modern humans, you would never use the terms species or subspecies. They can interbreed and occupy the same environmental niche. If you want to talk biology, you want to talk about phenotypes, not race. Race is the social construct, phenotype is the biological classification.

        • Race is an _informal_ ranking below subspecies. You're asking the wrong question: the problem is that human race definitions are flaming hot garbage, and there's little reason to "fix" them.

          Like, if I drive from Mexico to Argentina, for example, which human races might I encounter, and why? Check the Wikipedia page for SA demographics; there's some shit you've never heard of. Notice how the indigenous populations of a whole continent turn into indo-something, while different European religious groups are somet

      • Re: (Score:2, Insightful)

        by AmiMoJo ( 196126 )

        Indeed. A couple of hundred years ago Europeans started trying to classify every animal, creating hierarchies to explain nature. Naturally they wanted to explain why white Europeans were the pinnacle of human civilization too, superior to all others, so they started creating the modern concept of race.

        They looked at obvious things like skin colour, skull size and shape, and dubious measurements like IQ. It wasn't really science because the conclusion had already been drawn: White Europeans were the best, ev

    • Re: (Score:2, Insightful)

      by gweihir ( 88907 )

      Race is not biological. It is a social construct.

      How would that be consistent with AI detecting it from x-rays?

      Actually, "race not being biological" is a human construct.

      • "race not being biological" is a human construct

        All right, I'll bite.

        Define the human "races" in purely biological terms.

    • Race is not biological. It is a social construct.

      You might want to clarify whether you're using the word "race" to refer to genetic features (such as Western Europeans' tolerance for dairy, Inuits' resistance to scurvy, Germans' tendency to pee in each other's mouths, etc) or something that's entirely pretend - the validity of your argument depends upon it.

    • by HiThere ( 15173 )

      Race is not biological because it is extremely ill-defined.

      OTOH, people do seem to have consistent intuitions about which of a specified group of "races" a particular individual is a member of. It's been demonstrated that this isn't, or isn't solely, skin color or any other particular feature. So, "social construct". That doesn't mean it's fictitious, but it does mean that we can't specify it accurately.

      If you're talking biology it's better to talk about gene lines or some such, but be sure you use well-defined terms.

    • If it's not a social construct, then how many categories are there, and what are the classification rules?

      This article is about going backwards from medical imaging to racial labels, which it can do because we labeled its training data... according to our social rules. More simply, you could also trace a person's silhouette and make guesses at which social labels might apply with fuzzy logic, same as this AI. Try going the other way: draw something for each race. You can't without leaning entirely on c

      • If the "socially constructed" human-labeled labels can be predicted accurately from the medical images by the artificial neural net, then there is an objective signal there, meaning that the labels have a basis in probabilistically distinguishable biological physical ranges along different dimensions/features. The neural net, as yet at least, does not do "social" construction of anything. It just does math on pictures of biological structures.
  • Yes, God forbid (Score:4, Insightful)

    by Krishnoid ( 984597 ) on Monday August 09, 2021 @06:14PM (#61674021) Journal

    and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice."

    We wouldn't want to make medical decisions based on racial disparities [youtu.be].

    • by HiThere ( 15173 )

      Making medical decisions based on racial disparities is a very bad idea, because racial disparities are statistical, and patients are individuals who may not share a particular attribute that is often, or even usually, true of their racial group. This is true of chemical factors like blood groups, antibody sensitivity, and lactose intolerance, as well as other characteristics, and that kind of difference can affect the correct medical decision.

    • by AmiMoJo ( 196126 )

      That's not the issue here. The issue is that we have seen human doctors make biased decisions due to race, such as delaying treatment or giving a worse prognosis that results in palliative care instead of trying to cure the patient.

      For example, the algorithm may notice that black people tend to have lower survival rates for a particular type of cancer, and so recommend against chemotherapy because the probability of it working is low. However, the lower survival rate is due to lack of access to healthcare, not their biology.

      • This can happen with age, too.

        If a man in his late 80s shows up with prostate cancer, many doctors would be reluctant to treat it aggressively (surgery + radiation + anti-androgen regimen), saying "oh, it's unlikely he'll live much longer anyway".

        But my grandfather had just that diagnosis, and his doctors said "oh, here's a guy who is otherwise healthy -- if we treat this aggressively and cure it, he might live a good long while yet." They were right -- he lived to 101.

  • So an AI that is specifically trained to recognize race from x-rays is good at that task. How does that transfer to other AI systems? Why would an AI that is trained to spot cancer also be looking for race? And if racial characteristics in an x-ray make the cancer-spotting AI better at finding cancer in different races, then why would that be a bad thing?

    Seems like a bit of doom and gloom or possibly FUD that is only weakly linked to reality.
    1. Race-identifying AI can identify race in medical images.
    • The concern of the authors is that because so much AI, particularly deep learning, can be a bit of a black box, it is possible for AI to become sensitive to "race" and then use that in inappropriate or iniquitous ways. For instance, you may have a training data set of images and outcomes for breast cancer. Your algorithm may pick up on race. It may then associate race with negative outcomes.

      Essentially, there are outcomes associated with demographics. If the AI is unconsciously picking up on those demographics, it may end up associating them with those outcomes.
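
      That concern is at least auditable: for an outcome model, report discrimination and calibration per demographic group instead of one aggregate number. A sketch with scikit-learn, assuming hypothetical prediction arrays from some outcome model:

          import numpy as np
          from sklearn.metrics import roc_auc_score, brier_score_loss

          def audit_by_group(y_true, y_prob, group):
              """Per-group AUC (discrimination) and Brier score (calibration);
              a model quietly keying on race tends to show gaps here."""
              for g in np.unique(group):
                  m = group == g
                  print(f"{g}: AUC={roc_auc_score(y_true[m], y_prob[m]):.3f}  "
                        f"Brier={brier_score_loss(y_true[m], y_prob[m]):.3f}  n={m.sum()}")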

      • For instance, you may have a training data set of images and outcomes for breast cancer. Your algorithm may pick up on race. It may then associate race with negative outcomes.

        That makes no sense. If the algorithm is trained to recognize the risk/outcome of breast cancer, then it will weigh all various input criteria to optimize the determination of the risk/outcome for breast cancer. Assuming race *is* a factor in getting breast cancer, then you'd better hope the algorithm catches on to this and adds race as a contributing element, so it can get the most correct decisions in most cases. If, however, you refuse to let race be considered for ideological reasons, you end up putting

        • I disagree. If black women receive poorer care than white women with the same type of breast cancer, and you design a predictive algorithm to assign patients to treatment that is trained on data reflecting that health inequity and inadvertently makes classification decisions based on race, you can create a biased algorithm that perpetuates the inequity.

          As an example: https://www.breastcancer.org/r... [breastcancer.org]
          • This is complete nonsense. You're just determined to inject racism in the discussion (probably for some sad form of virtue signalling) and happy to ignore logic in the process.

            Contrary to your statement, the algorithm we're discussing was for spotting cancer, not for determining the course of treatment. The one that decides the treatment is the doctor. Now, it's possible that doctors do treat different races differently when they shouldn't. That needs to be addressed, but it's not caused by the predictive algorithm.

            • No virtue signaling. Nor ignorance of logic. You are missing the larger point of how AI can be used with medical images for diagnostic and treatment purposes. I never said that algorithms should ignore race, but if they are picking up on race and aren't intended to do so, well, you get unintended consequences.

              To be clear, the paper isn't about this algorithm. The algorithm is a proof of concept, a strawman. It's what it is doing that is the point. Have you read the paper? Are you familiar with the studies in question?
      • Or, conversely, your training data set may not be very diverse and the algorithm only works acceptably on that group but not acceptably on other groups.
        • Or what if the training data lacked abdominal scans of young women, due to the potential risks and liabilities of CT scans of fetal tissue? We saw this recently with HIV and the COVID-19 vaccines. Because so few people with HIV were tested in the early studies, it took extra work to verify the vaccine's safety and effectiveness for them.

          • Yeah, not all the bias is unintentional racism. Because of all the variables and difficulties, and because at the end of the day there are profits to be made, R&D is usually cut short and/or offloaded to universities with low budgets. Thus some assumptions have to be made for ethical or other considerations. Sometimes it's just laziness in a publication because they didn't want to analyze data they had, to save an hour or two. Those kinds of omissions can get propagated down the line i
    • by Lanthanide ( 4982283 ) on Monday August 09, 2021 @07:02PM (#61674175)

      Here's an idea.

      What if the training data for the cancer detecting AI turned out to be 99% caucasian scans? There was no specific bias from the researchers, they didn't deliberately go out of their way to only feed it 99% caucasian scans, it just happened to be the data set they could easily get their hands on. Maybe white people are more likely to get scans and the well-resourced hospitals they get their scans at were involved in the data collection, whereas the less resourced hospitals that cater to other ethnic communities were too busy to take part in the data collection.

      Now the cancer hunting AI turns out to be really good at hunting for cancers in white people, because that's the set it was trained on.

      But in turn it is less effective at hunting for cancers in black people, because there's some confounding aspect of race that gets mixed up in the neural net, and because we don't know what this aspect is or precisely what those neurons are coded to look for, we're mostly oblivious to this shortcoming.

      Congratulations, we've just perpetuated the existing racial disparities in current medical practice without anyone going out of their way to do so. Just no one thought about the risks involved because they weren't aware this was a problem.

      Or in other words - now that we know this is an issue, we can think about the implications and scenarios where the issue might be a problem. Your comment was a classic example of being warned about something, just assuming it's not a problem, and carrying on with BAU. Exactly the sort of behaviour that results in racial disparities in delivery of services of every type everywhere.
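
      The scenario above is invisible in a single aggregate accuracy figure, which is why per-group sensitivity is worth computing before deployment. A sketch with hypothetical detector outputs:

          import numpy as np
          from sklearn.metrics import recall_score

          # y_true/y_pred from a hypothetical cancer detector; group holds each
          # patient's demographic group.
          print(f"overall sensitivity: {recall_score(y_true, y_pred):.2%}")
          for g in np.unique(group):
              m = group == g
              print(f"{g}: sensitivity={recall_score(y_true[m], y_pred[m]):.2%} (n={m.sum()})")
          # A detector trained on a 99%-one-group dataset can post a fine overall
          # number while missing far more cancers in the underrepresented groups.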

      • "Race" is a social construct which is based on particular combinations of ethnic traits. The most obvious to humans is skin color, but there are quite a few other aspects of the human body where members of an ethnic group have similar traits. The "ai" is picking up on these patterns, and it's probably more accurate than the researchers realize because the humans are classifying a person's race as based on skin color.
  • I'm pretty sure if the AI was so trained it would be able to do about the same with people with different eye colors
    • I'm pretty sure if the AI was so trained it would be able to do about the same with people with different eye colors

      What? Heterochromia really lets you tell race way better? You learn something new every day, I guess.

      • No, not heterochromia. I meant green-eyed people vs blue-eyed people vs brown-eyed people. So it learns to look at x-rays and MRIs and determines a person's eye color.
  • by blitz487 ( 606553 ) on Monday August 09, 2021 @07:11PM (#61674207)

    Not uncanny at all, because statistical differences in bone structure between the races have been known for a long time.

  • I bet it can even guess the country the individual came from with enough training.

  • This is BS (Score:4, Insightful)

    by Dan East ( 318230 ) on Monday August 09, 2021 @08:33PM (#61674413) Journal

    This is bullshit. If I were a black person, I'd damn well want my doctor to know and acknowledge I was black. Otherwise they would not be looking at diseases specific to my race, such as sickle cell anemia. Even getting into all this gender-fluid BS, again, I sure as hell hope a doctor would be considering my ACTUAL gender for my healthcare. I can identify as a woman all I want, but that won't keep me from dying of prostate cancer.

    If AI is being trained to identify disease and other issues from x-rays, and that correlates with some race, or gender, or any other identifiable metric, it is totally immoral to try to make the AI behave differently or attempt to disassociate the disease from the race or gender.

    • It depends on what you're seeing the doctor for. If you got shot, you definitely don't want the doctor to think you're a black person. You want the doctor to think you're a white person. And if at all possible, a cop or a rich dude.
    • by AmiMoJo ( 196126 )

      But also you would not want an AI to look at your race when considering diseases that are not affected by it. Say the AI noticed that you were black and that black people have poor survival rates for a particular disease, so it recommended against an expensive treatment with a low chance of working on you. Thing is, the reason survival rates are low is because black people often can't afford the best treatments, not because of their biology.

      You would not want that, right?

      The AI shouldn't try to detect race or sex; those should be things entered into the system manually by a doctor or nurse.

      • The AI's job is to find the disease, it doesn't know nor care what the treatment is.

      • Purposeful obfuscation:

        AI noticed that you were black ... so recommended against

        The AI reports, not recommends.

        those should be things entered into the system manually by a doctor or nurse,

        Who of course do not possess the biases you're railing against.

    • Are you sure we shouldn't treat prostate cancer based on gender identity? After all, the FDA will accept blood donations based on gender identity. I have a dick and so does my husband. We are unable to donate blood. But if one of us had a dick but identified as a woman (a transsexual), then all of a sudden we are allowed to donate. I kid you not, all of this is on the Red Cross's page. https://www.redcrossblood.org/... [redcrossblood.org]
  • The AI may be doing subtle things with bone structure, but what if it is learning something about the x-rays themselves? For example, until a few years ago, x-rays were still done everywhere using photographic film, but higher-end hospitals switched to systems with digital x-rays so there's no film involved. If things like that are more common at hospitals in wealthy areas, which are predominantly white, that would already give the neural net some non-biological data.
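
    That hypothesis is also checkable: see whether crude global image statistics, which carry the capture chain's signature but almost no anatomy, already predict the label. A sketch with hypothetical image/label arrays:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def acquisition_features(img):
            """Brightness, contrast, and a cheap high-frequency noise estimate --
            statistics dominated by film vs. digital capture, not by bones."""
            noise = np.abs(np.diff(img.astype(float), axis=1)).mean()
            return [img.mean(), img.std(), noise]

        X = np.array([acquisition_features(img) for img in images])
        # Above-chance accuracy from three scalars points at an acquisition
        # confound rather than anatomy.
        print(cross_val_score(LogisticRegression(max_iter=1000),
                              X, race_labels, cv=5).mean())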
  • This article and the exclamatory comments at the end mark this article as pure race baiting.

    Ignore it.

  • by MPAB ( 1074440 ) on Tuesday August 10, 2021 @02:23AM (#61675137)

    It's really easy: races are more than skin deep. An African albino has African facial features, and we can tell he's not a European.

    As a neurologist, I usually look at CT scans even before meeting the patient. Spanish (European) and Hispanic (American native or mestizo) people share names that the latter inherited from the former. Still, I could tell whether the scan of a certain Maria Pérez or Juan Fernández belonged to a Hispanic or a Spanish person just by the shape of the skull. The Spanish, like most Europeans, are meso- to dolichocephalic (skull elongated front to back), while Hispanics, being long-lost relatives of Asians, are brachycephalic (round skull). Africans have the most elongated skulls, and among Europeans, Russians have the roundest.

    Of course that's a simple, one guy's observation from a limited dataset. With a huge dataset it should be easy for a deep learning mechanism to find other variations.

    That also tells us that even though race doesn't exist as a genetic category, it's there and it can be calculated.
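
    For the curious, the skull-shape terms above come from the cephalic (cranial) index, with conventional cutoffs around 75 and 80. A toy calculator:

        def cephalic_index(width_mm, length_mm):
            """Maximum skull width over maximum front-to-back length, times 100."""
            return 100.0 * width_mm / length_mm

        def classify(ci):
            if ci < 75.0:
                return "dolichocephalic (elongated)"
            if ci < 80.0:
                return "mesocephalic (intermediate)"
            return "brachycephalic (rounded)"

        ci = cephalic_index(width_mm=140, length_mm=190)
        print(f"CI={ci:.1f}: {classify(ci)}")  # CI=73.7: dolichocephalic (elongated)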

  • ...or more correctly I don't.

    But when even AIs with x-ray eyes that cannot see the color of the skin differentiate between races, we are doomed.

  • Isn't it possible to extract knowledge from neural networks? I mean, to discover relationships present in the data?
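
    Attribution methods try to answer exactly this question. The crudest, occlusion sensitivity, slides a gray patch across the image and records how much each position changes the predicted score; regions that matter light up. A sketch for a PyTorch classifier, where model and image are assumed to already exist:

        import torch

        def occlusion_map(model, image, target_class, patch=16, stride=8):
            """Heatmap of how much masking each region lowers the target logit;
            image has shape (1, C, H, W), higher values mean more important."""
            model.eval()
            with torch.no_grad():
                base = model(image)[0, target_class].item()
                _, _, H, W = image.shape
                heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
                for i, y in enumerate(range(0, H - patch + 1, stride)):
                    for j, x in enumerate(range(0, W - patch + 1, stride)):
                        masked = image.clone()
                        masked[..., y:y + patch, x:x + patch] = image.mean()
                        heat[i, j] = base - model(masked)[0, target_class].item()
            return heat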
  • Sorry to disillusion everyone, but the only 'race' is 'human'. The program can, at best, describe someone's ethnic background. Which might help define genetic issues that someone should be aware of. Why the author of this limited piece is concerned about people learning the 'race' of someone is beyond me; maybe it's just more faux outrage.

    Regardless, the US census states 'An individual’s response to the race question is based upon self-identification.' https://www.census.gov/topics/population/race [census.gov]
    • I tried listing "human" in the box labeled "race" on a government form once, and was told that what they really wanted was my skin color. I tried writing "tan", and that didn't work either.
      You're right, and I hope we get there someday.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Sorry to disillusion everyone, but the only 'race' is 'human'. The program can, at best, describe someone's ethnic background. Which might help define genetic issues that someone should be aware of.

      News flash: "Race" is how we describe someone's ethnic background.

  • If human races are subtly different and perhaps are affected differently by different health issues, then being able to classify them during research would certainly help with finding problems and matching treatments. We know that men and women respond to treatment differently, something that was neglected in research in the past. We should make sure the same thing doesn't happen with race.

  • It must be banned! BANNED I tell you! Because it can tell your race, unlike any person who sees you in person.

  • What I think would be very interesting would be to try to group people by criteria other than 'race', and check how well the AI (or any clustering algorithm) does in that case.
    Genetic haplogroups, cultures, and nutritional history spring to mind.
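
    That comparison is easy to set up once you have image embeddings: cluster them with no labels at all, then score the clustering against each candidate grouping. A sketch with scikit-learn; the embeddings and grouping arrays are hypothetical:

        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embeddings)

        # Which externally defined grouping best matches the unsupervised structure?
        for name, grouping in [("race label", race_labels),
                               ("haplogroup", haplogroups),
                               ("diet/region", regions)]:
            print(f"{name}: ARI={adjusted_rand_score(grouping, clusters):.3f}")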
