AI Algorithms Uncannily Good At Spotting Your Race From Medical Scans (theregister.com) 144
An anonymous reader quotes a report from The Register: Neural networks can correctly guess a person's race just by looking at their bodily x-rays, and researchers have no idea how they can tell. There are biological features that can give clues to a person's ethnicity, like the color of their eyes or skin, but beneath all that it's difficult for humans to tell. That's not the case for AI algorithms, according to a study that has not yet been peer reviewed. A team of researchers trained five different models on x-rays of different parts of the body, including the chest and hands, and labelled each image according to the patient's race. The machine learning systems were then tested on how well they could predict someone's race given just their medical scans. They were surprisingly accurate: the worst-performing model was able to predict the right answer 80 percent of the time, and the best 99 percent of the time, according to the paper.
"We demonstrate that medical AI systems can easily learn to recognize racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate," the team warns [PDF]. "We strongly recommend that all developers, regulators, and users who are involved with medical image analysis consider the use of deep learning models with extreme caution. In the setting of x-ray and CT imaging data, patient racial identity is readily learnable from the image data alone, generalizes to new settings, and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice."
Or ... (Score:2)
More importantly... (Score:5, Funny)
Probably just subtle differences in bone structure (Score:2)
It's also entirely possible this wouldn't be replicable outside of their specific sample. E.g. this might work as long as your sample is limited to one state or country and completely break down when applied to a data set from another region.
In any case I don't think it's going to have much of an impact on anything. You're a thousand times more likely to suffer a negative outcome due to your race when someone looks at you than when they look at your x-rays.
Re: (Score:3)
It's also entirely possible this wouldn't be replicable outside of their specific sample. E.g. this might work as long as your sample is limited to one state or country and completely break down when applied to a data set from another region.
They used multiple different large datasets.
Re: (Score:2)
Re: (Score:2)
It's also entirely possible this wouldn't be replicable outside of their specific sample. E.g. this might work as long as your sample is limited to one state or country and completely break down when applied to a data set from another region.
They used multiple different large datasets.
That doesn't really matter that much. What matters is, once you've trained it on that data, when you present other data to it, can it classify that data?
How much replication is in these datasets? We don't know. The database of hand x-rays might contain mostly the same people as the chest x-rays, for example. Patient privacy prevents our being able to easily validate the data. We'll have to wait for other researchers whose reputations aren't tied to the outcome to attempt to replicate this study on other data.
Re: (Score:2)
Re: (Score:2)
if there is something missed here, it is something else, perhaps outside the image itself (e.g., metadata) that it is picking up on.
Yeah, that is exactly what I was thinking, based on the highpass filter results.
Re:Probably just subtle differences in bone struct (Score:4, Interesting)
That's addressed right from the beginning, and clipping the image intensity at 60% to remove bone density information didn't have much effect.
The problem I have with this study is Figure 6, where it still detects race even when the high-pass filter has removed all obvious physical structures and all a human sees is a gray field. Let's wait for peer review on this one, because there is likely some mistake, some data leakage into the AI.
When all you can see is a gray field, and you then remove half of the data that is still there, this still works. So if it works, what is it actually comparing? And if there are physical differences it can still distinguish, why does the ability to predict degrade at the same rate across races when you muddy the data like that? I've dealt with statistical analysis of low-quality data before, and some of the groupings should fail first when you can no longer see the feature in the people with the smaller version.
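For intuition, the manipulations being discussed (clipping the brightest intensities, then high-pass filtering until only a "gray field" remains) can be sketched roughly like this. This is a toy illustration, not the paper's actual pipeline; the synthetic image, the 60% clip fraction applied to the maximum, the Gaussian sigma, and the helper names are all assumptions made for the sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def clip_intensity(img, frac=0.6):
    """Clip pixel intensities at a fraction of the maximum, crudely
    suppressing the brightest (bone-dense) structures."""
    limit = frac * img.max()
    return np.minimum(img, limit)

def high_pass(img, sigma=10.0):
    """High-pass filter: subtract a Gaussian-blurred copy, leaving only
    fine detail. With a large sigma the result looks like a gray field."""
    return img - gaussian_filter(img, sigma=sigma)

# Toy "x-ray": smooth random background plus a bright bone-like band.
rng = np.random.default_rng(0)
xray = gaussian_filter(rng.random((128, 128)), 4) * 0.5
xray[40:60, :] += 0.8  # bright "bone" region

clipped = clip_intensity(xray)
filtered = high_pass(xray)
print(clipped.max() <= 0.6 * xray.max())  # True: bright bone clipped away
```

The puzzle the comment raises is exactly that a classifier fed `filtered` (which a human perceives as featureless) reportedly still predicts race, which is why data leakage is the suspicion.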
Re: (Score:2)
Re: (Score:2)
"Should be," sure. If the reviewers are good at data science, yeah. If the reviewers are good at medical science, maybe.
I doubt it will survive review, but even if it does, I'd really wait for replication before taking it seriously. If it turns out to work in the real world this is going to be a huge, Nobel-Prize-Level discovery, that will likely be followed by massive improvements in AI disease diagnosis. But more likely it will be a nothingburger, like AI disease diagnosis generally.
It could also lead to
Re: (Score:2)
That last line is what I worry about. Even if a lot of these models that spot race or homosexuality turn out to be as inaccurate and bullshit as measuring people's skulls was in the 30s, it's not going to stop some autocracy somewhere from misusing it. The problem with machine learning models, especially deep models, is they give us the illusion of certainty and scientific rigor about things we really have no business being that certain about.
What's worse is because of how the tech works at the moment they're
Re: (Score:3)
That's a silly worry. If you live in one of those places, these bad things are already happening without a dubious AI model. And this isn't the 1930s; with modern technology they'll figure out pretty quickly from their nefariously collected real-world data whether it is a real test. So never fear, if it works they'll target their horrible activities as they intend!
I think it is hilarious when people worry that they might accidentally persecute somebody unintended. You just assume they'll consider you Virtuous? Ma
Re: (Score:2)
inaccurate and bullshit as measuring people's skulls was in the 30s
While Phrenology was indeed pseudo-scientific bullshit, you seem to be throwing the baby out with the bathwater. One can tell a lot about the race, age and gender of a cadaver by "measuring the skull". It just tells you nothing about personality. The whole area of "psychology" in those days was about as scientific as sociology is today :-)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Laugh, get you and your almighty ego. Careful, you'll trip over that thing.
Re: (Score:2)
Re: (Score:3)
The problem is deep learning models are still basically black boxes and they're very prone to overfitting or spurious correlations in some really weird ways. You could wind up with a model that decides every black person whose ring finger is longer than their index finger has cancer and never know how or why.
Obviously the race fearmongering angle is pure media sensationalism, but the problem of black box models latching onto things that aren't relevant or are deeply concerning in other ways IS a very real issue.
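The shortcut-learning failure described here can be demonstrated on toy data. Everything below is hypothetical (the feature names, the correlation strengths): a model trained where a spurious marker happens to track the label will score well in training and degrade once that correlation breaks at deployment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical setup: the "diagnosis" truly depends on a real signal,
# but in the training data a spurious marker (say, a scanner artifact
# that tracks demographics) happens to correlate with the label.
signal = rng.normal(size=n)
y = (signal + 0.5 * rng.normal(size=n) > 0).astype(int)
spurious_train = y + 0.1 * rng.normal(size=n)   # nearly equals the label
X_train = np.column_stack([signal, spurious_train])

model = LogisticRegression().fit(X_train, y)

# At deployment the spurious marker no longer tracks the label.
signal_test = rng.normal(size=n)
y_test = (signal_test + 0.5 * rng.normal(size=n) > 0).astype(int)
spurious_test = rng.normal(size=n)              # now pure noise
X_test = np.column_stack([signal_test, spurious_test])

print(model.score(X_train, y))       # high: the model leaned on the shortcut
print(model.score(X_test, y_test))   # lower: the shortcut is gone
```

The uncomfortable part, as the comment notes, is that with a deep model you usually can't inspect the weights to discover which shortcut it took.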
Re: (Score:2)
The little machine can pick up on things that have no bearing on the person and aren't even perceptible to a regular human, because there's a statistical significance there.
The differences are very substantial, especially between sub-Saharan Africans and other races.
It is as great as the gender difference: an average black female in the US has about the same bone density as a white male, until menopause.
https://www.sciencedirect.com/... [sciencedirect.com]
This has important "bearing on the person" for things such as rates of bone fracture, and sporting ability.
If you watched the Olympics, you may have noticed a varied representation of races (not just nations) in different sports.
I'm not sure why you
Re: Probably just subtle differences in bone struc (Score:2)
In any case I don't think it's going to have much of an impact on anything. You're a thousand times more likely to suffer a negative outcome due to your race when someone looks at you than when they look at your x-rays.
There are other things at stake than merely unjust discrimination. Multiple branches of the healthcare world have a strong undercurrent school of thought opining that race doesn't exist(TM), i.e. that medically, all humans are so overwhelmingly the same that differences between races are minuscule compared to the differences within the same race.
There are real-world implications if this turns out to be false. E.g. some countries don't note the race of the patient in pharmacovigilance reports - the US
I thought the conventional wisdom was (Score:3, Insightful)
How would that be consistent with AI detection of race with x-rays?
Re:I thought the conventional wisdom was (Score:5, Insightful)
Re: (Score:2)
What level of biological differences would be sufficient to use "race", "subspecies", or "species"? (Keep in mind that ability to interbreed is no longer determinative for species.) Where do the population differences for humans fall short of the differences that justify using "race"?
Re: (Score:2)
Re: I thought the conventional wisdom was (Score:2)
It was self-identified race.
What's the other kind?
Re: (Score:3)
Among modern humans, you would never use the terms species or subspecies. They can interbreed and occupy the same environmental niche. If you want to talk biology, you want to talk about phenotypes, not race. Race is the social construct, phenotype is the biological classification.
Re:I thought the conventional wisdom was (Score:4, Informative)
Well, that would be an interesting coincidence, if it were remotely true. Consider, for example, Caucasian, or white. Clearly most people labeled "Caucasian" are not descended from the people of the Caucasus. "White" covers a broad swathe. Compare a red-haired, freckled Irish person with a blond Scandinavian, or a Russian (there are red-haired Russians, of course, but there are obvious phenotypic differences). Consider the diversity in Africa and the long history of Europeans referring to people there as "the African race", etc. So, you're very wrong here.
Race is a social construct, and phenotype is a biological idea. To be very clear though, for human beings, both race and phenotype are fuzzy and messy. You can't do simple Venn diagrams with either. There's always a lot going on around the edges of the definitions.
Re: I thought the conventional wisdom was (Score:2)
Race is an _informal_ ranking below subspecies. You're asking the wrong question, the problem is human race definitions are flaming hot garbage, and there's little reason to "fix" them.
Like if I drive from Mexico to Argentina, for example, which human races might I encounter and why? Check the Wikipedia page for SA demographics, there's some shit you've never heard of. Notice how the indigenous populations of a whole continent turn into indo-something, while different European religious groups are somet
Re: I thought the conventional wisdom was (Score:2)
Argentina, French Guyana, and Belize are not considered part of Latin America.
Re: (Score:2, Insightful)
Indeed. A couple of hundred years ago Europeans started trying to classify every animal, creating hierarchies to explain nature. Naturally they wanted to explain why white Europeans were the pinnacle of human civilization too, superior to all others, so they started creating the modern concept of race.
They looked at obvious things like skin colour, skull size and shape, and dubious measurements like IQ. It wasn't really science because the conclusion had already been drawn: White Europeans were the best, ev
Re: I thought the conventional wisdom was (Score:2)
Er ... so the AI system is detecting a social construct?
This is convoluted reasoning. (Yours, I mean, not the AI's.)
It's like deciding to call a vehicle a car or a truck. This is a straightforward task for an image classifying AI, because we've trained it on our social labels. It's a social construct more than anything technical like whether it's a body on frame design or whatever. For example, a Honda Ridgeline is a truck, because it fits the social expectations of a truck, unibody frame be damned. Anyone that isn't a total dipshit can see this, no offense.
Re: (Score:2, Insightful)
Race is not biological. It is a social construct.
How would that be consistent with AI detection of race with x-rays?
Actually, "race not being biological" is a human construct.
Re: (Score:2)
"race not being biological" is a human construct
All right, I'll bite.
Define the human "races" in purely biological terms.
Re: I thought the conventional wisdom was (Score:2)
Race is not biological. It is a social construct.
You might want to clarify whether you're using the word "race" to refer to genetic features (such as Western Europeans' tolerance for dairy, Inuits' resistance to scurvy, Germans' tendency to pee in each other's mouths, etc) or something that's entirely pretend - the validity of your argument depends upon it.
Re: (Score:2)
Race is not biological because it is extremely ill-defined.
OTOH, people do seem to have consistent intuitions about which of a specified group of "races" a particular individual is a member of. It's been demonstrated that this isn't, or isn't solely, skin color or any other particular feature. So, "social construct". That doesn't mean it's fictitious, but it does mean that we can't specify it accurately.
If you're talking biology it's better to talk about gene lines or some such, but be sure you use well-
Re: I thought the conventional wisdom was (Score:2)
You just wrote the most thorough self-own, I had to reread it all to be sure you weren't facetiously pretending not to get it to troll the idiots that really don't get it. After all those good examples how do you not get it... but I think you mean well.
Ok, let's say the boundary between orange and red is at 620 nm; otherwise, yes, orange and red are just social constructs. You're on to something! They're fuzzy, vague ideas, and light at 619 nm or 621 nm would be a total toss-up for a jury of your peers.
Now
Re: I thought the conventional wisdom was (Score:2)
If it's not a social construct, then how many categories are there, and what are the classification rules?
This article is about going backwards from medical imaging to racial labels, which it can do because we labeled its training data... according to our social rules. More simply, you could also trace a person's silhouette and make guesses at which social labels might apply with fuzzy logic, same as this AI. Try going the other way, draw something for each race. You can't without leaning entirely on c
Re: (Score:2)
Yes, God forbid (Score:4, Insightful)
and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice."
We wouldn't want to make medical decisions based on racial disparities [youtu.be].
Re: (Score:2)
Making medical decisions based on racial disparities is a very bad idea, because racial disparities are statistical, and patients are individuals, who may not share any particular attribute often, or even usually, true of their racial group. This is true of chemical factors like blood groups and antibody sensitivity and lactose intolerance as well as other characteristics, and that kind of difference can affect the correct medical decision.
Re: (Score:3)
That's not the issue here. The issue is that we have seen human doctors make biased decisions due to race, such as delaying treatment or giving a worse prognosis that results in palliative care instead of trying to cure the patient.
For example the algorithm may notice that black people tend to have lower survival rates for a particular type of cancer, and so recommend against chemotherapy because the probability of it working is low. However the lower survival rate is due to lack of access to healthcare, not biology.
Re: (Score:2)
This can happen with age, too.
If a man in his late 80's shows up with prostate cancer, many doctors would be reluctant to treat it aggressively (surgery + radiation + anti-androgen regimen), saying "oh, it's unlikely he'll live much longer anyway".
But my grandfather had just that diagnosis, and his doctors said "oh, here's a guy who is otherwise healthy -- if we treat this aggressively and cure it, he might live a good long while yet." They were right -- he lived to 101.
Can != Will (Score:2)
Seems like a bit of doom and gloom or possibly FUD that is only weakly linked to reality.
1. Race identifying AI can identify race in medical images.
Re: (Score:2)
Essentially, there are outcomes associated with demographics. If the AI is unconsciously picking up on those demographics, it may tie the outcomes to them.
Re: (Score:2)
For instance, you may have a training data set of images and outcomes for breast cancer. Your algorithm may pick up on race. It may then associate race with negative outcomes.
That makes no sense. If the algorithm is trained to recognize the risk/outcome of breast cancer, then it will weigh all various input criteria to optimize the determination of the risk/outcome for breast cancer. Assuming race *is* a factor in getting breast cancer, then you'd better hope the algorithm catches on to this and adds race as a contributing element, so it can get the most correct decisions in most cases. If, however, you refuse to let race be considered for ideological reasons, you end up putting
Re: (Score:2)
As an example: https://www.breastcancer.org/r... [breastcancer.org]
Re: (Score:2)
This is complete nonsense. You're just determined to inject racism in the discussion (probably for some sad form of virtue signalling) and happy to ignore logic in the process.
Contrary to your statement, the algorithm we're discussing was for spotting cancer, not for determining the course of treatment. The one that decides the treatment is the doctor. Now, it's possible that doctors do treat different races differently when they shouldn't. That needs to be addressed, but it's not caused by the predictive algorithm.
Re: (Score:2)
To be clear, the paper isn't about this algorithm. The algorithm is a proof of concept, a strawman. It's what it is doing that is the point. Have you read the paper? Are you familiar with the studies it cites?
Re: (Score:3)
Re: (Score:2)
Or what if the scans lacked abdominal scans of young women, due to the potential risks and liabilities of CT scans of fetal tissue? We saw this recently with HIV and the COVID-19 vaccines. Because so few people with HIV were tested in the early studies, it took extra work to verify the vaccine's safety and effectiveness for them.
Re: (Score:2)
What about this scenario (Score:5, Insightful)
Here's an idea.
What if the training data for the cancer detecting AI turned out to be 99% caucasian scans? There was no specific bias from the researchers, they didn't deliberately go out of their way to only feed it 99% caucasian scans, it just happened to be the data set they could easily get their hands on. Maybe white people are more likely to get scans and the well-resourced hospitals they get their scans at were involved in the data collection, whereas the less resourced hospitals that cater to other ethnic communities were too busy to take part in the data collection.
Now the cancer hunting AI turns out to be really good at hunting for cancers in white people, because that's the set it was trained on.
But in turn it is less effective at hunting for cancers in black people, because there's some confounding aspect of race that gets mixed up in the neural net, and because we don't know what this aspect is or precisely what those neurons are coded to look for, we're mostly oblivious to this shortcoming.
Congratulations, we've just perpetuated the existing racial disparities in current medical practice without anyone going out of their way to do so. Just no one thought about the risks involved because they weren't aware this was a problem.
Or in other words - now that we know this is an issue, we can think about the implications and scenarios where the issue might be a problem. Your comment was a classic example of being warned about something, just assuming it's not a problem, and carrying on with BAU. Exactly the sort of behaviour that results in racial disparities in deliveries of services of every type everywhere.
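One hedged sketch of how such a skew could at least be caught: audit the demographic mix of the training set before training, and report accuracy per group rather than one headline number afterwards. The numbers below are invented to mirror the 99%/1% scenario described above; the column names are assumptions for the sketch:

```python
import pandas as pd

# Hypothetical audit table: one row per scan, with the patient's group
# and whether the trained model classified that scan correctly.
df = pd.DataFrame({
    "group":   ["A"] * 990 + ["B"] * 10,   # a 99% / 1% split, as in the scenario
    "correct": [True] * 940 + [False] * 50   # group A: ~95% accurate
             + [True] * 6 + [False] * 4,     # group B: only 60% accurate
})

# Step 1: is the dataset even representative of the target population?
print(df["group"].value_counts(normalize=True))

# Step 2: per-group accuracy; a single overall score would hide group B.
print(df.groupby("group")["correct"].mean())
```

Neither step requires knowing *how* the network encodes race; it only requires keeping the demographic labels around for evaluation instead of discarding them.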
Re: What about this scenario (Score:2)
Green eyed VS blue eyed VS brown eyed? (Score:2)
Re: (Score:2)
I'm pretty sure if the AI was so trained it would be able to do about the same with people with different eye colors
What? Heterochromia really lets you tell race way better? You learn something new ever day I guess.
Re: (Score:2)
This is completely unsurprising (Score:3)
because statistical differences in bone structure between the races have been known for a long time.
Re: (Score:2)
With this latest story, I'm wondering how much of it is based on truth after all.
Re: (Score:2)
More data (Score:2)
I bet it can even guess the country the individual came from with enough training.
This is BS (Score:4, Insightful)
This is bullshit. If I were a black person, I'd damn well want my doctor to know and acknowledge I was black. Otherwise they would not be looking at diseases specific to my race, such as sickle cell anemia. Even getting into all this gender-fluid BS, again, I sure as hell hope a doctor would be considering my ACTUAL gender for my healthcare. I can identify as a woman all I want, but that won't keep me from dying of prostate cancer.
If AI is being trained to identify disease and other issues from x-rays, and that correlates with some race, or gender, or any other identifiable metric, it is totally immoral to try to make the AI behave differently or attempt to disassociate the disease from the race or gender.
Re: (Score:2)
Re: (Score:2)
But also you would not want an AI to look at your race when considering diseases that are not affected by it. Say the AI noticed that you were black and that black people have poor survival rates for a particular disease, so recommended against an expensive treatment with a low chance of working on you. Thing is the reason survival rates are low is because black people often can't afford the best treatments, not because of their biology.
You would not want that, right?
The AI shouldn't try to detect race or s
Re: (Score:2)
The AI's job is to find the disease, it doesn't know nor care what the treatment is.
Re: (Score:3)
The AI reports, not recommends.
Who of course do not possess the biases you're railing against.
Re: This is BS (Score:2)
Re: (Score:3, Insightful)
Basically when it comes to medicine, truth is first. Anything less could kill you.
Re: (Score:2)
Which is why the GP's post, and so transitively yours for supporting it, are deeply stupid.
Re: (Score:3)
Many people actually aren't aware that they have specific medical issues until a doctor diagnoses them. Lots of people go to see their doctor because they feel generally unwell, they have no idea *why* they feel unwell.
One of the key purposes of a doctor is to be able to accurately diagnose medical conditions, preferably as early as possible (ie often before the patient has severe symptoms).
Re: (Score:2)
Why is it you culture war tossers can't make your point without being a fucking cunt about it?
Lol. That is some offensive language coming from a person who has nothing useful to contribute to the conversation except demanding that people be polite to each other.
Re: (Score:2)
What about the X-Rays themselves? (Score:2)
It's Race Bait. Ignore. (Score:2)
This article and the exclamatory comments at the end mark it as pure race baiting.
Ignore it.
I can tell from a CT scan if spanish or hispanic (Score:4, Informative)
It's really easy: races are more than skin deep. An African albino has African facial features, and we can tell he's not European.
As a neurologist, I usually look at CT scans, even before meeting the patient. Spanish (European) and Hispanic (American native or mestizo) people share names that the latter inherited from the former. Still, I was able to tell whether the scan of a certain Maria Pérez or Juan Fernández belonged to a Hispanic or a Spanish person just by the shape of their skull. The Spanish, like most Europeans, are meso- to dolichocephalic (skull elongated front to back), while Hispanics, being long-lost relatives of Asians, are brachycephalic (round skull). Africans have the most elongated skulls, and among Europeans, Russians have the roundest.
Of course that's a simple, one guy's observation from a limited dataset. With a huge dataset it should be easy for a deep learning mechanism to find other variations.
That also tells us that even though race doesn't exist as a genetic category, it's there and it can be calculated.
I see... (Score:2)
...or more correctly I don't.
But when even AIs with x-ray eyes that cannot see the color of the skin differentiate between races, we are doomed.
And researchers have no idea how it can tell (Score:2)
There is no such thing as 'race' (Score:2)
Regardless, the US census states 'An individual’s response to the race question is based upon self-identification.' https://www.census.gov/topics/population/race [census.gov]
Re: (Score:2)
I tried listing "human" in the box labeled "race" on a government form once, and was told that what they really wanted was my skin color. I tried writing "tan", and that didn't work either.
You're right, and I hope we get there someday.
Re: (Score:2, Informative)
Sorry to disillusion everyone, but the only 'race' is 'human'. The program can, at best, describe someone's ethnic background, which might help define genetic issues that someone should be aware of.
News flash: "Race" is how we describe someone's ethnic background.
Isn't it a good thing? (Score:2)
If human races are subtly different and perhaps are affected differently by different health issues, then being able to classify them during research would certainly help with finding problems and matching treatments. We know that men and women respond to treatment differently, something that was neglected in research in the past. We should make sure the same thing doesn't happen with race.
MUST BE BANNED (Score:2)
It must be banned! BANNED I tell you! Because it can tell your race, unlike any person who sees you in person.
Alternate grouping criteria? (Score:2)
Genetic Haplogroups, cultures and nutritional history spring to mind.
Re: Mixed race individuals? (Score:2)
Probably not well, especially given that it wasn't really designed for the task. It would very likely guess one of the races they are predominantly, though, unless that particular mix of races just happens to look similar to another race.
If it was purpose built for racial identification with a large enough sample of mixed race folks, then it could probably figure it out. Race is evident in a whole lot of biology, even when it isn't 100%.
Re: (Score:2)
How well it does on mixed-race samples when trained with "pure" ones (as far as that's possible; today basically everybody is mixed to some degree, and that makes for a healthier genetic make-up) could give a clue what features it actually uses.
Re: (Score:2)
And how well does it detect lies? As much as 10% of some nations consists of the children of someone other than their identified father.
Re: (Score:2)
Ha.
This "racist" (branded as such for believing in "racist" ideas such as that people should be judged not by the color of their skin, but the content of their character) looks as white as a sheet of paper, but descends primarily from Southern and Eastern Europe, at a time when both were ruled by the Ottoman Empire.
Hence probably at least 2% African and Asian by heritage. Maybe less, maybe more.
So what???
It has no bearing whatsoever on what I believe.
I believe that people are people, and that their skin co
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Likely. I have a close friend who is 1/16 Japanese and otherwise Northern European. Multiple people (especially those of Asian heritage themselves) can spot her Japanese heritage in her facial structure.
Re: (Score:2)
Re: (Score:2)
And what do computers deal with best? Noise.
Re: (Score:2)
It is. The applications are better customized medical treatments. This is one of the factors.
Re: (Score:2)
Not clear that conventional medicine is anywhere near realizing an application of this.
This is research. It does not have to be "near" the final result to be useful.
Re: AI May Be Good, But Slashdot Admins are Better (Score:2)
A quote comes to mind: 'If you run into an asshole in the morning, you ran into an asshole. If you run into assholes all day, you're the asshole.'
Re: (Score:2)
Most of the people I meet in real life don't seem like assholes.
Most of the people I meet on the road, and many I meet online, very much do seem like assholes, and exceptionally putrid, foul-smelling ones at that.
Can't help wondering how much of that is them, and how much is me.
Re: (Score:2)
You get downmodded for being an antivaxxer, your race has nothing to do with it.
Re: (Score:2)
Re: (Score:2)
Yes.
If you have a fairly complete medical history, it is fairly easy to predict your likelihood to have the musculature and/or cardiovascular health to excel in either of those events respectively.
And there are some things you can't control, including age and genetics among many others, that will factor into that prediction.