Researchers at Dartmouth Health used X-rays of knees, along with dietary surveys, to ‘teach’ AI software how to detect beer consumption.
Used with permission/Dartmouth Health

Dartmouth Study Shows AI Could Be ‘Double-Edged Sword’ in Medical Research


A new study by researchers at Dartmouth Health highlights the potential risks of artificial intelligence in medical imaging research, showing that algorithms can be taught to give correct answers, but for illogical reasons.

The study, published in Nature’s Scientific Reports, drew on a cache of 5,000 X-rays of human knee joints, paired with surveys those patients completed about their dietary habits.

The artificial intelligence software was then asked to identify, from the X-rays alone, which patients were most likely to drink beer or eat refried beans, even though there is no visual evidence of either activity in an X-ray of a knee.

“We want to assume it sees things that a human would see, or a human would see if we only had just better vision,” said the paper’s co-author, Brandon Hill, a machine-learning researcher at Dartmouth Hitchcock. “And that’s the core problem here: is that when it makes these associations, we presume it must be from something in the physiology, in the medical image. And that’s not necessarily the case.”

While the machine learning tool did in fact often accurately determine which of the knees — that is, the humans who were X-rayed — were more likely to drink beer or eat beans, it did so by also making assumptions about race, gender and the city in which the medical image was taken. The algorithm was even able to determine what model of X-ray scanning machine took the original images, which allowed it to make connections between the location of the scan and the likelihood of certain dietary habits.

Ultimately, it was those variables that the AI used to determine who drank beer and ate refried beans, and not anything in the image itself related to food or beverage consumption, a phenomenon researchers call “shortcutting.”
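
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python on synthetic data; it is not the study’s code, and every detail in it (the two “sites,” the 0.5 scanner offset, the 80/20 beer prevalence) is invented for illustration. A classifier scores well above chance on a label it cannot truly see, because a scanner fingerprint correlated with the label leaks into the features; strip the fingerprint and the apparent skill collapses toward chance.

```python
# Hypothetical sketch of "shortcutting" on synthetic data -- not the
# study's actual code. Two hospitals ("sites") differ both in how often
# their patients drink beer and in a faint scanner fingerprint that
# leaks into every image, so a model can score well with no real signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000  # same order as the study's 5,000 X-rays, but purely synthetic

# Hidden confound: which of two sites produced the scan.
site = rng.integers(0, 2, size=n)

# Label ("drinks beer") correlates with site, not with anatomy:
# 80% prevalence at site 1, 20% at site 0.
y = (rng.random(n) < np.where(site == 1, 0.8, 0.2)).astype(int)

# "Image" features: pure noise plus a scanner-specific intensity offset.
X = rng.normal(size=(n, 10)) + 0.5 * site[:, None]

X_tr, X_te, y_tr, y_te, _, s_te = train_test_split(
    X, y, site, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"accuracy with scanner fingerprint:    {clf.score(X_te, y_te):.2f}")

# Remove the fingerprint (simulating scans from an unseen scanner) and
# accuracy falls toward chance: the model only ever learned to
# recognize the site, never anything about the patient.
X_te_clean = X_te - 0.5 * s_te[:, None]
print(f"accuracy without scanner fingerprint: "
      f"{clf.score(X_te_clean, y_te):.2f}")
```

The shortcut in this toy case is a single intensity offset, but the same logic applies however the confound enters, whether through scanner model, site, or patient demographics.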

“Part of what we’re showing is, it’s a double-edged sword. It can see things humans can’t,” said Hill. “But it can also see patterns that humans can’t, and that can make it easy to deceive you.”

The study’s authors said the paper underscores the caution medical researchers should exercise when deploying machine learning tools.

“If you have AI that’s detecting whether or not you think a transaction on a credit card is fraudulent, who cares why it thinks that? Let’s just stop the credit card from being able to have charges,” said Dr. Peter Schilling, an orthopedic surgeon and the paper’s senior author.

But in the treatment of patients, Schilling advises clinicians to move forward conservatively with these tools in order to “actually optimize the care they’re given.”

This story was originally published by Maine Public. It was shared as part of the New England News Collaborative.
