Beer, beans, and medicine: When AI gets ‘pernicious’ with the data

By Dr Doug Witherspoon - 10th Feb 2025


The advent of artificial intelligence (AI) holds much promise for medical imaging and for improving diagnostic accuracy. Studies are ongoing, but AI in medical imaging has the potential to aid ‘big data’ interpretation, assist in solving clinical problems, and improve efficiency.

Legal and ethical considerations are topics of much discussion, but some recent studies have drawn attention to the potential problem of ‘shortcut learning’ in medical AI. If you weren’t familiar with that term, you might also want to acquaint yourself with the phrase ‘algorithm unfairness’.

‘Shortcut learning’ refers to a phenomenon where an AI model learns to solve a task based on spurious correlations in the data rather than features directly related to the task itself. ‘Algorithm unfairness’, also known as algorithm bias, occurs when machine learning models base their predictions on incorrect correlations in the data.

A study published in December illustrated some of the quirks of shortcut learning that lead to algorithm unfairness. The paper, by researchers at Dartmouth Health, US, was titled ‘The risk of shortcutting in deep learning algorithms for medical imaging research’ and was published in Scientific Reports. The authors show how easily shortcut learning can occur and the risks it poses.

“We use simple ResNet18 convolutional neural networks to train models to do two things they should not be able to do: Predict which patients avoid consuming refried beans or beer purely by examining their knee x-rays (AUC of 0.63 for refried beans and 0.73 for beer),” wrote the authors.

The study examined how AI algorithms often rely on confounding variables – such as differences in x-ray equipment or clinical site markers – to make predictions rather than medically meaningful features.

The researchers set out to test whether an AI model can be ‘trained’ to predict an outcome that lacks validity. In this case, it was whether the patient avoids drinking beer or abstains from eating refried beans based solely on a knee x-ray. It was an exercise that the team described as a “parlour trick”. They stated: “The models are not uncovering a hidden truth about beans or beer hidden within our knees, nor are the accuracies attributable to mere chance. Rather, the models are ‘cheating’ by shortcut learning – a situation where a model learns to achieve its objective by exploiting unintended or simpler patterns in the data rather than learning the more complex, underlying relationships it was intended to learn.”
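To make the mechanism concrete, here is a minimal, hypothetical sketch (written in Python with NumPy and scikit-learn; it is not the Dartmouth code, and every variable and number in it is invented for illustration). The synthetic ‘x-rays’ contain no genuine signal for the outcome, but one ‘clinical site’ leaves a faint corner artifact on its images and the outcome happens to be more common at that site, so a simple pixel-level classifier still reaches an AUC well above chance purely by reading the artifact.

```python
# Minimal, hypothetical illustration of shortcut learning on synthetic data
# (not the Dartmouth study's code; all names and numbers are invented).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, h, w = 2000, 16, 16

# Each synthetic "x-ray" comes from one of two clinical sites.
site = rng.integers(0, 2, size=n)

# The images are pure noise: there is no anatomical signal for the label.
images = rng.normal(size=(n, h, w))

# Site 1's scanner leaves a slightly brighter top-left corner -- the
# unintended "shortcut" feature the model can latch onto.
images[site == 1, :2, :2] += 1.0

# The label (say, "avoids beer") correlates with site, not with anatomy:
# 80% positive at site 1, 20% positive at site 0.
labels = rng.binomial(1, np.where(site == 1, 0.8, 0.2))

# A plain logistic regression on raw pixels "predicts" the label anyway.
X = images.reshape(n, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")  # well above 0.5, driven entirely by the site marker
```

In this toy setting, blanking the corner pixels would collapse the AUC back towards 0.5; in real data, as Hill notes below, removing one such shortcut may simply push the model onto another, which is why the burden of proof is so high.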


The Dartmouth team analysed more than 25,000 knee x-rays (conducted on 4,789 patients) from the National Institutes of Health-funded Osteoarthritis Initiative. They created two models, one for each target variable: the patient’s self-reported preference for refried beans and for beer, recorded at study enrolment. The study examined bilateral PA fixed flexion x-rays collected across five clinical sites.

Brandon Hill, co-author of the study and a machine learning scientist at Dartmouth, commented on the results: “This goes beyond bias from clues of race or gender. We found the algorithm could even learn to predict the year an x-ray was taken. It’s pernicious – when you prevent it from learning one of these elements, it will instead learn another it previously ignored. This danger can lead to some really dodgy claims and researchers need to be aware of how readily this happens when using this technique.”

He continued: “The burden of proof just goes way up when it comes to using models for the discovery of new patterns in medicine. Part of the problem is our own bias. It is incredibly easy to fall into the trap of presuming that the model ‘sees’ the same way we do. In the end, it doesn’t. It is almost like dealing with an alien intelligence. You want to say the model is ‘cheating,’ but that anthropomorphises the technology. It learned a way to solve the task given to it, but not necessarily how a person would. It doesn’t have logic or reasoning as we typically understand it.”

The bottom line appears to be that AI can be a tremendous boon in healthcare, but only ‘under adult supervision’, with rigorous evaluation standards.

We’ll leave the last word to computer scientist and AI researcher Eliezer Yudkowsky: “By far the greatest danger of AI is that people conclude too early that they understand it.”
