AI “predicts” beer drinking based on knee X-rays – why this is not only wrong, but dangerous

Dec 1, 2024

Where a radiologist can identify fractures and other abnormalities from an X-ray, AI models can detect patterns humans cannot, offering the opportunity to expand the effectiveness of medical imaging. A study led by Dartmouth Health researchers, in collaboration with the Veterans Affairs Medical Center in White River Junction, VT, and published in Nature’s Scientific Reports, highlights a hidden challenge of using AI in medical imaging research: models can produce results that are highly accurate yet misleading because they exploit incidental features of the data rather than medically meaningful ones, a phenomenon known as shortcut learning.

Using knee X-rays from the National Institutes of Health-funded Osteoarthritis Initiative, researchers demonstrated that AI models could predict unrelated and implausible traits, such as whether patients abstained from eating refried beans or drinking beer. While these predictions have no medical basis, the models achieved surprising levels of accuracy, revealing their ability to exploit subtle and unintended patterns in the data.
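
To make the finding concrete, here is a minimal sketch of the kind of experiment described: fine-tuning a standard CNN to predict a medically implausible label from knee X-ray images. The directory layout and the "beer"/"no_beer" labels are hypothetical stand-ins, and the paper's actual pipeline is not reproduced here.

```python
# Sketch: fine-tune a standard CNN on an implausible label. High held-out
# accuracy on such a label cannot reflect anatomy, so it signals shortcut
# learning. Paths and labels below are hypothetical stand-ins.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

device = "cuda" if torch.cuda.is_available() else "cpu"

# Grayscale X-rays replicated to 3 channels for an ImageNet backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: knee_xrays/{train,val}/{beer,no_beer}/*.png
train_set = ImageFolder("knee_xrays/train", transform=preprocess)
val_set = ImageFolder("knee_xrays/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# Standard transfer learning: swap the classifier head for a binary output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate on held-out images.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"val accuracy on implausible label: {correct / total:.2%}")
```

On a genuinely unrelated label, anything well above chance accuracy is itself the red flag: the network must be reading something other than anatomy.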

“While AI has the potential to transform medical imaging, we must be cautious,” said Peter L. Schilling, MD, MS, an orthopaedic surgeon at Dartmouth Health’s Dartmouth Hitchcock Medical Center (DHMC), who served as senior author on the study. “These models can see patterns humans cannot, but not all patterns they identify are meaningful or reliable. It’s crucial to recognize these risks to prevent misleading conclusions and ensure scientific integrity.”

Schilling and his colleagues examined how AI algorithms often rely on confounding variables—such as differences in X-ray equipment or clinical site markers—to make predictions rather than medically meaningful features. Attempts to eliminate these biases were only marginally successful—the AI models would just learn other hidden data patterns.
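
One common way to surface this kind of confounding is a direct probe: check whether acquisition metadata, such as the clinical site, can be predicted from the images themselves. The sketch below illustrates that idea under stated assumptions (a frozen ImageNet backbone as a feature extractor, stand-in arrays for the images and site IDs); it is not the study authors' method.

```python
# Confounder probe: if a simple classifier on frozen CNN embeddings can
# predict the acquisition site, that signal is present in the pixels and
# is available as a shortcut for any label correlated with site.
# The image batch and site IDs below are stand-in data, not the study's.
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Frozen ImageNet backbone used purely as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(batch: torch.Tensor) -> np.ndarray:
    """Map a (N, 3, 224, 224) image batch to (N, 512) embeddings."""
    with torch.no_grad():
        return backbone(batch).numpy()

# In practice these come from the dataset loader; random stand-ins here.
xray_batch = torch.randn(64, 3, 224, 224)       # stand-in images
site_labels = np.random.randint(0, 4, size=64)  # stand-in site IDs

features = embed(xray_batch)
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, features, site_labels, cv=5)

# Accuracy far above chance (25% for 4 sites) means site information
# leaks into the images and can stand in for the true target.
print(f"site-probe accuracy: {scores.mean():.2%} (chance ~25%)")
```

The researchers' observation that removing one such cue only pushes the model onto another is why a single passing probe is not sufficient evidence that a model is free of shortcuts.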

“The burden of proof just goes way up when it comes to using models for the discovery of new patterns in medicine,” added Brandon G. Hill, a machine learning scientist at DHMC and co-author of the study. “Part of the problem is our own bias. It is incredibly easy to fall into the trap of presuming that the model sees the same way we do. In the end, it doesn’t. It is almost like dealing with an alien intelligence. You want to say the model is cheating, but that anthropomorphizes the technology. It learned a way to solve the task given to it, but not necessarily how a person would. It doesn’t have logic or reasoning as we typically understand it.”

Source: https://healthcare-in-europe.com/en/news/ai-connect-knee-xray-beer-drinking-shortcut-learning.html

