Artificial intelligence (3) Resistance to AI (2)

24 June, 2025

Hello Neil

Thank you for initiating this conversation. I believe that there are good opportunities for AI integration in health. For some AI technologies, the burden of proof seems to have shifted from “building AI” to “being able to test AI”.

Diagnostics is said to be the first frontier for AI in healthcare. The largest source of medical data is medical imaging (more than 90% of all healthcare data in developed countries, although more than 97% of it goes unanalysed and unused). AI models trained for specific tasks have been shown to be faster than humans at detecting specific abnormalities in images.
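To make “task-specific” concrete, here is a minimal, hypothetical sketch of such a model: a binary classifier fine-tuned to flag a single abnormality (a nodule, say) in chest X-rays. The folder layout, class names, and training settings below are illustrative assumptions, not any particular published system.

```python
# Sketch of a task-specific model: a binary classifier fine-tuned to
# flag ONE abnormality (e.g. a nodule) in chest X-rays.
# The dataset path and folder layout are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical expert-annotated folders: train/nodule, train/no_nodule
train_data = datasets.ImageFolder("train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a pretrained backbone; replace the head with a 2-class output
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a token number of epochs for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Note that such a model answers exactly one question (nodule or no nodule); it says nothing about any other abnormality in the same image, which is precisely the point of the caveats below.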

BUT:

AI requires high-quality image datasets (large, diverse, and correctly annotated by experts)

Thorough testing and the involvement of human experts (both AI experts and healthcare specialists) are essential

AI systems are specialist programs and may not detect all abnormalities present in a medical image

Legal considerations mean that AI can help support the diagnosis of a human doctor (a radiologist, for example), but the human is responsible for the final report

Interpreting a medical image requires context. Did the patient have surgery in that area before? If so, that may explain unusual findings on a scan. Was there a nodule there before?

This requires integration, or at least good exchange and communication, between different sources of patient data (see the sketch below).
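As a sketch of what such an exchange could look like in practice, the snippet below queries a FHIR server (the standard interface many hospital systems expose) for a patient's prior procedures and imaging reports before a scan is read. The server URL and patient ID are hypothetical placeholders; the resource types and search parameters (Procedure, DiagnosticReport, ?patient=) are standard FHIR.

```python
# Sketch of pulling prior context from a FHIR server before reading a scan.
# The base URL and patient ID are hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org"  # placeholder endpoint
patient_id = "12345"                             # placeholder patient

def fetch(resource, params):
    """Run one FHIR search and return the entries of the result bundle."""
    r = requests.get(f"{FHIR_BASE}/{resource}", params=params, timeout=10)
    r.raise_for_status()
    return r.json().get("entry", [])

# Prior surgery near the imaged region may explain an unusual finding
procedures = fetch("Procedure", {"patient": patient_id})

# Earlier imaging reports tell us whether a nodule was already present
reports = fetch("DiagnosticReport", {"patient": patient_id,
                                     "category": "imaging"})

for entry in procedures + reports:
    res = entry["resource"]
    print(res["resourceType"], res.get("code", {}).get("text", ""))
```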

A model trained on one dataset may not work well on another, slightly different one. Researchers at Mount Sinai’s Icahn School of Medicine found that deep learning algorithms which diagnosed pneumonia well on their own chest X-rays did not work as well when applied to images from the National Institutes of Health and the Indiana University Network for Patient Care. https://www.healthcareitnews.com/news/mount-sinai-finds-deep-learning-al...
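A hypothetical illustration of why this kind of external validation matters: score the same trained model on the home site's test set and on an external site's data, and compare. The model object and the datasets below are placeholders (a generic classifier following the scikit-learn predict_proba convention); the pattern mirrors the kind of cross-site comparison the Mount Sinai study performed.

```python
# Hypothetical external validation: the same trained model is scored on
# the internal test set and on an external site's data. A drop in AUC
# signals that the model does not transfer. `model` and the datasets
# are placeholders; any classifier with predict_proba would fit.
from sklearn.metrics import roc_auc_score

def auc_on(model, images, labels):
    """ROC AUC of the model's positive-class probability on one dataset."""
    probs = model.predict_proba(images)[:, 1]
    return roc_auc_score(labels, probs)

internal_auc = auc_on(model, internal_test_x, internal_test_y)
external_auc = auc_on(model, external_site_x, external_site_y)

print(f"Internal test AUC: {internal_auc:.2f}")  # strong at the home site
print(f"External test AUC: {external_auc:.2f}")  # often much lower elsewhere
if internal_auc - external_auc > 0.05:
    print("Warning: performance drop on external data; do not deploy as-is.")
```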

The writer(s) of the ictworks article [ https://www.ictworks.org/overcome-ai-resistance-digital-health/ ] make some assumptions that are not quite true. For example, when they say that “concerns about AI exceeded enthusiasm for 25% of doctors in 2024. This resistance manifests in two problematic ways”, they equate concern with resistance; although concern can lead to resistance, one can be concerned and still be a user. Similarly, they claim that doctors are “reluctant to take advice” from AI “even when it can improve their effectiveness and efficiency”, and that “this is a normal reaction”. It is not quite true that this is a normal reaction to technology: theories of the diffusion of innovation have established a direct positive correlation between the perceived (and/or established) effectiveness and efficiency of a technology and its adoption. Maybe the diffusion of AI works differently.

“Using” a technology does not equate to “taking advice” from it (especially if taking advice assumes a change of mind). The article states that doctors “said reducing administrative burdens through automation was the biggest area of opportunity for AI. They want AI to handle the paperwork but resist its clinical insights. This is backwards thinking that prioritizes physician comfort over patient outcomes”. But what is wrong with this application of AI? Reducing the administrative burden and improving records can save lives in hospital settings. In fact, in many countries poor record keeping is a significant problem.

Kind regards

Amelia

HIFA profile: Amelia Taylor is a Lecturer in AI at Malawi University of Business and Applied Sciences. Interests: Malawi AI, NLP, Health Informatics, Data Visualisation. ataylor AT mubas.ac.mw