Dear Neil,
I also hold out hope for the potential of AI in the way you describe, but you said you're waiting to learn of gross examples of healthcare misinformation. The following example may not involve gross misinformation from ChatGPT, but it illustrates the issues: This study of the use of ChatGPT for snakebite <https://pmc.ncbi.nlm.nih.gov/articles/PMC10339276/> states it "effectively addresses widespread myths and misconceptions associated with snakebites, helping to reduce misinformation. It offers evidence-based guidance based on current guidelines and practices, ensuring the advice is reliable and accurate [7,9,13]. Additionally, the response emphasizes the importance of seeking immediate medical attention and following the guidance of healthcare professionals, which is crucial for the proper evaluation, treatment, and management of snakebites."
However, it also states, "there are some limitations to the response, such as limited depth on toxicology, as it does not delve into the specific toxicological aspects of venomous snakebites, like the type of venom, how it affects the body or the administration of antivenom. Furthermore, *the reply does not address the variation in venom toxicity and severity of symptoms between different snake species, which could be important information depending on the region or type of snake involved*." [emphasis added]
For AI to be useful to patients without immediate access to medical care who are dealing with a snakebite, it would be essential to have information specific to the region and type of snake, so the individual knows whether the person who was bitten is in mortal danger or whether simple supportive care is sufficient. Such a capability apparently has been developed <https://www.sciencedirect.com/science/article/abs/pii/S1386505623000412> (the study is behind a paywall) but was not part of ChatGPT, for example.
As for other chatbots, plenty of gross misinformation has been identified. Google AI has many documented problems, discussed here <https://www.cbsnews.com/amp/news/software-developers-want-ai-to-give-med... and here <https://www.cbsnews.com/news/google-ai-overview/>, for example.
The potential for AI is huge, but applications for patients need to be carefully tested before they can be considered truly safe and useful.
Best wishes,
Margaret
Margaret Winker, MD
eLearning Program Director
Trustee
World Association of Medical Editors
***
wame.org
WAME eLearning Program <https://wame.org/wame-elearning-program.php>
@WAME_editors
www.facebook.com/WAMEmembers
HIFA profile: Margaret Winker is Secretary and Past President of the World Association of Medical Editors in the U.S. Professional interests: WAME is a global association of editors of peer-reviewed medical journals who seek to foster cooperation and communication among editors, improve editorial standards, promote professionalism in medical editing through education, self-criticism, and self-regulation, and encourage research on the principles and practice of medical editing. margaretwinker AT gmail.com