Chatbots in Health -- What Do We Know? (3)

2 July, 2023

An interesting paper on ChatGPT, with thanks to Thomas Krichel, Open Library Society. Citation, abstract and a comment from me below.

CITATION: Reliability of Medical Information Provided by ChatGPT: Assessment Against Clinical Guidelines and Patient Information Quality Instrument.

Harriet Louise Walker et al. Journal of Medical Internet Research.



BACKGROUND: ChatGPT-4 is the latest release of a novel artificial intelligence (AI) chatbot able to answer freely formulated and complex questions. In the near future, ChatGPT could become the new standard for health care professionals and patients to access medical information. However, little is known about the quality of medical information provided by the AI.

OBJECTIVE: We aimed to assess the reliability of medical information provided by ChatGPT.

METHODS: Medical information provided by ChatGPT-4 on the 5 hepato-pancreatico-biliary (HPB) conditions with the highest global disease burden was measured with the Ensuring Quality Information for Patients (EQIP) tool. The EQIP tool is used to measure the quality of internet-available information and consists of 36 items that are divided into 3 subsections. In addition, 5 guideline recommendations per analyzed condition were rephrased as questions and input to ChatGPT, and agreement between the guidelines and the AI answer was measured by 2 authors independently. All queries were repeated 3 times to measure the internal consistency of ChatGPT.

RESULTS: Five conditions were identified (gallstone disease, pancreatitis, liver cirrhosis, pancreatic cancer, and hepatocellular carcinoma). The median EQIP score across all conditions was 16 (IQR 14.5-18) for the total of 36 items. Divided by subsection, median scores for content, identification, and structure data were 10 (IQR 9.5-12.5), 1 (IQR 1-1), and 4 (IQR 4-5), respectively. Agreement between guideline recommendations and answers provided by ChatGPT was 60% (15/25). Interrater agreement as measured by the Fleiss κ was 0.78 (P<.001), indicating substantial agreement. Internal consistency of the answers provided by ChatGPT was 100%.
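For readers unfamiliar with the Fleiss κ statistic reported above, the sketch below shows how it is computed from a table of rating counts. The rating data are purely illustrative, not the study's actual ratings; only the formula follows the standard definition of Fleiss' kappa.

```python
# Minimal sketch of Fleiss' kappa, the interrater-agreement statistic
# reported in the paper (kappa = 0.78 for the 2 raters).
# The example ratings below are hypothetical.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters who put subject i in category j.
    Every subject must be rated by the same number of raters."""
    n = len(counts)        # number of subjects rated
    m = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories

    # Mean per-subject observed agreement
    p_bar = sum(
        (sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts
    ) / n

    # Chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 2 raters judging 5 AI answers as
# "agrees with guideline" (column 0) or "disagrees" (column 1).
ratings = [[2, 0], [2, 0], [0, 2], [2, 0], [1, 1]]
print(round(fleiss_kappa(ratings), 3))  # 0.524
```

A κ of 0.78, as found in the study, falls in the conventional "substantial agreement" band (0.61-0.80) of the Landis and Koch scale.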

CONCLUSIONS: ChatGPT provides medical information of comparable quality to available static internet information. Although currently of limited quality, large language models could become the future standard for patients and health care professionals to gather medical information.

COMMENT (NPW): Indeed, chatbots could be a big step forward in improving the availability and use of reliable healthcare information. To what extent will this prove to be true? Will it be possible for anyone with an internet connection to access reliable healthcare information and be confident that this is what they are getting?

HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: