ChatGPT (5) Does ChatGPT provide reliable healthcare information?

4 April, 2025

Re: https://www.hifa.org/dgroups-rss/chat-gpt-3

Dear Chris,

Thanks for your message giving two examples of incorrect responses from chatbots like ChatGPT. In one case, different bots gave wildly different answers to the question “How many presidents have pardoned their relatives?”. In the second, a Norwegian man filed a complaint after ChatGPT falsely told him he had killed two of his sons.

You conclude: "These and many other instances of chatbot hallucinations... should give pause to anyone relying on the bots for health advice."

Do you (or other HIFA members) have any examples of incorrect responses to requests for healthcare information?

Also, are you aware of any research comparing the reliability of healthcare information obtained from chatbots with that obtained through other ways of searching online, such as Google?

As I said in my previous message, I would expect responses, especially to basic questions, to be accurate: not 100% accurate, but more reliable than, say, a Google search, which will typically throw up misinformation alongside reliable information without any way to differentiate between them.

These AI tools are in their infancy and can be expected to become increasingly accurate over time. I anticipate that they will be transformational, both for obtaining reliable healthcare information and for protecting people from misinformation.

Best wishes, Neil

HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: neil@hifa.org