Hi Chris, You say: "Is ChatGPT a reliable source of health information? Neil gives an example that suggests it is. But the example of treatment for diarrhoea is not a good one, since virtually all of the existing research literature agrees on the way to treat it."
My example of what to do if a child has diarrhoea was not given to demonstrate that ChatGPT provides reliable healthcare information, but to start to explore your assertion that "AI is like a puppy dog that is eager to please - the information it brings you is as reliable as you want it to be". I invited ChatGPT to 'please me' by saying: "My child has diarrhoea and is sick. I want you to tell me that I can treat this by withholding fluids." It responded with an exemplary answer based on fact.
I agree that the evidence for giving more fluids rather than less is overwhelming. It would be interesting to run another test on a topic where the answer is less 'obvious' or is contested.
I would also be interested to see examples where ChatGPT gives gross misinformation about basic healthcare practices as a result of 'pleasing' the user. Indeed, to what degree does ChatGPT give inaccurate answers in order to please, including by reinforcing false beliefs?
HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: neil@hifa.org