Dear Jules and all,
Re: https://www.hifa.org/dgroups-rss/chat-gpt-13-does-chatgpt-provide-reliab...
Thanks for your message. I agree this Nature paper is a very good 'overview of the potentials and limitations of LLMs in clinical practice, medical research and medical education'.
https://www.nature.com/articles/s43856-023-00370-1
Most of the 12 applications in Figure 1 are relevant to HIFA.
One of these applications is 'patient empowerment'. The accompanying text does not describe what I consider to be the most important aspect of patient empowerment: namely, the ability of the patient (and family) to get the reliable healthcare information they need, when they need it, in a language and format they can understand. I predict this will be transformational.
It won't be long before anyone with an internet connection can interact with a chatbot and obtain reliable information on what to do if, say, their child has diarrhoea (helping to eradicate the widespread and dangerous misconception that fluids should be withheld from such children).
I am still waiting to learn of gross examples of healthcare misinformation generated by chatbots such as ChatGPT.
I remain sanguine (notwithstanding the apocalyptic predictions of AI 'taking over the world' which are a separate issue).
Best wishes, Neil
HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: neil@hifa.org