I am wondering if you are using the terms "AI" and "access to knowledge" interchangeably. When you concluded that "AI WILL Save Lives", did you mean that "having access to knowledge will save lives", or that it "can save lives" - provided the knowledge is correctly acted upon? [*see note below]
In Malawi, we developed an app that leveraged LLMs (GPT) to give community health workers access to knowledge. The app let users interrogate medical and health guidelines and knowledge about diseases (case definitions) through questions and answers in natural language (a rough sketch of this kind of setup follows the list below). In feedback sessions, community health workers made the following observations:
- the tool was useful in giving the right information: the LLM could retrieve useful and relevant information in response to the questions asked
- the human agency in weighing the quality of the answer and deciding how to act was key
- answers were too long and not always suited to the context
- the information coming from LLMs should be included in some kind of “organised training” and not remain at the level of questions and answers so that the user can build true competencies over time.
- LLM use can cause “dependency” and hence lead to a deterioration in users' critical thinking or learning
- information in local languages is important - and not available.
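For anyone curious how such an app can be put together, here is a minimal sketch in Python. It is not our production code - the guideline snippets, model name and prompt wording are illustrative assumptions - but it shows the general retrieval-augmented pattern: find the guideline passage most similar to the question, then ask the LLM to answer from that passage.

from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A toy "knowledge base" of guideline passages (illustrative text only).
passages = [
    "Malaria case definition: fever plus a positive rapid diagnostic test ...",
    "Pneumonia in children: cough or difficulty breathing with fast breathing ...",
    "Referral criteria: any danger sign requires immediate referral ...",
]

def answer(question: str) -> str:
    # 1. Retrieve: rank guideline passages by similarity to the question.
    vectoriser = TfidfVectorizer().fit(passages + [question])
    scores = cosine_similarity(vectoriser.transform([question]),
                               vectoriser.transform(passages))[0]
    best = passages[scores.argmax()]

    # 2. Generate: ask the LLM to answer from the retrieved passage,
    #    which keeps the answer anchored to the guideline text.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer briefly, using only the guideline excerpt provided."},
            {"role": "user",
             "content": f"Guideline excerpt:\n{best}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(answer("When should a child with cough be referred?"))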
(We also found that people need to value learning and make a conscious decision to integrate new knowledge into their own knowledge schema, and to use it. Then comes the issue of feedback: how do LLMs incorporate knowledge from the user - for example, when the user finds out that a piece of advice was wrong? GPT can only permanently "know" something new if it undergoes a new round of fine-tuning or training with the added data, and that new data needs to be curated - usually by humans. In most cases, human feedback is treated as a prompt and is used temporarily, within a single chat session only.)
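To make that last point concrete, here is a minimal sketch (Python again, using the OpenAI chat API; the model name and messages are hypothetical). A user's correction lives only in the message list sent with each request; a fresh session starts without it.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Session 1: the user corrects the model. The correction is just another
# message in the conversation we send with every request.
session_1 = [
    {"role": "user", "content": "What is the first-line treatment for X?"},
    {"role": "assistant", "content": "Drug A."},  # model's (wrong) answer
    {"role": "user", "content": "Our national guideline says Drug B, not Drug A."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=session_1)
# Within this session the model can use the correction, because it is
# literally part of the prompt it receives.

# Session 2: a brand-new message list. The earlier correction is gone;
# the model falls back on whatever its training data contained.
session_2 = [
    {"role": "user", "content": "What is the first-line treatment for X?"},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=session_2)
# Making the correction permanent would require curating it into a
# dataset and running a new round of fine-tuning - it cannot be
# "saved" through chat alone.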
LLMs are good at text retrieval and summarisation tasks. We know that the algorithms behind these tasks rely on pattern matching, or on filling in text that is "statistically a good fit", drawn from a very large body of digital text.
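A toy illustration of "statistically a good fit": the bigram model below is a drastic simplification of an LLM (which learns a neural next-token distribution over a huge corpus), but the principle - continue the text by sampling what most often followed - is the same.

import random
from collections import Counter, defaultdict

# Toy corpus standing in for "a very large body of digital text".
corpus = "fever with rash suggests measles . fever with chills suggests malaria .".split()

# Count which word follows which - a bigram model, a drastically
# simplified stand-in for an LLM's learned next-token distribution.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(word: str, steps: int = 4) -> list[str]:
    out = [word]
    for _ in range(steps):
        counts = next_words[out[-1]]
        if not counts:
            break
        # Sample the next word in proportion to how often it followed
        # the current one - "statistically a good fit", nothing more.
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return out

print(" ".join(continue_text("fever")))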
But are LLMs actually capable of "automated reasoning"? I think the current consensus is that they cannot "reason". Until they can, there is always a significant danger that LLMs will produce gibberish.
Another interesting point is that LLMs' answers are heavily influenced by prompts. Many such prompts have been developed, tested and folded into a model's training (prompts are generated from expected user needs or behaviour). LLMs seem stable because people ask predictable things; when people step outside those patterns, the instability of LLMs becomes more visible.
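A simple, admittedly unrigorous way to probe this is to send paraphrases of the same question and compare the answers - agreement on common phrasings and divergence on unusual ones is exactly the pattern described above. (The wording below is hypothetical; temperature is set to 0 so that differences come from the prompt rather than from sampling.)

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The same underlying question, phrased a "predictable" way and an
# unusual way. (Illustrative wording; any domain question would do.)
paraphrases = [
    "What are the danger signs of pneumonia in a child under five?",
    "Child, five years, breathing - which observed presentations oblige alarm?",
]

for prompt in paraphrases:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # remove sampling randomness; differences now come from the prompt
    )
    print(prompt, "->", reply.choices[0].message.content[:120])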
(Here are two papers if anyone is interested in some of the work I mentioned: https://www.researchgate.net/publication/383340925_Self-Directed_Learnin... and https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5338803)
Apologies for the long email, but I want to make a final comment on AI being able to take a comprehensive patient history. Here are a few of the challenges that I see:
- language (patients may not know how to express the specific symptoms they experience - health and medical language is something one learns through experience)
- sometimes a guardian speaks on behalf of the patient (e.g. in the case of a minor, or of someone seriously ill and unable to communicate well)
- signs and symptoms are different things (signs are observed by the clinician; symptoms are reported by the patient), but both are needed for the history
- patients may show signs that an experienced doctor can recognise but that the patient may not notice, or whose connection to the condition the patient may not understand
- patients may be fearful of disclosing some information - here careful probing and good rapport are important, and LLMs are still not able to probe, or carry out a sequence of questions, without reaching a cycle or a dead end. Withholding information is a common situation in Malawi: sometimes patients fear that disclosing certain information may result in treatment or surgery being delayed or changed.
- many conditions involve very similar symptoms (in varying degrees, combinations and significance)
Kind regards
Amelia
HIFA profile: Amelia Taylor is a Lecturer in AI at Malawi University of Business and Applied Sciences. Interests: Malawi AI, NLP, Health Informatics, Data Visualisation. ataylor AT mubas.ac.mw
[*Note from HIFA moderator (NeilPW): Thanks Amelia, yes I am confident that AI will save many lives. I believe it will be a game-changer with regards to the availability and use of reliable healthcare information, at every level of care from the home through the whole health system. I believe it will especially save lives at the level of the home and primary health care.]