Many thanks to Augustine Veliath for your leadership in promoting ethical AI.
I’m interested in exploring how we can create a future where every person is able to differentiate reliable information from misinformation. This can be done in at least two ways.
1. The first method is to flag reliable information. This approach was pioneered by Edith Certain and the Health on the Net Foundation in Geneva. They developed a robust approach to awarding websites the 'HoN code' if they followed certain principles in content production. Unfortunately, HoN had to close in 2022 due to lack of funding - symptomatic of the lack of awareness and support for reliable healthcare information among funders. It would be valuable to hear more about what happened with HoN - can anyone help?
In recent years the Patient Information Forum (UK) has taken a similar approach with the PIF TICK: https://pifonline.org.uk/pif-tick/
One of the biggest challenges for such initiatives is to make themselves known nationally and globally as 'household' names. They can draw inspiration from the success of the Fairtrade Mark, which informs consumers about ethical trade practices in fruit, vegetables, coffee and many other products: 'The Fairtrade Mark is the most globally recognised ethical label'.
It is encouraging to see the ongoing work of Caroline De Brun in our current HIFA thread 'Please send details of international patient information quality systems'. Identifying approaches similar to PIF TICK (or with similar objectives) is a necessary first step towards international collaboration to globalise these efforts.
2. The second method would be a real-time check of any piece of text against an AI tool such as ChatGPT - the AI tool would automatically give a score out of 10 reflecting *its* assessment of the reliability of the information, including which aspects it disagrees with. The challenge here is partly technical, but we can envisage this being technically possible - and available - within the next 1-2 years. The bigger challenge is trust in AI. There will need to be widespread recognition among national and global populations that any given AI tool - such as ChatGPT - is indeed highly effective and accurate as a tool to help people differentiate reliable information from misinformation. Such effectiveness first needs to be demonstrated beyond doubt, and I anticipate this will soon be achieved.
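To make the second method concrete, here is a minimal Python sketch of how such a checker might structure the model's verdict. It assumes (purely for illustration - no real AI service or API is used) that the model is prompted to reply in a fixed format: a line 'Score: N/10' followed by bulleted aspects it disagrees with; the parsing step turns that free text into a structured result an app could display.

```python
import re
from dataclasses import dataclass

@dataclass
class ReliabilityCheck:
    score: int               # 0-10 reliability score assigned by the model
    disagreements: list      # aspects of the text the model disputes

def parse_model_reply(reply: str) -> ReliabilityCheck:
    """Parse an (assumed) model reply of the form:
    'Score: N/10' followed by '- ' bullet lines of disagreements."""
    match = re.search(r"Score:\s*(\d+)\s*/\s*10", reply)
    score = int(match.group(1)) if match else 0
    disagreements = [line[2:].strip()
                     for line in reply.splitlines()
                     if line.startswith("- ")]
    return ReliabilityCheck(score, disagreements)

# Stand-in for a real model reply (hypothetical example content):
example_reply = (
    "Score: 6/10\n"
    "- The claimed dosage exceeds standard guidance.\n"
    "- No source is given for the mortality figure.\n"
)
check = parse_model_reply(example_reply)
print(check.score)               # 6
print(len(check.disagreements))  # 2
```

The real work - deciding the score - would of course sit inside the AI model itself; the sketch only shows the thin wrapper a real-time checker would need around it.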
HIFA profile: Neil Pakenham-Walsh is coordinator of HIFA (Healthcare Information For All), a global health community that brings all stakeholders together around the shared goal of universal access to reliable healthcare information. HIFA has 20,000 members in 180 countries, interacting in four languages and representing all parts of the global evidence ecosystem. HIFA is administered by Global Healthcare Information Network, a UK-based nonprofit in official relations with the World Health Organization. Email: neil@hifa.org