Generative artificial intelligence and medical disinformation | The BMJ
https://www.bmj.com/content/384/bmj.q579
"In their study, Menz and colleagues focused on the potential of generative AI’s large language models (LLMs) technology to produce high quality, persuasive disinformation that can have a profound and dangerous impact on health decisions among a targeted audience. The authors reviewed the capabilities of the most prominent LLMs/generative AI applications to generate disinformation. They described techniques that enable the creation of highly realistic yet false and misleading content with the potential to circumvent the apps’ built-in safeguards (using fictionalisation, role playing, and characterisation techniques)."
R
HIFA profile: Richard Fitton is a retired family doctor (GP). Professional interests: health literacy; patient partnership of trust and implementation of healthcare with professionals; family and public involvement in the prevention of modern lifestyle diseases; patients using access to professional records to overcome confidentiality barriers to care; patients as part of the policing of the use of their patient data.
Email address: richardpeterfitton7 AT gmail.com