I also appreciate the discussion summary, and want to add to Chris’ comments on AI. One benefit of AI, especially for OA content, is the ability to translate content into any number of languages (albeit with variable accuracy), as discussed in this forum. I'm sharing this article just published in NEJM AI: "When Protecting Privacy Means Protecting Health: Foreseeable Health Harms of AI Language Translation and Interpretation Technologies for Immigrant Patients." https://ai.nejm.org/doi/full/10.1056/AIp2500960 The article is behind a paywall, but the abstract raises important issues. (The discussion is oriented toward issues for immigrants in the US, but it is relevant for individuals in other countries as well.)
The abstract is provided below. It emphasizes the potential privacy issues raised by using AI translation tools with patients. Of additional relevance for this discussion, however, is that an individual who uses an AI tool to translate medical information could compromise their own medical privacy: that information may be traceable back to them if appropriate privacy safeguards are not in place.
"Artificial intelligence (AI)–enabled language translation tools are making their way into clinical spaces, often justified by their potential to improve patient–clinician communication and the clinical encounter. For patients who are immigrants in the United States, especially those who are undocumented or live with undocumented family members, these technologies may also introduce privacy risks with potentially material health consequences. Context-specific disclosure of personal information that arises during clinical encounters (e.g., household crowding, employment conditions, or immigration history) can be converted into portable, structured data by commercial vendors that reserve broad licenses over inputs and outputs. When combined with other datasets or accessed through opaque data-sharing arrangements, particularly private-sector disclosures or compelled access by government entities, such information risks aiding eviction, job loss, or immigration enforcement actions, each independently linked to adverse physical and mental health outcomes. Existing safeguards are often ill-suited to withstand the growing risks; deidentification is often unreliable — and sometimes impossible — given modern reidentification techniques. Oversight of data subprocessors and government access requests remains opaque. Moreover, existing evaluation frameworks tend to prioritize accuracy over the potential health harms of data misuse. We propose a risk-of-harm lens that treats confidentiality as a preventive clinical intervention for patients. Clinicians should obtain structured, risk-stratified consent; default to qualified human interpreters for sensitive social or legal discussions; and restrict data capture to what is necessary for safe communication. Health systems and policy makers should make procurement contingent on strict limits to onward sharing, require transparency and independent testing for privacy vulnerabilities, and sharply limit any data sharing between private vendors and government entities. Without such measures, AI translation may entrench the very inequities it aims to reduce."
Best wishes,
Margaret
Margaret Winker, MD
eLearning Program Director
Trustee
World Association of Medical Editors
***
wame.org
WAME eLearning Program
@WAME_editors
www.facebook.com/WAMEmembers
HIFA profile: Margaret Winker is Trustee and Past President of the World Association of Medical Editors (WAME) and Director of the WAME eLearning Program. She is based in the US. Professional interests: WAME is a global association of editors of peer-reviewed medical journals who seek to foster cooperation and communication among editors, improve editorial standards, promote professionalism in medical editing through education, self-criticism, and self-regulation, and encourage research on the principles and practice of medical editing. margaretwinker AT gmail.com