Viewing this debate, the SDGs, and mental health/illness as information spaces can provide some insights.
'Learned helplessness' is an established phenomenon in psychological studies and history.
Adam Grant, in 'HOW TO UNLEARN HELPLESSNESS' (Life & Arts, FT Weekend, 29-30 March 2025, p.2;
https://adamgrant.net/book/think-again/ ), writes:
'Many people believe that in a world of echo chambers and misinformation, it's not possible to talk anyone out of their convictions. They're wrong. Even a bot can do it.
In recent experiments psychologists recruited thousands of Americans who believed in unfounded conspiracy theories ranging from the moon landing being faked to 9/11 being an inside job. After the participants described their views, the psychologists randomly assigned some of them to discuss these views with an AI chatbot that was prompted to refute them.
After less than 10 minutes of conversation with a ChatGPT-based LLM, more than a quarter of participants felt uncertain about their
views - and those doubts persisted two months later. Short exchanges were even enough to move the opinions of strong believers.
Why? It's not just that ChatGPT has access to infinite knowledge. It turns out that AI chatbots are better listeners than the average human. The researchers found that chatbots were persuasive because they presented information that directly challenged
the reasons behind people's beliefs.'
As per the discussion, it is also helpful to consider AI as a new / emerging 'information space': a literal frontier, a new territory with unknowns that businesses wish to conquer and that the public must learn to trust - even as users. What knowledge and understanding is needed of what is happening in the 'box'?
Once again (within Hodges' model) we can potentially draw in (all) the literacies:
information
IT
media
culture - spiritual
financial
health
civic - national
This also raises the question: to what extent are existing imbalances - inequality, equity ... parity of esteem across the mind-body dichotomy - perpetuated, or even increased?
It is not surprising that there is an imbalance in the distribution of datasets on the Hugging Face hub (a rough sketch of how such counts might be queried follows the lists below):
https://huggingface.co/docs/hub/en/datasets
e.g. INTRA- INTERPERSONAL [MIND]
BioBert 1.1
UCI ML Drug Review dataset - Subset
LOST
arXiv:2306.05596 [cs.CL]
Mindwell
https://doi.org/10.1007/978-981-97-3601-0_34
SCIENCES [BODY]
eScience kidney
factomics (PubMed titles)
BioBert 1.1
SAVSNET sample (VetCN) - important in terms of zoonotic diseases and planetary health.
Image datasets*:
Lung Cancer IQ-OTH/NCCD
Surface Crack Detection
Oxford-IIIT Pet
BreastMNIST
DermaMNIST
PneumoniaMNIST
BloodMNIST
RetinaMNIST
There is also a lot of overlap, as expected.
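
As a rough, illustrative sketch: one way to get an impression of this imbalance is to query the Hugging Face Hub for a few 'mind' versus 'body' search terms with the huggingface_hub Python library. The terms and the result cap below are assumptions chosen only for illustration, and keyword search is at best a crude proxy for how the Hub's datasets actually divide between mental and physical health.

# Requires: pip install huggingface_hub (and network access to the Hub).
from huggingface_hub import list_datasets

# Illustrative search terms only - not a validated mind/body taxonomy.
MIND_TERMS = ["mental health", "depression", "counselling"]
BODY_TERMS = ["lung cancer", "pneumonia", "kidney"]

def count_hits(term: str, cap: int = 500) -> int:
    """Count Hub datasets matching a search term, capped to keep the query small."""
    return sum(1 for _ in list_datasets(search=term, limit=cap))

for label, terms in (("MIND", MIND_TERMS), ("BODY", BODY_TERMS)):
    for term in terms:
        print(f"{label:4} | {term:15} | {count_hits(term)} dataset(s), capped at 500")

Any counts returned this way depend heavily on the terms chosen and on how datasets are tagged, so they can only indicate the direction of the imbalance, not measure it.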
Acknowledgement: as mentioned in late 2024, the datasets were discussed at:
https://www.bcs-sgai.org/health2024/
Regards
Peter
Peter Jones
Community Mental Health Nurse, Part-time Tutor and Researcher
Blogging at "Welcome to the QUAD"
http://hodges-model.blogspot.com/
http://twitter.com/h2cm
HIFA profile: Peter Jones is a Community Mental Health Nurse with the NHS in NW England and a part-time tutor at Bolton University. Peter champions a conceptual framework - Hodges' model - that can be used to facilitate personal and group reflection and holistic / integrated care. A bibliography is provided at the blog 'Welcome to the QUAD' (http://hodges-model.blogspot.com). h2cmuk AT yahoo.co.uk