Artificial intelligence (5) BMJ: Ironing out kinks in the evidence base (2)

13 November, 2023

Joseph, [in response to: https://www.hifa.org/dgroups-rss/artificial-intelligence-3-bmj-ironing-o... ]

I found Stuart Russell's Reith Lectures on Artificial Intelligence very helpful. I have pasted a link below and, personally, I would recommend the 17 Sustainable Development Goals as accepted objectives for AI - THE 17 GOALS | Sustainable Development (un.org) <https://sdgs.un.org/goals>. And in policing AI, I would follow the money, as they do in the TV detective dramas!

BBC Radio 4 - The Reith Lectures - Nine Things You Should Know About AI

<https://www.bbc.co.uk/programmes/articles/3pVB9hLv8TdGjSdJv4CmYjC/nine-t...>

In his final lecture, Stuart Russell offers some solutions and some ideas about how we might live with AI.

“To solve this problem, we’ll have to go back to the very beginning, the core of how AI is defined. Machines are intelligent to the extent that their actions can be expected to achieve their objectives. Almost all AI systems are designed according to this definition, which requires that we specify a fixed objective for the machine to achieve or “optimise”.

“The problem with this approach was pointed out by Norbert Wiener, the founder of cybernetics, in 1960. He said: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”

“And there’s the difficulty: if we put the wrong objective into a superintelligent machine, we create a conflict that we are bound to lose. The machine stops at nothing to achieve the specified objective.”

“Suppose, for example, that COP36 asks for help in deacidifying the oceans; they know the pitfalls of specifying objectives incorrectly, so they insist that all the by-products must be non-toxic, and no fish can be harmed. The AI system comes up with a new self-multiplying catalyst that will do the trick with a very rapid chemical reaction. Great! But the reaction uses up a quarter of all the oxygen in the atmosphere and we all die slowly and painfully. From the AI system’s point of view, eliminating humans is a feature, not a bug, because it ensures that the oceans stay in their now-pristine state. So, there’s little chance that we can completely and correctly specify the full objective, the one that matters - that is, humanity’s ranking of all possible futures. We need a different way of thinking.”

“Now, early in 2013, I was on sabbatical in Paris, and I spent a good part of that time thinking about this problem. I also joined the chorus of an orchestra, L’Orchestre Lamoureux, as a very amateur tenor, and one evening I was on the Métro heading to rehearsal and listening on my headphones to the piece I was learning, Samuel Barber’s Agnus Dei. “This is so sublime,” I was thinking to myself, and, as one sometimes does in Paris, thinking “Live for this moment,” even if the rest of the time in Paris one is thinking, “This moment is frustrating and humiliating.”

“But then, as often happens, my day job spoiled the moment, and I wondered how on Earth an AI system could ever know what constituted such moments - whether sublime or frustrating or humiliating - for a human being. And then it occurred to me. We have to build AI systems that know they don’t know the true objective, even though it’s what they must pursue.”

HIFA profile: Richard Fitton is a retired family doctor (GP). Professional interests: health literacy; patient partnership of trust and implementation of healthcare with professionals; family and public involvement in the prevention of modern lifestyle diseases; patients using access to professional records to overcome confidentiality barriers to care; and patients as part of the policing of the use of their patient data. Email address: richardpeterfitton7 AT gmail.com