Bioethics Professor: It Is About Finding The Middle Ground
No way should we let machines and data take over from humans in health care. But we should let machines assist us in the hard work, where they are better than humans. The overriding question is: how much control and authority do we delegate to machines? Interview with Ezio Di Nucci, professor of bioethics at the Section of Health Services Research, University of Copenhagen.
What are the most crucial ethical and political questions in health care and Machine Learning (ML)?
“One of the most important questions of this century may be how much we should delegate to technology, and in particular to complex technologies such as machine learning algorithms,” says Ezio Di Nucci.
“Evidence-based advice and decision-making are two very different tasks that we can choose to delegate. Respect for human autonomy requires that patients be involved in the latter, but not the former; delegating the decision itself would constitute an irresponsible outsourcing of responsibility.”
In other words, we can use machine learning to assist doctors, not to take over doctors’ jobs?
“Exactly. The best way to use machine learning in health care is at the very beginning of the encounter with the patient. A machine learning tool will not miss anything when looking through the possible diagnoses and solutions; it will probably find too much. Then the human doctor should step in and evaluate. We need human doctors who are able to evaluate. We must not replace human doctors with machines,” he says.
Some researchers say that AI/ML systems that recommend treatment options present a potential threat to shared decision-making, because the individual patient’s values do not drive the ranking of treatment options. What is your response to that?
“Let’s take IBM’s Watson. It is in fact true that we must take personal values and wishes into account, but the point is that Watson, with the computational possibilities opened up by its machine learning algorithms, is particularly well placed to do exactly that, and certainly better placed than overworked oncologists anyway.”
“However, this leads us to a surprising conclusion: the real worry might turn out to be that including the patient’s point of view and values in Watson’s calculations might compromise its advice by making it opaque and no longer purely medical. And then we really could not responsibly base shared decision-making on it.”
So Watson should only give purely medical advice and not include the patient’s values?
“Yes. It is important to distinguish between delegating to Watson the task of advising us, for example by providing evidence that would otherwise not have been available within a reasonable timeframe, and delegating the decision-making task to Watson.”
If the research phase is given to the machine and the decision-making to the doctor, everything is fine?
“There are other ethical and political issues at stake. I am worried about the way IBM markets Watson – as if oncology (the branch of medicine that deals with the prevention, diagnosis, and treatment of cancer) is practised the same way in Denmark as in New York. It is not necessarily. IBM does the same in third world countries, and its software is super expensive. They come with this huge promise, and the poor countries cannot afford it, so they may have to close training programs for younger doctors. If they don’t have the resources to evaluate what Watson suggests, that is a serious problem.”
We risk being medically colonized by big tech and big pharma
So, IBM and other big tech companies are not only big tech but also big pharma?
“Yes. The problem is that 1) IBM is sitting on a lot of data, and 2) IBM will sell its software at such a high price that poor countries will have a hard time keeping the jobs and expertise of human doctors. We risk being medically colonized by big tech and big pharma, and getting what you could call tech unemployment on top of that.”
“If IBM gave away Watson for free in, for example, Bangladesh, that would be good business ethics. But Bangladesh would probably still replace human doctors with software rather than keep local human expertise.”
What is the solution?
“We should not think of ML and AI as a replacement for human doctors but as advisers to them. However, the more we use them, the more authority we give to the data. So we must carefully analyse what kind of outcome we want. We should not be afraid of letting the machines do what they do best, as long as we keep human control – meaningful human control, or MHC, as it is called.”
“An example: the Chinese doctor He Jiankui, who claimed he could modify our genes, made everybody say that it was horrible. That it is a dangerous, slippery slope. The difficult task here is to analyse this properly and find where the benefits are. Because there are benefits. It is about finding a middle ground.”
We should not be afraid of letting the machines do what they do best, as long as we keep human control, meaningful human control
What do you mean by the title of your upcoming book, The Control Paradox?
“Feeding more data to machines makes them more intelligent and gives us better outcomes. But it also means loss of authority. We can cope with the doctors’ loss of authority. What we cannot cope with is opaque, non-transparent algorithms. We must be able to reconstruct every step in a computer algorithm and to explain it. Otherwise we lose not only authority but also meaningful human control. We should never cross that line as a community – we need to always be able to recognize and explain every step of the algorithm.”
The Control Paradox, as far as I understand, means that there is both a huge promise and a hugely scary side to new tech. Can you give some examples?
“Take autonomous technologies such as killer robots and self-driving cars. They both promise more and better control over the task at hand – killing, driving – while at the same time threatening the very thing they were supposed to improve on, namely control. And with Watson or similar technologies we aim for more precise diagnoses and therapeutic solutions, while at the same time fearing that Watson will end up deciding – over both doctors and patients – who lives and who dies. That is the control paradox.”
You say we can avoid the paradox if control is properly delegated?
“Again, we should consider which tasks should be delegated to whom, and under which conditions. We can delegate some control, but we should not delegate responsibility. Machines are not responsible. So we should not underestimate the risk of giving up too much oversight.”