AI Ethics in Healthcare: Can I Trust My Doctor’s AI Assistant?

How can we trust AI? How can we be sure that the AI devices used in healthcare have been built to follow, and actually do follow, the same ethical principles that our physicians do?
February 3, 2022
Introduction

Artificial intelligence (AI) is an integral part of many aspects of our personal and professional lives. Sometimes AI is obvious, as when we engage with “Alexa” or “Siri” or interact with customer service chatbots. Other times, it is hidden from sight, such as when Amazon or Netflix recommends something new that matches our interests or when Google Search seems to magically know what we’re searching for. Regardless of the form it takes, AI provides us with many benefits and helps improve our quality of life.

One of the most rapidly expanding applications for AI is in the daily practice of medicine. For example, AI is already being used to help doctors diagnose disease earlier, pick appropriate medications and treatments, and even manage their practice workflow.1 And the use of AI in medicine is only going to continue to grow. But with that increased pervasiveness and increased reliance, how can we trust the AI? How can we be sure that the AI devices used in healthcare have been built to follow, and actually do follow, the same ethical principles that our physicians do? This is absolutely critical: Without ensuring ethical behavior, physicians and patients will never fully trust AI. Without that trust, these powerful devices will never have the opportunity to enable the quality healthcare we know they can.

During my 25 years of clinical practice, I served on many hospital ethics committees, institutional research review boards, and physician peer review boards. During my 12 years as a medical officer at the US Food and Drug Administration, I not only evaluated medical devices for risk and benefit before and during their time on the market, I also co-authored the agency’s first white paper on the regulation of AI in medical devices. I continue to apply that same rigor in my role as chief medical officer at Eko. From this unique and varied perspective, I will focus on and attempt to answer the following questions: What are the generally accepted principles of medical ethics, and how should AI follow them? Where will compliance with ethical principles be straightforward for AI, and where will it be difficult? And when must an ethical human oversee the AI, and when may the AI function unsupervised?

The principles of medical ethics

One can find many, many codes of medical ethics on the internet,2 but I find them overly complex and “jargony” for most purposes. So, to keep things simple, I’ll use a generally accepted, four-point philosophical framework instead. These four principles apply to all aspects of healthcare, and they must also apply to AI:3

  • Autonomy
  • Beneficence
  • Non-maleficence
  • Justice

As I discuss each in turn, I will try to provide examples of how device manufacturers and clinicians can adhere to them through proper product design and use. Also, when I say “AI”, I am referring to AI in healthcare, and specifically to AI in medical devices. (Trying to avoid too many abbreviations!)

Autonomy

Autonomy refers to a patient’s rights with regard to their own body. In other words, the patient always has the final say about their treatment. For AI, autonomy may pose a significant issue, depending largely on the situation in which it is used and the purpose it is meant to serve.

I find it helpful to think about AI in healthcare as falling into one of several categories.4 The first consideration is the situation of AI use, which can be deemed “critical”, “serious”, or “non-serious”. The second consideration is the purpose of AI use, which can be “treat or diagnose”, “drive management”, or “inform management.” By categorizing AI by situation and purpose, it becomes clear that devices used to inform the management of non-serious clinical situations pose far less complicated ethical problems than AI that functions in “critical” situations to “treat or diagnose.”
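To make the categorization concrete, here is a minimal sketch in Python. The category names come from the framework above; the numeric scores and the “scrutiny” thresholds are purely my own illustrative assumptions, not part of any regulation or standard.

```python
from enum import Enum

class Situation(Enum):
    CRITICAL = 3
    SERIOUS = 2
    NON_SERIOUS = 1

class Purpose(Enum):
    TREAT_OR_DIAGNOSE = 3
    DRIVE_MANAGEMENT = 2
    INFORM_MANAGEMENT = 1

def ethical_scrutiny(situation: Situation, purpose: Purpose) -> str:
    """Illustrative ranking: the more critical the situation and the more
    autonomous the purpose, the more ethical scrutiny is warranted."""
    score = situation.value * purpose.value
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# A display that merely informs in a non-serious situation:
print(ethical_scrutiny(Situation.NON_SERIOUS, Purpose.INFORM_MANAGEMENT))  # low
# An AI that treats or diagnoses in a critical situation:
print(ethical_scrutiny(Situation.CRITICAL, Purpose.TREAT_OR_DIAGNOSE))     # high
```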

Let’s consider an example of AI that “informs”. AI that informs does not impose any immediate or near-term action on a patient. It only informs the patient or clinician of an initial interpretation of data (e.g., a blood pressure measurement or a body weight). It does not infringe upon the patient’s autonomy; the human is always between the AI and the treatment or diagnosis and has the choice of using or disregarding the information. The patient or clinician continues to accept the risks and benefits of freely choosing their course of action.

Things become a little more complex when the AI is “driving” clinical management. In this situation, the AI is not merely a reporter but an advisor. Yet “driving” still does not include directly treating or diagnosing, because a human is still required to make the final decision. Autonomy is therefore still preserved, even with AI that drives care. Of course, the risk of device use is higher here than with “informing”, because device errors can cause harm, but the well-informed patient or clinician still functions with complete autonomy.

In both of these cases, the AI does not directly intervene on its own; there is always a patient and/or clinician in the middle who can accept or discard the AI’s output before any action is taken. Consider an AI that continuously monitors blood sugar levels. What if it displayed to the patient, “You must take 20 units of insulin right now,” but did not actually deliver any insulin? In this “driving” management situation, the patient or clinician would weigh that advice and may or may not agree to proceed with giving insulin. And so patient autonomy is preserved.

What about the last case, when the AI actually treats or diagnoses the patient directly? Continuing with our example of the insulin dosing device, in this case the “treating” AI would deliver the insulin itself, independently, without patient or clinician review or any opportunity for intervention. Obviously this is the riskiest clinical situation, and for our discussion it means that the patient has now lost autonomy over the insulin dose. Can autonomy be preserved with AI that treats or diagnoses? How?

In this case, patient autonomy must be ensured at the very beginning of device use, even before it is put into action, and throughout use by providing a “kill” switch. What does this mean? It means that the patient must clearly understand, and indicate that they understand, what the device does, the benefit of its use, and the risk of its use. The patient must also explicitly agree to its use. And it means that the patient must be educated and able to deactivate the device at any time and under any circumstances.
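As a rough sketch of how a manufacturer might wire these two safeguards (documented consent up front, a kill switch throughout) into an autonomously treating device, consider the Python fragment below. The class and method names are hypothetical; a real device would implement these controls with far more rigor.

```python
class AutonomousInsulinPump:
    """Sketch of an AI device that treats directly: it refuses to act
    without documented informed consent, and the patient can deactivate
    it at any time via a kill switch."""

    def __init__(self) -> None:
        self.consented = False
        self.deactivated = False

    def record_informed_consent(self, patient_agrees: bool) -> None:
        # Consent must be explicit and affirmative, obtained before first use.
        self.consented = patient_agrees

    def kill_switch(self) -> None:
        # Always available, under any circumstances, with no conditions.
        self.deactivated = True

    def deliver_insulin(self, units: float) -> None:
        if self.deactivated:
            raise RuntimeError("Device deactivated by patient; no dose delivered.")
        if not self.consented:
            raise RuntimeError("No documented informed consent; no dose delivered.")
        print(f"Delivering {units} units of insulin.")

pump = AutonomousInsulinPump()
pump.record_informed_consent(patient_agrees=True)
pump.deliver_insulin(20)    # proceeds
pump.kill_switch()
# pump.deliver_insulin(20)  # would now raise: the patient retains final say
```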

Beneficence

Beneficence means that medical professionals must do everything they can to provide the best care for their patients. Therefore, the beneficence principle puts the clinician into the role of the patient’s agent. It places responsibility on the clinician to stay current in new medical diagnostic and therapeutic practices, and to share relevant knowledge with their patients so the patients can decide which course of treatment they prefer.

When employing AI, the ability to follow the beneficence principle strongly depends upon whether the AI is “fixed” or “adaptive”. What does that mean? “Fixed” means that the AI is locked and does not change once it becomes available on the market. “Adaptive” means that the AI can change in the wild, on the fly, as it is used and gains experience. 

If the AI is fixed and will not change its function or performance over time, then beneficence must be achieved by ensuring that the clinician understands the AI, including any limitations, and explains it to the patient before it is used. This understanding includes what the AI is doing, how its output is used in treatment and diagnosis, and the risks and benefits to the patient. (Issues regarding trust and explainability for the AI are topics for another article.) Should problems arise with the AI (for example, it is found not to function well in certain situations), or if the AI is superseded by a new version or better device, then it is the clinician’s responsibility to explain the new situation to the patient, discuss potential paths forward such as stopping device use or using the device with a new understanding, and involve the patient in making a decision that is in their best interest.

On the other hand, if the AI is adaptive and changes on its own, then satisfying the beneficence principle can be much more difficult. Medical device regulators grapple with the concept of adaptive AI from the risk-benefit perspective, because the safety and effectiveness of a device may change as it adapts over time. 

One way to satisfy the beneficence principle when using an adaptive AI device may be to require the device manufacturer (and the device itself) to alert the clinician and patient whenever a change in the device takes place. Along with that alert would come a description of the change, in sufficient detail and at the appropriate level of understanding: what the change entails regarding device use, how it alters safety and/or effectiveness, and the new balance of risk and benefit. The device would have to pause its function until the clinician and/or patient digested the alert and indicated in an affirmative and traceable manner whether they want to proceed with use of the device or not.

Of course, this approach may not fit every situation, and might even have undesirable side effects. Consider again our example of the smart insulin pump — ceasing pump function in the face of a device change and then waiting for what may be a prolonged interval without insulin could be quite harmful. So, perhaps in that case the pump would instead continue to function with its existing, “old” AI version until the clinician and patient had time to digest the alert, and decide to transition the device to its “new” AI program.
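Here is one way such an update gate might look in code: a minimal sketch of the behaviors described in the last two paragraphs (the alert, the affirmative and traceable acknowledgment, and the fallback to the existing version while the decision is pending). All names here are hypothetical.

```python
import datetime

class AdaptiveModelGate:
    """Sketch: when an adaptive AI produces a new model version, keep
    serving the existing version until the clinician and/or patient
    affirmatively, and traceably, accept the change."""

    def __init__(self, active_version: str) -> None:
        self.active = active_version
        self.pending = None
        self.audit_log = []   # traceable record of alerts and decisions

    def propose_update(self, new_version: str, change_summary: str) -> None:
        # Alert the clinician and patient with a plain-language summary of
        # what changed and how it alters safety, effectiveness, risk, benefit.
        self.pending = new_version
        self.audit_log.append((datetime.datetime.now(), "alert", change_summary))

    def acknowledge(self, who: str, accept: bool) -> None:
        # Affirmative, traceable decision; until it arrives, the "old" AI keeps running.
        self.audit_log.append((datetime.datetime.now(), who, accept))
        if accept and self.pending is not None:
            self.active = self.pending
            self.pending = None

gate = AdaptiveModelGate("v1.0")
gate.propose_update("v1.1", "Dosing model retrained; see updated labeling.")
print(gate.active)   # still "v1.0" -- insulin delivery is not interrupted
gate.acknowledge("clinician", accept=True)
print(gate.active)   # now "v1.1"
```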

Non-maleficence

Non-maleficence is a paraphrasing of the first principle of the Hippocratic oath, “First, do no harm.”5 For our purposes, the non-maleficence requirement can be satisfactorily met by following the same line of thinking as for beneficence. That is, for a fixed AI device, the clinician and patient would have to understand and agree prior to device deployment that the benefit of device use outweighs the risk. For an adaptive AI device, the clinician and patient would have to understand and agree that a (potential) pause and reevaluation would take place upon each device adaptation, and that the new AI would not be put into effect until an agreement to proceed was made.

Justice

Justice means ensuring that medical decisions are made fairly. It is the most complex of the medical ethics principles, and it can often be the hardest to fulfill.

Problems in achieving justice most often crop up when resources are limited. The classic example is organ transplantation: There are far more patients who need organs than there are organs available. By designing and implementing a uniformly accepted set of criteria for allocating donated organs, through which each patient is evaluated equally and fairly, justice can be achieved. For example, the United Network for Organ Sharing (UNOS) has published a set of criteria by which patients in need of organs are classified.6 Those in the highest, most urgent classes because of their medical condition and suitability would receive a matched donor organ first, whereas those who are less sick would be lower down the waiting list and would not receive an organ until those higher on the list were transplanted.
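As a toy illustration of criteria-based allocation, the fragment below sorts a waiting list by urgency class and then by time waited. To be clear, these fields and this ordering are a drastic simplification of my own; the actual UNOS criteria involve many more factors.

```python
# Hypothetical candidates; a lower urgency_class means a more urgent patient.
candidates = [
    {"name": "A", "urgency_class": 2, "days_waiting": 400},
    {"name": "B", "urgency_class": 1, "days_waiting": 30},
    {"name": "C", "urgency_class": 1, "days_waiting": 200},
]

# Most urgent class first; ties broken by longest time on the list.
queue = sorted(candidates, key=lambda c: (c["urgency_class"], -c["days_waiting"]))
print([c["name"] for c in queue])  # ['C', 'B', 'A']
```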

The justice principle would only rarely pose a problem for AI medical devices. In fact, one of the main advantages of AI medical devices is their ubiquity and low barrier to use, which helps to decrease injustice and inequity.7 In principle, anyone who has an internet connection and a device capable of accessing it should have access to an AI medical device.

Unfortunately, infrastructure still stands in the way of achieving equitable distribution across society. And although it is not fair to place the burden of solving that society-wide problem on medical device manufacturers, it is reasonable to require them to make every effort to reduce the resource demands of their products so that access is fairly distributed. Broadband companies are already expressing interest in the healthcare space,8–10 and medical device manufacturers had better get on the bandwagon!

Finally, although this might not seem relevant to the justice principle, there is a looming problem with AI in medical devices: data integrity and diversity. AI devices are trained on large datasets gathered from many hundreds or thousands of patients, and those datasets may not represent the intended users of the device. For example, suppose the dataset used to train an algorithm is constructed mostly from Caucasian males. In that case, there is at least some likelihood that it will not function as well on African American females. In my mind, AI that performs better on one subpopulation than another would violate the justice principle, and we must be proactive in detecting and correcting such disparities.
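One proactive step is a routine subgroup audit: evaluate the device’s performance separately for each demographic group in a held-out test set and flag material gaps. The sketch below assumes accuracy as the metric and an arbitrary 5-percentage-point disparity threshold; both choices are mine, for illustration only.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Accuracy per demographic subgroup.
    Each record is (subgroup, model_prediction, true_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical held-out test data labeled with the demographics of interest.
records = [
    ("caucasian_male", 1, 1), ("caucasian_male", 0, 0), ("caucasian_male", 1, 1),
    ("african_american_female", 1, 0), ("african_american_female", 0, 0),
]
scores = subgroup_accuracy(records)
print(scores)  # {'caucasian_male': 1.0, 'african_american_female': 0.5}

# Flag the device if performance differs materially between subgroups.
if max(scores.values()) - min(scores.values()) > 0.05:
    print("Warning: subgroup performance disparity -- potential justice violation.")
```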

Summary

Arguably the most important part of the clinician-patient relationship is trust. And as part of that trust, patients can and should take it for granted that their clinician is acting ethically and following the principles outlined here. When new technologies come along that might benefit their patients, clinicians must evaluate whether and how to use them, and how to adhere to these ethical principles while doing so.

Understandably, there are concerns on the part of all the stakeholders – AI medical device manufacturers, medical device regulators, clinicians, and patients – about the ethics associated with the use of these devices. However, as most of us already know from direct experience or from intuition, AI can bring significant benefits to all of us, and so it is worth confronting and overcoming any ethical challenges AI devices might pose. I hope I have explained above how the four principles of medical ethics, applied for many years before there was AI, can help us reach the lofty goal of ensuring autonomy, beneficence, non-maleficence, and justice.