Seeing Patients Through Medical AI

C. Ben Mitchell, Ph.D.

Distinguished Fellow

The Tennessee Center for Bioethics & Culture

Artificial intelligence (AI) is being integrated into the practice of medicine by leaps and bounds. Fields such as radiology, telehealth, and emergency medicine are increasingly using AI in diagnostics and treatment. Doubtless, AI will eventually enhance patient care, prognostics, and clinical practice generally. But at what cost to the physician-patient relationship and trust?

Keeping the patient in view as a whole person is already a challenge for contemporary medicine. Patients are often objectified by their body parts, disease, or location. “That’s the ovarian cancer in Room 323,” remarks the attending physician to the medical students during rounds. No, that’s Crystal, the 42-year-old mother of three, who is hoping against hope to see her eldest son graduate from college, her daughter get married, and her youngest child start middle school. And that’s her husband, Ray, who is sleeping in that uncomfortable position in a torturous hospital chair.

In his most recent volume, psychiatrist Abraham Nussbaum, MD, observes that “For more than a century, the transformation for physicians like me has been about knowing the body, not the person, of the people we meet as patients.” Seeing patients as persons is challenging. Seeing them through the lens of technologies like AI layers on additional challenges.

Bias

According to an essay published in The Hospitalist, a publication of the Society of Hospital Medicine, AI has been shown to have embedded racial and ethnic biases. Because AI systems are trained on large swaths of data scraped from a huge number of sources, any biases found in those sources are likely to be carried into the algorithms that generate new protocols and methodologies. “Implicit issues such as racial profiling, gender disparities, etc., have often permeated the healthcare community. Incorporating this data may lead to the AI system exaggerating these biases and worsening medical outcomes,” the authors contend.
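The mechanism is easy to demonstrate. The following sketch is not from the essay above; it uses entirely synthetic, hypothetical data (and assumes the scikit-learn library) to show how a model trained on records in which one group was historically under-treated at the same level of clinical need will faithfully learn and reproduce that disparity.

```python
# Minimal sketch of bias propagation: all data below is synthetic and
# hypothetical, constructed only to illustrate the mechanism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(size=n)        # true clinical need
group = rng.integers(0, 2, size=n)   # a sensitive attribute (0 or 1)

# Historical labels encode bias: at identical severity, group 1 was
# treated less often. The label records past practice, not need.
logit = 1.5 * severity - 1.0 * group
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, treated)

# At the same severity (0.0), the model recommends treatment less often
# for group 1, reproducing the disparity baked into its training data.
same_need = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_need)[:, 1])  # roughly [0.50, 0.27]
```

Notice that no single line of the algorithm is biased; the bias enters entirely through the historical labels, which is why auditing the training data matters as much as auditing the model itself.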

Privacy

In many doctors’ offices a third party has been introduced (though sometimes without an introduction): the scribe. He or she records digital notes as the physician and patient interact in the examining room. AI has been used in place of the scribe, recording a verbatim transcript of the physician-patient conversation and generating case notes. We know, however, that ChatGPT and other AI platforms produce so-called hallucinations, generating misinformation that has no basis in fact. In a well-publicized example from the legal field, ChatGPT fabricated citations to court decisions that did not exist. It just made them up! Very careful review of the notes from an office visit will therefore be required.

To add to the challenge, all of these data are vulnerable to security breaches. Hospital systems all over the country have been disrupted by international hackers who will use those data nefariously if they can.

Distrust

Because these and other faults of AI are increasingly in the news, and because patients are wary of most new technologies, patients are likely to distrust AI-generated diagnoses, prognoses, and treatment modalities. And who can blame them? Trust is already at risk when physicians spend more time with their iPads or other digital devices during an office visit than they do actually examining the patient. And because AI is mindless and emotionless, it cannot read facial expressions, voice inflection, and the other cues that reveal how a patient is experiencing illness and dis-ease.

In the interest of informed consent and patient autonomy, some are suggesting that patients should be educated about the benefits and liabilities of AI tools and be able to opt out before AI is used in their diagnosis and treatment. Otherwise, patients will lose trust in their physicians. And we all know that trust is difficult to gain and very easy to lose.

As Medical College of Wisconsin bioethicists Fabrice Jotterand and Clara Bosco have written, “Keeping the ‘human in the loop’ is a quintessential dimension to clinical practice.” Medical AI may be useful in some ways to augment a physician’s toolkit, but it cannot replace the person-to-person choreography of care—the treatment tango—that is at the heart of the physician-patient relationship.