News

Can we replace human empathy in healthcare?

June 11, 2021
By: School of Public Health

In a May 2021 paper published in the journal AI & Society, clinical empathy expert and Berkeley Public Health bioethics professor Jodi Halpern, MD, PhD, posits that artificial intelligence (AI) cannot replace human empathy in the healthcare setting and that empathy is key to the successful treatment of patients.

The use of AI has shown promising results in clinical medicine, including in research and treatment. However, in her paper “In principle obstacles for empathic AI: why we can’t replace human empathy in healthcare,” Halpern—whose 2001 book From Detached Concern to Empathy: Humanizing Medical Practice was called “seminal” by JAMA—outlines the in-principle obstacles to applying AI in areas of clinical medicine and care where empathy is important. She concludes that these obstacles cannot be overcome by any of the technical and theoretical approaches to AI currently in use.

Professor Jodi Halpern, MD, PhD

Clinical empathy “involves attuning to another person’s emotional meanings and trying to imagine the situation from their perspective,” says Halpern. “The attitude necessary for this is genuine curiosity to hear more about another’s perspective based on recognizing that you don’t know another person’s world or what matters most to them.”

This type of human connection is vital to clinical care, Halpern says, and improves outcomes in three ways: “First, outcomes depend on correct diagnosis. Getting a good history is essential for correct diagnosis,” Halpern says. “Videotaped doctor-patient interactions, in studies replicated internationally, show that patients disclose much more information when doctors are demonstrably empathic.

“Second, outcomes depend on patients adhering to treatment. More than half of prescriptions are not followed, for a variety of reasons. The biggest predictor of adherence to treatment is trust in your physician, and empathic communication by the physician is a big predictor of trust.

“Third, when there is a serious diagnosis, patients cope better with the ‘bad news’ when it is delivered in an empathic context.”

In her paper, Halpern concludes that empathic AI is either impossible or unethical: impossible because AI lacks genuine empathy, and unethical because using AI in these situations can erode the expectation that human beings in distress deserve real, human empathy.

Halpern says she’d like to see AI used “for all the aspects of medicine it can contribute to, but not to use it to replace primary doctor-patient relationships as sources of therapeutic empathy.”

Read the full paper at SpringerLink.