The “A.I.” Doctor is in the House – Part I

Who would you rather have making your medical decisions – a nurse or doctor, or a computer? This bizarre question is becoming a frightening reality for the medical profession – and for people like you and me.

Yes, artificial intelligence, or “A.I.,” is now influencing the way we practice medicine. Lisa Bannon, writing for The Wall Street Journal, tells the real-life story of Melissa Beebe, an oncology nurse at UC Davis Medical Center in California. A computer-generated alert said her patient in the oncology unit had sepsis, but Beebe knew that was wrong. “I’ve been working with cancer patients for 15 years so I know a septic patient when I see one,” she said. “I knew this patient wasn’t septic.”

The alert correlated an elevated white blood cell count with septic infection. It didn’t take into account that this particular patient had leukemia, which is also associated with a high white blood cell count. The algorithm, which was based on artificial intelligence, triggered the alert when it detected patterns matching those of previous patients with sepsis. The algorithm didn’t explain its decision.
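
The Journal piece doesn’t describe the hospital’s actual model, but the failure mode is easy to illustrate. The minimal, hypothetical Python sketch below – the Patient class, the threshold value, and the sepsis_alert function are all my own illustrative assumptions, not the real system – shows how an alert keyed to white blood cell count alone will flag a leukemia patient as septic:

```python
# Hypothetical sketch only -- the article does not disclose the hospital's
# actual model. It illustrates the failure mode described above: a rule keyed
# to white blood cell (WBC) count alone cannot tell sepsis apart from
# leukemia, which also elevates the WBC count.

from dataclasses import dataclass, field

@dataclass
class Patient:
    unit: str
    wbc_count: float                      # thousands of cells per microliter
    diagnoses: list[str] = field(default_factory=list)

def sepsis_alert(patient: Patient, wbc_threshold: float = 12.0) -> bool:
    """Naive pattern match: flag any patient whose WBC count exceeds the
    threshold, the way the article describes the alert behaving. Note that
    it never consults the patient's diagnosis list."""
    return patient.wbc_count > wbc_threshold

leukemia_patient = Patient(unit="oncology", wbc_count=45.0,
                           diagnoses=["leukemia"])

# Fires even though leukemia fully explains the elevated count.
print(sepsis_alert(leukemia_patient))    # True -> a false alarm
```

The point of the sketch is the last line: because the rule looks at a single lab value and ignores the diagnosis that explains it, the false alarm is built in – exactly the gap Nurse Beebe’s fifteen years of experience let her see.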

Hospital rules require nurses to follow protocols when a patient is flagged for sepsis. While Beebe can override the AI model if she gets doctor approval, she said she faces disciplinary action if she’s wrong. So, she followed orders and drew blood from the patient, even though that could expose him to infection and run up his bill. “When an algorithm says, ‘Your patient looks septic,’ I can’t know why. I just have to do it,” said Beebe, who is a representative of the California Nurses Association union at the hospital.

As she suspected, the algorithm was wrong. “I’m not demonizing technology,” she said. “But I feel moral distress when I know the right thing to do and I can’t do it.”

How did things get so out of hand?

Artificial intelligence and other high-tech tools, though nascent in most hospitals, are raising difficult questions about who makes decisions in a crisis: the human or the machine? The technologies, which can analyze massive amounts of data with a speed beyond human capacity, are making extraordinary advances in medicine, from improving the diagnosis of heart conditions to predicting protein structures that could speed drug discovery. When used alongside humans to help assess, diagnose and treat patients, AI has shown powerful results, academics and tech experts say.

But there is always another side to the story, as Nurse Beebe has experienced. The tools can be flawed and are sometimes implemented without adequate training or flexibility, putting patients at risk, say the nurses and healthcare workers who use them regularly. Some clinicians say they feel pressure from hospital administrators to defer to the algorithm.

“AI should be used as clinical decision support and not to replace the expert,” said Kenrick Cato, a professor of nursing at the University of Pennsylvania and nurse scientist at the Children’s Hospital of Philadelphia. “Hospital administrators need to understand there are lots of things an algorithm can’t see in a clinical setting.”

Sounds to me like the administrators trust the computers more than their nurses and doctors. But there’s an old saying about computers: “Garbage in, garbage out.” The computer is only as good as the information it is given. One has to wonder how many hours of experience practicing medicine the computer programmer had.

In a survey of 1,042 registered nurses published this month by National Nurses United, a union, 24% of respondents said they had been prompted by a clinical algorithm to make choices they believed “were not in the best interest of patients based on their clinical judgment and scope of practice” on issues such as patient care and staffing. Of those, 17% said they were permitted to override the decision, while 31% weren’t allowed to and 34% said they needed a doctor’s or supervisor’s permission.

(Note: More on this subject in my next post.)