Increased Clarity: It may seem dystopian to rely on machines for such a prognosis. Many, however, believe software and algorithms can detect patterns in data that humans cannot. According to Leonard D'Avolio, founder and CEO of the health care performance improvement company Cyft and an assistant professor at Harvard Medical School, “the advantage of the special software being able to learn these patterns, instead of the human telling it the patterns, is that software can consider so many more factors or variables than a human could, and it can do it in microseconds. It’s basically checking all of the patterns that have come before and predicting the next step forward.”
“I really view it as a sixth vital sign,” states Andrew Mayo, M.D., the Chief Medical Officer of Minnesota-based St. Croix Hospice. Dr. Mayo and his colleagues rely on a system that predicts whether a person is likely to die within seven to 12 days. “It provides our clinical team with additional information that helps them make decisions about care. It doesn’t replace the need for human contact and evaluation. Quite the contrary, it can trigger increased involvement at a time when patients, their families and caregivers may need increased hospice involvement and guidance.”
In a health care system in which doctors are overburdened and often under-trained when it comes to advising palliative care, this information saves time and money. The last weeks of a person’s life can be the most uncomfortable (and most costly). Knowing a patient’s status sooner can create a better end-of-life experience.
However, are algorithms always correct? It depends. While St. Croix has benefited from the data, other studies have revealed shortcomings. In one University of Pennsylvania study, an AI system predicted the risk of dying within six months for more than 25,000 cancer patients; of those placed in the high-risk category, only 45% actually died in that window. The algorithm identifies elevated risk rather than certain death, but the gap makes clear that AI predictions are not accurate all of the time.
According to Ravi Parikh, an oncologist and researcher at Penn, “What we explicitly said to clinicians was…‘If the algorithm would be the only reason you’re having a conversation with this patient, that’s not enough of a good reason to have the conversation — because the algorithm could be wrong.’”
The medical community has yet to run a study directly comparing the outcomes of clinicians who used the AI tool with those of clinicians who did not. So far, research has only compared outcomes before and after the tool was implemented. So, while AI has generated more data in this area, it is still up to a human to make the call.
REFERENCE: Charlotte Winters, SevenPonds, 20 Feb. 2022