Can AI help families make difficult end-of-life care decisions?
NBC News - 7 Sep 2024
Medical researchers believe that artificial intelligence in hospitals can help with difficult end-of-life decisions, which are often left to ...
AI is transforming our lives, from travel to education, to art and music. But could it also reshape the end of our lives?
I've got to be honest, my first reaction was, "Oh, no."
But a group of scientists is going there, trying to help families and doctors make hard end-of-life decisions using an AI that could predict whether an incapacitated patient would want to live or die.
You can create this psychological twin of them that would, in some sense, be able to speak on their behalf.
Specifically, to make a decision on things like a do-not-resuscitate order, or DNR, which tells medical workers not to provide life-saving care if it could lead to too much pain. It's a more common dilemma than you may think when a person has not planned for such difficult decisions. Anthony Penal says he found himself faced with signing a DNR for his grandmother after she was diagnosed with dementia.
How involved was your grandmother in these conversations?
Not at all, to be frank. She was rapidly declining, not remembering my mom or me, acting very erratically and having hallucinations.
Penal says he and his mother agonized over trying to figure out what his grandmother would have wanted. They eventually signed the order.
Trying to put yourself in her shoes is, yeah, it is very hard. And even when her physical health was declining in those last months and weeks, it's trying to imagine the pain that she's in. It's a very hard thing to do.
Researchers think the AI can help. The idea? To feed a bunch of digital information about a patient into a large language model, like ChatGPT.
It could be trained on their medical records, past treatment decisions that they made.
Or even text, emails, social media posts, demographic info, and more. And it would require patient or family consent.
We run them through a bunch of treatment scenarios. They give you their preferences. Then you apply AI to those answers, and then you use that program to predict when the patient isn't able to make decisions for themselves.
Researchers say it could help improve the accuracy of decisions made by medical surrogates, which one 2006 JAMA study placed at just 68%. But other bioethicists worry that the data isn't enough.
It's still so speculative. And then the other piece of my skepticism is whether even if we had that data, an algorithm could accurately predict someone's future preferences.
And they point out, as painful as these decisions are, they are part of human experience.
Is there a need for something like this?
Sometimes surrogates do struggle. In my experience, it's the minority. What they need is really more the emotional support. What they're struggling with is the weight of the situation they're in.
I think it's more worrying that humans should want to run away from those situations, decisions, feelings, and just resort to something like AI. It can help us grow, I think, as humans, having to wrestle with those emotions.
Christine, for those worried that this may get into the hands of bad actors, one of the biggest focuses of David Wendler, the researcher you saw in the piece there, is making sure that for-profit systems like hospitals or other businesses don't use this to ultimately advance their bottom line and that patients and families are respected throughout the process. Christine.