Choose Your Own Adventure
How ChatGPT or Bard could help physicians have Serious Illness Conversations with their patients
Growing up, I was a voracious reader. I loved reading. I loved reading so much I looked forward to doing book reports at school and even read the books assigned to my brother, who was three years older than me.
One of the series I really enjoyed was the “Choose Your Own Adventure” series.
Whether it was “The Cave of Time” or “The Mystery of Chimney Rock,” Choose Your Own Adventure, published by Bantam Books, was one of the most popular children's series of the 1980s and 1990s, selling more than 250 million copies.
Wikipedia describes them this way: “The stories are formatted so that, after a few pages of reading, the protagonist faces two or three options, each of which leads to further pages and further options, and so on until they arrive at one of the many story endings. The number of endings varies from as many as 44 in the early titles to as few as 7 in later adventures. Likewise, there is no clear pattern among the various titles regarding the number of pages per ending, the ratio of good to bad endings, or the reader's progression backwards and forwards through the pages of the book. This allows for a realistic sense of unpredictability, and leads to the possibility of repeat readings, which is one of the distinguishing features of the books.”
What does this have to do with palliative care?
Choosing Wisely Canada promotes the use of serious illness conversations as a means to discuss a patient’s prognosis, review preferences and establish goals of care. By doing so, patients can avoid unnecessary tests and treatments, many of which may cause harm and place undue stress on family and caregivers.
A recent NY Times article states: “A lot of factors contribute to invasive actions in patients’ final days and weeks. Some originate within the health care system itself. Doctors may be reluctant to initiate difficult conversations about what dying patients want, or be poorly trained in conducting them.”
Advances in artificial intelligence and the widespread availability of chatbots like ChatGPT and Google’s Bard open the door to improving goals of care discussions with patients facing progressive, life-limiting illnesses and their caregivers. Known as “Serious Illness Conversations,” these directed and crucial patient conversations are an integral part of palliative care.
The concept of serious illness conversations is closely associated with Atul Gawande and his team at Ariadne Labs. Dr. Gawande first came to prominence for his “Surgical Safety Checklist,” modelled after the checklist pilots use prior to takeoff. Declaring that “hope is not a plan,” Gawande recognized that physicians were often not having high-quality conversations with patients facing a life-limiting illness. Using scripts and pre-determined questions known as the “Serious Illness Conversation Guide,” physicians and other health care providers could practice these conversations with simulated patients.
“The landmark Serious Illness Conversation Guide serves as a framework for physicians, nurses, social workers, chaplains, allied health professionals, and other clinicians to explore topics that are crucial to gaining a full understanding about and honoring what is most important to patients. In clinical trials, the program results in more, earlier, and better serious illness conversations and reduction in anxiety and depression for patients. Research also demonstrates that the program is associated with improvements in patient and clinician experience and reductions in total medical expenses.”
I was privileged to attend one of the first training sessions for the Serious Illness Conversation Guide in Boston in 2015. Gawande’s book, “Being Mortal,” should be required reading for anyone caring for a patient with a life-limiting illness.
One of the rate-limiting steps in teaching clinicians how to have serious illness conversations is the reliance on simulated patients: coordinating the time and space to practice, videotaping sessions to review afterwards, and finding coaches and mentors to evaluate progress.
What if we could leverage AI and chatbots to behave as patients facing a life-limiting illness, so that clinicians could practice in a safe, risk-free environment at their own convenience and at their own pace?
Much like a Choose Your Own Adventure book, clinicians could move backwards and forwards through the conversation, learn from their mistakes, try different approaches and work towards better outcomes. Chatbots could be adjusted along an almost limitless set of variables, from age, gender and ethnicity to disease entity and trajectory. The sky would be the limit.
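To make the idea concrete, here is a minimal sketch, in Python, of how those variables might be bundled into a role-play prompt for a chatbot. The SimulatedPatient class, the prompt wording and the send_to_chatbot() placeholder are my own illustrative assumptions, not part of ChatGPT’s or Bard’s actual API; the real call would depend on whichever vendor and library you use.

```python
# Illustrative sketch only: assembling a simulated-patient persona into a
# system prompt that a chatbot could use to role-play for practice sessions.
# send_to_chatbot() is a placeholder, not a real vendor API call.

from dataclasses import dataclass


@dataclass
class SimulatedPatient:
    """Adjustable variables for the practice scenario."""
    age: int
    gender: str
    ethnicity: str
    diagnosis: str       # disease entity, e.g. "metastatic pancreatic cancer"
    trajectory: str      # e.g. "rapid decline over the past three months"
    understanding: str   # what the patient currently believes about their illness

    def system_prompt(self) -> str:
        return (
            f"You are role-playing a {self.age}-year-old {self.ethnicity} "
            f"{self.gender} living with {self.diagnosis} ({self.trajectory}). "
            f"Your current understanding of your illness: {self.understanding}. "
            "Respond in the first person as this patient while a clinician "
            "practices a serious illness conversation. Answer naturally, show "
            "emotion, and never break character or give medical advice."
        )


def send_to_chatbot(system_prompt: str, clinician_turn: str) -> str:
    """Placeholder for whichever chatbot API is available; swap in the
    vendor's real chat call here."""
    return f"[chatbot reply to: {clinician_turn!r}]"


if __name__ == "__main__":
    patient = SimulatedPatient(
        age=68, gender="woman", ethnicity="South Asian",
        diagnosis="metastatic pancreatic cancer",
        trajectory="rapid decline over the past three months",
        understanding="believes another round of chemotherapy may be curative",
    )
    opening = "What is your understanding now of where you are with your illness?"
    print(patient.system_prompt())
    print(send_to_chatbot(patient.system_prompt(), opening))
```

Keeping the scenario variables in one place means a coach, or the clinician themselves, could re-run the same conversation with a single variable changed, much as a reader flips back a few pages and picks a different path.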
So what does the evidence say?
In a study published in JAMA Network Open, researchers evaluated a computerized decision aid tool that uses AI for end-of-life care conversations. The study found that the tool was effective in increasing the quality and efficiency of these discussions.
A systematic review published in BMC Health Services Research found that AI-based decision support tools can improve the quality of end-of-life care discussions and help clinicians identify and address gaps in patient understanding and preferences.
According to a report by the National Academy of Medicine, AI technologies can enable more accurate prognostication and facilitate personalized and more patient-centered decision-making in end-of-life care discussions.
AI can also help predict survival, which is often a crucial trigger for goals-of-care discussions and for obtaining informed consent around treatments such as systemic therapy and interventions like CPR.
Using AI in medicine is not a new concept. I was introduced to the idea by Dr. Eric Topol in his book, “Deep Medicine.” Topol argues that AI can help re-establish the doctor-patient relationship by freeing physicians from mundane tasks, such as note taking, and allowing them to focus on what really matters: patients.
And if all else fails, it looks like ChatGPT can just go ahead and pass the medical licensing exam and do it all by itself.
So while it would be fun to survive “The Horror of High Ridge,” travel to “The Third Planet from Altair” or learn the “Secrets of the Pyramids,” the use of AI and chatbots might actually help improve patient care through better serious illness conversations, as physicians choose their own adventure at their own time and pace.
Promising, assuming most communication is just in the words you speak. No non-verbals, no affect, no tone.
Non-verbals matter, agree 100%. As AI evolves, it likely could assess or comment on these aspects of an interaction. AI might simply be an adjunct to role playing and simulation.