Can We Talk? Artificial Intelligence in the World of Cancer Care


Katie Coleman has utilized online support groups and developed a following via TikTok, X (formerly Twitter), Instagram, YouTube and her “Oncology Unscripted” podcast since receiving her diagnosis of stage 4 kidney cancer. She is now a software engineer for the Rare Cancer Research Foundation.

Photo credit: Zach Brigham

Software engineer Katie Coleman has incorporated technology into every stage of her journey with kidney cancer since receiving her diagnosis two and a half years ago.

“I was diagnosed at 29, and I didn’t know really anything about cancer at the time,” says Coleman, who lives near Austin, Texas. “But as I found myself navigating through the system as a patient, I was shocked at how little technology seemed to be integrated in the whole process and system. When you come down to treatments and some of the tools that are actually used in the medical space, tech has come a long way there, but it didn’t seem like tech has come a long way to really help patients.”

Coleman received a diagnosis of stage 4 kidney cancer — oncocytoma, which, as explained by the National Cancer Institute’s Center for Cancer Research in a 2022 article about Coleman, comprises 15% to 20% of kidney tumors and is typically benign. What made Coleman’s case rare, the National Cancer Institute noted, is that her tumor had spread, making hers “likely the only known case in the United States and one of only a handful throughout the world.”

“Because I have such a rare cancer, I was seen by lots of different physicians at lots of different institutions, and I got to experience how frustrating that is when your data doesn’t sync up and you’re sending things by fax that get there half of the time (or) you’re mailing things. It’s a very slow, tedious process,” Coleman tells CURE.

Coleman created an app that allows patients to streamline their doctors’ information. In addition, she utilized online support groups and developed a following via TikTok, X (formerly Twitter), Instagram, YouTube and her “Oncology Unscripted” podcast.

Now a software engineer for the Rare Cancer Research Foundation, Coleman recently experimented with using the latest iteration of AI chatbot ChatGPT to create a spreadsheet and track her medical expenses in anticipation of filing her taxes next year. It was a task that would have taken hours, or a day, for her to complete on her own — and, she says, ChatGPT got it done “in about a minute.”

“I was trying to figure out if I would be able to reach the limit for my medical expenses by adding them all up, if it would be higher than the standard (tax) deduction,” Coleman says. “And so I consolidated all my medical expenses, and then kind of put that (into ChatGPT) and had it generate a table for me and sum them up, which I could have done on my own but it would have taken a really long time.”
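For readers curious what that bookkeeping looks like outside a chatbot, here is a minimal Python sketch of the same arithmetic. Everything in it is a placeholder rather than a detail from Coleman's workflow: the file name, the column names and the standard-deduction figure are all invented for illustration.

```python
import csv

# Placeholder file: one row per medical expense with "date",
# "provider" and "amount" columns (details invented for this sketch).
EXPENSES_FILE = "medical_expenses.csv"

# Placeholder figure only; the real standard deduction depends on
# tax year and filing status.
STANDARD_DEDUCTION = 13850.00

total = 0.0
with open(EXPENSES_FILE, newline="") as f:
    for row in csv.DictReader(f):
        total += float(row["amount"])

print(f"Total medical expenses: ${total:,.2f}")
if total > STANDARD_DEDUCTION:
    print("Total exceeds the placeholder deduction; itemizing may be worth a closer look.")
else:
    print("Total is below the placeholder deduction.")
```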

‘Validate and Double Check’

Chatbots such as ChatGPT are powerful, AI-driven large language models capable of generating humanlike conversation and having humanlike interactions with users, according to Dr. Matthew Lungren, an associate clinical professor at the University of California, San Francisco, and chief medical information officer at Nuance Communications, a conversational AI company purchased in 2022 by Microsoft.

“While it is very tempting to sort of anthropomorphize these models, I want to stress that it does not have any humanlike consciousness or personal experiences,” Lungren said. “It’s simply mimicking understanding based on learned patterns.”

Research in the field has yielded generally positive results lately — with some caveats.

“I’ve had (ChatGPT) try to find research papers for me, for example, and it does an OK job at that sometimes,” Coleman says. “Other times, it’s completely made up research papers that don’t actually exist. It’s important to really understand that context. And that’s why I think it’s really important when people use systems like this to understand that it’s a tool. It’s a tool to aid you but, especially where it’s at currently, you definitely need to validate and double-check the information.”
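One concrete way to "validate and double-check" a citation a chatbot produces is to look it up in an independent index. The sketch below is illustrative only: it queries the public Crossref REST API for a title and prints the closest matches so a reader can see whether a suggested paper exists at all. The sample title passed in at the end is a placeholder, not a real reference.

```python
import requests

def lookup_citation(title: str, rows: int = 3) -> None:
    """Search the public Crossref index for works matching a title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found_title = (item.get("title") or ["(no title)"])[0]
        doi = item.get("DOI", "(no DOI)")
        print(f"- {found_title} (DOI: {doi})")

# Paste in a title the chatbot produced and compare it against the results.
lookup_citation("example paper title suggested by a chatbot")
```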

An investigation published in April 2023 in JAMA Internal Medicine comparing physician and AI-generated responses to patient questions found that evaluators preferred chatbot responses 78.6% of the time and that chatbot responses were generally longer and were rated as both significantly higher in quality and significantly more empathetic.

A February 2023 study conducted at the University of Maryland School of Medicine reported that ChatGPT correctly answered questions related to breast cancer approximately 88% of the time. Results of a study from a team of researchers at Harvard Medical School published in May 2023 in the International Journal of Internal Medicine showed that “chatbots used for oncological care to date demonstrate high user satisfaction, and many have shown efficacy in improving patient-centered communication, accessibility to cancer-related information and access to care.”

However, the Harvard researchers noted that “currently, chatbots are primarily limited by the need for extensive user testing and iterative improvement before widespread implementation.”

“(Chatbots) generate humanlike responses and engage in interactive conversations. However, they often generate highly convincing statements that are verifiably wrong or provide inappropriate responses,” Stephen Gilbert, professor for medical device regulatory science at the Else Kröner Fresenius Center for Digital Health at Dresden University of Technology in Germany, said in a news release. Gilbert was a co-author of an article published in June 2023 in the journal Nature Medicine that called for chatbots to be vetted and approved as medical devices by regulators.

“Today there is no way to be certain about the quality, evidence level or consistency of clinical information or supporting evidence for any response. These chatbots are unsafe tools when it comes to medical advice, and it is necessary to develop new frameworks that ensure patient safety,” Gilbert said.

Authors of an April 2023 study published in JNCI Cancer Spectrum cited “an urgent need for regulators and health care professionals to be involved in developing standards for minimum quality and to raise patients’ awareness of current limitations.”

When logging into ChatGPT, users are presented with the warning that “while we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”

“Sometimes I worry, if patients go in and they use ChatGPT and start asking questions, it can go in different ways,” says Dr. Irbaz B. Riaz, an assistant professor of oncology and medicine at Mayo Clinic’s Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery in Phoenix.

“Obviously, it can provide some useful information because it has been trained on the data available over the web. There’s a lot of useful information over the web, so (patients) can get reasonable answers,” Riaz says. “But you can also then see the flip side. Because it is trained on the data over the web, there are lots of inaccuracies and there is lots of false information.”

There is, Riaz says, “definitely a possibility of getting wrong information, false information, inappropriate information and misleading information, because you never know what you’re going to get out of the model.”

There is also, as Coleman experienced, the phenomenon of hallucination, where the AI model fabricates an answer entirely.

AI platforms “are unique in that sense that they have amazing strengths,” Lungren says, “but they also have pretty important weaknesses that, if you’re going to be using these things regularly, (are) important to know about.”

For example, Lungren explains, ChatGPT “was trained on a massive amount of data” — but that was cut off around September 2021, “meaning that beyond that data point in time, the model has not learned anything about current events.”

To counter that limitation, there are platforms such as the Bing Chat function on the Microsoft Edge browser, which combines the internet access of a traditional search engine with an AI chatbot large language model.
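The pattern Lungren points to, pairing a search step with a language model so answers can draw on current information, is often called retrieval-augmented generation. The sketch below is a schematic illustration only: web_search and llm_complete are hypothetical stand-ins for whatever search API and chat-completion API a developer actually wires in, not functions from any particular product.

```python
from typing import List

def web_search(query: str) -> List[str]:
    """Hypothetical stand-in for a real web-search API call.
    Returns short text snippets relevant to the query."""
    raise NotImplementedError("plug in a real search API here")

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("plug in a real language-model API here")

def grounded_answer(question: str) -> str:
    # 1) Retrieve current snippets so the model is not limited
    #    to what it saw before its training cutoff.
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    # 2) Ask the model to answer using only the retrieved context.
    prompt = (
        "Answer the question using only the sources below, and say so "
        "if they are insufficient.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```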

“I think there needs to be an educational component where patients and everyone should understand that the information we are getting may or may not be correct, and this should not be the starting point of reality. This should be just a guiding point,” Riaz says.

‘The Role of Oncologists Will Evolve’

Dr. Wilfred Ngwa, an associate professor of radiation oncology and molecular radiation sciences at Johns Hopkins University in Baltimore, has worked with patients who used ChatGPT to learn about their conditions and to compare treatment plans and costs. The emergence of AI, he says, will change the way patients and providers work in concert with technology.

“Patients will still need that human confirmation,” Ngwa says. “And that’s where I think the role of oncologists will evolve to one where it’s more (about) confirmation.”

Authors of a 2021 study published in JMIR (Journal of Medical Internet Research) Cancer posited that “further research and interdisciplinary collaboration could advance this technology to dramatically improve the quality of care for patients, rebalance the workload for clinicians and revolutionize the practice of medicine.”

A commentary published in May 2023 in Cancer found ChatGPT to be “well-suited to review and extract key content from records of patients with cancer, interpret next-generation sequencing reports and offer a list of potential clinical trial options.”

Ngwa, who co-wrote a paper in 2020 for the “American Society of Clinical Oncology Educational Book” on the ways health-related technology can reduce the gap between developed and under-developed regions around the world, explains how AI chatbots can increase access to medical information for patients.

“In low- or middle-income countries like in Cameroon, where I was born … you haven’t got in the country two oncologists and there is no time, there is no opportunity … for multidisciplinary care,” Ngwa says. “But then think about that — AI can provide the other doctors, right? So you can actually have a conversation, you can have a chat with the chatbot. So, basically, it can give you that perspective that you’re missing.”

‘You Don’t Know What You Don’t Know’

Suzanne Garner, a stage 2 breast cancer survivor, learned about the NATALEE trial via Outcomes4Me, an AI-driven platform integrated with the National Comprehensive Cancer Network’s Clinical Practice Guidelines in Oncology.

The trial compared Kisqali (ribociclib) plus endocrine therapy with standalone endocrine therapy in adult patients with hormone receptor-positive/HER2-negative early-stage breast cancer.

Breast cancer survivor Suzanne Garner leads patient community efforts for the AI-driven platform Outcomes4Me.

Photo courtesy of Ryuji Suzuki for Outcomes4Me

“When you first download the app, it asks you a bunch of questions through a diagnostic questionnaire that you, to the best of your ability, answer about what you currently know about your diagnosis,” Garner explains. “And then it invites you, if you want to give it more information, to share your medical record. … Then the algorithm can start to feed you an education that is very specific.”

Two years after enrolling in NATALEE, Garner went to work for Outcomes4Me to lead patient community efforts for the company.

“Often, there’s more than one path to remission or treatment line, and you have choices,” Garner says. “And that’s really hard as a patient, when your oncologist says, ‘Well, there are a few different things we can do. We can do A or B or C,’ and they present you the risks and benefits of each. But ultimately, it’s up to you to choose. And with no medical history background that’s really intimidating and hard.

“And so, being able to read about the different treatment options that were likely going to be recommended to me before I even entered that next oncology visit was very helpful, because the oncologist visit can be intimidating. You don’t know what you don’t know. … One of the biggest benefits to me is that the app also does clinical trial matching based off your specific diagnosis.”
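The app’s actual algorithm is not detailed in this article, but the general idea behind clinical trial matching can be sketched in a few lines: a patient profile is compared against each trial’s eligibility criteria, and trials whose criteria are all satisfied are returned. Everything in the example below (field names, criteria and the trial itself) is invented for illustration and is not Outcomes4Me’s method.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Trial:
    name: str
    # Each criterion is a function that takes a patient profile
    # and returns True if the patient appears eligible.
    criteria: List[Callable[[Dict], bool]] = field(default_factory=list)

def match_trials(patient: Dict, trials: List[Trial]) -> List[str]:
    """Return names of trials whose every criterion the patient meets."""
    return [t.name for t in trials if all(c(patient) for c in t.criteria)]

# Made-up patient profile and criteria, for illustration only.
patient = {"diagnosis": "breast cancer", "stage": 2, "hr_positive": True, "her2_positive": False}

trials = [
    Trial(
        name="Illustrative HR+/HER2- early-stage trial",
        criteria=[
            lambda p: p["diagnosis"] == "breast cancer",
            lambda p: p["stage"] <= 3,
            lambda p: p["hr_positive"] and not p["her2_positive"],
        ],
    ),
]

print(match_trials(patient, trials))  # -> ['Illustrative HR+/HER2- early-stage trial']
```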

In recent years, a number of oncology-specific AI platforms have been introduced.

The Penn Center for Innovation at the University of Pennsylvania has partnered with information technology company Accenture on Penny, a chatbot that communicates with patients with cancer via text message. Belong.Life has launched its AI conversational oncology mentor, Dave. German researchers developed the chatbot PROSCA to provide patient information on early prostate cancer detection. And the World Health Organization launched a women’s health chatbot featuring information on breast cancer.

“There are several resources out there, but the problem with these resources is that they may not be exactly answering the questions you’re looking for,” Riaz says. “This is why I feel the future is still with these large language model-based chatbots, but (those that are) trained for that particular task, trained to (answer) ‘What questions should I ask my doctor about prostate cancer?’ or ‘What are the initial things that (patients with) prostate cancer with early-stage disease should know about the disease in that state?’ So, I think the future will be moving toward these large language model-based chatbots. But, I think we’re just a step behind at the moment.”

