Imagine if, after being diagnosed with cancer, you could access algorithms that would help you find the best treatment, the hospital with the best results, and even your likely six-month to five-year survival odds.
All this is possible today. While clinicians are excitedly discussing how they can utilize OpenAI’s dazzling new ChatGPT, artificial intelligence (AI) is quietly giving patients the ability to find, create, and act on an unprecedented breadth and depth of trusted information.
What, then, happens to the physician’s role?
The essence of medical professionalism has long been defined as the possession of knowledge inaccessible to the lay public. That knowledge monopoly, however, is rapidly eroding. Medicine is entering an era of new roles, rules, and relationships, all of which remain very much in flux.
Perhaps the clearest example of these changes is cancer. Four in ten Americans will be diagnosed with cancer at some point in their lives, according to the National Cancer Institute (NCI), with nearly two million receiving this terrible news each year. The National Academy of Medicine has declared shared decision-making to be as important as technical competence, yet anxious and frightened patients remain so reluctant to challenge their doctors that the relationship has been compared to a hostage situation.
Enter AI that empowers the patient. In its most basic form, AI represents a method for making sense of a deluge of data. ChatGPT’s special appeal lies in its ability to provide a coherent and comprehensive narrative response. The problem, as several reviewers have pointed out, is that insights and inaccuracies can blend seamlessly together. Unlike ChatGPT (at least for now), the companies discussed below anchor their AI to medical databases, not the internet, although that doesn’t make the technology or recommendations perfect.
In cancer, the initial treatment decision is critical. The National Comprehensive Cancer Network (NCCN), an alliance of 32 leading cancer centers, produces care guidelines based on a critical evaluation of the evidence and, in areas where highly reliable evidence is not available, the clinical experience of experts.
Outcomes4Me says it has developed “the only direct-to-patient platform that integrates with the NCCN clinical practice guidelines in oncology.” The company, founded by former pharmaceutical and Google executives and an oncologist with an NCI background, inputs comprehensive information from a patient’s medical records into an analytics engine. Possible treatment options are presented to the patient in plain English, setting the stage for a very different kind of doctor-patient conversation about treatment.
Does your doctor claim to already follow the guidelines? Perhaps, but even at academic medical centers adherence varies wildly, researchers have found, despite evidence that following guidelines “leads to better outcomes”; that is, it may be a matter of life and death. So maybe that big-name hospital advertising on TV isn’t the best place to be cared for.
A company called PotentiaMetrics promises to apply patients’ personal medical information to “cancer treatment options and hospital outcomes across the United States.” Its AI-powered platform reportedly has “the largest cancer outcome dataset of its kind.” The company also offers a personal browsing assistant, again significantly changing the traditional dynamic of the doctor-patient dialogue.
There is a catch: unlike Outcomes4Me, PotentiaMetrics is not a direct-to-patient platform. Yet one part of its business model illustrates how much roles, rules, and relationships are changing. Cancer has become the top driver of healthcare costs for large companies, according to the Business Group on Health. PotentiaMetrics sells its services to employers and health plans that are convinced the most clinically effective treatment is also the most cost-effective. Importantly, involving employers and plans paves the way for providing this type of information to a wide swath of the public.
(Both Outcomes4Me and PotentiaMetrics also rely on revenue from connecting patients to pharmaceutical companies’ clinical trials, a topic for a different discussion.)
Meanwhile, a group of Canadian researchers recently published results showing that they could predict six-month, three-year, and five-year survival with 80 to 90 percent accuracy for patients with a wide variety of cancers. These predictions were based on an AI review of patients’ complete medical records after their first oncology consultation. If the researchers choose to publish their algorithm publicly, US regulations on information sharing and interoperability mean it could quickly be adapted into a patient-facing application that lets patients apply it to their own records.
The predictions, of course, are not certainties for any individual patient, and, more importantly, these companies’ track records remain largely unproven. It is also easy to forget that “AI” is a term applied to a variety of learning models and that different databases can yield different answers. Despite breathless speculation, AI is nowhere near replacing the doctor.
Yet the survival predictions oncologists make for their cancer patients are notoriously inaccurate. Furthermore, quantum leaps in AI capabilities are on the way. Google is testing a dedicated medical Q&A chatbot called Med-PaLM, while OpenAI, which just released GPT-4, is also planning a medical version.
What is certain is that AI is poised to radically reshape how medicine serves its “customers.” With our smartphones, watches, and even clothing increasingly packed with sensors, and sophisticated peer-to-peer learning sites providing unique insights, incorporating patient-reported data into care has become critical.
Good medicine needs to become participatory medicine, not least because involving the patient as a partner consistently improves care. In cancer, for example, incorporating patient-reported outcomes during treatment has improved patient survival and quality of life. Elsewhere, I have proposed a formal framework for information sharing, engagement, and accountability that I call “collaborative health.”
The transparency that AI is bringing to medicine will highlight the wide variations in medical practice and outcomes. For example, the US spends twice as much on cancer care as the average for high-income countries, yet patient mortality rates are only marginally better than average, a Yale-led study found.
A quarter of a century ago, I wrote a book entitled Demanding Medical Excellence: Physicians and Accountability in the Information Age. The information age and the demand for accountability seem finally to be upon us, and the turmoil precipitated by ChatGPT makes the book’s conclusion even more relevant today:
“The destruction of old forms of medical practice may be an inevitable source of anxiety, but it should not be a source of despair. Patients and caregivers should celebrate better days ahead. Destruction often precedes renewal, and it is in that renewal that the future of American medicine lies.”