(Please excuse all of the terrible puns in this post. I couldn’t resist.)
Down in the Dumps?
Colonoscopies have gotten some pretty shitty news coverage lately.
In a cheekily titled New York Times piece, Elisabeth Rosenthal boldly blamed colonoscopies for helping the U.S. lead the world in health care expenditures. Tracking the confluence of specialist lobbies, lucrative up-billing by ambulatory surgical centers, and obscure market-specific price variation, Rosenthal highlights how colonoscopies have become one of the most expensive and widely used screening tests despite evidence that alternative screening methods may be just as effective.
The point about price variation was driven home a week later by the New York Times’ editorial board.
And then there was the thoroughly disturbing revelation that 3 out of 20 endoscopes, which are used in colonoscopies, remain contaminated with “biological dirt” after current cleaning practices.
Given such painful coverage, it was a breath of fresh air to read that Kaiser Permanente in California has actually been doing something about it for the last 20 years. Back in 1993, Kaiser changed its colon cancer screening policies to favor sigmoidoscopies, which are less invasive than colonoscopies. Screening rates jumped to 45% but then stalled (probably because having something stuck up your rear is unpleasant, no matter how far up you go).
But in 2006, Kaiser research found that a new version of a stool test had better accuracy than older ones. Kaiser began offering it as an inexpensive, noninvasive mail-in test. Screening rates soared to nearly 85%, with a corresponding jump in the “screen-detection rate” (the percentage of cancers detected through screening).
The jury seems to be out on the relative effectiveness of the three screening methods: colonoscopy, sigmoidoscopy, and stool test. The CDC recommends any of the three without preference. The thing is, most prior research has compared the effectiveness of the different screening modalities without considering vastly different rates of uptake. You may have the most sensitive test, but if people aren’t willing to undergo it, its value as a screening tool at the population level greatly diminishes. As the Kaiser study concluded:
“The program realized that employing the most accurate screening method did not make a difference in screen-detection rate overall, if the availability of endoscopic resources and patient unwillingness to comply with the strategy did not support the program. […] Using a good test (FOBT/FIT) that is able to reach more people, rather than the “perfect test” that reaches fewer people, transforms an ineffective program into a successful one when the strategy moves from individual testing to population-based screening.”
And that’s the thing: when clean clinical science meets messy clinical practice, oftentimes we’re not sure of the result.
Pragmatic Trials: Because Real Medicine Isn’t So Clean
Here’s how a typical randomized controlled trial might play out:
You define the research question as whether Intervention A will lower BP for hypertension patients. You recruit subjects and explicitly exclude anyone with a co-morbidity (say, diabetes or heart disease). You train a set of practitioners to deliver the intervention according to a specific protocol, and throw out results from any practitioners who don’t follow it. You require that subjects follow the intervention to a T, and throw out results from any patients who don’t comply. Finally, after running the trial, you produce rigorous, clean data that tells you, YES, the intervention works like a charm.
And then you take it out into the real world and it suddenly has no effect. Turns out patients and practitioners don’t comply 100% of the time in real life. Or patients have more problems than just high BP.
This example illustrates the contrast between randomized controlled trials and “pragmatic trials.” Randomized controlled trials, long considered the gold standard in clinical research, rely on careful participant selection and stringent adherence to protocols to determine “efficacy” in a best-case scenario. Pragmatic trials attempt to mimic real-world scenarios to determine the “effectiveness” of an intervention in actual practice.
The concept of pragmatic trials has been around since 1967 but only took off in the last ten years. It’s part of a field known as comparative effectiveness research, which compares existing tests/treatments/services, often in real-world settings. Kaiser Permanente’s Division of Research has been doing this for years, and the health reform law devoted $3.5 billion through 2019 to fund the Patient-Centered Outcomes Research Institute (PCORI), which will fund and disseminate comparative effectiveness research. PCORI recently announced its 2nd round of research grants, totaling $88.6 million to 51 research projects across 21 states, and unveiled a plan to establish 8 “Clinical Data Research Networks (CDRNs)” and 18 “Patient-Powered Research Networks (PPRNs)”. They could’ve come up with catchier acronyms.
Research Not Sufficient to Change Clinical Practice
Despite the hullabaloo, it may be too early to celebrate.
A recent NEJM study presents a great example of comparative effectiveness research at its best. It found that Enbrel, Amgen’s blockbuster joint disease drug, is no more effective than a cocktail of generic therapies in treating rheumatoid arthritis. Enbrel costs about $25,000 a year. The generic cocktail costs about $1,000.
Yet an accompanying editorial had some words of warning about how this study may be too late to influence ingrained physician practices:
“We have to consider, however, whether these findings have arrived too late to influence modern practice, in which arguably a TNF inhibitor [such as Enbrel] is the preferred next step when methotrexate alone is inadequate.”
Comparative effectiveness research was a focus of last October’s Health Affairs issue, and one article outlined a number of reasons that such research fails to change patient care, including:
- Less effective therapies may be better reimbursed under fee-for-service;
- Economic incentives encourage pharmaceutical and device manufacturers to use creative ways of influencing physicians’ decisions;
- Many results may be ambiguous and prone to accusations of methodological weakness, especially when they try to venture away from the gold-standard randomized controlled trial into the messy world of real life medicine;
- Physicians are only human and exhibit a number of psychological biases, including “confirmation bias,” “pro-intervention bias,” and “pro-technology bias”; and
- Use of decision support to help physicians change their practices is limited.
In fact, physician culture may trump even financial incentives and fear of lawsuits; a recent study showed that even physicians at Veterans Affairs hospitals, who do not get paid for ordering more tests and are rarely sued, order just as many unnecessary nuclear stress tests as physicians at other hospitals.
Simply conducting comparative effectiveness research and establishing recommended guidelines is necessary but insufficient. We need powerful ways to disseminate these guidelines, overcome historical prescribing inertia, and actually change treatment practices. We need more efforts like Dr. Jerry Avorn and colleagues’ “academic detailing” program, in which pharmacists and nurses are deployed to visit doctors one-on-one and inform them of the latest therapy recommendations without industry conflicts of interest. We need more systems like Kaiser and UCSF, which not only research different treatment options but also implement them system-wide.
After all, spending thousands on a painful, infection-prone colonoscopy when a simple mail-in stool test will do can be a real pain in the…