Some edits made, and new items added, late the same evening.
I’ve recently learned of some well-intentioned medical research that disturbs me so deeply that I think it’s time to get formal about teaching e-patients and their partners how to detect research that misses its target, however well intentioned.
Doing this responsibly requires a deep understanding of the purpose of research and its methods. So this is the start of a series in which I’ll lay out what I’ve learned so far, describe the problems and challenges and opportunities that I see, and invite dialog on where I’m wrong and your own experiences as patient or clinician or researcher.
If this succeeds we’ll have a new basis for considering questions of what to do and how to prioritize it, in this era of change in medicine – not just in research but in all of medicine, as we work on reducing our spend. My goal in the series will be to be as clear in my writing as I can, while being as verifiably accurate as I can, given that I’m no PhD or Pulitzer laureate. Critique and correction are welcome.
This first post is an introduction, with background reading.
Context: Patient Engagement
The context for this series is patient engagement: patients shifting from being “compliant” cars in a medical car wash to being responsible and engaged.
Empowered, engaged patients take responsibility for their health and their care. One aspect is being responsible for understanding, as best we can, the evidence that a recommended treatment is right for us. Sometimes it’s pretty simple, sometimes not; but the higher the stakes get, the more important it is.
With respect to research, there’s a big challenge: sometimes the published evidence sucks – even though it got through the peer review process and was approved by a big-name journal.
Of course not all evidence sucks. But if you’re considering whether to be cut open or eat chemicals (meds), you have a choice: trust blindly (“whatever you say, doc”) or take responsibility for understanding as much as you can.
As we’ll discuss, one big reason blind trust fails is that the evidence your doctor gets isn’t necessarily great, and most clinicians aren’t rigorously trained in how to scrutinize it. (They too are largely trained to trust the journal process.) So this is for them too. In fact, this series is for:
- Patients and caregivers – the people on the receiving end of the treatment; the ones who make the decision to accept treatment.
- Clinicians, for two reasons:
- In a participatory relationship, the patient and clinician need to be on the same page regarding the basis for decisions, or one will think the other’s crazy.
- As I said, in my experience most clinicians haven’t been rigorously schooled in the weakness of the info they were taught to trust. (This isn’t an insult; see homework below.)
- Health policy people (government and non-profits), because they need to be firmly grounded in reality, or they can’t possibly make policies that work in reality (eh?)
- Insurance companies (commonly euphemized as “care plans” or just “the plans”), who decide what will get paid for. (I know some insurance companies don’t mind paying for stuff that doesn’t work; they basically get a commission on all spending. But others do care what works and what doesn’t – some even have staff who help patients understand the options! They need to be well informed too.)
- Others, I’m sure.
This will make some people unhappy.
It’s the unhappiness that comes from realizing the world isn’t what you thought it was. And the unhappiness that comes from realizing you have to adjust.
But ladies and germs, disconnects like that are what keep a dysfunction in place and make problems intractable. So, comfortable or not, let’s get on with it. The unhappiness I anticipate:
- Some clinicians don’t welcome questions from their patients. (Others do.) In my personal experience most of the ones who object don’t realize how weak the evidence is.
- I hope they’ll remember what I learned in school: all science must be open to new information. (As SPM co-founder and ACOR founder Gilles Frydman said in 2010, “All knowledge is in constant beta.”)
- I know clinicians have many pressures including short appointments. This doesn’t have to be done by an MD; in my view of the future, every “medical home” will have coaches who can help assess published material.
- Some patients really don’t want to hear that the science they depend on – which has indeed produced miracles – has also produced crap sometimes. They especially don’t want to hear that clinicians – their clinicians, who they know are good people – aren’t perfect.
- In general, everyone wants certainty – doctors and patients alike – so it’s unsettling to know you can’t have it. (Even the best science has a chance of errors, and all science is subject to correction.)
Important: This is not a “we reject science” series.
- I love science. I personally am alive because of great medical research that created a harsh treatment delivered brilliantly by great clinicians at Beth Israel Deaconess in Boston. I love the training and clinical experience that made them able to save my life!
- It included laparoscopic and orthopedic surgery developed by skilled scientists and delivered by adroit surgeons & teams. Hooray for science!
- But in the end, science itself knows that there is no certainty. Scientists are doing the best they can amid uncertainty. Heck, I myself live in uncertainty:
- The best evidence (which is not great) says there’s a 50% chance my cancer will return, which would likely kill me
- At diagnosis the evidence said I had a 24-week median survival
- On the flip side, the treatment that did cure me usually doesn’t work.
Bottom line: There Is No Certainty.
The art of designing, conducting and reporting research includes dealing accurately with this issue. Whatever you read, there’s always a chance it’s wrong.
In my view, the ultimate responsible patient understands this, accepts the uncertainty (as best a human can), and responds by saying “Okay, what are the options? And what are the chances they’ll work?”
If you fully understand that much research is shaky and deserves questioning, you can skip to the end and wait for round 2. If not, read these past posts, because if you don’t realize there’s weakness, you have no reason to learn what comes next.
Here’s your homework.
Past posts establishing the need to be responsible for our decisions
These posts are from e-patients.net.
Making sense of health statistics (Nov. 2008)
A post about a profound article by Gerd Gigerenzer et al., making clear that only a small minority of clinicians really understand the statistics behind research and can answer important questions correctly. Imagine the effect of that on the quality of recommendations.
Note: this doesn’t mean they’re schlocks – this stuff is difficult! We just shouldn’t be in denial about the difficulty. (And, by the way, politicians who make a political issue of such numbers are certainly no better; see examples therein.)
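To see why this stuff is genuinely hard, here’s a toy Bayes’-rule calculation in the spirit of Gigerenzer’s screening examples. (The numbers are my own illustration, not taken from the article.) Even with a fairly accurate test, a positive result for a rare condition is usually a false alarm – the kind of question his surveys show most clinicians get wrong.

```python
# Toy screening example (illustrative numbers, not from the article):
# a test with 90% sensitivity and a 9% false-positive rate,
# for a condition with 1% prevalence.
prevalence = 0.01
sensitivity = 0.90        # P(test positive | disease)
false_positive = 0.09     # P(test positive | no disease)

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * false_positive

# Positive predictive value: P(disease | test positive)
ppv = true_pos / (true_pos + false_pos)

print(f"Chance a positive result means disease: {ppv:.0%}")  # about 9%
```

With these numbers, roughly 9 out of 10 positive results are false alarms – the intuitive guess (“the test is 90% accurate, so a positive means 90% likely sick”) is off by a factor of ten.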
Response the next day by ACOR founder Gilles Frydman – excellent additional information. When he first wrote this it was over my head, but I get it now.
No *other* conflict of interest, huh? (Nov. 2008)
Another by Gilles – a real-life absurd example of how impotent our “conflict of interest” rules are. It reports on an important article about a drug in the revered New England Journal of Medicine. The section on conflicts of interest by the authors is over 300 words long … then they say “No other potential conflicts have been reported.” Like, this is supposed to make us trust the article and NEJM??
Personally, I feel as comforted by that as I am by knowing who pays each lobbyist’s salary in Washington.
This was my first sign that medical science doesn’t live up to the standards I learned in high school(!). If I ran an experiment in chemistry or physics five times, and only two worked, and I didn’t report the other results, Mr. Viger and Mr. Bauermeister sure wouldn’t have been pleased. But guess what’s common among the people who study the drugs that get fed to us? It’s common to suppress evidence. More on this in a moment.
And it’s not a statistical fluke. Last spring I met a doc who said active suppression by the drug maker is such an issue that he now tells a drug company he won’t do their study unless he gets to publish it regardless of the outcome. And many turn him down.
One example of outright corruption:
“… tried to minimize the risk of diabetes and weight gain associated with its antipsychotic drug quetiapine (Seroquel), in part by “cherry-picking” data for publication. … lawsuits by some 9,000 people who claim to have developed diabetes while taking the drug. … company had “buried” three clinical trials and was considering doing so with a fourth. … Another … discussed ways to “minimize” and “put a positive spin” on safety data”
This is, as I say, outright corruption of the scientific process. In my view these people are knowingly putting patients’ lives at risk for profit, and should be disbarred or delicensed, if there were such a thing for scientists.
And perhaps there should be – licensing is done where competence is important and harm can come from doing it wrong.
(Investors, you should think about this too, including people who own mutual funds: when we say “putting lives at risk for profit,” it’s often expressed as “shareholder value.” That blood’s on your hands, too.)
(And don’t anyone tell me I shouldn’t be so harsh. Wake up; talk to those 9,000 patients.)
A quote I won’t soon forget (Oct. 2009)
The quote is from Marcia Angell MD:
It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.
Great podcast on the fundamentals of how we study what we study. Participants:
Peter Frishauf, founder of Medscape
Richard Smith, M.D., former editor of the British Medical Journal
Jessie Gruman, Ph.D., President, Center for Advancing Health [and then Co-Editor In Chief of the Journal of Participatory Medicine (JoPM)]
Larry Green, DrPH, ScD(Hon.), Professor of Epidemiology and Biostatistics, UCSF Helen Diller Family Comprehensive Cancer Center [and author of a piece in JoPM]
Atlantic: Lies, Damned Lies, and Medical Science (Oct. 2010)
A new article in Atlantic bears striking parallels to a 2009 piece in the Journal of Participatory Medicine:
- JoPM, Oct 21, 2009: “….most of what appears in peer-reviewed journals is scientifically weak.”
- Atlantic, Oct. 16, 2010: “Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.”
- JoPM 2009: “Yet peer review remains sacred, worshiped by scientists and central to the processes of science — awarding grants, publishing, and dishing out prizes.”
- Atlantic 2010: “So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?”
That item on “doctors drawing on misinformation” is what I meant at the top. Clinicians are trained to trust peer reviewed research; the article details how erroneous that research can be.
This post, and its great comment stream, list many ways that a published finding often becomes less true as time goes by! Tops on my list: when I was in school we were taught that any good study could be repeated by another lab with the same results, but most published studies are never replicated by another lab.
Combine that with the Seroquel story above (and Ben Goldacre below) and you see why it’s prudent to question research, on anything important you’re considering doing to your body (or your baby’s or mother’s).
Really, I’m not kidding, read this one.
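One mechanism behind findings “becoming less true” can be sketched in a few lines of code. This is a toy Monte Carlo simulation of the so-called winner’s curse – my own illustrative numbers and thresholds, not anything from the post or its comments: when lots of small, underpowered studies are run and only the statistically significant ones get published, the published effect sizes systematically overestimate the real effect, so later (bigger) studies look like the effect is shrinking.

```python
# Toy simulation of publication bias / the "winner's curse".
# Illustrative parameters only, chosen for the demo.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2    # the small real effect every study is measuring
N = 20               # small (underpowered) sample size per study
STUDIES = 5000

published = []
for _ in range(STUDIES):
    # Each "study" measures the effect with sampling noise.
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if mean / se > 1.96:          # "significant" -> gets published
        published.append(mean)

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean published estimate: {statistics.fmean(published):.2f}")
```

Only the studies that (by luck) overshot the true effect clear the significance bar, so the published average comes out far above 0.2. Replication with the same small samples would mostly “fail” – not because the original was fraud, but because it was selected noise.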
Richard Smith, a 25-year editor of the British Medical Journal, has written another piece for the BMJ blog, citing a JAMA study showing “that of the 49 most highly cited papers on medical interventions published in high profile journals between 1990 and 2004 a quarter of the randomised trials and five of six non-randomised studies had been contradicted or found to be exaggerated by 2005.”
What’s an e-patient to do?? Especially when we “patients who google” are so often sneered at by physicians who rely on these same journals.
Great, entertaining TEDGlobal talk by Ben Goldacre MD (Twitter @BenGoldacre), including this line: “Real science is all about critically appraising the evidence for somebody else’s position.” Wait’ll you hear what actually happens.
Former NEJM editors on the corruption of American medicine (NY Times) (March 2012) (Added 9/2/12)
The Times ran a video interview with Dr. Angell and her former mentor, Dr. Arnold Relman, who for decades have been warning of a growing “medical industrial complex.”
In 1984, Dr. Relman became the first editor of a medical journal to require authors to disclose financial ties to their subject matter and to publish those disclosures. He later came to suspect that simple disclosure was not enough, and his policy evolved to excluding all authors with financial interests from writing large educational reviews.
Note: Relman’s policy said “It’s not enough to say ‘By the way, I get money from this product I’m reviewing’ – you simply shouldn’t be writing about it.” Makes sense, right? But I had no idea that apparently it’s not easy to find untainted experts anymore, so Relman’s successor punted that rule:
That rule was reversed in 2002, after the journal’s current editor in chief, Dr. Jeffrey M. Drazen, took the job. Dr. Drazen and his colleagues reported that for some subjects, so few experts without financial ties could be found that the journal’s scope was becoming artificially curtailed. (Emphasis added)
What?? If we exclude tainted people, the NEJM won’t have enough authors??
Note: Drs. Relman and Angell are sometimes dismissed as “pharmascolds,” but that admission came from their successor.
A hallway interview with Goldacre after his superb talk at TEDMED 2012. By this point I was starting to dig into just how skewed the situation is.
Also: Where’s the Missing Data? – the video, released later, of his talk that day.
To be clear, this whole series won’t be about corruption and suppression of evidence. This is just the initial “case for action” – one set of evidence that it’s reasonable for us to think for ourselves and ask questions. (And “us” is all of us, including clinicians.)
In subsequent posts we’ll get into how studies are designed, which deeply affects whether they’re measuring what they set out to measure. And then, beyond that, is whether what they’re measuring is even what patients want.
Comments? Critique? Please: respond below. I don’t know everything and a lot depends on this.