New series: Understanding and Guiding Medical Research

With my primary physician, Dr. Danny Sands, in a BIDMC exam room

Some edits made, and new items added, late the same evening.

I’ve recently learned of some well-intentioned medical research that disturbs me so deeply that I think it’s time to get formal about teaching e-patients and their partners how to detect research that misses its target, even if it’s well intentioned.

Doing this responsibly requires a deep understanding of the purpose of research and its methods. So this is the start of a series in which I’ll lay out what I’ve learned so far, describe the problems and challenges and opportunities that I see, and invite dialog on where I’m wrong and on your own experiences as a patient, clinician, or researcher.

If this succeeds we’ll have a new basis for considering questions of what to do and how to prioritize it, in this era of change in medicine – not just in research but in all of medicine, as we work on reducing our spend. My goal in the series will be to be as clear in my writing as I can, while being as verifiably accurate as I can, given that I’m no PhD or Pulitzer laureate. Critique and correction are welcome.

This first post is an introduction, with background reading.

Context: Patient Engagement

The context for this series is patient engagement: patients shifting from being “compliant” cars in a medical car wash to being responsible and engaged.

Empowered, engaged patients take responsibility for their health and their care. One aspect is being responsible for understanding, as best we can, the evidence that a recommended treatment is right for us. Sometimes it’s pretty simple, sometimes not; but the higher the stakes get, the more important it is.

With respect to research, there’s a big challenge: sometimes the published evidence sucks – even though it got through the peer review process and was approved by a big-name journal.

Of course not all evidence sucks. But if you’re considering whether to be cut open or eat chemicals (meds), you have a choice: trust blindly (“whatever you say, doc”) or take responsibility for understanding as much as you can.

As we’ll discuss, one big reason blind trust fails is that the evidence your doctor gets isn’t necessarily great, and most clinicians aren’t rigorously trained in how to scrutinize it. (They too are largely trained to trust the journal process.) So this is for them too.

Intended audiences

  • Patients and caregivers – the people on the receiving end of the treatment; the ones who make the decision to accept treatment.
  • Clinicians, for two reasons:
    • In a participatory relationship, the patient and clinician need to be on the same page regarding the basis for decisions, or one will think the other’s crazy.
    • As I said, in my experience most clinicians haven’t been rigorously schooled in the weakness of the info they were taught to trust. (This isn’t an insult; see homework below.)
  • Health policy people (government and non-profits), because they need to be firmly grounded in reality, or they can’t possibly make policies that work in reality (eh?)
  • Insurance companies (commonly euphemized as “care plans” or just “the plans”), who decide what will get paid for.  (I know some insurance companies don’t mind paying for stuff that doesn’t work; they basically get a commission on all spending. But others do care what works and what doesn’t – some even have staff who help patients understand the options! They need to be well informed too.)
  • Others, I’m sure.

This will make some people unhappy.

It’s the unhappiness that comes from realizing the world isn’t what you thought it was. And the unhappiness that comes from realizing you have to adjust.

But ladies and germs, disconnects like that are what keep a dysfunction in place and make problems intractable. So, comfortable or not, let’s get on with it. The unhappiness I anticipate:

  • Some clinicians don’t welcome questions from their patients. (Others do.) In my personal experience most of the ones who object don’t realize how weak the evidence is.
    • I hope they’ll remember what I learned in school: all science must be open to new information. (As SPM co-founder and ACOR founder Gilles Frydman said in 2010, “All knowledge is in constant beta.”)
    • I know clinicians have many pressures including short appointments. This doesn’t have to be done by an MD; in my view of the future, every “medical home” will have coaches who can help assess published material.
  • Some patients really don’t want to hear that the science they depend on – which has indeed produced miracles – has also produced crap sometimes. They especially don’t want to hear that clinicians – their clinicians, who they know are good people – aren’t perfect.
  • In general, everyone wants certainty – doctors and patients alike – so it’s unsettling to know you can’t have it. (Even the best science has a chance of errors, and all science is subject to correction.)

Important: This is not a “we reject science” series.

  • I love science. I personally am alive because of great medical research that created a harsh treatment delivered brilliantly by great clinicians at Beth Israel Deaconess in Boston. I love the training and clinical experience that made them able to save my life!
  • It included great laparoscopic surgery and orthopedic surgery developed by great skilled scientists and delivered by skilled, adroit surgeons & teams. Hooray for science!
  • But in the end, science knows that there is no certainty. They’re doing the best they can amid uncertainty. Heck, I myself live in uncertainty:
    • The best evidence (which is not great) says there’s a 50% chance my cancer will return, which would likely kill me
    • At diagnosis the evidence said my median survival was 24 weeks
    • On the flip side, the treatment that did cure me usually doesn’t work.

Bottom line: There Is No Certainty.

The art of designing, conducting and reporting research includes dealing accurately with this issue. Whatever you read, there’s always a chance it’s wrong.

In my view, the ultimate responsible patient understands this, accepts the uncertainty (as best a human can), and responds by saying “Okay, what are the options? And what are the chances they’ll work?”

If you fully understand that much research is shaky and deserves questioning, you can skip to the end and wait for round 2.  If not, read these past posts, because if you don’t realize there’s weakness, you have no reason to learn what comes next.

Here’s your homework.

Past posts establishing the need to be responsible for our decisions

These posts are from e-patients.net.

Making sense of health statistics (Nov. 2008)

A post about a profound article by Gerd Gigerenzer et al., making clear that only a small minority of clinicians really understand statistics about research and can answer important questions correctly. Imagine the effect of that on the quality of recommendations.

Note: this doesn’t mean they’re schlocks – this stuff is difficult! We just shouldn’t be in denial about the difficulty. (And, by the way, politicians who make a political issue of such numbers are certainly no better; see examples therein.)
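To see why this stuff is so hard, here’s a small sketch of the kind of question Gigerenzer asks clinicians. The numbers below are purely illustrative (not taken from his article): a screening test with 90% sensitivity, a 9% false-positive rate, and 1% disease prevalence. Translating the percentages into “natural frequencies” – imagine 1,000 actual people – shows why a positive test means far less than most of us guess:

```python
# Illustrative numbers only (a hypothetical screening test, not from Gigerenzer's article)
prevalence = 0.01          # 1% of the population has the disease
sensitivity = 0.90         # 90% of sick people test positive
false_positive_rate = 0.09 # 9% of healthy people also test positive

# Natural frequencies: picture 1,000 people screened.
n = 1000
sick = n * prevalence                            # 10 people have the disease
true_positives = sick * sensitivity              # 9 of them test positive
healthy = n - sick                               # 990 are healthy
false_positives = healthy * false_positive_rate  # ~89 of them ALSO test positive

# So of everyone who tests positive, what fraction is actually sick?
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive test means disease: {ppv:.0%}")  # roughly 9%
```

Nine true positives are swamped by roughly 89 false alarms, so a positive result means only about a 9% chance of disease – yet in Gigerenzer’s surveys many clinicians guess numbers closer to 90%. That’s the literacy gap he’s talking about.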

Lies, Damn Lies And Statistics: Collective Statistical Illiteracy (Nov. 2008)

Response the next day by ACOR founder Gilles Frydman – excellent additional information. When he first wrote this it was over my head, but I get it now.

No *other* conflict of interest, huh? (Nov. 2008)

Another by Gilles – a real-life absurd example of how impotent our “conflict of interest” rules are. It reports on an important article about a drug in the revered New England Journal of Medicine. The section on conflicts of interest by the authors is over 300 words long … then they say “No other potential conflicts have been reported.” Like, this is supposed to make us trust the article and NEJM??

Personally, I feel as comforted by that as I am by knowing who pays each lobbyist’s salary in Washington.

Clinical trials: Unfavorable results often go unpublished (Science Blog) (January 2009)

This was my first sign that medical science doesn’t live up to the standards I learned in high school(!).  If I ran an experiment in chemistry or physics five times, and only two worked, and I didn’t report the other results, Mr. Viger and Mr. Bauermeister sure wouldn’t have been pleased. But guess what’s common among the people who study the drugs that get fed to us? It’s common to suppress evidence. More on this in a moment.

And it’s not a statistical fluke. Last spring I met a doc who said active suppression by the drug maker is such an issue that he now tells a drug company he won’t do their study unless he gets to publish it regardless of the outcome. And many turn him down.

MedPage: Negative Data on Seroquel Suppressed by Drug’s Maker (Feb 2009)

One example of outright corruption:

“… tried to minimize the risk of diabetes and weight gain associated with its antipsychotic drug quetiapine (Seroquel), in part by “cherry-picking” data for publication. … lawsuits by some 9,000 people who claim to have developed diabetes while taking the drug. … company had “buried” three clinical trials and was considering doing so with a fourth. … Another … discussed ways to “minimize” and “put a positive spin” on safety data”

This is, as I say, outright corruption of the scientific process. In my view these people are knowingly putting patients’ lives at risk for profit, and should be disbarred or delicensed, if there were such a thing for scientists.

And perhaps there should be – licensing is done where competence is important and harm can come from doing it wrong.

(Investors, you should think about this too, including people who own mutual funds: when we say “putting lives at risk for profit,” it’s often expressed as “shareholder value.” That blood’s on your hands, too.)

(And don’t anyone tell me I shouldn’t be so harsh. Wake up; talk to those 9,000 patients.)

A quote I won’t soon forget (Oct. 2009)

The quote is from Marcia Angell MD:

It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.

Must-hear: four Journal of Participatory Medicine contributors discuss how we know what we know (August 2010)

Great podcast on the fundamentals of how we study what we study. Participants:
Peter Frishauf, founder of Medscape
Richard Smith, M.D., former editor of the British Medical Journal
Jessie Gruman, Ph.D., President, Center for Advancing Health [and then Co-Editor In Chief of the Journal of Participatory Medicine (JoPM)]
Larry Green, DrPH, ScD(Hon.), Professor of Epidemiology and Biostatistics, UCSF Helen Diller Family Comprehensive Cancer Center [and author of a piece in JoPM]

Atlantic: Lies, Damned Lies, and Medical Science (Oct. 2010)

A new article in the Atlantic bears striking parallels to a 2009 piece in the Journal of Participatory Medicine:

  • JoPM, Oct 21, 2009: “…most of what appears in peer-reviewed journals is scientifically weak.”
  • Atlantic, Oct. 16, 2010: “Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.”
  • JoPM 2009: “Yet peer review remains sacred, worshiped by scientists and central to the processes of science — awarding grants, publishing, and dishing out prizes.”
  • Atlantic 2010: “So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?”

That item on “doctors drawing on misinformation” is what I meant at the top. Clinicians are trained to trust peer reviewed research; the article details how erroneous that research can be.

The Decline Effect: Is there something wrong with the scientific method? (New Yorker) (Jan 2011)

This post, and its great comment stream, list many ways that a published finding often becomes less true as time goes by! Tops on my list: when I was in school we were taught that any good study could be repeated by another lab with the same results, but most published studies are never replicated by another lab.

Combine that with the Seroquel story above (and Ben Goldacre below) and you see why it’s prudent to question research, on anything important you’re considering doing to your body (or your baby’s or mother’s).

Really, I’m not kidding, read this one.

Richard Smith: Beware journals, especially “top” ones (BMJ blog) (June 2011)

Richard Smith, who edited the British Medical Journal for 25 years, has written another piece for the BMJ blog, citing a JAMA study showing “that of the 49 most highly cited papers on medical interventions published in high profile journals between 1990 and 2004 a quarter of the randomised trials and five of six non-randomised studies had been contradicted or found to be exaggerated by 2005.”

What’s an e-patient to do?? Especially when we “patients who google” are so often sneered at by physicians who rely on these same journals.

e-Patient Training via TED Talk: “Battling Bad Science” (Oct. 2011)

Great, entertaining TEDGlobal talk by Ben Goldacre MD (Twitter @BenGoldacre), including this line: “Real science is all about critically appraising the evidence for somebody else’s position.” Wait’ll you hear what actually happens.

Former NEJM editors on the corruption of American medicine (NY Times) (March 2012) (Added 9/2/12)

The Times ran a video interview with Dr. Angell and her former mentor Arnold Relman, who for decades have been warning of a growing “medical industrial complex.”

In 1984, Dr. Relman became the first editor of a medical journal to require authors to disclose financial ties to their subject matter and to publish those disclosures. He later came to suspect that simple disclosure was not enough, and his policy evolved to excluding all authors with financial interests from writing large educational reviews.

Note: Relman’s policy said “It’s not enough to say ‘By the way, I get money from this product I’m reviewing’ – you simply shouldn’t be writing about it.” Makes sense, right? But I had no idea that apparently it’s not easy to find untainted experts anymore, so Relman’s successor punted that rule:

That rule was reversed in 2002, after the journal’s current editor in chief, Dr. Jeffrey M. Drazen, took the job. Dr. Drazen and his colleagues reported that for some subjects, so few experts without financial ties could be found that the journal’s scope was becoming artificially curtailed. (Emphasis added)

What?? If we exclude tainted people, the NEJM won’t have enough authors??

Note: Drs. Relman and Angell are criticized as “pharmascolds,” but their successor said that.

“The cancer at the core of evidence-based medicine”: Ben Goldacre on the missing data (April 2012)

A hallway interview with Goldacre after his superb talk at TEDMED 2012. By now I started getting into analyzing just how skewed the situation is.

Also: Where’s the Missing Data? – the video, released later, of his talk that day.

————

To be clear, this whole series won’t be about corruption and suppression of evidence. This is just the initial “case for action” – one set of evidence that it’s reasonable for us to think for ourselves and ask questions. (And “us” is all of us, including clinicians.)

In subsequent posts we’ll get into how studies are designed, which deeply affects whether they’re measuring what they set out to measure. And then, beyond that, is whether what they’re measuring is even what patients want.

Comments? Critique? Please: respond below. I don’t know everything and a lot depends on this.

13 comments to New series: Understanding and Guiding Medical Research

  • Dave, one of my findings on the difficulty of publishing negative results.
    http://scopeblog.stanford.edu/2012/05/16/a-critical-look-at-the-difficulty-of-publishing-negative-results/
    I look up the positive side in google scholar and the negative side in bionot –
    http://bionot.askhermes.org/integrated/BioNot.uwm
    My pet peeve in the whole clinical research is the length of time it takes to get to practical use, some of my therapists said 30 years, others 20 years.
    Dean

    • e-Patient Dave

      Here’s the opening of the post Dean linked to:

      “Science is supposed to work like this: A researcher tests a question with an experiment, produces results of the experiment and publishes the work so it can be evaluated by peers. Other scientists can then run the same experiment and see if they get the same results.

      That’s the part I noted above, which rarely happens. That’s a great big #fail in the scientific method, good buddy.

      And it continues:

      But if the results don’t match the first experiment, it can be tough to actually get these “negative” findings published. …”

      That’s exactly as defective as if I told Mr. Bauermeister in tenth grade chemistry, “It worked the first time I tried it, but not the next time, so I’m rejecting the second try.” You know what any good science teacher would say to that.

      re the length of time it takes to get new findings into practice – I imagine you’re talking about Balas’s amazing IOM paper in 2001, which established that it takes an average of 17 years for HALF of clinicians to adopt new methods. That will be in this series too – it’s another major reason for engaged patients to learn what they can.

    • e-Patient Dave

      btw, Dean, I’d be deeply grateful if you could paste in an example of a simple search you’ve done for something, in Google Scholar (for the positive) and BioNot (for the negative). I’ve never heard of BioNot, and a simple example might be really useful, for clinicians as well as engaged patients / caregivers.

  • I highly recommend Lawrence and Lincoln Weeds’ “Medicine in Denial.”

    • e-Patient Dave

      Hi Bobby – thanks – I have a copy and haven’t been able to get time to read it yet (the life of the unfunded startup) – can you tell us something specific about what it says??

      (That’s why I didn’t just paste in a pile of links, I took time to include something about what each post says.)

      Of course I intend to read it when I can, but you should SEE the pile of books waiting for me…

      Thanks!

  • Dave,

    Thank you so much for diving into this complicated topic. We have much to learn from you on this, as on other matters. Thanks also for referring to the Jonah Lehrer piece about the scientific method — even if his journalism is suspect, given recent events, the point remains an important one.

    All in all, you have collected an amazing bibliography for us, your students! Thanks!

  • nancy

    An article in Nature (http://www.nature.com/nature/journal/v483/n7391/full/483531a.html) gets into the lack of reproducibility on the pre-clinical side.

  • Dave,

    BioNOT is described here:

    http://www.biomedcentral.com/1471-2105/12/420/

    When I search for fish oil in google scholar I get links to articles, when I search for it in Bionot i get negated references from the actual article. I get more information faster by BioNOT. I love what you’re doing with this subject, it’s near and dear to my heart.

    Dean

    • e-Patient Dave

      Dean, this is fascinating – I’m amazed that I’ve never heard of it before. Except I see that the article’s less than a year old.

      Readers, has anyone else seen it? The idea is that while you may normally find plenty of positive stories when you search for a combination, Bionot takes the same phrase and finds articles that have them with a “not” or “no” nearby. (See the article for the particulars – it’s a bit thick, not written in layman’s terms, but that’s the idea)

      The article says they’d indexed 32 million sentences with appropriate “not”s in them (see article for details); the Bionot search screen now says 500 million sentences.

      The subject matter is limited, it appears – the article talks about genomics; I searched famous debacles like “Vioxx arthritis” and “Seroquel schizophrenia” and didn’t find coverage of the scandals. So, as I say, it appears they’re focusing so far on other things.

      I wish their home page would make that clear, because I think this can be a GREAT resource for all the stakeholders I listed above! And I hope it continues to expand – rapidly.

      It says Bionot was supported by a grant from the National Institutes of Health. Thanks, NIH!

  • Amanda

    Dave,

    I’m so thankful for the insight this series will bring to your readers! As a researcher, a scientist, and a humanitarian, I’d like for healthcare consumers to be their own best advocates.

    When I began my career it started with a simple guideline: We must do what is right. For me, and for most scientists I work with, it is exciting to find “favorable” data at the summation of our research. However, as much as we feel good about “favorable” outcomes, we are always aware that our research MUST be validated and proven reliable.

    Likewise, when we are “disappointed” in our data, we must push forward and open our results to others to validate and prove reliability. I certainly don’t want to affect another’s life with faulty research.

    My research is small beans, compared to most medical research. I observe. I listen. I aggregate and follow the relationships of data. Even if my “small beans” research does not have a global effect, it must still be replicated, validated, and proven reliable.

    I do hope that I will see some mention of the limitations placed on conducting valid and reliable research. There are many reasons that it takes a great deal of time to “prove” any issue, in as much as anything can truly be proven…

    All the best and highest regards,
    A

    • e-Patient Dave

      Hi Amanda –

      > I do hope that I will see some mention of the limitations placed on conducting valid and reliable research.

      What limitations do you have in mind? Do you mean the limitations inherent in all research, or limitations “placed” by outside influences?

      btw, as I said, this won’t be a series about malfeasance – it’s a series about understanding research so we can all make more well-informed decisions.

      • Amanda

        Good afternoon, Dave!

        Thank you for the response to my reply. When I speak of limitations, I am referring to the certain requirements inherent in any research. Let me provide an example of how public education concerning research limitations will be beneficial. A colleague of mine has recently begun researching the benefits of social support for those with a chronic condition. This study is quite specific to a certain mode of support, which has raised the hairs of some people in the study population. They have been unkind in their criticism of her study, because they are presently unaware of why the study must focus on one mode of social support, even one particular portion of that mode. It comes back down to measurability, validity, and reliability. The narrow focus isn’t meant to exclude or alienate any self-selecting participants, but to ensure that the results are specific and measurable. *We all like fruit salad, but a good foundation of research will focus on why we like one fruit in the salad. After that is figured out, we move on to why two fruits are so well liked. So on and so on…

        *Disclaimer- I’ve made a huge generalization for the purpose of education. You may or may not like fruit salad. ;D

        >btw, as I said, this won’t be a series about malfeasance – it’s a series about understanding research so we can all make more well-informed decisions.

        Oh yes, and I hope I’ve not come across as being upset or critical. I’m actually applauding your effort and fully support you!
