Hope Springs Ephemeral?

I sometimes wander through medical journals like a tourist, stopping here to gaze admiringly at a headline, pausing there to read a letter or two. Occasionally, I stumble upon a piece of information tucked away like a child under a quilt on a winter bed - interesting stuff that would still be unlikely to find its way onto the six o’clock news, even in summer.

Maybe it has something to do with my own personal epigenetics, but there was a piece in the July Canadian Medical Association Journal that intrigued me. The article was entitled ‘Stanford researcher contends most medical research results are exaggerated’ and came from a keynote address by Dr. J. P. A. Ioannidis at the World Congress on Research Integrity in Montreal in May. Now at first blush, that smells of a pot calling its researcher-colleague kettle dirty, but think about the ramifications if it’s true.

To quote from the article: ‘Empirical studies suggest that most of the claimed statistically significant effects in traditional medical research are false positives or substantially exaggerated.’ Indeed, ‘Even the pharmaceutical industry is now trying to replicate well-regarded studies before they invest in developing particular drugs.’ In fact, the speaker went on to note that ‘researchers at Amgen, a biopharmaceutical company, could replicate only 6 of 53 studies’ [in one project]… So, ‘Most of the time, clinicians should not jump on the results of a single study, even if it came out in a prestigious journal and was widely covered… Bias and random error are the chief reasons research findings often lack credibility.’
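To see how that could be, consider a back-of-envelope sketch - my numbers, purely for illustration, since the article gives none. Suppose only one in ten hypotheses put to the test is actually true, we call a result ‘significant’ at α = 0.05, and our studies have 80% power. The expected proportion of significant findings that are genuinely true - the positive predictive value - works out as:

\[
\text{PPV} \;=\; \frac{\text{power}\times\text{prior}}{\text{power}\times\text{prior} \;+\; \alpha\,(1-\text{prior})} \;=\; \frac{0.8 \times 0.1}{0.8 \times 0.1 \;+\; 0.05 \times 0.9} \;=\; 0.64
\]

So even in that optimistic scenario, roughly a third of ‘positive’ results are false; drop the power to the 20% typical of small, underpowered studies and the PPV falls to about 0.31 - most significant findings would then be wrong, before any bias is even counted.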

And why do I find that so fascinating, one might ask? Well, for one thing, the stuff I see tends only to be published if there is some positive effect, and seldom - if ever - when there is no effect. It’s like everybody I encounter in a journal is correct; things work. But in real life, where the patients are not highly selected (or rejected) to satisfy the needs of an experiment - i.e. my practice, or that of my colleagues - there’s more of a Bell Curve effect.

I can live with that; I believe in the Scientific Method: propose a hypothesis, gather data to test it, and then see whether the results can be replicated or can predict the outcome of similar future tests. Any conclusion should always be subject to revision or even rejection if a better theory or explanation comes along. Science is always a work in progress, an unfinished sandwich…

So it’s nice to have it confirmed that we shouldn’t always rely on conclusions drawn from one set of data - however large and convincing. They could be the result of a poor - or at least unfortunate - choice of candidates to study, or the results could simply be too good to be true. Ioannidis again: ‘Big discoveries do happen… but most of the effects that are floating around to be discovered are probably pretty small. When we see large effects, probably we should adjust them downward.’

We see things that seem like good ideas being tested all the time in Medicine; sometimes they appear so intuitively correct that minimal sober second thought seems to have been expended before their adoption into mainstream practice - minimal Scientific Method? Without denigrating a now-accepted device in obstetrics - the fetal heart monitor and its use in labour and antenatal testing - let me say that there was initially a surge of what can only politely be termed prophylactic Caesarean Sections. It was, at least at first, a classic failure to reject the Null Hypothesis. And yet, in its early use, if you accepted an almost unconscionably high rate of Caesarean Sections, it seemed very effective at preventing fetal damage in labour… until it was realized that not every fetal heart rate deceleration, not every bout of fetal tachycardia, not every episode of decreased fetal heart rate variability necessitated immediate operative delivery.

Or how about the IUD - the intrauterine device? Stones worked in camels (at least apocryphally), so what the heck. And surrogate plastic stones were very effective at preventing pregnancies in humans as well. Only later did we find that they resulted in a fairly high rate of infection, and even sterility… Admittedly, after reconsidering their design and choosing a patient population at lesser risk, we have shown them to be an asset. But they had more liabilities than benefits at first… More than we had predicted; more than we had planned for.

We can see the Hope in both these cases - and indeed in the cases to which Dr. Ioannidis refers: the hope for a significant breakthrough in treatment or surveillance. Sometimes it just takes perseverance in the face of adversity: knowing (hoping?) you are on to something. But at what stage should one back off - or at least step back far enough to see if the road leads anywhere? It’s easy enough to look back in time and judge that they were never going to succeed with forceps that could be manually tightened around the baby’s head (so they wouldn’t come off if there was a need for a greater pull), or to see that sterilized string - instead of a hard monofilament nylon line - on the end of an IUD (so it wouldn’t injure a penis) would act as a kind of wick, drawing bacteria from the vagina into the uterus… Even the idea of over-the-counter (and therefore inevitably indiscriminate) availability of antibiotics for gonorrhea, in countries where infection rates were high and medical help unaffordable, seemed forward-thinking at the time. I mean, who would have thought that gonorrhea would be so good at developing resistance to them?

So I suppose the observation that research results can be exaggerated shouldn’t come as a surprise. We’ve often let hope - belief - lead the way; looking for a path is what takes us somewhere else, somewhere we’ve never been, or have only glimpsed darkly in the night. But we must always be willing to retrace our steps, or at least listen when others are shouting to us from another trail.