Why Did the Randomized Clinical Trial Become the Primary Focus of My Career?
Abstract
I suppose this process began when I brought my inquiring personality to the University of Illinois College of Medicine in 1956 where, despite its well-deserved high reputation, a recent therapeutic scandal was smoldering (and occasionally bursting into flames). The university's Vice-President and famous physician-physiologist Dr. Andrew Ivy (a renowned gastrointestinal physiologist who had represented the American Medical Association at the Nuremberg Nazi Doctors Trial and subsequently became Executive Director of the National Advisory Cancer Council and a director of the American Cancer Society) had recently been accused of fraudulently defending the efficacy of a quack cancer remedy, Krebiozen (which turned out to be simple creatine) [1]. Although none of my teachers (some of whom were involved in attempts to resolve the dispute) ever spoke of the scandal, there was an atmosphere of skepticism toward authority figures around the place that fostered iconoclasm.
For example, by 1959 I had become a final-year medical student, and I once found myself responsible for a teenager who had been admitted to a medical ward with hepatitis (this episode is described in detail elsewhere, both in my answer to the question: Tell us about medical school. What happened there, and how did it shape your later career? and in an essay I wrote for the James Lind Library—a 1955 clinical trial report that changed my career) [2]. After a few days of enforced total bed rest—the standard management of the condition—his spirits and energy returned and he asked me to let him get up and around. I felt I needed to look at the relevant evidence to guide my response to his request. I went to the library and came across a remarkable report [3] whose lead author was Tom Chalmers. A meticulously conducted randomized trial had made it clear that there was no good evidence to justify keeping patients with hepatitis in bed after they felt well. Armed with this evidence, I convinced my supervisors to let me apologize to my patient and encourage him to be up and about as much as he wished. His subsequent clinical course was uneventful.
My skepticism gathered momentum during my postgraduate training in internal medicine: the better I became at diagnosing my patients' illnesses, the more frustrated I became at my profession's collective ignorance about how I should treat them, or whether I should treat them at all. I was already caring for patients at McMaster when the practice of treating "peptic" ulcers by freezing stomachs came into question [4], and before 1967 [5], the "experts" advised against treating symptomless diastolic blood pressures of less than 130 mm Hg.
Contemporary therapeutics was mostly based on clinical observations of treatments applied by expert clinicians. But I came to the conclusion that there were four things wrong with the way they were using their clinical observations in those days to decide whether a treatment did more good than harm; more precisely, I was worried that these four "wrongs" destroyed our ability to make "fair comparisons" of the effects of different treatments. The validation of these worries both initiated and reinforced my decision to devote most of my career to randomized controlled trials (RCTs).
Authors
David L. Sackett