On the clinically important difference
ACP J Club. 1992 Sep-Oct;117:A16. doi:10.7326/ACPJC-1992-117-2-A16
Many of the studies abstracted in ACP Journal Club contain information about clinical interventions that could be important to clinical care, but few are definitive enough to cause readers to change their practice without assessing the evidence carefully. Measures of clinical importance that can be used to weigh evidence focus largely on making a difference to patients, to their families, or to society. In deciding whether a treatment makes a clinically important difference, the reader must first assess the importance of the clinical outcomes, then the size of the treatment effect, the extent to which benefit is offset by adverse effects, and, ultimately, whether the resources required to institute the treatment are affordable in comparison with other uses of the resources. Although exact formulae exist to determine the size of the effect of a therapeutic intervention, none is available to assess its clinical importance. This editorial provides some considerations that bear on the clinically important difference between treatments.
The first step in evaluating any article on therapy is assessment of the validity of the study, which is determined by the strength of its methods of investigation. The second step is determining whether a treatment effect exists. Treatment effects are typically expressed as relative risk reductions for a treatment compared with placebo or standard therapy, with 95% confidence intervals and P values. If the P value for the difference between treatment and control is greater than 0.05, the result is not statistically significant, and the evidence is not sufficient to warrant considering the treatment clinically important. A P value of less than 0.05 makes chance an unlikely explanation for the observed effect but does not guarantee that the effect is clinically important. Once statistical significance has been established, the key question becomes this: Should we change our clinical practice to incorporate this new treatment? The decision can be approached through consideration of the four questions that follow.
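The distinction between statistical and clinical significance can be made concrete with a small sketch. The trial numbers below are invented for illustration, and the helper functions are our own, not drawn from any study discussed here; the point is that a very large trial can yield P < 0.05 for an absolute benefit too small to matter at the bedside.

```python
import math

def normal_sf(z: float) -> float:
    """Upper-tail probability of the standard normal (stdlib erfc only)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def two_proportion_p(events1: int, n1: int, events2: int, n2: int) -> float:
    """Two-sided P value for a difference in two proportions (pooled z-test)."""
    p1, p2 = events1 / n1, events2 / n2
    pooled = (events1 + events2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return 2 * normal_sf(abs(p1 - p2) / se)

# Hypothetical mega-trial: event rates of 5.5% vs 5.0% in 100 000 patients per arm.
p = two_proportion_p(5500, 100_000, 5000, 100_000)
arr = 5500 / 100_000 - 5000 / 100_000   # absolute risk reduction = 0.005
# P falls far below 0.05, yet the absolute benefit is half a percentage point:
# statistical significance alone does not settle clinical importance.
```

Whether a risk reduction of half a percentage point is worth the costs and burdens of treatment is precisely the judgment the four questions below are meant to guide.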
Is the adverse clinical outcome that the treatment prevents or delays really important?
Some treatments are administered to reduce mortality or delay major morbid events, such as crippling strokes. Many treatments have less dramatic effects, however, and are given in the hope that they will make patients feel better or perform usual activities more easily. Measures of clinical importance then often relate to the patient's symptoms, functional status, or health-related quality of life. For example, in a randomized double-blind study of chronic airflow limitation, patients were given 500 µg, 1000 µg, or 1500 µg of terbutaline 4 times daily (1). Despite small increments in peak flow rates and forced expiratory volume in 1 second (FEV1) at the higher doses, no differences were noted in functional exercise capacity as measured by a 6-minute walk test or in self-reported quality of life (2). Higher doses of this drug therefore appeared clinically unimportant from the patient's point of view. If a trial measures only surrogates (such as FEV1) for clearly clinically important measures of morbidity, a known, strong relation must exist between the surrogate and a clinically important outcome, and the expected effect on that outcome must be substantial to warrant a change from usual therapy.
Were the (clinically important) treatment benefits large enough to make the therapy worth prescribing?
Once it has been determined that the measured outcomes are clinically important, the next step is to determine whether the magnitude of the effect is large enough to warrant implementation of the treatment. For this, we need an accurate estimate of the effect size. Previous editorials have described two complementary approaches: confidence intervals (3) and the number needed to treat (NNT) (4).
The confidence interval combines consideration of statistical significance and effect size. In a meta-analysis of trials examining the differential effect of stress ulcer prophylaxis on overt bleeding (5), results showed that prophylaxis with histamine-2-receptor antagonists reduces the incidence of overt hemorrhage in the critically ill population (relative risk, 0.71; 95% CI, 0.55 to 0.83). Because the 95% confidence interval around the relative risk excludes 1.0, this benefit is statistically significant. The upper limit of the confidence interval, 0.83, indicates the most conservative estimate of the treatment effect that is still consistent with the trial results. If this most conservative estimate of the treatment effect is clinically important, it clearly favors using the treatment.
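For readers who want to see where such an interval comes from, the usual large-sample method works on the log scale. The 2 × 2 counts below are invented for illustration only and do not reproduce the meta-analysis data:

```python
import math

# Hedged sketch: 95% CI for a relative risk via the standard log-RR method.
a, n1 = 20, 1000   # events / total, prophylaxis arm (hypothetical counts)
c, n2 = 50, 1000   # events / total, control arm (hypothetical counts)

rr = (a / n1) / (c / n2)
se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
# With these counts the upper limit stays well below 1.0, so the benefit
# is statistically significant; its clinical importance is a separate judgment.
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The asymmetry of the interval on the risk-ratio scale is why the calculation is done on the log scale and then exponentiated.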
The NNT (6) is derived from the absolute risk reduction. In the example of stress ulcer prophylaxis, the average risk for bleeding was 0.05 without treatment and 0.02 with it. The absolute risk reduction was 0.05 - 0.02 = 0.03, and the NNT is the reciprocal of this: approximately 33 patients must be treated to prevent 1 person from bleeding overtly from stress ulcers. The clinically important difference in this context is the NNT below which a physician finds the effort worthwhile.
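The arithmetic is simple enough to sketch; the risks are those quoted above, and the function name is our own:

```python
# Hedged sketch: NNT as the reciprocal of the absolute risk reduction (ARR),
# using the stress ulcer prophylaxis figures quoted in the text.

def number_needed_to_treat(risk_control: float, risk_treated: float) -> float:
    """NNT = 1 / ARR, where ARR = risk without treatment - risk with it."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("treatment shows no absolute benefit")
    return 1 / arr

nnt = number_needed_to_treat(0.05, 0.02)   # ARR = 0.03, so NNT is about 33
```

Note that the NNT depends on the baseline risk: the same relative risk reduction applied to a lower-risk population yields a smaller ARR and hence a larger NNT.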
Trials using quality of life as an outcome measure are increasing. Translating the study results into meaningful terms for clinical care is crucial in their interpretation. The minimally important difference defines the smallest difference in function that the patient judges to be worth the risks and trouble of complying with the treatment (2). Unfortunately, the minimally important difference has not yet been defined for many measures.
Are the adverse effects of the treatment small or infrequent enough that the treatment generates more good than harm?
A balanced report of a treatment gives information on its adverse effects as well as its benefits, for example, the efficacy of anticoagulation compared with the risk of bleeding. If adverse effects of the treatment are not reported, readers should look elsewhere for the information or wait for a more comprehensive assessment. Although quality of life measures may capture benefit and harm in a single measure, many studies record separate lists of each, and clinical judgment must be used to appraise the net effect.
Are the clinical and other resources required to apply a treatment better expended pursuing this rather than some other clinical action?
Societal and institutional points of view are increasingly important in deciding what constitutes a clinically important difference. These can be captured in studies of cost-effectiveness, cost-benefit, or cost-utility and may be abstracted in the Health Economics section of ACP Journal Club if they meet strict criteria. For example, health technologies can be considered with respect to their cost or savings to society using quality-adjusted life-years (QALYs). It has recently been suggested that technologies costing less than $20 000/QALY should be available to all, whereas technologies costing $20 000 to $100 000/QALY should be more limited (7). Economic analyses can be challenging reading for the clinician; however, guides for their interpretation have been published previously (8).
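As a rough sketch of how such thresholds might be applied, the dollar cut-offs below are those quoted above (in contemporaneous US dollars); the function, the category labels, and the handling of technologies above $100 000/QALY are our own assumptions, not the source's:

```python
# Hedged sketch: triaging a health technology by cost per QALY gained,
# using the $20 000 and $100 000 thresholds quoted in the text.

def qaly_category(total_cost: float, qalys_gained: float) -> str:
    """Return a coarse availability category for a technology (labels are ours)."""
    cost_per_qaly = total_cost / qalys_gained
    if cost_per_qaly < 20_000:
        return "make available to all"
    if cost_per_qaly <= 100_000:
        return "more limited availability"
    return "weak grounds for adoption"   # assumed behaviour above $100 000/QALY

category = qaly_category(150_000, 10)   # $15 000 per QALY gained
```

In practice such thresholds are only one input to the decision; the published guides cited here (8) discuss how to appraise the economic analyses themselves.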
In summary, few individual studies provide enough information to allow one to make straightforward decisions about changing clinical practice. ACP Journal Club does try to help, however, by providing commentaries from experts who are familiar with other literature in the field, by grouping studies on the same topic together, and by abstracting scientific overviews, meta-analyses, and economic analyses. We also provide guides such as this one, to help define the role that readers themselves must play in decisions about the implementation of new findings.
Deborah Cook, MD
David L. Sackett, MD
3. Altman DG. Confidence intervals in research evaluation [Editorial]. ACP J Club. 1992 Mar-Apr;116:A28-9.
4. Laupacis A, Naylor CD, Sackett DL. How should the results of clinical trials be presented to clinicians? [Editorial]. ACP J Club. 1992 May-Jun;116:A12-4.
8. Department of Clinical Epidemiology and Biostatistics, McMaster University. How to read clinical journals: to understand an economic evaluation. Parts A and B. Can Med Assoc J. 1981;130:1428-33, 1542-9.