Indirect Comparison: a fundamental breakthrough in evidence synthesis. But how valid is it?
By Laure Ngouanfo
Deciding on the benefits or harms of an intervention, or on the validity of a clinical hypothesis, on the basis of a single study alone is unrealistic: results often vary from one study to the next. A meta-analytic approach synthesizes the body of evidence gathered through a systematic search process and typically generates estimates of the comparative effects of all relevant interventions, along with the uncertainty around them. It is an essential, quick and cheap tool used by (inter)national policy-making bodies and regulatory agencies.
We call AB trials the body of evidence from all the relevant randomised controlled trials (RCTs) comparing treatments A and B for a specific clinical condition. A meta-analysis (MA) of the AB trials produces an overall estimate of the effect of A relative to B.
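To make that synthesis step concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis of the AB trials. The per-trial log odds ratios and standard errors are hypothetical, made up purely for illustration.

```python
# Minimal sketch: fixed-effect, inverse-variance meta-analysis of AB trials.
# The effect sizes and standard errors below are hypothetical.
import numpy as np

log_or = np.array([0.35, 0.10, 0.42])  # per-trial log odds ratios, A vs B (made up)
se = np.array([0.20, 0.15, 0.25])      # per-trial standard errors (made up)

w = 1.0 / se**2                          # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)  # pooled log odds ratio dAB
pooled_se = np.sqrt(1.0 / np.sum(w))     # standard error of the pooled estimate

print(f"dAB = {pooled:.3f} (SE {pooled_se:.3f})")
```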
Bucher et al. (1997) introduced the Indirect Comparison (IC), also referred to as indirect MA or adjusted IC, to tackle situations in which only AC trials and BC trials are available, yet policymakers urgently need to inform the public about A relative to B. They may not be able to wait for head-to-head RCTs to be carried out, and are thus compelled to rely on the IC: but is it valid at all?
The IC of A vs B, IndAB, is adjusted by the results of the direct comparisons (dAC and dBC) with the common comparator C, so any bias in the AB comparison will be inherited from the AC and BC trials. It is therefore reasonable to examine the quality as well as the similarity of the trials within each comparison (key assumptions inherited from meta-analysis that ensure the validity of results), and additionally across the AC and BC comparisons.
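Bucher's adjustment is simple arithmetic: the indirect estimate is the difference of the pooled direct estimates, IndAB = dAC - dBC (equivalently dAC + dCB), and the variances add, so the IC is inherently less precise than direct evidence of the same size. A minimal sketch, with hypothetical pooled estimates:

```python
# Minimal sketch of Bucher's adjusted indirect comparison.
# d_AC and d_BC are pooled direct estimates (e.g. log odds ratios) with
# their standard errors; all numbers are hypothetical.
import numpy as np

d_AC, se_AC = 0.50, 0.15  # A vs C, from a meta-analysis of the AC trials (made up)
d_BC, se_BC = 0.20, 0.18  # B vs C, from a meta-analysis of the BC trials (made up)

ind_AB = d_AC - d_BC                   # indirect estimate of A vs B via C
se_ind = np.sqrt(se_AC**2 + se_BC**2)  # variances add: the IC is less precise
ci = (ind_AB - 1.96 * se_ind, ind_AB + 1.96 * se_ind)

print(f"IndAB = {ind_AB:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```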
Randomly allocating patients to two or more intervention groups automatically creates a balanced distribution of (un)known and (un)measured prognostic factors across intervention groups within an RCT. However, "randomization is not sufficient for comparability". Blinding of outcome assessors, clinicians and/or (some) participants is a critical step of a trial, as it ensures the integrity of the process:
- pre-randomisation blinding (allocation concealment): the treatment allocation sequence should remain undisclosed to all parties
- post-randomisation blinding: parties should remain unaware of who is taking which treatment
- blinding of the outcome assessment: unblinded investigators might report subjective outcomes in a way that favours an intervention.
A lack of blinding (or unclear blinding) and the exclusion of patients after randomisation may result in an overestimate of the relative effects.
A few effect-modifying covariates that are a threat to the internal validity of trials are:
- patient characteristics: disease severity at baseline, age, sex, gender; (pregnant) women, children, and old, frail or vulnerable people are commonly not allowed in trials
- publication bias: the body of evidence should not only include positive or published trials. Non-statistically-significant trials can strengthen the results or add supportive information to the body of evidence.
Patients are randomized within trials, not across trials, so there is a real risk that patient characteristics are not comparable across trials and therefore not comparable, on average, across comparisons. Trial quality, settings, outcome measures, follow-up duration, treatment dose and treatment indications might not be comparable across trials either.
Another important aspect is the generalisability of findings: we want to be able to say that if A is better than C at curing the patients involved in the AC trials, then it is still better than C for the patients in the BC trials, and, similarly, to extend the efficacy of B relative to C to the population of the AC trials. Doing so assumes we are assessing the same outcome as within a single three-arm trial ABC, and we know that within RCTs:
- relative treatment effects are consistent: IndAB = dAB = dAC + dCB, implying that if B is better than C (i.e. dCB > 0) and C is better than A (i.e. dAC > 0), then B is better than A: the comparison is transitive through C. (d is the difference in treatment outcomes; d could be the log odds ratio or the mean difference. A small numeric consistency check is sketched after this list.)
- treatments have the same indications: we would not want A to be a first-line regimen while B and C are second- or fourth-line.
- patient populations in C are considered homogeneous: if the patients randomized to C in the AC trials differ in any characteristic (age, dose administration) from those in the BC trials, they should at least be similar on average (e.g. similar mean age). A scenario to avoid: C is an ointment in the AC trials but an injection in the BC trials.
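Here is the consistency check promised above: when both a direct and an indirect estimate of AB exist, their difference can be tested with an approximate z-test, in the spirit of Bucher's approach (the two sources are independent, so the variances add). All numbers are hypothetical.

```python
# Minimal sketch: z-test of consistency between the direct estimate dAB and
# the indirect estimate IndAB = dAC + dCB. All inputs are hypothetical.
import numpy as np
from scipy import stats

d_AB_dir, se_dir = 0.25, 0.20  # direct AB estimate (made up)
d_AB_ind, se_ind = 0.30, 0.23  # indirect AB estimate via C (made up)

diff = d_AB_dir - d_AB_ind                # inconsistency estimate
se_diff = np.sqrt(se_dir**2 + se_ind**2)  # independent sources: variances add
z = diff / se_diff
p = 2 * (1 - stats.norm.cdf(abs(z)))

print(f"inconsistency = {diff:.3f} (z = {z:.2f}, p = {p:.3f})")
```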
An imbalance in the effect modifiers across the AC and BC comparisons introduces confounding bias into the indirect comparison AB. That being said, to test the validity of an IC, one has to check the similarity of the average distributions of the effect modifiers. However, if the two sets of comparisons are similarly biased, the IC will be unbiased.
Sometimes direct evidence on AB is available but insufficient, so it might borrow strength from the indirect evidence when the two are combined (in a Mixed Treatment Comparison (MTC), or Network Meta-Analysis) to increase statistical power and improve the precision of the relative estimate. However, if the direct and indirect evidence do not both reflect the true relationship between the treatments in the same way, their combination is invalid. Knowing which of the indirect or direct evidence provides the less biased estimate remains a subject for further research.
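Under consistency, a simple way to see this borrowing of strength is to pool the direct and indirect estimates by inverse-variance weighting: the mixed estimate is more precise than either source alone. A minimal sketch, again with hypothetical inputs:

```python
# Minimal sketch: pooling direct and indirect AB evidence by inverse-variance
# weighting, as in a simple mixed treatment comparison. Valid only under
# consistency; all inputs are hypothetical.
import numpy as np

d_dir, se_dir = 0.25, 0.20  # direct AB estimate (made up)
d_ind, se_ind = 0.30, 0.23  # indirect AB estimate via C (made up)

w_dir, w_ind = 1 / se_dir**2, 1 / se_ind**2
mixed = (w_dir * d_dir + w_ind * d_ind) / (w_dir + w_ind)
mixed_se = np.sqrt(1 / (w_dir + w_ind))  # smaller SE than either source alone

print(f"mixed dAB = {mixed:.3f} (SE {mixed_se:.3f})")
```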
When appraising a new therapy, models that include study- or patient-level covariates are best for assessing its potential added value and for identifying subgroups where its efficacy appears most promising. However, effect modifiers are often under-reported, so policymakers might make uncertain or wrong decisions because they are based on lower-quality or limited evidence. Postponing the decision might be clever, but the quality of decision-making will certainly be increased by being transparent and explicit. Following Bucher's work, Lu and Ades (2004) conceived the MTC in a Bayesian framework incorporating multi-arm studies. This approach has proved powerful in showing how parameter uncertainty can be combined with variation within individual trials and heterogeneity in meta-analyses.
Bibliography
- Heiner C. Bucher et al. (1997). The Results of Direct and Indirect Treatment Comparisons in Meta-Analysis of Randomized Controlled Trials. J Clin Epidemiol 50(6):683-691.
- Jeroen P. Jansen et al. (2011). Interpreting Indirect Treatment Comparisons and Network Meta-Analysis for Health-Care Decision Making: Report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: Part 1. International Society for Pharmacoeconomics and Outcomes Research (ISPOR), Elsevier Inc. doi:10.1016/j.jval.2011.04.002.
- G. Lu and A. E. Ades (2004). Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine 23:3105-3124.
- F. Song et al. (2008). Adjusted indirect comparison may be less biased than direct comparison for evaluating new pharmaceutical interventions. Journal of Clinical Epidemiology 61. doi:10.1016/j.jclinepi.2007.06.006.
- Sofia Dias, A. E. Ades, et al. (2018). Network Meta-Analysis for Decision-Making. New York: John Wiley & Sons, Inc.