
Alternatives to randomisation in the evaluation of public-health interventions: design challenges and solutions

Abstract

There has been a recent increase in interest in alternatives to randomisation in the evaluation of public-health interventions, and in particular in the difficulties of drawing causal inferences from such evidence. This paper describes specific scenarios in which randomised trials may not be possible, and exemplifies and discusses alternative evaluation strategies. We conclude that in many scenarios these barriers are surmountable, so that randomised trials (including stepped-wedge and cross-over trials) remain possible. We rank alternative designs and suggest that evidence from non-randomised designs is more convincing when: confounders are well understood, measured and controlled for; there is evidence for causal pathways linking intervention and outcomes, and/or against other pathways explaining the outcomes; and effect sizes are large. We suggest that non-randomised designs might provide adequate evidence to inform decisions when interventions are demonstrably feasible and acceptable and existing evidence suggests little potential for harm, but caution that such designs may not provide adequate evidence when intervention feasibility or acceptability is doubtful, and where existing evidence suggests that benefits may be marginal and/or harms possible.