Rather than arguing about the suitability of natural experimental methods to inform decisions, we need to focus on refining their scope and design, say Peter Craig and colleagues

Natural experiments have long been used as opportunities to evaluate the health impacts of policies, programmes, and other interventions. Defined in the UK Medical Research Council's guidance as events outside the control of researchers that divide populations into exposed and unexposed groups, natural experiments have contributed greatly to the evidence base for tobacco and air pollution control, suicide prevention, and other important areas of public health policy.[1]

Although randomised controlled trials are often viewed as the best source of evidence because they carry less risk of bias, reliance on them as the only credible source of evidence has begun to shift, for several reasons. Firstly, policy makers are increasingly looking for evidence about "what works" to tackle pervasive and complex problems, including the social determinants of health,[2,3] and these are hard to examine in randomised trials. In Scotland, for example, legislation to introduce a minimum retail price per unit of alcohol included a sunset clause, which means that the measure will lapse after six years unless evidence is produced that it works. This has resulted in multiple evaluations, including natural experimental studies using geographical or historical comparator groups.[4] Similarly, the US National Institutes of Health has called for greater use of natural experimental methods to understand how to prevent obesity,[5] and a consortium of European academies has called for their greater use to understand policies and interventions to reduce health inequalities.[3]

Secondly, a wider range of analytical methods developed within other disciplines, mostly by economists and other social or political scientists, is increasingly being applied to good effect. A good example is the use of synthetic control methods …