
Can the UK learn from developing countries? The case for proper policy evaluation


On Monday the respected economist Professor Esther Duflo delivered the IFS annual lecture, discussing approaches to helping the world's poor (see slides). Professor Duflo drew on several novel examples of randomised controlled trials (RCTs) - a method which she has put to good use in much of her own research. Randomised trials have done much to improve social policies in the developing world, and a key lesson is that we should make greater use of this important tool for evidence-based policymaking in the UK.

In an RCT, a policy is evaluated by randomly assigning it to some individuals or regions (a "treatment" group) but not to others (a "control" group). The effectiveness of the policy can then be assessed by looking at differences in outcomes between these treatment and control groups. Common in medical research, RCTs are increasingly favoured by economists as well because randomisation removes contaminating factors correlated with both take-up of a policy and the outcomes of interest, allowing us to estimate a policy's direct, causal impact more easily. There are of course some limitations to the use of RCTs, and they will not be appropriate on all occasions, but they can often be extremely useful as a means to assess objectively how well particular programmes are working.
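The logic of the method can be sketched in a few lines of code with simulated data. The programme, its effect size and the outcome scale below are purely illustrative assumptions, not drawn from any real trial: because assignment is random, the simple difference in mean outcomes between the two groups recovers the causal effect.

```python
import random
import statistics

random.seed(42)

# A hypothetical programme that raises an outcome (say, a test score)
# by a true average effect of 2.0 points. These numbers are illustrative.
TRUE_EFFECT = 2.0
n = 5000

# Randomly assign each individual to the treatment or control group.
treatment = [random.random() < 0.5 for _ in range(n)]

# Outcomes: baseline variation plus the effect for treated individuals only.
outcomes = [random.gauss(50, 10) + (TRUE_EFFECT if t else 0.0)
            for t in treatment]

treated = [y for y, t in zip(outcomes, treatment) if t]
control = [y for y, t in zip(outcomes, treatment) if not t]

# Randomisation means the two groups are alike on average in everything
# except the programme, so the difference in means estimates its causal impact.
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

With a sample of this size the estimate lands close to the true effect; no assumptions about who chose to take up the programme are needed, which is precisely what makes the comparison "clean".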

Because of this, RCTs are playing an increasingly important role in the evaluation of policy and are now especially common in developing countries, where Professor Duflo's work has been particularly influential. IFS's own Centre for the Evaluation of Development Policy (EDePo) has been involved in evaluating the results of RCTs from India to Colombia in areas such as microfinance, reproductive health, early childhood development and education. The evidence gathered in such trials has had an important influence on policy for two reasons. First, estimates of programme impacts from a well-designed and well-implemented RCT are clean and easy to communicate. Second, the availability of an RCT gives researchers a better understanding of individual responses to different initiatives - allowing them to estimate richer and more credible behavioural models which can be used to give valuable insights into possible reforms. In other words, such trials need not just give a thumbs up or thumbs down for a particular policy but can also tell us how it can be improved. For example, data from the RCT of Mexico's PROGRESA (a programme that among other things paid grants to families who sent their children to school) allowed IFS researchers to simulate responses to various changes in the grants. This led to recommendations to refocus the grants toward older pupils, who were found to be more sensitive to the financial incentive - a reform which is now itself being piloted in two northern cities of Mexico.

By comparison, RCTs have been quite rare in the UK. While there are many examples of small scale trials, there are only a few examples of full policy evaluations of the kind seen in other countries (such as the Employment Retention and Advancement demonstration, and the Skills Conditionality Pilot) and several examples of policies that could have been trialled (such as the "synthetic phonics programme", and the Work Programme), which have instead been rolled out nationwide without any prior robust evaluation.

Are there perhaps good reasons for the lack of RCTs in the UK? One objection to their use might be their cost. However, the cost of conducting a trial needs to be set against the cost of implementing a large and expensive programme without a good understanding of its effectiveness - and hence without evidence on whether it should be improved, expanded or scrapped. Indeed, the fact that external donors to developing countries often demand RCTs of the policies they are funding suggests that they can help to ensure value for money. A second objection is that it is unfair to "make guinea pigs" out of certain regions or people. Arguably, however, rolling out untested policies nationally makes guinea pigs of us all! There have also been occasions where policies have been piloted non-randomly in certain areas before being extended nationwide - and for which full randomisation would have been only a small, beneficial, and one would have thought relatively uncontroversial, extra step.

Evidence from RCTs now plays an increasing role in formulating policies in developing countries. The UK could gain by following their example. As Esther Duflo and her co-author Abhijit Banerjee write in their book Poor Economics, we should "accept the possibility of error and subject every idea, including the most apparently commonsensical ones, to rigorous empirical testing".