Creating a repository of impact studies on services and programmes that support the development of young people
See the studies in the repository.
Why do we need a repository?
To continue investing in services and programmes for young people we need to be more systematic in our attempts to show their value. To do this we are creating a data bank of quality-assured studies that assess the impact of programmes and services. This bank of studies can be used by a range of stakeholders, including those charged with providing or commissioning services in a variety of settings.
What kind of evidence do we need?
We need evidence of 'impact' from programmes and services for young people - so that we know what works. Ideally this means comparing the outcomes of young people who receive a particular service (or range of services) with those of similar young people who do not receive it. We also need to consider the full range of outcomes that youth services may affect, including personal and social development, educational achievement, labour market success, crime and the young person's well-being.
What makes for 'good' evidence?
It is not enough to show that a particular youth service was delivered: we need to show 'impact'. Hence measuring the activity level associated with a service or programme, such as the number of hours a young person spent in a youth activity or the number of counselling sessions conducted, is not sufficient. Rather we need to determine whether the outcomes for young people who spent time in a youth club or received a specific programme are better than for similar young people who did not. Generally this requires quantitative (numeric) evidence on one or more outcomes that the service is trying to influence. These data must be collected both before and after the service or programme is delivered (the so-called 'before-after' approach). This allows an assessment of the 'distance travelled' by the young person, which is especially important for those working with more disadvantaged young people. Likewise, it allows one to consider the change in local circumstances, such as crime rates, brought about by the activity. Ideally you also need before and after data on a similar group of young people (a 'control group') who do not receive the service or programme. You can then be confident that the service produced an 'impact' if the outcomes of the young people who received it improved more than the outcomes of those who did not.
How can I get 'good' evidence?
Collecting quantitative data on young people, particularly using a before-after control group strategy, can be daunting. Generally it is best to seek advice on research design, the number of young people to include in your study and the type of data you need to collect. A range of people can potentially advise you, including staff in university social research and economics departments. Seeking advice will often also save you time and money, because some outcomes can be monitored using administrative data (e.g. crime rates, education attendance and achievement) and require very little effort from the service provider to obtain.
What counts as 'good' evidence?
We have a document that provides more detail on the kinds of studies that can be undertaken to evaluate the impact of youth services or programmes and explains in some detail how evidence can be judged in terms of quality.
Broadly there are two factors to take account of when judging a study. The first is the magnitude of any impact from the service or programme. In the table below we show how one might judge the size of an impact; further information is available on how to judge whether an impact is moderate or high. Ideally one would wish for all services or programmes to have been evaluated sufficiently well that effect sizes of their impact can be estimated (as recommended by Social Research Unit et al, 2011). Many evaluations do not achieve this ideal, however, and so we adopt a somewhat less robust, pragmatic approach to judging impact.
The second factor is whether the study is robust, in the sense that it proves convincingly that providing a particular service or programme causes young people to have better outcomes. To judge robustness a number of factors are taken into account. We have summarised these factors and provided a guide to our scoring system (levels 0-6) overleaf. In summary, a robust evaluation would ideally include the following three key factors:
Of course, in a number of circumstances collecting these kinds of data is difficult, but making use of existing administrative data sources can often help. Seeking statistical advice on both the design of the evaluation and on how to estimate the impact can also help to produce high-quality evidence.
These guidelines have been discussed and agreed with the Department for Education. We have consulted with stakeholders working with young people and had helpful feedback on the approach proposed. We are aware of work by the Social Research Unit at Dartington that has focused on the quality of evidence needed before a service or programme can be considered likely to succeed when implemented (Social Research Unit et al, 2011), and our proposed approach complements theirs. Specifically, the criteria we apply at the upper end of our scale (level 4 or above) are very similar to those used by the Social Research Unit (2011) to assess evaluation quality. (Their work also looks at impact, intervention specificity and system readiness - in other words, whether the programme is ready for implementation in service systems.) However, since our aim is to provide a tool with which to evaluate the current evidence base, our scale is more finely graded at the lower end, to ensure that the quality of less robust studies (which currently dominate in services and programmes targeted at young people) can still be judged. In other words, we have more grades at the lower end of the quality scale in order to distinguish evaluations that are extremely weak from those that show some promise, acknowledging that the weaker studies are some way off establishing causality in a robust way. Again, this work complements a forthcoming publication by the Social Research Unit (2012), which explains for programme developers how to grow an innovative idea into a system-ready, evidence-based programme.
Our repository will be important for two types of stakeholders.
Firstly, some users, including those commissioning services or programmes for young people, will want to interrogate the repository to obtain evidence of what really works to improve outcomes. The repository is open access, so users can view an assessment of the impact and quality of any particular study. The main focus of the repository is services and programmes that support the development of young people; these may be delivered in a range of settings, including education settings. The repository does not include programmes designed to support more general teaching and learning in schools or colleges. We take youth interventions to include those that are aimed at:
Secondly, those who deliver or evaluate services or programmes for young people will want to have their evaluation studies assessed for impact and quality and then deposited in the repository, so that users can see whether their service has genuine impact. This will be particularly important as those commissioning services will increasingly demand high-quality evidence on 'what works'. To do this, send full details and a copy of your study or evaluation report to Richard White at DfE; it will be independently assessed by researchers at the Centre for the Analysis of Youth Transitions (CAYT). Each study will be given an impact grade and a quality grade, and the process of assessment may take up to 8 weeks. You will be sent a copy of the CAYT assessment before it is put in the repository, and you will then have an opportunity to discuss any queries arising. The one-page summary of the evaluation study will then be deposited in the repository for easy access by those wishing to know whether a particular youth service or programme works.