The role of measurement and evaluation in policy and governance

Appropriate measurement and evaluation are critical for innovation policy and governance, allowing policy makers and analysts to:
  • Assess the contribution of innovation to achieving social and economic objectives
  • Understand the determinants of and obstacles to innovation, which is crucial for designing effective innovation policies
  • Establish the impact of policies and programmes, and whether policy has contributed to correcting or ameliorating the problem it set out to resolve (e.g. tackling market failures that affect innovative entrepreneurship, such as inadequate availability of finance, skills, advice and technologies).
  • Evaluate the effectiveness of different policy approaches, thereby enabling government to make informed decisions about the allocation of funds. Evaluation can assist decision makers to assess the relative effectiveness of policies and programmes and help them to make judgements about where to place their efforts in order to obtain the greatest benefits for given costs. Thus, it can contribute to improving the effectiveness, value for money and appropriateness of policy and programme interventions ex post, and justify future interventions.
  • Continuously improve the design and administration of programmes. Evaluation is a key tool for learning about how well policies and programmes are delivering, what problems may be emerging, what practices work well and what should be done better in the future. 
  • Stimulate informed debate. The results of evaluations may encourage public debate that can offer opportunities to a mix of stakeholders – from programme sponsors and managers to beneficiaries – to reflect upon the appropriateness and performance of policies, programmes and institutions.
  • Enhance public accountability of relevant policies.
An important consideration in evaluation is to demonstrate a programme’s or policy’s ‘additionality’, i.e. to consider the extent to which desirable outcomes would have occurred without public intervention (the ‘counterfactual’). There are different forms of additionality, namely: 
  • Input additionality – the extent to which intervention supplements or substitutes for inputs provided by other means, e.g. the market, or by other actors, e.g. firms’ own resources. 
  • Output additionality – the proportion of outputs that would not have been created without public intervention. 
  • Behavioural additionality – the difference in the behaviour of a target population resulting from public intervention. The concept of behavioural additionality emphasises that programmes have wider and more sustained effects than those that are most obvious to measure, and that persistence of effects is of high value. Behavioural additionality concerns itself less with inputs and outputs and more with sustained changes in the behaviour of target groups, induced by contact with any stage of a programme or policy.
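In the simplest quantitative case, input and output additionality reduce to comparing an observed outcome with its estimated counterfactual. The following sketch illustrates this arithmetic only; the function and all figures are hypothetical, not a prescribed evaluation method, and real counterfactuals must be estimated (e.g. from a matched control group):

```python
def additionality(observed: float, counterfactual: float) -> float:
    """Proportion of the observed outcome attributable to the intervention,
    i.e. the share that would not have occurred without public support."""
    return (observed - counterfactual) / observed

# Input additionality (hypothetical figures): supported firms spent 10m on
# R&D, of which an estimated 7m would have been spent from their own
# resources anyway.
input_add = additionality(observed=10.0, counterfactual=7.0)

# Output additionality (hypothetical figures): supported firms launched 40
# new products; the counterfactual estimate is 25.
output_add = additionality(observed=40.0, counterfactual=25.0)

print(f"Input additionality:  {input_add:.1%}")   # 30.0%
print(f"Output additionality: {output_add:.1%}")  # 37.5%
```

Behavioural additionality resists this kind of simple ratio, which is precisely the point made above: it concerns sustained changes in behaviour rather than countable inputs and outputs.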
A focus on additionality raises questions around the ability to accurately attribute observed outcomes to the public intervention under evaluation. Two countervailing tendencies are common here: first, the so-called ‘project fallacy’, whereby outcomes that are in reality cumulative and dependent upon the interaction of several factors are wholly (or mostly) attributed to the intervention under evaluation; and second, a tendency to under-estimate the effects of an intervention because of a narrow evaluation focus or because of the timing of an evaluation (where effects might not yet have occurred or have occurred so long ago that beneficiaries fail to attribute them to the public intervention). Awareness of these tendencies is important, even if the problems they create cannot be fully solved.
Evaluation provides only one source of information among many in shaping policy and programme management processes, and appreciating this helps set realistic expectations of its usefulness. Furthermore, utilisation of evaluation results is often indirect, and some evaluation theorists and practitioners point to an important 'enlightenment' role of evaluation which, while difficult to account for, would seem to be extremely important. In this regard, an open and participatory evaluation process can also provide useful benefits to those who take part.
The feasibility of evaluation depends upon the competence available to carry out such assessment work. The necessary skills tend to be acquired over time, so outsourcing to specialist units or private consultants is common practice. Outsourcing also gives the appearance of independent assessment, though this may not necessarily be the case. A further key feasibility issue concerns the ability to utilise the process and results of evaluation in shaping future interventions. If this ability is lacking, evaluation will largely be a waste of time and effort.