The Impact of Direct Support to R&D and Innovation in Firms

The direct support of R&D within companies has a comparatively long history. It dates from the efforts made by governments, particularly in the immediate post-Second World War period, to support industry programmes deemed to be of national importance, and developed into a series of large-scale manufacturing support programmes that reached a peak in the 1970s. Since then, there has been a shift away from the direct support of single R&D projects within large individual firms, towards direct support for SMEs and towards the creation of a more generic innovation-friendly environment, for example through the provision of tax credits or by facilitating access to credit in less direct ways. At the same time, the 'grand programmes' have been replaced by programmes targeting mission-oriented objectives, including the so-called 'grand' or 'societal challenges', which engage a broader range of innovation actors from the private and public sectors. In the face of the economic constraints arising from the credit liquidity crisis of 2008, the rationale for direct support initiatives can also be provided by a desire to maintain business R&D activity (for example, within specific industry sectors or economically disadvantaged regions) or, more generally, to mitigate the adverse financial climate within which firms currently operate.
This report focuses on the evidence of the effectiveness of publicly supported schemes that aim to promote or enhance the performance of R&D activities within companies. More specifically, in order to avoid overlaps with other reports in this series, coverage is restricted to supply-side measures which provide finance, specifically in the form of grants or loans, to support R&D undertaken by firms acting alone. This excludes demand-side measures, which form the subject of another report in this series. Similarly, support for collaboration with other firms, in the form of networks, or with knowledge providers such as universities and public research organisations, is also dealt with in separate reports.
The rationale for the provision of direct support for R&D is founded on the assumption that R&D conducted within firms will, directly or indirectly, stimulate innovation that leads to the production of new marketable products, processes or services. This view is strongly based on the linear model of innovation, which explains the long history of this type of measure, ultimately derived from the traditional notion of public industrial policy. Direct measures satisfy the classical economic rationale for public intervention, being linked to the limited capacity of firms to appropriate the returns on the investments they make and to the relative importance of the spillovers associated with their R&D efforts; in other words, they seek to compensate for firms' propensity to under-invest. The shift towards a focus on SMEs has been supported by arguments over the comparative efficiency of financing R&D activities in smaller companies, which offers access to an increased range of client firms, although there are counter-arguments over the relative size of the spillovers that can be gained from the support of larger firms. One of the key benefits of direct measures to support R&D as a policy instrument is that they may be targeted at specific areas where government intervention may make a difference (i.e. areas of economic significance, or of regional, national or supra-national policy concern); on the other hand, they are less effective at dealing with broad policy concerns (such as a lack of industry R&D investment), where instruments such as fiscal incentives may be more appropriate.
Overall, despite their relative simplicity in comparison with other innovation support schemes, the evaluation of direct measures also exhibits a number of particular problems. The first is the timing and periodicity of evaluations, with the desired effects of a measure arising at variable speeds after its implementation. Uptake and management issues will manifest themselves rapidly, while, at the other extreme, months or years may elapse before prototypes have been generated or new products, processes or services introduced to the market. Similarly, organisational and behavioural changes take time to emerge and become embedded, whilst the sustainability of these and other desired effects requires even longer time frames. Many of the anticipated impacts of direct support measures are readily measurable: R&D expenditure, growth, profitability and employment, for example, all lend themselves to the construction of quantitative indicators which are generally easily obtained. However, information on less tangible outcomes, such as skills, innovation capabilities and capacities, and spillover effects, is less easily captured in the form of comparable statistics. Next, in common with many other types of policy intervention, it is difficult to identify the types of outcome and impact that arise from the direct support of R&D in the absence of counterfactual examples or benchmarks established prior to the introduction of the funding. Finally, the direct outcomes of public support may be difficult to distinguish from those of other forms of support, particularly as the size of the target firm increases.
Overall, the available evidence on the operation of direct measures focuses on a number of themes, including rationales, user characteristics, governance aspects, input additionality, output additionality and behavioural additionality effects. This set of themes was used to structure the analysis of the evidence.
The final section of this report offers a series of general lessons and conclusions based on the evidence reviewed, from both the academic and the policy literature. Our first observation concerns the overall finding, within both the theoretical literature and the evaluation of other policy areas, that the impact of policy intervention exhibits a skewed distribution: the 'average' success of a programme tends to be driven by a small number of successful cases accompanied by a long 'tail' of less successful or unsuccessful cases. However, only a limited number of academic studies touch upon this issue. Secondly, most of the studies reviewed considered a single point in time and did not examine longer time frames; thus the persistence of effects arising from the policy interventions was not generally measured (although this forms a critical element for the assessment of behavioural additionality).
Turning to the report's conclusions, the first is that the issue of input additionality and, to a lesser extent, output additionality form the cornerstone of most of the academic work on the subject of direct support for R&D. Here, crowding-out effects are more often found in firm-level studies than in studies focused at the industry/country level. Various academic studies have tried to explain these results, noting that government-financed and privately financed R&D are complementary up to a subsidisation rate of around 10%, while above 20% public funding fully substitutes for private investment. Other influential factors include industry type, firm size and the wider economic context. Lastly, the 'halo effect' can be significant: companies that have been successful in attracting support in the past tend to be more successful in the current programme.
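To illustrate how firm-level input additionality and crowding-out findings of this kind are typically produced, the sketch below sets out a minimal two-period difference-in-differences regression on simulated data. All names (firm_id, subsidised, log_private_rd) and the assumed effect size are hypothetical; this is an illustration of the general estimation approach under stated assumptions, not a reproduction of any study reviewed here.

    # Minimal sketch (hypothetical data and variable names): a two-period
    # difference-in-differences estimate of input additionality, i.e. whether
    # subsidised firms raise their privately financed R&D relative to a
    # comparison group. Requires numpy, pandas and statsmodels.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_firms = 500

    # Simulated firm-level data: a subsidy indicator and a size control.
    firms = pd.DataFrame({
        "firm_id": np.arange(n_firms),
        "subsidised": rng.integers(0, 2, n_firms),
        "log_size": rng.normal(4.0, 1.0, n_firms),
    })

    # Build a two-period panel (pre/post observation for each firm).
    panel = firms.loc[firms.index.repeat(2)].reset_index(drop=True)
    panel["post"] = np.tile([0, 1], n_firms)

    # Generate private R&D with an assumed modest positive treatment effect
    # in the post period (partial additionality rather than crowding out).
    effect = 0.15
    panel["log_private_rd"] = (
        1.0
        + 0.5 * panel["log_size"]
        + 0.2 * panel["post"]
        + effect * panel["subsidised"] * panel["post"]
        + rng.normal(0, 0.3, len(panel))
    )

    # The coefficient on the subsidised:post interaction is the
    # difference-in-differences estimate: positive values suggest input
    # additionality, negative values suggest crowding out of private R&D.
    model = smf.ols(
        "log_private_rd ~ subsidised * post + log_size", data=panel
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})
    print(model.summary().tables[1])

In practice, studies of this kind rely on observed firm panels and more careful construction of the comparison group (for example through matching on pre-support characteristics), which is precisely where the counterfactual problems noted above arise.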
In contrast, it appears that policy evaluations tend to focus on the continued relevance of the rationale for intervention and on its implementation performance. From the evaluation perspective, it is interesting that, despite the longevity of this type of intervention, there is still a policy imperative to seek assurance that the underpinning rationale remains valid.
Most evaluations point towards evidence that the projects being supported would not have gone ahead at all, or would have proceeded more slowly, with less depth or less technical sophistication, had the support not been available. This finding was more convincing for younger and/or smaller firms.
It is also clear that the implementation process (especially the means by which successful applicants are selected) is critical to the eventual success of the programme overall. In this respect, the most successful firms (i.e. those demonstrating the greatest benefits from a scheme) tend to be those with prior experience of performing R&D and those which have previously received government support. Accordingly, the recommendations for programme management that arise from evaluations tend to encourage programme managers to interact more pro-actively with potential applicants (through the provision of advice at the proposal stage, or by offering complementary services such as marketing support and training).
The evaluations reviewed in this study share a common feature with those of many other schemes in that there are strong calls for less bureaucracy and greater administrative simplification, while at the same time evaluators would often like to see a greater amount of monitoring (in order to simplify their own tasks and reduce the need for basic data/information gathering).
A further key point is that complementarity greatly contributes to the overall success of the measures examined, although this is based on the findings of a small number of studies. Nevertheless, all the relevant evaluations point towards a far greater level of success for firms (particularly small firms) in measures that combine direct and indirect support. In this case, direct support appears to drive higher levels of technological development and the use of more advanced technologies, while the indirect support (such as advisory services and coaching/training) covers other aspects of the development process.
In summary, the key lessons for policy makers to emerge from the analysis concern:
The need for better targeting of measures (in order to optimise the chances of recipients demonstrating a successful outcome), although this raises the question of how to avoid 'picking winners' (and, indeed, whether this should be avoided at all).
The optimisation of the benefits that can be derived when direct measures are delivered alongside, or as part of, a complementary set of services and further support.
However, it is also clear that simply encouraging firms to undertake (i.e. invest in) more R&D is not enough, and evaluations should focus to a greater extent on output and behavioural additionality effects, such as the delivery of products, services and jobs, and other lasting and persistent effects. On these issues, the available evaluation evidence is scarcer and more mixed in its conclusions.