Globally, governments are exploring new ways of procuring public services to improve effectiveness and efficiency. A high-profile example of this trend over the past decade is the development of Social Impact Bonds (SIBs).
SIBs offer an opportunity to explore the use of evidence to inform public policy and commissioning decisions. They can build a stronger culture of robust evaluation and evidence-based decision making and create a heightened focus on outcomes.
A paper in Public Money and Management highlights three ways SIBs can encourage evidence-informed policy-making; each is discussed below.
SIBs are pay-for-performance programs in which private for-profit or social investors provide up-front finance for the delivery of a public service. Investors may subsequently receive a payment based on the results the project achieves, with the return partly generated by cost savings to government. Payments to investors are conditional on the service provider achieving specified results.
New South Wales, Queensland, South Australia and Victoria have SIB programs. The New Zealand Government has given the green light to pilot SIBs, and the Australian Government is also implementing initiatives to encourage the development of the social investment market.
The paper drew on a three-year evaluation of the ‘SIB Trailblazers’: nine projects in health and social care funded by the English National Health Service to explore whether to commission a service locally through a SIB.
The proposed projects targeted a diverse set of population groups and issues.
1. The strength of the evidence base matters
The three proposed Trailblazer interventions with the strongest evidence base were the ones initiated.
The Trailblazer interventions that were not initiated lacked research evidence of effectiveness; this was cited as a specific factor contributing to the decision not to commission those services.
2. SIBs offer opportunities for evidence generation
Local administrative and descriptive data were routinely analysed and used to guide local decision-making. This enhanced use of data was cited by project managers as a central advantage of SIB-financed work compared with their prior experience.
These findings align with the arguments of SIB proponents, who suggest that SIBs encourage greater use of data and evidence in service delivery. However, the research findings present a more nuanced picture, qualifying some of the assumptions SIB proponents make.
While local data in the SIB-financed Trailblazer sites informed decisions about payments among the respective parties, this sometimes led to conflict. Much of the additional data collected locally in these sites did not relate directly to the service-user outcomes specified in the commissioner contract, but instead focused on service provider processes.
3. SIBs offer opportunities for formal evaluation
There is little rigorous counterfactual comparison of SIBs versus alternative methods of finance to deliver the same service to the same type of users. As such, there is a lack of evidence of the costs and benefits of SIBs compared with alternative approaches to procurement. Two of the Trailblazer SIBs commissioned local impact evaluations which assessed program effectiveness against a counterfactual financed by government funds.
The Trailblazers demonstrate that SIBs can promote evidence-informed program implementation. The findings suggest that SIB financing may bring added value to an intervention as it is seen as a way to increase the evidence base.
The research also raises questions about how we judge what a ‘positive evidence base’ is, and what counts as ‘good’ evidence for policy and practice. An interest in knowing in advance that a program has a ‘positive evidence base’ may orient SIB proponents towards academic research and interventions that have already been developed and evaluated. This involves using established research designs such as randomised controlled trials and systematic reviews.
There are advantages in policymakers carefully considering the evidence underpinning different interventions. Where their primary question is ‘what works’ (i.e. the question is one of relative effectiveness), there are established hierarchies of evidence based on study design. With a focus on evidence strength and quality, these approaches favour quantitative study design.
Prioritising quantitative evidence over qualitative evidence in SIB-financed interventions is understandable given the need to measure effectiveness in order to pay the investors. However, this may limit the potential for program learning, stifle innovation, increase pressure on provider staff and create perverse incentives for “parking” harder-to-serve clients and “creaming” those easier to support.
ANZSOG’s The Bridge is a research translation project which produces Research Briefs, like this one, that summarise academic research of relevance to the public sector.