The Open Public Services White Paper proposes increasing private and third sector delivery of public services. However, there is widespread public scepticism about private provision of public services. This scepticism needs to be recognised and assuaged, for instance through a quality-linked reward structure and improved like-for-like comparisons between public, private and third sector service delivery.

A key aim of the White Paper is to make information on the quality of public services more transparent, in order to raise standards in service provision. This is part of a continued focus on ‘choice and voice’ to improve service quality, using performance indicators (PIs) to compare service providers.

While there is now a large body of evidence on how organisations respond to the publication of PIs, there is less evidence on the extent to which PIs actually improve public service quality, and even less on the costs of achieving any such improvement. This is partly due to the difficulty of isolating the effect of PIs themselves within broader programmes of reform.

An evidence review by Deborah Wilson at the ESRC-funded Centre for Market and Public Organisation finds that the evidence on the effectiveness of PIs used in conjunction with user-based accountability mechanisms, such as consumer choice, is mixed. PIs are more effective within a top-down or bureaucratic accountability mechanism, such as a targets regime with explicit rewards and/or sanctions for performance relative to target – although ranking providers on the basis of alternative PIs can lead to undesirable as well as desirable responses.

Key findings

  • The simplest PIs measure the outcomes of a provider at some specific date (for instance, the percentage of pupils in a school who achieve five GCSEs at grade C or above). These PIs show only one dimension of a potentially complex output, and also fail to take account of the characteristics of the users and how these might affect the measured outcome.
  • Performance ranking based on this type of indicator risks unfairly penalising providers serving disadvantaged populations, and creates an incentive for providers to boost their performance scores by adjusting the quality of their intake rather than the quality of their provision.
  • Risk-adjusted or value-added PIs address this issue by taking account of differences in intake quality, but generally they still reflect only one dimension of output.
  • Composite indicators attempt to combine many dimensions of a provider’s output into a single figure (for instance, the star rating of hospitals in England). However, these indicators are complex and opaque, and are extremely sensitive to the methods used to produce them, including the weights attached to each dimension (see the illustrative sketch after this list).
  • The same provider is likely to occupy different positions in ranking exercises depending on which aspects of performance are measured, resulting in conflicting rankings. With transparency therefore comes complexity, and a trade-off for individuals choosing between public service providers. Moreover, the ranking of providers on the basis of any one specific PI may be largely spurious if the statistical uncertainty involved in calculating it is not explicitly taken into account.
  • In some cases PIs have improved measured performance, although sometimes at the cost of distortions to other, unmeasured aspects of the service. More generally, there are many examples of responses aimed at boosting ranking position rather than improving performance per se.
  • PIs have been most effective in achieving performance improvements when used as part of bureaucratic accountability mechanisms, particularly when the output is clear and focused.
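
As a purely illustrative sketch of the points above on composite indicators and conflicting rankings (the providers, performance dimensions, scores and weights below are all hypothetical, not drawn from the evidence review), the following Python snippet shows how rankings on individual PIs can conflict with one another, and how the ranking produced by a composite indicator depends on the weights chosen to combine them.

```python
# Illustrative sketch only: three hypothetical providers scored on two
# hypothetical dimensions of performance. All figures are invented.

providers = {
    "Provider A": {"attainment": 0.80, "satisfaction": 0.55},
    "Provider B": {"attainment": 0.60, "satisfaction": 0.90},
    "Provider C": {"attainment": 0.70, "satisfaction": 0.70},
}

def composite(scores, weights):
    """Weighted average of the performance dimensions."""
    return sum(scores[dim] * w for dim, w in weights.items())

def rank(weights):
    """Rank providers, best first, under a given weighting scheme."""
    return sorted(providers, key=lambda p: composite(providers[p], weights),
                  reverse=True)

# Rankings on single PIs conflict: A leads on attainment, B on satisfaction.
print(rank({"attainment": 1.0, "satisfaction": 0.0}))
# ['Provider A', 'Provider C', 'Provider B']
print(rank({"attainment": 0.0, "satisfaction": 1.0}))
# ['Provider B', 'Provider C', 'Provider A']

# A composite indicator resolves the conflict into a single figure, but which
# provider comes top now depends entirely on the chosen weights.
print(rank({"attainment": 0.7, "satisfaction": 0.3}))
# ['Provider A', 'Provider C', 'Provider B']
print(rank({"attainment": 0.3, "satisfaction": 0.7}))
# ['Provider B', 'Provider C', 'Provider A']
```

The point is not the particular numbers but the structure: with identical underlying data, a change in weighting can move a provider from the top of a league table to the bottom, which is one reason composite indicators are so sensitive to the methods used to produce them.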

Policy relevance and implications

  • The White Paper emphasises the need for publication of information on the performance of service providers. Where data are published to encourage explicit comparison, rankings and league tables are bound to follow.
  • Such explicit comparisons of certain aspects of performance give providers incentives both to improve measured performance and to boost their rankings through other, less desirable means.
  • The White Paper emphasises using performance information to aid user choice alongside a continued, explicit role for the state, for example as guarantor of rising minimum performance standards. In this model PIs form part of concurrent user-based and bureaucratic accountability mechanisms.
  • The evidence suggests that PIs are most effective within a system of bureaucratic accountability, but evidence on the use of PIs in conjunction with user-based accountability mechanisms such as choice is much more mixed. There is a danger that providers may focus their efforts on attempting to rank highly on potentially conflicting performance measures to avoid bureaucratic sanctions, rather than responding to the needs of the full range of individual service users.