Alf Rattigan Lecture – Whatever happened to evidence-based policy making?

25 November 2018

Gary Banks

Few within government would deny that evidence-based policy-making is important to achieving good outcomes.  Australia’s history provides ample support for that. But it is also apparent that practice over the past decade has fallen short of the ideals espoused. 

In this, the third Alf Rattigan Lecture, Professor Gary Banks will consider why that has been so and what might be done, at the political and bureaucratic levels, to moderate the increasing tendency for policy to be made ‘on the run’. 

A transcript of the lecture is available in text or PDF formats below.

Read the 2018 Rattigan Lecture (PDF)

Introduction

Ten years ago, while still at the Productivity Commission (and prior to my time with ANZSOG), I was invited by ANZSOG to speak in a Public Lecture series it held jointly with the ANU. The title of my address was ‘EBPM: What is it? How do we get it?’. The importance of evidence-based approaches to policy had long been recognised by aficionados, but the concept had been officially embraced some years earlier by the Blair Government in Britain under the banner ‘what matters is what works’. More recently, our own incoming Prime Minister Kevin Rudd had declared that evidence-based policy making (EBPM) was ‘at the heart of being a reformist government’.

While there was much commendable talk about evidence-based policy, there was rather less action in both countries. In Australia there were some promising developments under the fledgling ‘National Reform Agenda’; however, other policy initiatives not only lacked evidence, they seemed to lack even proper deliberation. Hence the questions in the title.

Other policy observers at the time apparently felt the same, as my presentation attracted a fair bit of interest, including from the press, which naturally depicted it as an ‘attack’ on the government. However, the purpose was not (just) to be critical, but to support the government’s declared aspirations for EBPM by clarifying what it entailed and noting some conditions for it to be put into effect. If anything, however, my contribution may have had the opposite effect, as there has been little mention of EBPM by political leaders ever since!

On receiving this further kind invitation from ANZSOG, therefore, it seemed natural to provide a sequel of sorts to that earlier lecture. In seeking to answer the new question posed by its title I will revisit the concept of EBPM and then trace more recent developments in policy-making in Australia against basic tests for ‘good process’ — in which evidence plays a part but is not the whole show. I will conclude with some thoughts about how rhetoric and reality might again become more aligned in the future, and how the public service can help.

It is a privilege to be able to do so as part of this new lecture series from ANZSOG in honour of GA Rattigan, and to follow the excellent contributions of Paul Kelly and Fred Hilmer.

Through his leadership of the Tariff Board and Industries Assistance Commission, Alf Rattigan became a pioneer of evidence-based advice in an area of economic policy that had previously been characterised mainly by ideology and vested interest. Under Rattigan and his close adviser Bill Carmichael (whose contribution then and subsequently, including as Chairman of the IAC, deserves greater recognition), the organisation initiated path-breaking research that laid bare the costs of the ‘protection all round’ philosophy and its adverse distributional impacts.

The evidence the organisation produced in its inquiries and research over the years not only heightened public awareness of the costs of protection, but also sparked active support for reform among those industries and sectors that were unwittingly bearing it. As Kelly notes, the volte-face in assessing industry’s claims for preferment faced trenchant opposition and required a leader possessed of integrity and resilience. Rattigan the public servant clearly had what it took to be ‘frank and fearless’, and Australia is a more prosperous place for it today.

What are we talking about?

“When I use a word”, Humpty Dumpty said in a rather scornful tone, “It means just what I choose it to mean – neither more nor less”. “The question is” said Alice, “whether you can make words mean so many different things”. (Lewis Carroll)

One of the challenges in talking about EBPM, which I had not fully appreciated last time, was that it means different things to different people, especially academics. As a result, disagreements, misunderstandings and controversies (or faux controversies) have abounded. And these may have contributed to the demise of the expression, if not the concept.

For example, some have interpreted the term EBPM so literally as to insist that the word ‘based’ be replaced by ‘influenced’, arguing that policy decisions are rarely based on evidence alone. That of course is true, but few using the term (myself included) would have thought otherwise. And I am sure no-one in an audience such as this, especially in our nation’s capital, believes policy decisions could derive solely from evidence — or even rational analysis!

If you’ll pardon a quotation from my earlier address: “Values, interests, personalities, timing, circumstance and happenstance – in short, democracy – determine what actually happens”. Indeed it is precisely because of such multiple influences, that ‘evidence’ has a potentially significant role to play.

So, adopting the position from Alice in Wonderland, I am inclined to stick with the term EBPM, which I choose to mean an approach to policy-making that makes systematic provision for evidence and analysis. Far from the deterministic straw man depicted in certain academic articles, it is an approach that seeks to achieve policy decisions that are better informed in a substantive sense, accepting that they will nevertheless ultimately be – and in a democracy need to be – political in nature.

A second and more significant area of debate concerns the meaning and value of ‘evidence’ itself. There are a number of strands involved.

Evidentiary elitism?

One relates to methodology, and can be likened to the differences between the thresholds for a finding of guilt under civil and criminal law (‘balance of probabilities’ versus ‘beyond reasonable doubt’).

Some analysts have argued that, to be useful for policy, evidence must involve rigorous unbiased research techniques, the ‘gold standard’ for which is the ‘randomised controlled trial’. The ‘randomistas’, to use the term that headlines Andrew Leigh’s new book (Leigh, 2018), claim that only such a methodology can truly tell us ‘what works’.

However, adopting this exacting standard from the medical research world would leave policy makers with an excellent tool of limited application. Its forte is testing a specific policy or program relative to business as usual, akin to drug trials involving a placebo for a control group. And there are some inspiring examples of insights gained. But for many areas of public policy the technique is not practicable. Even where it is, it requires that a case has to some extent already been made. And while it can identify the extent to which a particular program ‘works’, it is less useful for understanding why, or whether something else might work even better.
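
To make the mechanics concrete, here is a minimal sketch (my own illustration, not anything from the lecture or from Leigh’s book) of how an RCT estimates ‘what works’: randomly assign participants to a program or to business as usual, then compare average outcomes. All names and numbers below are invented for the example.

# Minimal RCT sketch: random assignment plus a difference in group means.
# Everything here is a hypothetical illustration, not real program data.
import random
import statistics

random.seed(1)

# Hypothetical baseline outcomes (say, annual earnings) for 1,000 people.
baseline = [random.gauss(50_000, 8_000) for _ in range(1_000)]
TRUE_EFFECT = 2_000  # assumed (and in practice unobservable) program effect

# Random assignment is what removes selection bias: who gets the program
# is unrelated to who would have done well anyway.
assigned = [random.random() < 0.5 for _ in baseline]
outcomes = [y + TRUE_EFFECT if t else y for y, t in zip(baseline, assigned)]

treated = [y for y, t in zip(outcomes, assigned) if t]
control = [y for y, t in zip(outcomes, assigned) if not t]

# The RCT estimate of the average treatment effect is simply the gap in means.
ate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect: ${ate:,.0f} (true effect: ${TRUE_EFFECT:,})")

Note that the estimate answers only the question posed: whether this program beat business as usual, not why it did, nor whether an untried alternative would have done better.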

That is not to say that any evidence will do. Setting the quality bar too low is the bigger problem in practice, and the notion of a hierarchy of methodologies is helpful. However, no analytical tool is self-sufficient for policy-making purposes; in my view, such tools are best thought of as components of a ‘cost-benefit framework’ – one that enables comparisons of different options, employing those estimation techniques that are most fit for purpose. Though challenging to populate fully with monetised data, CBA provides a coherent conceptual basis for assessing the net social impacts of different policy choices – which is what EBPM must aspire to as its contribution to (political) policy decisions.
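
As a stylised illustration of that framework (mine, not the Commission’s; the option names and figures are invented), a cost-benefit comparison reduces to estimating each option’s stream of net social benefits and discounting it to a present value:

# Hypothetical cost-benefit comparison of policy options. In practice the
# net-benefit streams would be populated using whatever estimation
# techniques are most fit for purpose (trials, surveys, modelling).

def npv(net_benefits, discount_rate=0.07):
    """Present value of a stream of annual net social benefits (year 0 first)."""
    return sum(b / (1 + discount_rate) ** t for t, b in enumerate(net_benefits))

# Invented options: annual net benefits in $m (benefits minus costs).
options = {
    "regulate": [-120, 30, 35, 35, 35, 35],
    "subsidise": [-200, 60, 60, 55, 50, 45],
    "do_nothing": [0, 0, 0, 0, 0, 0],
}

for name, stream in options.items():
    print(f"{name}: NPV = ${npv(stream):,.1f}m")

As the next section illustrates, assumptions such as the discount rate are themselves contestable, and different choices can reorder the options.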

Evidence ain’t evidence

‘Everyone is entitled to his own opinion, but not to his own facts.’ (Daniel Patrick Moynihan)

‘We subject all facts to a prefabricated set of interpretations.’ (John F. Kennedy)

A more fundamental issue is that evidence itself is generally not immutable, particularly when moving beyond raw data to analysis and interpretation.

Take gambling regulation. As the Productivity Commission has argued, a balanced policy approach would seek to minimise harms to ‘problem gamblers’ without unduly affecting the enjoyment of recreational gamblers. But getting accurate data on how much time or money people spend on gambling is hard (the ABS Household Expenditure Survey shows much smaller numbers than are consistent with industry revenue!), let alone on the consequences of spending ‘too much’. And, as the Commission found in its reviews, what constitutes ‘too much’ is deeply contested. Assumptions about values and behaviour that are integral to estimation can differ, and consequently the Commission’s own estimates of the social costs and benefits ranged widely. (PC 1999, 2010) Similar issues can arise in other policy areas, particularly in the social and environmental domains.

So what counts as ‘evidence’ to some need not be acceptable to others, even when methodologically sound. And this of course affects its credibility in a policy sense.

Misuse and abuse

In many cases, however, the evidence will not be ‘sound’. It has become common in policy advocacy for data to be concocted, cherry-picked or manipulated to suit a predetermined position. Such ‘policy-based evidence’ – a term that may have been coined in jest but is seriously apposite – has a long pedigree and even a textbook (Huff’s How to Lie with Statistics) to support it!

A topical example is the political debate about rising ‘inequality’ in our society, in which selected indicators have been used to draw conclusions unsupportable by the weight of evidence. As the PC observed in its recent research report on this topic, this can lead to policy approaches that are misdirected and ultimately ineffectual in terms of their own objectives. (PC 2018) For example, focusing on the share of income going to the top 1-5 per cent of income earners may suggest that, in the cause of greater ‘equality’, financially successful members of society need to be taxed even more, when what is really needed, according to the Commission, are policies to enhance the living standards and earning potential of those at the bottom. Punitive tax rates at the top end can actually make this harder to achieve.

For some time, economic modelling has been one of the instruments of choice for policy-based evidence, which unfortunately has tended to undermine the public credibility of modelling more generally. Quantitative models have the advantage of opacity, combined with an ability to make different ‘design’ and data choices that can shift the results in desired directions. For example, the modelling in support of schemes proposed to overcome the electricity policy ‘trilemma’ associated with reducing carbon emissions has raised more questions than it has answered, particularly about the basis for projected electricity price falls.

One’s own ‘facts’

That evidence is so often misused in policy debates may tell us something about how people respond to evidence itself. Increasingly, evidence is judged not on its merits, but by who is using it and for what purpose.

Many have remarked on the increasingly ‘tribal’ nature of our society. In such a world, people are increasingly skeptical about information associated with the ‘other camp’ — and, I might add, increasingly gullible about any produced by their own. The inequality debate is again a case in point. But so too is climate change, compounded by the fact that most people (lawmakers included) understand neither the science of the ‘greenhouse effect’ nor the economics of different policy responses.

At the extreme, many simply choose to ignore or disregard any evidence or analysis that runs counter to their own views — views formed on the basis of sentiment, values or ideology. This has no doubt to some extent always been so. But while it was traditionally confined mainly to religious topics or the less educated, we now observe it happening more widely, even at our universities. A recent instance was the attempt to ‘de-platform’ noted Australian psychologist and author Bettina Arndt (daughter of the late Professor Heinz Arndt here at the ANU), because of her dissenting interpretation of AHRC survey data on the prevalence of sexual assaults on campus.

It’s what you do with it

Darryl: This is beautiful, darling! What do you call these things again?
Sal: Rissoles. Everyone makes rissoles, darl.
Darryl: Yeah, but it’s what you do with them. (The Castle)

In short, evidence-based policy making faces the challenge that this thing we call ‘evidence’ is rarely the uncontested and objective policy resource that we might imagine it to be. Rather, it can be a battleground of conflicting views, assumptions and interpretations. And therefore the notion that ‘evidence’ should win the day in its own right, appealing as it may be to the research and evaluation community, is fanciful.

That is not to say that evidence cannot be influential in policy decisions – far from it. But it does mean that how (and by whom) it is generated, discussed, tested and utilized matters greatly. To borrow from The Castle, it’s not the evidence, ‘it’s what you do with it’.

The processes by which policy decisions are informed and made effectively determine what role evidence has to play and how well it plays it. Processes may vary according to the issue at hand and its timing. They reflect institutional capabilities and above all the attitudes and capacity of government leadership — primarily at the political level, but bureaucratic leadership too, a point to which I will return.

At a general level, we could define a ‘good’ policy-making process as one that informs and engenders support for political decisions by promoting an understanding of:

• the causes and nature of a policy problem or ‘issue’
• the relative merits and trade-offs in different options for dealing with it, and
• whether the option ultimately chosen turns out as intended.

Clearly to achieve these things, there needs to be a central role for the production of evidence, but also for consultation, deliberation and explanation.

Is ‘good process’ a chimera? 

Laws are like sausages: it’s best not to see them being made. (Attrib. to Otto von Bismarck)

All this seems self-evident and would no doubt be broadly accepted by policy practitioners. But that has not stopped it from becoming the subject of academic dispute.

Take for example the well-known concept of the ‘policy cycle’ in the Australian Policy Handbook – now in its 6th edition. This sets out a series of ‘how to’ steps to help public servants turn policy ideas (from whatever quarter) into recommendations. These include identifying the ‘issues’, analysing options, conducting consultations to test findings and policy ideas, implementing a decision and conducting ex post evaluation. (Althaus, Bridgman and Davis, 2018) It has been criticised on a couple of grounds.

One is for a seemingly rigid sequential approach to policy development, given the inevitable need for feedback loops and iterations. This is a valid point, but one the authors appear to have accommodated in their model.

A second critique goes further, suggesting that such a model is ‘rationalist’ or (even worse!) ‘managerialist’ — conceiving of policy-making as a form of logical problem-solving, when the reality is a lot messier and more random (as presented in Lindblom’s famous ‘Science of Muddling Through’).

Notwithstanding claims to the contrary, this seems to confuse the normative with the positive. Models of ‘good process’ are about what should be rather than what is. Recalling Bismarck’s aphorism about laws and sausages, there is no doubt that policy practice often deviates from principle. However, in suggesting that an ordered approach to policy-making like the Policy Cycle is unrealistic or unachievable, academic critics may not have been paying attention (unlike the ‘pracademic’ authors of the handbook).

For one thing, the steps in the Cycle differ little from those set out in the Regulatory Assessment requirements that apply in all Australian jurisdictions (OBPR, 2016). (Not to say that these are necessarily always followed.)

More importantly, there are many documented instances of policy initiatives conforming to the tests for ‘good process’ – policies that (as a result) have mostly turned out well. The OECD report Making Reform Happen is full of them (OECD 2010). Indeed, many have occurred in Australia in times past, beginning with the unilateral trade liberalisation to which I have made reference.

Arguably Australia’s most extensive and successful economic reform program was the National Competition Policy, highlighted in last year’s Lecture by Fred Hilmer. In its 2005 review, the Productivity Commission found that the success of this far-reaching, complex and politically sensitive set of reforms derived from broad acceptance that pro-competition reforms were needed and would be in the public interest. But such acceptance did not come about by accident. It was underpinned by credible evidence and analysis, wide public engagement, strong administrative support and coordination, and political leadership that was supportive of all this and capable of communicating a compelling policy narrative. (PC 2005)

So when the Rudd Government declared its commitment to evidence-based policy a decade ago, the terminology may have been novel but not the concept. While of course the earlier period had its lapses, particularly in the final stages of government, there were enough good examples for it to earn the title the ‘reform era’ and for the OECD in one of its country studies to describe the policy achievements of the ‘Australian Model’ as ‘remarkable’. (OECD 2004)

Decline and fall

Looking back, that seems to have been the highwater mark, with examples of good process since then outweighed by instances of poor process (or no real process at all).

This assessment finds support in independent reviews covering around 40 different policy initiatives against ten ‘business case’ criteria developed by Professor Ken Wiltshire (IPAA, 2012; Per Capita, 2018; IPA, 2018). The ‘Wiltshire Tests’ have much in common with the steps of the Policy Cycle – including establishing a need for policy action, setting objectives, analysing and publicly testing options, and so on – as well as more ambitious specific requirements such as green/white papers. None of the forty policies was considered to have satisfied all criteria. In the more recent joint exercise involving separate reports by the IPA and Per Capita, both organisations agreed that a majority of the policies failed – which also demonstrates that good process is not a partisan issue.

If anything, the studies may have understated the extent of policy failure. Firstly, only five out of ten criteria needed to be satisfied for a ‘pass’, regardless of the significance of what was omitted. Thus the abolition of 457 skill-based visas made it above the line, despite the absence of any cost-benefit analysis. Secondly, the binary ‘yes/no’ indicators generally convey a more positive picture than is warranted. For example, satisfying a ‘public interest’ criterion simply required an affirmative answer to the question ‘Is there a statement of the policy’s objectives couched in the public interest?’ (In my experience some of the worst public policies are ones that have been ‘couched in the public interest’.)
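
To see why the binary scoring flatters, here is a minimal sketch of the pass rule just described (my own construction; the criterion names are paraphrases, not the reviews’ actual wording):

# Hypothetical rendering of the checklist logic: a policy 'passes' if any
# five of the ten criteria are answered 'yes', whichever five they are.
CRITERIA = [
    "need_established", "objectives_set", "options_analysed",
    "cost_benefit_analysis", "public_interest_statement",
    "consultation", "green_white_paper", "communication",
    "implementation_plan", "evaluation_planned",
]

def passes(scores, threshold=5):
    """Binary pass/fail: at least `threshold` criteria answered 'yes'."""
    return sum(scores.get(c, False) for c in CRITERIA) >= threshold

# A policy ticking five 'easy' boxes passes even with no cost-benefit
# analysis, echoing the 457-visa example above.
example = dict.fromkeys(CRITERIA, False)
for c in ["need_established", "objectives_set", "public_interest_statement",
          "consultation", "communication"]:
    example[c] = True
print(passes(example))  # True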

In a similar exercise of my own – using a smaller number of ‘good process’ indicators and a more differentiated assessment tool – not one of a dozen major policy initiatives came close to meeting the overall benchmark.

Among the more notable examples in the table, measures announced in the 2014 Budget relating to university fees, youth unemployment benefits and a Medicare co-payment were sprung on the public with no warning and little to support them (indeed the university initiative was contrary to Commission of Audit advice); the NBN concept was literally developed on the ‘back of an envelope’; and, as one commentator put it, the public justification for the Bank Tax was essentially bank robber Willie Sutton’s famous line: ‘that’s where the money is’ (with the implicit further rationalisation that no-one likes the banks anyway).

In most other cases, the front end of the policy development process – problem and options analysis – performed better than the testing of findings, the communication of proposals and their effective implementation. For example, the possibility of GST reform was put back ‘on the table’ through a Treasury White Paper and supported by various academic studies, but little effort was made to explain it publicly and it was withdrawn at the first whiff of opposition (and in record time). Industrial relations reform, after a long hiatus following the Work Choices episode, was sensibly advanced through a PC inquiry, but the Commission’s report has been left to gather dust.

Proponents of ‘muddling through’ theory may think that, being the norm, the neglect of good process does not matter. But the evidence suggests otherwise.

For one thing, the success rate for policies with process deficiencies has been exceptionally low. Several such initiatives had to be abandoned (the budget measures, GST reform, carbon trading, RSPT). Of those that were implemented, some were subsequently partially reversed (NBN, NRA) and others terminated (Work Choices, MRRT, Carbon tax).

Most of the policies that did become operational gave rise to significant unintended consequences.

For example, the spending programs are experiencing blow-outs on an unprecedented scale, even allowing for the usual ‘optimism bias’. The NBN was originally ‘guesstimated’ to cost $41 billion, but is now looking like at least double that, with some $30 billion to be written off. The latest estimate of the whole-of-life cost of the ‘made in South Australia’ submarine program has more than doubled to $200 billion, with those familiar with the Collins Class misadventure experiencing a strong sense of déjà vu. Even the NDIS, the design of which had the benefit of a full Productivity Commission inquiry, will cost the budget at least 50 per cent more than initially estimated, due in part to subsequent ‘on the run’ changes to eligibility criteria.

Other programs suffering process failures have had features that facilitated widespread gaming and corruption. Documented instances include the VETFEE-HELP scheme (phony courses) and Family Daycare program (‘phantom’ children), with several billion dollars squandered without adequately meeting the programs’ social objectives.

During the GFC, when overspending was perhaps intentional, there were unintended consequences of different kinds, the most tragic being the lives lost under the Home Insulation Program, which the Royal Commission attributed to undue haste and insufficient attention to implementation risks.

Finally, as detailed in a recent paper for the Queensland Productivity Commission, when it comes to policies to promote higher living standards, there have been more anti- than pro-productivity interventions over the past decade, such that the reform ‘to do list’ has actually grown larger (Banks, 2018).

Can we do better (again)?

There is nothing a government hates more than to be well informed; for it makes the process of arriving at decisions much more complicated and difficult.  (Lord Keynes)

It may be true, as Adam Smith famously remarked, that ‘there is a great deal of ruin in a nation’. But it is hard to see how Australia can sustain high living standards for its citizens in the future with such an approach to public policy.

While our economy has done well in terms of (demand-driven) aggregate activity, it is the efficiency of the supply side which ultimately determines the living standards of the community. Australia’s burgeoning population, and the unprecedented migration numbers underpinning it, appear to be disguising our structural problems, as did the wool boom during the ‘lucky country’ era. With attention on the macro aggregates, poor microeconomic policy has been largely overlooked as a factor in Australia’s weak productivity performance and the low growth in real incomes.

Obstacles to good process

Restoring a greater measure of evidence-based policy and good process confronts the problem that these are not the natural order of things. For one thing, as Keynes identified in a humorous vein, they make the business of government ‘much more complicated’. Policy development takes longer, requires greater resourcing and involves more choices and trade-offs. Such an approach may also bring forth information and options that are inconvenient to a policy course preferred on political or other grounds.

And at the end, a government may receive few plaudits for its efforts. On the contrary, EBPM will often be strongly resisted by interest groups, fearing (not without reason) that their claims would not withstand scrutiny. The media may berate a government for delay. For their part, government ministers will wish to look ‘decisive’ (cue Yes Minister episode) and, if put on the spot, may feel compelled to offer an instant ‘solution’. This is typified in the regulatory sphere by the well-known ‘regulate first, ask questions later’ phenomenon. Recent instances are the Banking Executive Accountability Regime (BEAR) and Superannuation initiatives, about which senior Treasury officials were reportedly unable to answer basic questions from industry representatives.

Good process will therefore require backing from political leaders if it is to prevail. Most observers would accept that during the ‘reform era’ Australia experienced this when it counted. And just prior to my talk on this topic a decade ago, we’d had a ringing prime ministerial endorsement of EBPM. While reality soon fell short of the rhetoric, there continued to be at least tacit acknowledgment of the value of evidence and good process.

That is much less apparent today. Such policies as the Bank Tax and Pumped Hydro were proudly announced without even the pretence of evidence to support them. (Examples could also be cited at the state level, like Victoria’s new ‘footy Friday’ holiday, “because Victorians work hard”.)

Indeed, on several occasions messaging from political leaders has actually been dismissive of the need for evidence, or even logical argument. A case in point is the recent widely reported comment that ‘instinct’ was more useful than a Productivity Commission review in assessing the likely gains from joining the Pacific trade agreement.

Yet, as just noted, the track record for the ‘policy on the run’ alternative has been far from encouraging. And the accompanying policy surprises, misfires and reversals have no doubt contributed to the current low trust in government itself. They are arguably implicated too in the low polling (and resulting high turnover) of Prime Ministers; and in the recent rise of minor parties and independents.

The proliferation of cross benchers in our parliaments, and their understandable desire to have a bigger say, have in turn made it harder for an elected government to get legislation through unscathed. But that too will not have been assisted by poor process. A more activist Senate surely requires more, not less, attention to making a soundly-based case.

What about the Public Service?

To the extent that there is political learning (or re-learning) about the connection between bad process and bad politics, interest in good process could possibly re-emerge. But given the extended period in which the downsides of ‘policy on the run’ have been on display, one must surmise that either our political leaders are slow learners or the obstacles are greater than one imagines.

Either way, the political class will need considerable help if it is to get off a policy treadmill that is clearly taking us (and them) nowhere. The traditional source of such help of course has been the public service. So, with the interests of ANZSOG in mind, I will conclude with a few thoughts about what the public service can do to help secure good process and better policies.

The answer should be ‘a lot’, given that we are essentially talking about its core business. Distinct from ministerial advisers in the private office, public servants have responsibilities extending beyond the servicing of a minister’s perceived needs. Among the most important of these is stewardship over institutional and procedural features that transcend the existence and policy orientation of any particular government. And while the public service cannot be called ‘independent’ in a substantive sense, under the Westminster system that has served this country well it needs to be apolitical or non-partisan; its oversight of the system itself depends on this. That means advising ministers on process as well as policy, including resisting attempts to circumvent it where this would not be in the public interest.

So if there have been failures of ‘due process’ in public policy, it is not unreasonable to suggest that the public service must share some of the blame. The issue is whether (a) it has lacked influence in such matters, (b) it has known what was needed but failed to act, or (c) it has actually been complicit. The first raises issues mainly of capability; the others deeper matters of integrity and ethics. Based on exchanges with many agency heads while overseeing ANZSOG’s CEOs Forum, it would seem that all three explanations have applied.

This assessment is reinforced by findings from a number of independent inquiries and reviews conducted in response to some of the more notable policy failures of recent years. The following quotes from three of these are indicative:

“The advice provided by public servants was, in many instances, poorly given, poorly received and poorly communicated.”

(Peter Shergold AC, HIP Review, Aug 2015)

“The leaders of the APS should examine whether the inability to have views seriously considered was circumstantial or whether it signals a more serious malaise.”

(Bill Scales AO, NBN Review, July 2014)

“Over the life of this costly project, advice to government did not always meet the expected standard of being frank and fearless” 

(Victorian Auditor General, East West Link Review, December 2015)

In relation to the reference by Bill Scales (former Industry Commission chairman and head of the Premier’s Department in Victoria) to a ‘serious malaise’, I have come to the view that this is indeed the reality we face. It reflects significant changes in the operating environment of the public service over the years, which can be summarised in a few points:

• less secure, more ‘political’ senior appointments,
• a dominant ‘office’ with more political than policy expertise,
• decision-making in a hurry that draws on whatever advice is at hand.

These phenomena have been widely discussed, including in my Garran Oration for the IPAA (Banks, 2013). I simply note here that if this has become the ‘new normal’, it is not an environment conducive to building policy capability or to ‘frank and fearless’ advice – regarded by many public servants these days as an anachronism or a joke. Rather, it is an operating environment that promotes risk aversion, second-guessing and partisanship.

It weakens the incentives for senior officials to prioritise the building of capability in policy analysis, and makes it harder to attract and retain strategic policy expertise – the current lack of which is borne out by some of the ‘capability reviews’ overseen by the APSC.

However, as I have discussed previously, I don’t believe salvation lies with consultants, the increased reliance on which has been further eroding and displacing public service capability where it is most needed.

Nor can it be found in the current mantras of ‘responsiveness’, ‘innovation’ and ‘agility’. While the public service may once have needed to become more responsive to the government of the day, particularly after a change of government, it has leapt to the other extreme. And as for ‘nimbleness’, if this means responding quickly to the latest policy thought bubble, the service could do without it. As former APS Commissioner Andrew Podger observes in his submission to the Thodey Review, “impartiality and due process rightly constrain flexibility in the APS. The public interest requires frank and fearless advice, not ‘agile’ advice” (Podger, 2018). Indeed, one might say on the basis of recent policy misadventures that there is already too much agility.

This latest review of the APS hopefully reflects recognition that, despite a similar major review that produced a ‘blueprint’ just eight years ago, there remains a pressing need for reform. However, the emphasis on technology in its terms of reference may be a distraction. Better use of technology and moves to ‘join up’ and share data are obviously worth pursuing and will no doubt enhance capacity. But the main obstacle to using evidence in policy development is not so much lack of (potential) supply as lack of demand. Remedying this will necessitate in-depth consideration of governance and other arrangements that shape incentives and the relationship between ministers, advisers and departments. I am not sure the present review has been constituted with that more fundamental need in mind.

Bottom line

It follows that restoring good process and a greater role for evidence in policy-making requires systemic changes, which can really only be secured through committed leadership. That seems a tall order in the current political environment. A problem needs to be recognised before it can be addressed, and it is not clear there is yet sufficient recognition, at least where it counts. However, the costs and adverse consequences of ‘policy on the run’, including politically, seem likely to accumulate to the point where there will have to be a return to former ways. If not, then in the words of the (now unfashionable) Australian novelist Xavier Herbert: ‘poor fella my country’.