Why evaluation matters: five mistakes to avoid when starting a new program

  • Published Date: 07 May 2018

Evaluation is a crucial part of any government program and a requirement for evidence-based policy. However, many organisations do not understand the value of a good evaluation process.

Dr George Argyrous, part of ANZSOG’s Evidence and Evaluation Hub, works with governments to improve their evaluation capability and embed it in their day-to-day operations.

He says good evaluation is not something that happens at the end of a program; rather, good evaluation processes must be implemented early in the development of a policy to improve outcomes.

“In the 1990s there was an emphasis on guidelines and process but this was not always backed up by the capabilities to implement them. I think we are recognising the importance of continuous evaluation in making sure programs are delivering as intended.”

Dr Argyrous shared five common mistakes that can stop organisations from undertaking effective evaluation:

1. Not being specific enough in what they want a program to achieve

“Often there are conflicting demands within an organisation, and projects are poorly defined to avoid conflict in the early stages between different parts of an agency. Resolving this can be difficult, but working out these issues can save a lot of confusion and wasted effort in the long run.”

2. Not having a good program logic which identifies evaluation opportunities

“Program logic is important because it helps you make judgments and decisions about what you are trying to achieve.”

3. Not building internal capability to evaluate, or ignoring existing capability and relying on independent expertise

“It would seem logical for agencies to take advantage of internal expertise in evaluation, but many fail to do so, whether through internal rigidities or a belief that only independent evaluation is legitimate.”

4. Not thinking about evaluation from the beginning

“You need to include evaluation from the beginning. It’s not something you do once a program is complete; good evaluation takes place right through the rollout of a program and provides a basis for continuous improvement.”

5. Not using administrative data as the basis for evaluation

“Organisations are often collecting the data that could be used for evaluation, but are either not sharing it or neglecting to use it to examine the effectiveness of programs.”

Dr Argyrous said that the first two points were especially important, because rigorously outlining expected outcomes leads to good program logic. He said these principles apply to all programs, but are especially important for those being delivered in a hurry and in response to political considerations.

“Good program logic in particular is even more important in these cases, so that gaps and flaws can be identified quickly. I have known cases where a minister’s office only recognises the problems with their pet idea when a rigorous program logic is presented to them.”

Dr Argyrous says that many organisations do not take advantage of their own capacity to conduct internal evaluations, and that an independent evaluation is not always necessary, provided the process is unbiased and can stand up to outside scrutiny.

“However, it is important that any evaluation is able to step away from the passion that program staff have about a program, as it can cloud an evaluation,” he said.

He says that good evaluations have debunked, or at least queried, the credentials of many programs that were originally accepted without question.

He cites the example of the NSW Government’s Reading Recovery program, an intensive one-to-one reading program for students, which was the subject of critical evaluations in recent years.

“Evaluation is fundamentally about knowing if policy implementation or service delivery is working. But every good evaluation adds to your understanding of what your audiences need, how they use your services and what could be improved,” Dr Argyrous said.

“I think many governments are doing evaluation better than they had in the past, for example the Victorian Government now has an outcomes framework.”

Dr Argyrous has worked in the evaluation of complex policy areas like disaster recovery, which demand a multi-agency approach across levels of government. He has also provided evaluation advice on major government projects, such as the NDIS and de-radicalisation programs.

ANZSOG’s Evidence and Evaluation Hub is a centre of expertise developed to strengthen the capacity of the public and not-for-profit sectors to generate and use evaluation and other types of evidence to support decision-making and practice.

It works with governments and not-for-profits to analyse specific projects, strengthen organisational capacity and conduct research aimed at lifting the quality of evaluative practice in Australia and New Zealand.