View from the Outside
Gerald Halpern, CES Annual Conference, Vancouver, June 2003
Let me give you my personal history, for it very much colours my answer to the question
of what kind of relationship should exist between evaluation and internal audit.
My background begins with psychology and well-controlled experiments, or at least
experiments well controlled for sources of internal invalidity. Then, as an Industrial
Psychologist, I moved to experiments in the real world, be that the factory, the army
training schools or the pre-computer office. That was close enough to the real world, so I
moved to education, but even there we had opportunity for controlled experiments, yes,
even with true random assignment.
But in time, thinking I was getting even closer to that which mattered, I moved to
government – federal government no less.
I was here in 1981 when program evaluation was kick-started into being. Remember
program evaluation, the precursor to evaluation? Remember when Treasury Board did
the studies? Remember the days of Michael Rayner as Comptroller General?
All of which is to say that I have worked inside the federal service, that I came to the
federal evaluation service as a professional evaluator, that I spent some seven years
with the Auditor General as an auditor of evaluation.
Now I work as a hired gun. I sell my services to Departments in the Federal
Government, to Ministries in the Ontario Government, to Departments of the Northwest
Territories Government, and to one agency of the United Nations.
All of which is to say that my life experiences are those of an academically focused
evaluator who works in the real world of government, who is not tied to a regular
paycheque and who has enough business that he has the luxury of losing a client who is
not honest in his or her dealings.


And now, let me give the bottom line on what I think the relationship should be. I believe
that the two should not be merged. I see them as separate functions with very different
roles even though the roles are often complementary.
As Ernest Chadler has pointed out, IA is an assurance-giving function. It is set up to
advise management on the adequacy of (1) systems in place for risk management, (2)
management control practices and (3) the trustworthiness of information for decision-
making and reporting.
Michel Laurendeau has described evaluation in terms of providing trustworthy
information on programs, policies and initiatives and on demonstrating the results and
cost-effectiveness of the programs, policies and initiatives. Demonstrating results
means going outside of the department housing the program and examining the results
that matter to Canadians – the outcomes. Michel also makes the point that the current
policy for evaluation keeps it separate from internal audit.
At the risk of being contentious, I am going to restate these differences but in my own
words.
       Internal Audit is centred on whether things are happening as intended. Was the
       designed operational process put in place? Were the expected results measured
       and recorded? Were things done right, where “right” means “what was intended”?
       Even if the wrong things were being done or operations were far from efficient, if the
        stuff of the Operations Manual was done, then you get brownie points. Stephen
        Leacock has given us a marvellous example of doing the wrong thing but doing it
        well. On a dark night, a man was on his hands and knees under a lamppost,
        searching inch by inch for a lost object. A policeman came along and asked what
        he was doing. “Looking for my house keys” was the answer. “And where did you
        drop them?” “Over there by the bushes” was the answer. “Then why are you
        looking here?” “Because the light is better.” The Operations Manual was very
        detailed in the steps to take when searching, but it did not prescribe where to
        search. Similarly, audit might verify that the indicators of a performance
        monitoring system are being used, but it may do this without asking if the
        indicators being used are in fact the correct indicators. For example, the call
        centre that records how long is spent on each call (a measure of efficiency) but
        does not survey clients to see if their needs are satisfied.
        Evaluation is centred on what does happen, whether what happens is in line
        with what was expected, and whether additional, unexpected things happen. And
        for the things found to have happened: are they attributable to the program or
        policy or initiative? Evaluation too has the lamppost problem, in that evaluators
        working in government frequently lack the means of shining light upon the real
        issues. Because they lack portable flashlights, they end up looking under the
        lamppost.


I am going to use my experience base to comment on what I see from the outside. I
have been ‘on the outside’ now for eight years, since 1995. Being on the outside is, of
course, a misleading label, since I am also on the inside as I work with clients to help
them define their problems and seek evidence-based solutions. What do I see among
the government-based evaluators?
First      I see a great deal of professional incompetence.
Second     I see a great lack of professional independence.
Third      I see an acceptance of false gods. I see people employed as evaluation
           professionals who do not consider it to be their ethical obligation to act in
           accordance with professional standards – instead they look to career-wise
           moves and pleasing senior managers.
Having described the many, let me quickly underline that I can, without difficulty, name a
number of highly competent, dedicated, professional evaluators. But they are not the
majority. And they are not the ones who come first to mind when I think of the
evaluation function in government.
Unfortunately, the less-than-complimentary view that I have given is not new. It has
been with us from the beginning of formal evaluation as shepherded by the
Comptroller General of Canada.
But even the OCG had antecedents in the federal government. The Treasury Board’s
1969 “Planning, Programming and Budgeting Guide” was addressed to Deputy
Ministers and other senior officials and carried the following action message.
Departments were to establish small program planning and evaluation units reporting at
or near the deputy head level, staffed by professionals versed in quantitative methods
who would analyse the outputs of programs and outline alternative ways of achieving
objectives. There was an emphasis on cost-benefit methods. Unfortunately, there was
not similar clarity on who the ultimate client was.
The 1969 PPB Guide gave rise to an exponential growth of positions. In the three years
from 1970 to 1973, there arose 3,503 person-years classified as planning and evaluation.
Of these, 267 were senior executive positions. Of the total 3,503, however, only 96 were
devoted exclusively to evaluation. In other words, there was little evaluation
infrastructure. Treasury Board officials examined the situation and concluded that line
managers saw evaluation as threatening and were not prepared to support the function
of evaluation unless the evaluators worked on their short-term requirements, which, of
course, was precisely not what evaluators were needed for.
The TB response was to set up its own Planning Branch: if you won’t do it, we will!
Douglas Hartle headed that Branch, and it did a number of evaluation studies intended
for use by Cabinet. In Hartle’s retrospective assessment, the evaluations had little
impact, having been suppressed by Program Branch, which saw them as a territorial
intrusion, and by departments not interested in reconsidering existing programs.
All of this eventually resulted in the 1977 TB circular “Evaluation of Programs by
Departments” and a policy directive to departments to do what they were not doing:
conduct evaluations of their programs on a regular basis. About this time the Auditor
General announced that federal spending was out of control. The result was the
establishment in 1978 of the Office of the Comptroller General.
Doug Hartle, who by then had returned to academe, testified before the Standing Senate
Committee on National Finance (Proceedings, 1990) that the Comptroller General’s
Office is not powerful enough to secure competent evaluation reports from departments:
the centre for evaluation cannot ensure competent evaluations, and nothing is happening
with them.
At about the same time, c. 1993, Duncan Campbell warned against the existing tendency
to have departments and managers determine the nature and scope of program
evaluations, and to shape them according to their needs (Canadian Centre for
Management Development, Program Management and Program Evaluation: The
Shifting Interface).
Since then, the most important changes have been
       (a) to leave program evaluation behind in favour of evaluation,
       (b) to align evaluation with Results for Canadians and
       (c) to make managers responsible for program evaluation with the exhortation
           that managers rely on evaluators for technical advice.
As a result, we are not much further ahead today than we were 30 years ago.
•   There is a severe shortage of qualified evaluators.
•   There is a distinct unwillingness on the part of managers to evaluate beyond
    operational program monitoring.
•   There is very little use of evaluation information for decision taking.
•   There is a continued reluctance to put the recommendations of RMAFs into effect.
Would combining evaluation with internal audit change any of that? Not in my opinion.
Would creating more positions for evaluation and finding people to fill those positions
make a difference? Not in my opinion.
Would it make a difference if the new positions were staffed only with competent,
independent evaluators? I do not think so but it would provide a larger inventory of
frustrated professionals.
Would increased budgets to allow increased hiring of consultants make a difference?
Not in my opinion.
In my view, three conditions are needed, and none of them interfaces with internal audit:
One
Internally, we need to give professional evaluators a power base so that they can tell it
like it is – tell it to Canadians – without being censored by managers. Why should
managers allow that to happen? What reward system could we conceivably have that
would encourage Deputy Ministers and Directors-General to do that? Perhaps someone
here has an answer, for I do not.
But I do have a suggestion for the evaluators. Apply the Department of Justice model
for placing lawyers in departments. Place into departments competent (i.e. well-trained),
independent professionals whose sense of self-worth is tied to their profession (rewards
from the discipline, not from the department manager). I think we have such people
around, but not in large numbers, and I suspect that with the right conditions we can
attract many more. These professionals would have dual accountability: on the one
hand, to a Deputy for addressing the key issues of the department; on the other, to the
Centre of Excellence (or some group like it) for the quality of their work.
Two
My second suggestion is that we truly recognise and acknowledge the real complexity of
government programs. This means that we need theory-driven evaluation. It means an
enhanced role for logic models. It means bringing out all of the goals (political, social,
economic, cultural) and explicitly stating them as program goals. It means the
preparation of in-depth program models and serious consideration of the plausibility of
achieving intended results. Too many of our program models are paper exercises to
meet requirements and not value-added products to inform both management and
Canadians.
Three
My third suggestion is that evaluators be prepared to do what the really courageous
auditors do: report what they find. When they discover that a program does not
acknowledge its full set of goals, they must say so. If admission to the rewards of a
program (a contribution, a contract, a grant, a favoured policy) is determined by a
Ministerial override, we have to say so. If the goal complexity of a program is
mind-boggling but legitimate, we have to put our minds to work and rise to the
challenge. If we are to practise theory-based evaluation – and I firmly believe that we
should – then we will have to take the time to understand the program and tell it like it is.
But we have to understand before we can tell. And our understanding should be based
on the program’s reality, not on the manager’s wishes.
Couple this with a model that truly believes in the Results For Canadians ethos and
therefore asks the core questions and does this in full recognition of the goal complexity
of government.
And what would it mean if these conditions were met?
Let’s take the situation at one department as described by Doug Macdonald. One way
of describing that department’s management structure is that it is a coalition of fiefdoms
more or less independent of each other. Since that kind of structure cannot be dealt with
from within, simply by-pass the details of the structure by having evaluation report dually
to the apex of the structure for issue prioritisation and to an outside authority for quality
control. Here I find myself in full agreement with Doug’s observation that there is a need
for “some form of effective external evaluation capability installed in the Canadian
federal government to complement the external audit capacity already existing in the
Office of the Auditor General.” [This does sound like Douglas Hartle speaking, does it
not?]
Consider what Michel has told us. The intent is to inculcate into the management and
culture of organisations credible public performance reporting. I certainly agree. And I
have no fault to find with the life-cycle approach to managing for results. But I do think
that there is a serious, fundamental, perhaps fatal, flaw. The Policy states that its purpose
is “To ensure government has timely, strategically focussed, objective and evidence
based information on the performance of its policies, programs and initiatives to produce
better ‘Results for Canadians’.” Sounds good, until you realise that the policy serves
government, and government managers, rather than Canadians. The information is
filtered through the departments, through the very managers whose performance is
being evaluated, and then the performance information is released. Some would say
the sanitised performance information is released. As an evaluator working on contract
to departments, I could tell you of some experiences with program managers who fight
most vigorously to hide all performance reporting that they do not see as complimentary.
I especially remember two recent instances. In one, the evaluation shop said to me:
“That is a good set of findings; they seem to me to be very plausible and backed by
evidence.” The program manager later simply blew up, called me incompetent and
insisted that I change the report. I asked why; he said it was not a fair assessment. I
repeated that the evidence had been collected according to a methodology to which he
had signed on, but he just repeated that it could not be correct. I still do not know what
the evaluation unit did with the report; I do know that I have not been offered any other
assignments.
In the other, the program people complained about one portion of the findings and
recommendations (the only portion that could possibly be seen as critical of
management). After several meetings to discuss it, the DG for the evaluation function
finally said to the managers: “Find some other way to report the findings, suggest an
alternate wording, and Halpern will consider it.” I was prepared to do so, since I do not
believe that people have to be hit over the head with a mallet, and I had already tried to
be kind without hiding the basic facts. The manager never did come back with
alternative wording.
The part of the current evaluation regime that interests me the most is the call for
“credible performance reporting”. In the words of the policy: “objective and evidence
based information on the performance of … policies, programs and initiatives”.
I would not bring together IA and evaluation. I would have each focus on its respective
strengths: audit to avoid error (give assurance that processes and reports on results are
trustworthy) and evaluation to achieve results (measure the right things, measure them
well, and compare them to expectations).

				