WORKSHOP REPORT: Strengthening Organisational Capacities for Humanitarian Evaluation, 28 September 2010, CAFOD Headquarters

EXECUTIVE SUMMARY

A workshop was organised in London on 28 September 2010 as part of ALNAP’s work on
supporting its members to strengthen their capacities to plan, manage, undertake and use
humanitarian evaluations. 31 people participated in this workshop, including heads of
evaluation departments, evaluation managers, evaluators and others with an interest in
evaluations. They represented the whole spectrum of ALNAP membership – donors, NGOs,
the Red Cross/Crescent, the UN, independents and academics. The aims of the workshop were to provide a forum for peer-learning, to share experiences and to help participants find ways of bringing about real change so that evaluations of humanitarian action meet their full potential to encourage learning and ensure accountability.

For a number of years, ALNAP has been working on evaluation and has produced, among other things, a study on evaluation utilisation, a pilot guide on real-time evaluations and the Evaluation Reports Database. Nevertheless, a concerted effort is still needed to encourage better use of evaluations. In this connection, ALNAP has been undertaking
action research and has produced a draft paper on ‘Strengthening organisational capacities
for evaluation of humanitarian action’. The workshop formed part of this research.

The workshop consisted of four sessions. In the first, Alistair Hallam, who authored the draft
paper, presented a draft analytical framework for assessing evaluation capacities, consisting
of 23 key factors, placed under four categories (Leadership, Culture and Structure;
Evaluation Purpose and Policy; Evaluation Processes and Systems; Supporting Processes
and Mechanisms). Joakim Molander opened the second session by talking about his experience of making changes to increase evaluation use at Sida. The participants then worked in groups to map the key factors onto the sphere of influence model, exploring possibilities for initiating change. In the third session, a keynote presentation was given by Michael Quinn
Patton who shared his thoughts on realising utilisation-focused evaluations. The workshop
concluded with the participants identifying the way forward for strengthening organisational
capacities.

Throughout the course of the day, the participants shared their wealth of experience, inspired
by three main presentations and the mapping exercise. The main findings from the workshop
include the following:

 The draft framework is useful in analysing evaluation capacities. It can be complemented
  by other frameworks in order to create a strategy for change.
 There are quite a few factors which can be influenced by the participants, including most
  of the factors in the ‘leadership and culture’ and ‘evaluation purposes and policy’
categories. Three factors are marked as most significant: ‘clarify the purpose of evaluation’, ‘develop a strategic approach to selecting what should be evaluated’ and ‘improve monitoring’.
 The key component in strengthening capacities is ‘people’: users, leaders, evaluation
  supporters, informants, and of course, evaluators. Therefore, the personal factors –
  including personal relationships, communication skills and ability to influence – matter
  most.
 Evaluators must be advocates for the value of evaluation. Evaluations should be clearly linked to their users’ ‘incentives for change’, so that users can see the value of evaluations and use the findings more easily.
 Evaluative thinking must be promoted throughout each organisation. The term
  ‘evaluation’ should be adopted and used flexibly, reflecting the context and the people.

Building on the workshop, ALNAP will facilitate the next steps below, which should help ALNAP member organisations to take actions that strengthen organisational capacities for humanitarian evaluation.

Next Steps:

 ALNAP has created a private online portal for the ‘recovering evaluators support group’
  on its website. Workshop participants and member organisations have been invited to
  join.

 Workshop participants are requested to share experiences of strengthening evaluation
  capacities through the portal.

 Participants are requested to share thoughts, through the portal, on linking accountability, learning, knowledge management and results-based approaches with evaluations.
 ALNAP will prepare a briefing paper, aimed at influencing humanitarian leaders on
  evaluative thinking.

 ALNAP will, with interested participants, develop the self-assessment questionnaire and
  make it available for use by interested member organisations.

 ALNAP will publish the paper on ‘Strengthening Organisational Capacities for
  Evaluation of Humanitarian Action’ and participants are requested to volunteer for the
  peer-review.

 ALNAP will, if/when requested, connect to Evalnet/UNEG and other networks and
  conduct discussion sessions on the topic of evaluation capacities.

 Participants may consider conducting discussion sessions in their organisations on this
  topic. ALNAP is ready to assist when requested.




1. Opening session: Unpacking Organisational Capacities for Humanitarian Evaluation

The workshop was opened by Ben Ramalingam (ALNAP) who welcomed the participants
and introduced the aim of the workshop as providing a forum for cross-organisational
learning in tackling the issue of evaluation capacities in the humanitarian sector. Ben outlined
some obstacles to strengthening evaluation capacities. For example, many believe that evaluations are a waste of resources (‘half of our budget is wasted, but we don’t know which half’), that they are too technical and complex, and that they fail to speak ‘truth to power’. He emphasised that evaluations could potentially function as a means of meeting organisations’ strategic objectives and of bringing wider stakeholders together in assessing impact. In this sense, strengthening evaluation capacities should be understood as improving organisational capacities as a whole. He pointed out that change makers and evaluation champions must be engaged in transforming organisational culture towards more evaluative thinking. Ben urged everybody
to share their thoughts and ideas on this issue throughout the workshop.

Alistair Hallam (Valid International) then presented the draft ALNAP paper ‘Strengthening Organisational Capacities for Evaluation of Humanitarian Action’. The draft framework contains 23 key factors in four categories:
    • Leadership, Culture and Structure
    • Evaluation Purpose and Policy
    • Evaluation Processes and Systems
    • Supporting Processes and Mechanisms

Alistair explained that those categories were interlinked, requiring each step to be in place in
order for evaluations to form a virtuous cycle informing organisational practice.

Prior to the workshop, the participants were asked to respond to a self-assessment
questionnaire on organisational capacity (link here) and 22 participants responded. Although
only indicative, the questionnaire provided valuable insights into how organisations thought
about themselves and confirmed the well-known fact that the organisations did not get as
much as they could out of evaluations and did not see much impact from them. Alistair
explained that the draft framework and the questionnaire were meant to facilitate the
participants’ in-depth analysis of their organisations’ capacities and a peer-to-peer review,
learning from each other. He called for comments on the draft framework and the
questionnaire.

Alistair further presented the key findings from the questionnaire. Many questions received roughly equal levels of agreement and disagreement, painting the picture of ‘the glass is half-full/half-empty’.

On the Leadership, Culture and Structure category, Alistair noted that there must be strong and committed leadership behind the evaluation process, or else subsequent efforts to implement recommendations would not be prioritised. Most of those who participated in the
survey agreed that leadership was an important factor. While more than half considered that
‘evidence and data is actively sought to help decision-making at all levels of the
organisation’, this result did not show conclusively to what extent evaluations were valued in
practice. He said that the participants needed to lead in creating an evaluative culture, at the
same time noting that evaluations must be demand-led. It was pointed out that, rather worryingly, fewer than half of the respondents said that they conducted stakeholder analysis. Alistair explained that, without engaging people in evaluations, there would be no ownership
and no direct channel for conveying the evaluation messages within the organisation and to its wider stakeholders. To make evaluation a learning process, evaluations could not be a box-ticking exercise and should be less like financial audits.

In terms of the factors in the Evaluation Purpose and Policy category, most strikingly, an overwhelming majority agreed with the statement that ‘there is capacity within my organisation to reflect on, absorb and act upon the findings of evaluations’. Alistair pointed out that even though most of the organisations said that they had evaluation policies and capacities, these did not appear to be implemented very well in practice. Evaluations were not always integrated into the policy/programme cycle, often missing the right moment to inform decision-making and thereby lowering their credibility. With the high financial costs of
evaluations, lessons learned should be reflected in the programme and wider organisational
learning. Alistair also referred to dissemination as a key strategic issue: evaluations tended to
be distributed in the hope that people would read them, without any guarantees.

One of the most agreed statements in the Supporting Processes and Mechanisms category
was that ‘monitoring of humanitarian programmes could be improved within my
organisation’. More than three quarters of the respondents indicated that there was a need for
more evaluation tools, contrary to the belief that there were too many already. While the
importance of involvement of beneficiaries was recognised, only half thought that it had
become standard practice. The role of networks such as ALNAP was recognised as useful in
developing evaluation policy and practice.

Alistair posed a question for discussion: ‘what is preventing organisational learning and
improved capacities from the evaluation process?’ He concluded that the current evaluation
capacities in the humanitarian sector were mixed. The challenges would be to make
evaluation more demand-led and to create a virtuous cycle between evaluations and practice.
(Alistair’s presentation slides are here).

The participants were then asked to identify their current concerns on evaluation capacities.
Some of the points raised are summarised below.

 There are more sources for learning than just evaluations and there should be more
  emphasis on learning rather than on evaluation.
 There must be more efforts by all at increasing the sense of ownership and sharing
  experiences.
 There are too many tools in some areas but not enough in others.
 One needs to start with self-assessment and be clear and upfront about what to be accountable for.
 One should plan what to evaluate and avoid overloading.
 There is a tension between learning and accountability. Currently, neither accountability
  nor learning needs are adequately met.
 It is worrying to find that few organisations are engaged in stakeholder analysis.
 There are some evaluation champions within organisations, outside of evaluation units –
  they need to be found and brought into the process of expanding evaluation capacities.




2. Workshop Exercise: Spheres of Influence

Joakim Molander (Sida) opened the group-work session by sharing his experience in his
Agency. He and his colleagues in the evaluation department embarked on a strategic change
within the organisation, towards more utilisation-focused, learning-emphasised evaluations.
This was undertaken in the context of the re-organisation of Sida as a whole, which started in October 2008 and was strongly led by management. This process included consultations with Sida staff
across the organisation (some 900 staff in 50 countries) to determine what kind of culture the
organisation wanted to achieve and to change the policy framework and organisational plan
accordingly.

The evaluation department found that the general perception in the organisation was that evaluations were too academic and reports too long. As a starting point, referring
to Patton’s idea of utilisation-focused evaluation, Joakim and his team explored what kind of
information was needed by different units of Sida and how evaluations could be linked to it
and utilised. Based on these findings, a strategy for change was put in place, considering what the evaluation department could change directly and which areas of the organisation it could influence. Most tangibly, the evaluation department devised the following approach:
 Ask the question: is there energy for an evaluation idea? If so, go with it; if not, shelve it.
 Plan evaluations more ‘consciously’, looking at: a) who is involved? b) who are the
    intended users? c) how will the findings be communicated? d) what is the ideal outcome?
    e) who are the users by proxy?
 Create and use space for learning to allow for reflection. Encourage an evaluative culture,
    using informal lunches and networking sessions both inside and outside the organisation.

Through this experience, Joakim and his team also noted a number of important factors.
Evaluations should be considered more as a process than as a product, offering staff learning opportunities. It was a challenge to identify exactly who the key stakeholders were; involving them in an evaluation process could meet with objections, based on the belief that this would compromise the integrity and independence of the evaluation. It was also critical to identify evaluation champions at policy and learning levels in order to tune into the demand for evaluations and encourage their use. (See this case study on the portal here).

The Sphere of Influence Exercise:

After Joakim’s presentation, which the participants found very inspiring, they were divided into six
groups – Consultants/Academics, Donors, NGOs
(two groups), the Red Cross movement and the UN.
They undertook a mapping exercise using the sphere
of influence model which is commonly used in
creating a strategy for change in management. It was
used in this workshop as a means to start thinking
about the virtuous cycle, placing the factors
identified in the draft framework.

There were 23 cards which corresponded to the key
factors of the draft framework. The participants were asked to map those factors, discussing whether each factor was under their direct control, in the environment of direct influence, in
the environment of indirect influence, or outside their control or influence. At the end of the mapping exercise, each group was asked to prioritise the five most important aspects to be changed in their organisations.

The summary of the findings from this mapping exercise is as follows. (The consolidated
table of the sphere of influence is here).

   The majority of groups considered the areas of ‘Evaluation Purpose and Policy’ and ‘Evaluation Processes and Systems’ to be under their direct control, while most of the factors in the ‘Leadership, Culture and Structure’ area were placed in the sphere of direct influence. A large number of the factors in ‘Supporting Processes and Mechanisms’ were located in the sphere of indirect influence or outside their influence.

   There were 14 factors marked as most significant. Five factors topped the rankings:1
            - Clarify the purpose of evaluation
            - Develop a system for involving key stakeholders throughout the process
            - Improve monitoring throughout the process
            - Promote an evaluation culture
            - Develop a strategic approach to selecting what should be evaluated.
All but two factors selected as most significant were marked under either direct control or
direct influence, indicating that the important factors were within the sphere of influence of
the participants.2

   Both the UN and the Red Cross/Crescent Movement groups identified the largest number
    of factors in their direct control whereas the Donors group and one of the NGO groups
    had more factors under their direct influence. Not surprisingly perhaps, consultants
    marked the largest number of factors as falling outside of their sphere of influence.
    Interestingly, they felt that their control was over evaluation tools rather than over the
    technical quality of evaluations.

During this exercise, the participants actively discussed which factors would work in their organisations and what obstacles might be encountered. It was noted that the degree of influence over the staff and leadership of an organisation depends on the level of those who initiate change within it, and that they need to be supported by evaluation champions at different levels. Although significant, the personal factors had their own limits. Timing was also raised as a key element, both in terms of an organisation’s change cycle and its project/programme cycle. The participants also noted that good baseline data was necessary for evaluation and should be strengthened. It was also noted that some of the factors of the framework needed more clarity.




1 The first three factors were marked as significant by three groups each, and the other two by two groups each. Nine other factors were also marked as most significant by one group each: ‘Ensure leadership is supportive of evaluation’, ‘Increase the internal demand for evaluation information’, ‘Ensure evaluation processes are timely and form an integral part of the decision-making cycle’, ‘Emphasise quality not quantity’, ‘Improve the technical quality of the evaluation process’, ‘Use a mix of internal and external staff to encourage a culture of evaluation to develop’, ‘Develop a policy to effectively disseminate findings’, ‘Involve beneficiaries throughout the programming cycle’ and ‘Engage with media demands for information’.
2 The two factors were ‘Emphasise quality not quantity’ and ‘Engage with media demands for information’.
3. Keynote Presentation: Michael Quinn Patton


Michael Quinn Patton provided a lively keynote presentation via video link from
Minneapolis. He stressed the need for contingency-based evaluations that could be adapted to
different contexts and organisational capacities. His presentation started with the emphasis on
the five most critical factors in evaluation use – ‘people, people, people, people, people’.

He went on to explain four common evaluation challenges with illustrations: the tension
between accountability and learning, speaking truth to power, creating openness to reality-
testing and commitment to engage, and working effectively with leadership. A further, more practical, challenge was linking evaluations to organisational mechanisms such as performance reviews and planning and budget cycles.

Michael stressed the importance of reframing accountability, linking it to learning and
performance improvement. In this sense, an evaluative framework should aim to create a
culture for internal accountability, rather than simply responding to pressures from external
and donor-driven demands. Another factor highlighted was the commitment to speaking the truth, even if senior management and donors did not want to hear it. ‘Jesting is serious
business, because speaking truth to power is serious business.’

Many participants nodded to the remark that the most important thing was to adapt
evaluations to context and clarify the value of evaluation – ‘e-valu-ations’. Michael related an experience in which, realising that leaders were not attending evaluation training, he changed the topic to leadership. By reframing it in this way, the added value of evaluations was sold to leaders as providing the data to improve organisational performance.

Michael stressed the process use of evaluation: evaluative thinking should be promoted throughout organisations. This implied improving organisational capacities for ‘Reality-Testing, Results-
Oriented, Learning-Focused’ leadership and culture.

Another useful analysis from Michael was the comparison between the ALNAP draft
framework and Cornell University’s work on evaluation capacities. Four categories for
capacity-building matched between the two studies. A self-assessment matrix (seen here) was also presented as another analytical tool complementing the draft framework.

Michael concluded by saying that above and beyond
capacity-building in each organisation, there was a
need to increase the efficiency and effectiveness of the entire humanitarian system. He encouraged the participants towards adaptive learning, as seen in this workshop, so that organisations could support
each other and learn together. (His presentation can be seen here).

Questions and Answers with Michael Quinn Patton:

Q: How can we effectively manage the knowledge produced by
evaluations and the tension between learning and accountability?
How can we link evaluations better to a results-orientated
approach?

A: [Michael responded by offering the example of IDRC and their commitment to
evaluative thinking]. IDRC is a quasi-governmental organisation that collaborates with
NGOs worldwide. At one point, there were over 1,000 unfinished reports, even though senior managers were supposed to complete them. The President of IDRC made it a
priority to finish those reports and the evaluations department listed and published all
unfinished reports to make responsibilities clear.

This experience illustrates that those reports were lacking in value and not contributing to learning within IDRC. The senior managers were leaving old reports aside to work on new ones, as there was no incentive to complete old reports that no longer chimed with the project cycle.

When learning occurs in an organisation, it typically happens in the first six months of a project, and most learning is done in a practical manner. IDRC created a project report completion programme in which reports were reviewed by peer managers. This enabled the managers to learn from each other and provided an incentive to report well.

The IDRC evaluations task force reviews all reports and conducts interviews across the
organisation in order to generate priority areas of lessons learned. There is a worldwide
Annual Learning Programme meeting to create a culture for learning and accountability and
to harvest lessons learned. This helps create an adaptive and real-time knowledge-
management system in the organisation.

Q: There is a cultural difference between IDRC and agencies in the humanitarian sector.
How can IDRC’s experience be adapted to the humanitarian sector? For example, there is no reference to the word ‘humanitarian’ in your presentation.

A: There is a need to link recommendations to those who will implement them. One of the
lessons learned from the above example is that actions happen in a context, but evaluators tend to offer context-free recommendations. These points are also applicable to
the humanitarian sector.

Q: What can evaluators do when they find themselves in a hostile environment resistant to
evaluations?

A: People might be resistant for good reasons: they might have had bad experiences and
lack confidence or trust in the evaluative process. Find out what people/leaders care about and use this as leverage to convert them to the merits of evaluations. Find out what people
most respond to and play their game, listen to them and help them to become leaders of the
modern age through the use of evaluation.
4. Closing session: Way Forward

In this session, two reflectors, Margie Buchanan-Smith (independent consultant) and Jock
Baker (Care International), were asked to provide ideas about applying the lessons from the
workshop; following that, the participants
discussed the way forward in groups which
then presented their findings.

Margie pointed out that a new way of
thinking about evaluations was required,
moving towards people, process and
learning, and asking about the exact purpose of evaluations. She suggested that the language of evaluation should also become user-focused, not using the term ‘evaluation’ where it was loaded and problematic. It was noted that evaluations may not always be the best vehicle for learning or accountability, and that other approaches could be adopted. Margie suggested
that the draft ALNAP framework may be complemented by the ‘context, evidence, links’
framework, as drawn here.3

Jock remarked that ‘evaluators are jesters in hell!’. He pointed out that identifying the ‘leaders’ was not an easy task, as those in power and those with real influence sit within the complex web of an organisation. He also pointed out that the outcome of evaluation use was dependent on people, especially the most influential players. Jock thought that an evaluation workshop should be considered a leadership workshop.

Responding to those reflections, the participants discussed ways to engage leadership on evaluations – it was pointed out that one could try, but that being totally robust with leaders could only be done by somebody like Michael. One of the groups also pointed out that the important actors should be considered in terms of individuals rather than organisations, and another thought that more effort should be made to facilitate the inclusion of stakeholders/users in
evaluations. Another group wondered whether one could map the linkages between
influencing strategic/leadership/culture, clarifying the purpose of evaluation and encouraging
stakeholder participation.

The participants also proposed that, in order to explore relevance, it should be examined why and how certain evaluations manage to make an impact, and that the tension between learning and accountability should be studied further, with minimum standards set for both. It was also
pointed out that there is a crucial absence of standard indicators for the whole sector. One
participant suggested that case studies should be collected to illustrate how to link evaluation,
accountability, learning and knowledge management with a results-based approach.

In exploring the steps for change towards increased evaluation use, the cost of organisational change and the need for innovative approaches cannot be ignored. One participant suggested that, in devising a strategy, thought should be given to whether it would lead to a step change or a gradual shift. Another suggested drawing boundaries for evaluations, so that their limitations would be clearly understood.

3 D. Start and I. Hovland (2004) Tools for Policy Impact: A Handbook for Researchers, ODI, London.
The participants brought up ideas about how to be more creative and effective in evaluation,
for example by using videos. Ben said that innovation, ensuring stakeholder engagement and sharing knowledge-management ideas all form part of ALNAP’s pioneering work.

The other ideas raised in this session included:

 The need to know more about evidence-based innovative approaches in dissemination
  and knowledge management.
 The need to come up with a good sales pitch and dissemination techniques.
 The need to know good practices in utilisation.
 To have practical case studies of balancing/separating learning and accountability.
 To bring the conversations on evaluation capacities to other networks such as Evalnet, UNEG and IPDET.
 To use networks to encourage donors to accept new frameworks/ideas about evaluations.

The participants also commented on the workshop. One thought it disappointing that more donors did not attend: donors, NGOs, programme managers and evaluators should all be
engaged with this discussion. Another comment was that it was good to have a small
gathering where people could learn from each other to create positive and innovative
thinking. It was also mentioned that this workshop was useful in connecting people and
inspiring them to improve evaluative processes.

Ben concluded the session by saying that ALNAP was working step by step towards more utilisation-focused evaluations and analysing what could produce positive change in the humanitarian sector.

Way Forward:

John Mitchell (ALNAP) concluded that efforts should be made to create a virtuous cycle of change and to adopt a new language that allows truth to be spoken to power. ALNAP was ready to support the participants in strengthening evaluation capacities in their organisations.
Following this workshop, ALNAP was going to:

 Analyse the mapping exercises and include them in the workshop report
 Create briefing papers for humanitarian leaders about evaluative thinking
 Collect case studies of experiences of strengthening evaluation capacities
 Collect case studies to illustrate how to link evaluation, accountability, learning,
  knowledge management and a results-based approach
 Create a private online space on ALNAP’s website for this ‘recovering evaluators
  support group’ to share ideas
 Ask for people to volunteer for a peer advisory group to review the draft ALNAP
  framework for analysis.

The workshop concluded on a positive note, with the participants agreeing that the next steps should bring even more interaction among ALNAP members on issues around evaluation capacities.
