Operational Review: Haiti Earthquake (January 2010)

Objective: To compare CNA implementation contexts
Question: In which context was the CNA implemented (sudden onset, slow onset, complex emergency, protracted crisis, etc.)?
Answer: Sudden-onset disaster: a magnitude 7.0 earthquake struck Haiti in January 2010, affecting the capital, the government, and many agencies.
Objective: To compare CNA objectives across contexts
Question: What were the top 3 objectives for carrying out the needs assessment?
Answer:
1. Identify current geographic and sectoral needs in affected areas.
2. Develop a common baseline shared among clusters.
3. Set priorities for humanitarian action.
Objective: Core team composition
Question: How long after the decision was taken was the core team constituted? In total, how many main team members were there (100% dedicated)? What expertise was most lacking among team members?
Answer: The core team was operational 1 day after the decision was taken. 7 persons were fully dedicated to the task (2 CDC, 2 iMMAP, 2 ACAPS, and one national logistician). The most lacking expertise was logistics and consensus-building skills; representation of the government within the core team was also lacking. The ideal core team would have been composed of 11 persons, 100% dedicated to the task.
Objective: To compare sectors analysed
Question: What were the sectors assessed by the needs assessment?
Answer: Health, food and nutrition, WASH, shelter and NFI, plus specific questions on gender, HIV, and protection. Each cluster asked for additional questions, mainly due to a lack of understanding of the IRA tool. The advantage of the IRA tool is that it is integrated and includes an analytical tool.
What about other cross-cutting issues? Urgent issues needed to be prioritised. Many sectors had a list of 25 questions; the HIV teams wanted to add another 5. Many wanted to know why non-life-saving activities were not included.
An additional 16 questions (4 per sector) could be added, but 37 more were requested, so choices had to be made and some new questions refused.
The aim of the tool is that one person completes a section of it, rather than one person completing the whole document.
Objective: To compare data collection tools
Question: What was the data collection tool?
Answer: The IRA toolbox, with software available to generate reports. It was used because no alternative existed yet; the IRA is well known even if not always used. The tool has pros and cons: there are problems with its sampling methodology, but it was the only integrated tool available with automatically generated output, so when proposed it was accepted. The other option was to develop our own tool. Information on team makeup and training was available, and the tool is easy to use. Today the same choice would be made.
If the analysis had needed to be quicker, e.g. done in one week, would the same tool have been chosen? Yes, but the sampling would have needed to be different.
Paper questionnaires and PDAs were used for data collection.
Objective: Sampling method
Question: What was the sampling method used during the assessment?
Answer: Quadrat sampling: in each quadrat, the two sites closest to the centroid were selected. Sampling was based on the level of earthquake impact (the 10 most affected communes). The sampling was too complicated: less representative data should have been accepted in order to get information quickly, but that would not have reached the level of accuracy and detail requested by the clusters. The method worked through each department, section, and unit, with the aim of visiting every section: go to the centre of the grid cell by GPS, then to the nearest village. For structural problems, all areas were red. On 31 January, preliminary results on the most affected area were produced from the processed raw data. The assessment then moved outside the most affected areas and compared them with some less affected areas to get an overall picture. The data entry tool was designed as a light database. The quality of the information collected was poorer, and the automatic functions were not always used.
The HCT wanted concrete, defined information, and the expectations of clusters, donors, and government were high. There was a felt need to map the city at section level in order to know where to send aid (e.g. trucks) immediately, and an expectation that the assessment would clearly identify needs and the numbers affected. Prior knowledge of the tool, and of the level of detail it could provide, was too low. In addition, those involved in the decision to choose the tool were no longer there when it came to analysis, due to staff changes.
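The quadrat selection described above (overlay a grid on the affected area, then visit the sites nearest each cell's centroid) can be sketched as follows. This is an illustrative reconstruction, not the RINAH implementation: the grid cell size, the site list, and the use of plain Euclidean distance on projected coordinates are all assumptions.

```python
import math

def quadrat_sample(sites, cell_size, k=2):
    """Bin sites into square grid cells (quadrats) and, for each
    cell, keep the k sites closest to that cell's centroid."""
    cells = {}
    for name, x, y in sites:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append((name, x, y))
    selected = []
    for (cx, cy), members in sorted(cells.items()):
        # Centroid of the grid cell itself, not of the member sites.
        centre = ((cx + 0.5) * cell_size, (cy + 0.5) * cell_size)
        members.sort(key=lambda s: math.dist((s[1], s[2]), centre))
        selected.extend(name for name, _, _ in members[:k])
    return selected

# Hypothetical sites as (name, x, y) in projected coordinates.
sites = [("A", 1, 1), ("B", 9, 9), ("C", 4, 6), ("D", 5, 5)]
print(quadrat_sample(sites, cell_size=10))  # the two sites nearest (5, 5): ['D', 'C']
```

In the field, the centroid coordinates would be loaded into a GPS unit and the team would walk to the nearest village, as the review notes.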
Objective: To compare CNA data collector team composition, skills represented, and skill gaps
Question: How many people were used as data collectors? What were the skills of the data collectors? What expertise was most lacking among enumerators?
Answer: 128 different people were involved. Each team had one team leader, one enumerator on paper, and one with a PDA; only the PDA data were processed, as this was quicker, so duplicating enumerators was perhaps a waste of resources. Teams were composed of half men and half women, and each team had at least one or two Creole speakers.
70% were students from Haiti (social science, statistics, local language; Creole plus social science and technology skills were needed); 20% were skilled expatriates (relief and public health backgrounds from CDC, the clusters, IFRC, OCHA, UNFPA, etc., though cluster representatives participated irregularly); and 10% belonged to other backgrounds (civil protection, journalists, medical staff, universities, etc.). IFRC offered 60-70 people, but vehicles could not be provided for them (not all agencies could use UN cars, e.g. IFRC staff), and they had no ability to spend money as the UN could. Government participation was poor. Per diems were paid.
The most lacking expertise was relief workers and people from the clusters (especially health, water and sanitation, nutrition, and food). This resulted in issues with the quality of the data and information gathered.
Objective: To compare the timeline between the constitution of an operational team and the production of results
Question: How long was it from initiation of the team's work to presentation of final results?
Answer: 11 days to preliminary reports. 22 days to produce weighted tables (4 days were lost weighting the data; CDC produced a pre-final report 5 days earlier without weighted data). 34 days to production of the final report. This resulted in two different final reports.
The initial timeframe planned delivery of results by 1-2 February 2010. The initial Flash Appeal was produced at day 3, and the revised Flash Appeal by 18 February, a 2-3 week initiative.
Validation of the multi-cluster assessment: 7 days after the earthquake.
Preliminary results: 11 days after the decision was taken.
Raw data and tables available: 18 days after the decision.
Weighted tables and graphs available: 23 days after the decision.
Final report available: 35 days after the decision.
From 18 January there were many reports about IDPs, so the team was asked to assess needs outside the most affected areas to capture information on IDPs; there were similar issues at the border, with concerns about cross-border movement. This led to national coverage, but that was probably a sequencing mistake: it would have been better to concentrate on the most affected area first and then go wider. Sampling was based on the level of earthquake impact (the 10 most affected communes) and was too complicated; assessing fewer sites would have yielded information more quickly. The sampling method worked through each department, section, and unit, aiming to reach every section of the most affected area.
Objective: To compare logistical challenges
Question: What were the main logistical challenges in carrying out the CNA in the areas of: interview staff; security; payrolls/supplies; data collection; analysis and reporting?
Answer: There was no adequate infrastructure for training and debriefing enumerators, photocopying, etc. Access to a complete set of P-codes and shapefiles at administrative level 4 was very difficult. It was not possible to hire staff within the deployment framework. The time allocated for field data collection was really stretched by the permitted flight schedule. There were too many logistical problems to address compared to the potential number of participants (IFRC had far more resources, but no transportation means were available).
Infrastructure: lack of a training venue.
Security: level 3, with restricted access to the field and a need for armed escorts at the beginning; enumerators were not allowed into the compound, and the team was not able to leave the compound to brief the teams. Helicopters coming from the Dominican Republic in the morning resulted in poorly filled questionnaires, especially on health.
Transportation: renting vehicles was not allowed, so the team depended on agencies providing vehicles; fuel was an issue.
12 days were lost to logistical, coordination, and facilitation challenges.
Objective: To compare multi-actor coordination
Question: What were the main challenges in carrying out the CNA in the areas of: working with clusters or NGOs; working with donors; working with national government?
Answer: Regular meetings were held with cluster leads to update them on the ongoing work and results, but participation in the data collection process was scarce in terms of human resources, due to work overload. Civil protection was involved at the beginning but was assigned to other tasks very quickly.
Meeting the expectations of the different groups was difficult, and levels of buy-in varied over time. Clusters were interested in the output, but their engagement varied and they were not available for analysis. Many agencies were engaged, but actual participation was poor.
Donors were looking for the big picture, which the scale of the emergency made difficult to provide; e.g. population movement made information hard to capture.
Everyone wanted daily updates, which took time; the number of actors was large and increasing daily. Staff rotation also increased the number of enquiries, as people's lengths of stay were short.
Objective: To compare the way analytical frameworks were used during the CNA
Question: Was an analysis plan built for the data and the indicators collected? What were the main challenges when building and using this analytical model during the CNA?
Answer: An analytical plan was initially agreed as part of the tool chosen for the assessment (the IRA toolbox) and was supposed to generate an automatic report; severity ranking by administrative level was one of the expected outputs. For technical reasons it was not possible to go ahead with the tool, so each cluster was asked to choose the ten most important indicators among the data and to interpret them. Maps were also provided for the most important indicators.
Comments from the clusters indicated that there were no clear instructions on how to interpret the data, which delayed delivery of each cluster's comments (estimated impact of at least 6).
The IRA has an analysis plan and is supposed to generate a report based on Sphere standards, giving a colour-coded ranking (red). The matrix could not be used, but PDAs were used and the database was cleaned (there were problems, e.g. a number could be entered where a letter should be); using PDAs helped with the analysis. P-codes were not available at this level.
Only 49 of the 280 variables were used in the final report (76 extracted variables were considered reliable, divided into 5 different sets/strata: rural/urban, camp/non-camp, border/non-border, at commune level, and IDPs in or outside Port-au-Prince, in camp or not).
As a back-up plan, being unable to use the IRA matrix, the dataset was processed quickly in Excel; there was a problem of data ownership, as the clusters were felt to own the data.
Only 2 or 3 variables were available across sectors; there was a lack of comparable data across sectors.
Objective: To compare the cost of CNA exercises and the breakdown between direct funding and in-kind donation
Question: Roughly, how much did the entire CNA process cost? What portion of these resources was donated or in-kind? What portion of these resources came from a single major donor?
Answer: Around 0.5M USD (91 hours of helicopter time). ACAPS: 40,000 USD. In-kind donation: 0.46M USD.

Objective: Link between assessment and evidence-based analysis and decision making
Question: How, and how much, did the assessment contribute to decision making about priority setting for: geographic areas; key social groups; humanitarian sector focus; recovery planning; preparation for ongoing or subsequent assessments of needs? What were the major important or surprising results found from the CNA, and were they taken into account for decision making?
Answer: Information on this issue is really poor because the team withdrew immediately after the final report was released. The only interesting figure is that one month after release, the report was the second most downloaded document from the OneResponse website, after the location of IDPs.
Major findings: 82% of the visited sites outside the most affected areas reported the presence of IDPs, especially at the border (which led to a cross-border assessment in early March 2010). Other results were already known, but at least they were quantified.
It is not evident that the data were used in the Flash Appeal, the PDNA, or the MIT assessment (although the data were transmitted to those bodies).
Objective: To compare access to data
Question: Was access to the data free for any person requesting it? Which agency was hosting the data after the assessment report was released?
Answer: The data belonged to the clusters, and access was restricted to cluster leads until the final green light was received from the HCT office on 17/02/2010. The final report, tables, and graphs are available online (OneResponse website); the OCHA IM section now hosts the database and questionnaires, and the raw database is available on request.
Objective: To compare initial target audience and end users of the CNA
Question: Who were the main users of the results of the CNA?
Answer: The clusters; however, they received the data too late for insertion into the revised Flash Appeal launched on 18/02/2010 (no evidence was found within that document that RINAH data were used for the context analysis). The complete dataset was delivered to the PCNA representatives and to the MIT survey coordinators.
Objective: To compare the expectations met for each CNA
Question: What were the major unmet expectations of NGOs, HCT, government, and donors?
Answer: No post-release monitoring of the use of the report is available so far. Some comments were received that the final report did not reflect needs (comments received from the OCHA Head of Office), or not clearly enough to allow decisions.
Gender specialists considered that the data were not disaggregated enough to allow a gender analysis.
There was no feedback from donors.
FAO was not happy at being left out, as the planting season was coming up.
Feedback from donors and government was lacking, but the team left the day after the report was produced.
There were complaints that the document was only in French; it was supposed to be translated but was not.
Objective: To compare data volume produced and analysis/interpretation capacity at field level
Question: How many variables were produced in the final dataset and in the final report? How long did the data analysis take (in days)? How long did the data interpretation take? Roughly, what percentage of garbage data was considered unreliable and discarded from the analysis process?
Answer: There were 280 variables in the initial questionnaire. 76 variables were processed and produced in the final dataset, broken down into 5 strata (rural/urban, camp/non-camp, border/non-border, at commune level, and IDPs in or outside Port-au-Prince, in camp or not) = 344 variables. There was a lack of cross-cluster analysis.
Within the final report, 49 variables were interpreted by the clusters, and 23 maps were produced to represent the most important findings.
Data analysis took 13 days (overlapping with the data collection period; it finished 6 days after data collection was completed).
Data interpretation took the clusters 7 days once complete graphs and tables were produced.
Only 76 of the 280 variables were used, approximately 27%.
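The variable counts in this row can be cross-checked with simple arithmetic. The figures below are taken from this review; note that the reported total of 344 stratified variables does not equal 76 variables times 5 strata, so not every variable appears in every stratum.

```python
initial = 280      # variables in the initial questionnaire
reliable = 76      # variables kept in the final dataset
interpreted = 49   # variables interpreted by clusters in the final report

kept_pct = 100 * reliable / initial   # share of variables kept, about 27%
discarded_pct = 100 - kept_pct        # share discarded as unreliable, about 73%

print(f"kept: {kept_pct:.0f}%, discarded: {discarded_pct:.0f}%")
print(f"interpreted in final report: {100 * interpreted / initial:.0f}% of the initial set")
```

This makes explicit that roughly three quarters of the questionnaire variables never reached the final dataset, which is the answer to the "garbage data" question above.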
Objective: To compare the existence and functioning of indicator monitoring systems after the CNA
Question: Was a monitoring system implemented and functioning to ensure follow-up of the indicators over time?
Answer: The MIT HADR initiative was in charge of monitoring needs for 8 weeks after the disaster. The Needs Assessment Task Force Haiti ensured the coordination of ongoing assessments.
Reference documents:
- RINAH Final report (under multi-cluster rapid assessment)
- RINAH Timetable
- RINAH Flyer
- MIT HADR Final report
- RINAH organigram

Top 10 challenges faced during the implementation of the CNA, and lessons learned:

1. Challenge: Managing expectations and defining clear objectives.
   Lesson learned: Ensure clarity on realistic expectations of the assessment among all actors: clusters, donors, government.
2. Challenge: Logistics to complete the assessment in a timely manner.
   Lesson learned: Secure buy-in from the different actors and agree deployment modalities.
3. Challenge: Dissemination of results.
   Lesson learned: The ACAPS team could have stayed in the field afterwards to disseminate and monitor results. A dissemination plan is needed!
4. Challenge: Ensuring the response analysis is appropriate to the findings, and that results are acted on.
   Lesson learned: Follow-up and monitoring are required to review whether the findings are used appropriately.
5. Challenge: Lack of analytical capacity within clusters to interpret data.
   Lesson learned: Part of the capacity-building agenda for NATF/ACAPS.
6. Challenge: Building consensus around phase II, the tools to be used, and realistic expectations.
   Lesson learned: Clearer tools and guidance that build on experience and help establish clarity around expectations; produce papers based on case studies that demonstrate what is doable in a rapid assessment. Clear messages!
7. Challenge: Consistent participation of actors.
   Lesson learned: Clear guidance for the field survey teams and clusters regarding their participation and what is expected of them.
8. Challenge: Defining the sampling methodology: statistical versus purposive sampling.
   Lesson learned: Training and clarity for cluster leads and deployed assessment experts; guidance on sampling.
9. Challenge: The high number of indicators/variables requested by clusters increased the length of the analysis.
   Lesson learned: Clear guidance on the indicators to be used at each phase of a disaster, with some possibility to adapt to the situation.
10. Challenge: Attempting a statistical analysis for a rapid assessment delays results and yields incomplete data; it is not possible!
    Lesson learned: Use purposive sampling, with tools on how it should be done.