

James Copestake, Centre for Development Studies, University of Bath

September 1999

This paper is concerned specifically with the requirements of microfinance

organisations (MFOs) that have explicit poverty reduction goals, but are also aiming

to become increasingly financially self-reliant. They need information on their impact

in order to improve the services they offer. But impact assessment (IA) work has

more often been carried out primarily to comply with the accountability requirements

of their public sponsors. This note argues in favour of reorienting IA towards the
MFO's own strategic decision processes, and integrating it more closely with internal
monitoring (1).

       This issue was discussed at the third virtual conference on 'microfinance
impact assessment' organised in October 1999 by the 'Consultative Group to Assist the
Poorest' (CGAP) of the World Bank. Participants from the USAID-funded 'AIMS' programme

(Assessing the Impact of Microenterprise Services) drew upon pilot studies in Mali

and Honduras to argue the case for intensive studies (Edgcomb & Garber, 1998;

MkNelly & Lippold, 1998). Simanowitz, in contrast, drew on his experience with the

Small Enterprise Foundation in South Africa to argue for an integrated approach

(Simanowitz, 1999). MFOs and their operating environments differ in so many ways

that both approaches have their place. Nevertheless, this note argues that there is more

scope for integrated approaches than the AIMS work has generally suggested. But it

also argues that more attention needs to be paid to how data collected through impact

monitoring is analysed. To this end, practitioners need to become as familiar with the

distinction between positive and interpretive approaches to data analysis as they are

with the distinction between quantitative and qualitative approaches to data collection.


Impact assessment studies and market research compared

A starting point for this argument is to contrast a stereotypical donor funded IA study

with commercially oriented market research (see Table 1). One difference between

the two approaches is in choice of terminology. One talks about primary stakeholders

and intended beneficiaries, while the other talks of customers and clients. For some,

these words may suggest irreconcilable ideological differences. But there is also a

considerable overlap between a concern for empowerment and sustainable livelihood

improvement, and for client satisfaction and securing repeat business. With respect to

financial services, both approaches recognise the need to understand the complexity of
users' requirements – for risk management, consumption smoothing and coping with
shocks, as well as mobilising working and investment capital (Wright et al., 1999).

       The specialised consultants often recruited to carry out donor commissioned

IA studies necessarily have to collaborate closely with MFO staff, who may also

recognise the value of the work being undertaken. But the contractual structure of

such studies often limits the extent of staff participation, and allows them to distance

themselves from the findings. The operational relevance of such studies is also often limited by the

time lag from data collection to presentation of findings, and by the restricted focus

on just one cohort of past entrants. They are primarily intended to inform wider donor

debates over the contribution that MFOs can make to development. Given the relative

remoteness of this audience, the credibility of findings rests on methodological rigour

(3). Partially as a consequence, managers of MFOs have looked elsewhere for

answers to more immediate operational questions. Being closer to the ground, they are

often better able to judge the plausibility of explanations of impact, even if these are

written up less rigorously. But limited funds and pressure of competing tasks often

limit the quality of the information they can obtain. Preoccupation with immediate

questions can also restrict their ability to think about larger issues, or to view the

impact of their work from fresh perspectives.

Table 1
IMPACT ASSESSMENT STUDIES / INTEGRATED IMPACT MONITORING & ASSESSMENT? / MARKET RESEARCH

WHO FOR?
   IA studies: Programme sponsors.
   Market research: Senior management.
WHY?
   IA studies: Public accountability – 'proof' of impact. Strategic funding decisions.
   Market research: Strategic management decisions – or 'improvement' of impact (new or altered
   financial products, area expansion, assessment of portfolio quality, monitoring of operational
   procedures and staff).
WHEN?
   IA studies: Determined by donor programming cycles. There may be a base-line survey, but these
   are often poorly utilised by later studies.
   Market research: Continuous, with ad hoc bursts in response to specific needs.
WHAT?
   IA studies: Socio-economic indicators of impact on the well-being of intended beneficiaries
   (business profits, household resource profiles). Identify who used the services and who didn't.
   Market research: Profiles of potential, new, graduating and exiting clients. Client satisfaction
   (problems, gaps between expectations and service actually received). Indicators of portfolio
   quality (credit-equity ratios, rate of return on capital, etc.). Comparisons with services
   provided by other agencies.
WHO BY?
   IA studies: Specialists, usually appointed by donors, often mid-way through a project cycle.
   Market research: Internal staff and local consultants.
HOW?
   IA studies: Data collection – sample surveys, focus groups and participatory appraisal
   exercises, narrative case studies, semi-structured interviews with key informants. Data
   analysis – statistical analysis of variation in changes in the well-being of intended
   beneficiaries. Interpretive reports drawing upon qualitative and quantitative data.
   Market research: Data collection – routine entry and exit forms, ad hoc surveys, piloting of
   new products, focus groups, semi-structured and key informant interviews. Data analysis –
   statistical analysis of determinants of client satisfaction. Interpretive reports drawing upon
   qualitative and quantitative data.

Scope for integrated impact monitoring and assessment

Given their different orientations, and contrasting criteria of what constitutes plausible

evidence of impact, it has become common for these two tasks to be kept separate. As

a consequence, technical discussion of them has also been polarised. Development

specialists have concentrated more narrowly on how impact assessment should be

done. Meanwhile market research has generally been the domain of practitioners and

management consultants. It has also received less funding from donors because many

new MFOs have replicated existing models, rather than seeing the need to invest in

their own market research and product development (cf. Dunn & Arbuckle, 1999).

       Table 2 provides more details of what an integrated impact monitoring and

assessment system might look like. The key feature is a common 'spine' of routine

collection of entry and exit data for all clients, supplemented by repeat interviews

with a randomly selected sample of clients from each new cohort of entrants. Ideally,

the latter should then be interviewed each year until they leave the programme. The

data can be used for routine reporting of client attitudes to the services they are

receiving, and analysis of how this differs according to individual, livelihood and

household characteristics. It can also be used for comparisons with data from

secondary sources, such as poverty assessments and enterprise surveys, to reveal who

participates in the programme and who doesn't. Such information should permit staff

regularly to re-evaluate their understanding of what impact different services are

having on different categories of users. It may also prompt ideas for more focused

market research and product development.
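To make the sampling mechanics concrete, here is a minimal sketch (not from the original note) of how the rolling sample described above might be drawn, assuming each entrant record carries an intake-cohort label; the function and data names are hypothetical.

```python
import random
from collections import defaultdict

def draw_rolling_sample(entrants, fraction=0.1, seed=42):
    """Randomly select a fixed fraction of each intake cohort for repeat
    interviews; selected clients would then be re-interviewed each year
    until they leave the programme. Names here are illustrative only."""
    by_cohort = defaultdict(list)
    for client_id, cohort in entrants:          # entrants: (id, cohort) pairs
        by_cohort[cohort].append(client_id)
    rng = random.Random(seed)
    sample = {}
    for cohort, ids in sorted(by_cohort.items()):
        k = max(1, round(fraction * len(ids)))  # at least one per cohort
        sample[cohort] = sorted(rng.sample(ids, k))
    return sample

entrants = [(i, 1999 + i % 3) for i in range(300)]  # three cohorts of 100
sample = draw_rolling_sample(entrants)
print({c: len(ids) for c, ids in sample.items()})   # 10 clients per cohort
```

Stratifying by intake cohort, rather than sampling the whole client list, is what lets each year's new entrants be followed from the point of joining.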

       As the number of interviews grows, so more rigorous analysis of impact also

becomes possible. This requires data on the change in indicators of performance over

time for a sample that experienced different levels of 'treatment', or use of the services

under study, during a specified period. Data is also needed on underlying or pre-

treatment factors (such as age, education, location) that may influence both levels of

treatment and subsequent performance, so that this 'selection bias' can be controlled

for statistically (4).
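As a hedged illustration of the statistical control described above, the following sketch regresses the change in a performance indicator on treatment intensity plus pre-treatment covariates using ordinary least squares; the simulated data and variable names are hypothetical, not drawn from any of the studies cited.

```python
import numpy as np

def impact_estimate(d_outcome, treatment, covariates):
    """OLS of the change in a performance indicator on treatment intensity,
    controlling for pre-treatment factors (e.g. age, education) that may
    influence both take-up and performance ('selection bias')."""
    n = len(d_outcome)
    # Design matrix: intercept, treatment intensity, then covariates.
    X = np.column_stack([np.ones(n), treatment, covariates])
    beta, *_ = np.linalg.lstsq(X, d_outcome, rcond=None)
    return beta[1]  # coefficient on treatment = adjusted impact estimate

# Hypothetical simulated interview data.
rng = np.random.default_rng(0)
n = 420                              # cf. the sample size cited in note (5)
age = rng.normal(35.0, 8.0, n)
education = rng.integers(0, 12, n).astype(float)
treatment = 0.1 * education + rng.normal(2.0, 1.0, n)  # take-up varies with education
d_outcome = 0.5 * treatment + 0.3 * education + rng.normal(0.0, 1.0, n)
covars = np.column_stack([age, education])
print(round(impact_estimate(d_outcome, treatment, covars), 2))
```

Omitting the education covariate here would bias the treatment coefficient upward, which is exactly the selection problem the text warns about.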

Table 2
WHO FOR?   Useful at all levels – from users to sponsors. Formally endorsed by the MFO board, after
           consultation with other staff, shareholders, sponsors and other stakeholders.
WHY?       Strategic management and funding decisions. Improved responsiveness of staff and programme
           to client needs. Client self-reflection.
WHEN?      A 'spine' of routine and continuous data collection, analysis and reporting (of entry and
           exit data, client profiles). Periodic additional studies and analysis – to meet demands
           for both market research and external accountability.
WHAT?      Socio-economic indicators of impact on client well-being. Reasons for entry and exit.
           Client satisfaction. Portfolio quality.
WHO BY?    Specialist staff working in close collaboration with operational staff, possibly on
           secondment from a consulting organisation that can also offer technical support and
           additional resources for periodic additional studies.
HOW?       Routine data collected from all clients on joining and (if possible) leaving the
           programme. Repeat interviews with a rolling sample of clients stratified by intake cohort.
           Follow-up focus groups, narrative case-studies and semi-structured key informant
           interviews with people identified through the rolling survey. Client profiles, entry and
           exit data can be reported to management and donors through a regular reporting system. As
           the database accumulates over time, it also supports more rigorous impact analysis as
           required.

Putting these proposals into practice raises issues about overall governance of the

MFO, staffing, and detailed methodology. First, a sufficient 'meeting of minds'

among the main stakeholders is necessary. Adequate time needs to be given to

enabling them to reach a consensus over how the system can satisfy their multiple

requirements. This point is of wider importance. Like all financial intermediaries,
MFOs are built on trust, and this evaporates quickly if there is discord and doubt over
their strategic plans for achieving long-term viability.

          With respect to staffing, assigning data collection to operational staff has been

criticised for distracting them from core activities, and may increase response bias. On

the other hand, encouraging field staff to participate in open-ended interviews with

their clients, and to discuss what they learn, can help to motivate them and can

enhance data quality. Better staff-client understanding can also help to ensure that the

MFO's wider poverty mission is not lost in the rush to expand. By reducing drop-out

rates it may also make good business sense (Simanowitz, 1999). Overall, the balance

between staff and specialist data collection will vary between MFOs according to

many factors, including their mission, the nature of their products (standard or

tailored, for example) and staff turnover.

       Long-term employment of staff to oversee data collection and analysis is

expensive, and technical specialists can easily become isolated or caught up in

overcomplicated systems (Hyman & Dearden, 1998:274). Consequently, there is a

strong case for sub-contracting this work to consultants, who can absorb fluctuations in
demand for their expertise and can also more easily be called to account for delivering on

planned outputs. But it is important that MFO management play an active part in

tendering and contracting processes, rather than allowing these to be taken over (and

made excessively complicated) by external sponsors.

Choice of specific methods of data collection and analysis raises a third set of

issues. Group based activities may provide useful general information on strengths

and weaknesses of different financial products. But more detailed information on

impact requires individual interviews. Careful attention is needed to ensure that these

strike an appropriate balance between open-ended questions and closed questions,

without taking more than a couple of hours to carry out (Sebstad et al., 1995). Turning

to data analysis, clear protocols need to be developed for routine data entry, checking

and analysis so that internal reports on the findings of batches of interviews can be

generated quickly and cost-effectively. These need to cover both quantitative
(pre-coded) data and qualitative (principally narrative) data.
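A protocol of the kind described above might, for instance, begin with an automated completeness and range check on each batch of interview records before analysis. The sketch below is a hypothetical illustration; the field names and the pre-coded 1-5 satisfaction scale are assumptions, not part of the original note.

```python
import csv
import io

REQUIRED = ["client_id", "entry_date", "loan_size", "satisfaction"]

def check_batch(rows):
    """Minimal routine check of a batch of interview records: flag missing
    required fields and out-of-range pre-coded answers before analysis."""
    problems = []
    for i, row in enumerate(rows, start=1):
        for field in REQUIRED:
            if not row.get(field):
                problems.append((i, f"missing {field}"))
        sat = row.get("satisfaction")
        if sat and sat not in {"1", "2", "3", "4", "5"}:  # pre-coded 1-5 scale
            problems.append((i, "satisfaction out of range"))
    return problems

raw = """client_id,entry_date,loan_size,satisfaction
A01,1999-03-01,150,4
A02,1999-03-02,,6
"""
rows = list(csv.DictReader(io.StringIO(raw)))
print(check_batch(rows))   # row 2: missing loan_size, satisfaction out of range
```

Running such a check at data entry, rather than at analysis time, keeps the internal reporting cycle fast and cheap, which is the point the text makes.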

        For more rigorous impact analysis, of the kind that may be needed to satisfy

external sponsors, it is necessary to ensure that a sufficient number of interviews are

completed in time to feed into strategic reviews. With a continuous data collection

system, the time interval over which impact is being measured will vary. Trend and
seasonality effects must then be controlled for statistically, increasing the required
sample size (5). Where possible, analysis should also be based on changes measured
through repeat interviews. However, when turnover of clients is rapid it is often too
time-consuming to establish a sufficiently large sample this way, and so it must be
possible to carry out at least some analysis on the basis of recall alone.
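To illustrate the extra controls this implies, the sketch below extends an ordinary least squares impact regression with the elapsed measurement interval (trend) and season dummies; all data and names are simulated and hypothetical, offered only as an illustration of the adjustment described above.

```python
import numpy as np

def impact_with_time_controls(d_outcome, treatment, months_elapsed, season):
    """OLS impact estimate controlling for the varying measurement interval
    (trend) and the season in which each change was measured, as needed when
    interviews are spread across a continuous collection system."""
    n = len(d_outcome)
    seasons = sorted(set(season))
    # Dummy variables for all but one season (the omitted baseline).
    dummies = np.column_stack([(season == s).astype(float) for s in seasons[1:]])
    X = np.column_stack([np.ones(n), treatment, months_elapsed, dummies])
    beta, *_ = np.linalg.lstsq(X, d_outcome, rcond=None)
    return beta[1]  # coefficient on treatment

# Hypothetical simulated data: a time trend, a wet-season dip, plus treatment.
rng = np.random.default_rng(1)
n = 500                              # cf. the target sample cited in note (5)
treatment = rng.normal(2.0, 1.0, n)
months = rng.integers(6, 25, n).astype(float)   # varying measurement intervals
season = rng.choice(np.array(["dry", "wet"]), n)
d_outcome = (0.4 * treatment + 0.05 * months
             - 0.6 * (season == "wet") + rng.normal(0.0, 1.0, n))
print(round(impact_with_time_controls(d_outcome, treatment, months, season), 2))
```

Each added control absorbs degrees of freedom, which is why, as the text notes, the required sample size grows.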


This note suggests that MFOs can provide sponsors with reliable evidence of impact

in a way that also generates information of more immediate operational utility to

themselves. The key requirement is to move away from time-bound and externally

controlled IA studies to systems that build upon a spine of routine impact monitoring.

The data thus generated can be interpreted relatively quickly by staff, with the benefit

of their local knowledge and insight, and also analysed using more formal methods to

satisfy the requirements of more remote stakeholders.


(1) This paper originates from discussions based around ongoing work in Kenya,

Peru, Malawi and Zambia. Earlier versions of it were presented to a meeting

organised by SEPAR in Huancayo, Peru in January 1999 and to the Small Enterprise

Development Network, in London in September 1999. I am grateful for feedback on

earlier drafts from participants at these meetings, as well as from Anton Simanowitz,

Carolyn Barnes and Monique Cohen.

(2) 'Positive' analysis aims to attribute the impact of an intervention through
statistical analysis of quantitative data. 'Interpretive' analysis relies on those with
close knowledge of the programme to identify and describe the most important causal
links between programme and impact, and to support their observations with
quantitative and qualitative data drawn from multiple sources. On the quite distinct
difference between qualitative and quantitative data see Moris and Copestake (1993).

(3) Rigour may be defined as the logical deduction of conclusions from fully
stated evidence and assumptions. It is as relevant to interpretive as to positive
approaches to IA (Copestake et al., 1999).

(4) A formal control group is not necessary for this purpose, so long as there is
wide variation in 'treatment' within the sample that is not perfectly correlated with
any other incidental variable, such as location (Moffitt, 1991).

(5) Sebstad & Chen (1996) suggest a target of 500 to permit rigorous positivist
analysis, although Copestake et al. (1999) obtained meaningful results with a sample
of 420.


Copestake, J, S Bhalotra and S Johnson (1999) Assessing the impact of microcredit on
poverty. [submitted to Journal of Development Studies]. Also available at

Dunn, E & J G Arbuckle (1999) Technical note on the relationship between market
research and impact assessment in microfinance. Submitted to Monique Cohen,
Office of Microenterprise Development USAID. Washington, D.C. This and other
AIMS project documents can be downloaded from

Edgcomb, E L & C Garber (1998) Practitioner-led impact assessment: a test in
Honduras. AIMS project. Submitted to Monique Cohen, Office of Microenterprise
Development USAID. Washington, D.C.

Hyman, E L & K Dearden (1998) 'Comprehensive impact assessment systems for
NGO microenterprise development programmes.' World Development, 26(2):261-76.

MkNelly, B & K Lippold with A Foly and R Kipke (1998) Practitioner-led impact
assessment: a test in Mali. AIMS project, USAID. Washington, D.C.

Moffitt, R. (1991) 'Program evaluation with non-experimental data.' Evaluation
Review, 15(3):291-314.

Moris, J & J Copestake (1993) Qualitative enquiry for rural development: a review.
London: IT Publications.

Sebstad, J & G Chen (1996) Overview of studies on the impact of microenterprise
credit. AIMS project, USAID. Washington, D.C.

Sebstad, J., C. Neill, C. Barnes with G. Chen (1995) Assessing the impact of
microenterprise interventions: a framework for analysis. Management Systems
International, submitted to the Office of Microenterprise Development, USAID.
Washington, D.C.

Simanowitz, A (1999) 'Understanding impact: experiences and lessons from the
Small Enterprise Foundation's poverty alleviation programme – Tshomisano.' Small
Enterprise Foundation, South Africa. Paper for the third virtual meeting of the CGAP
working group on impact assessment methodologies.

Wright, G A N, D Kasente, G Ssemogerere and L Mutesasira (1999) 'Vulnerability,
risks, assets and empowerment – the impact of microfinance on poverty alleviation.'
MicroSave-Africa and Uganda Women's Finance Trust: Kampala.