

                         MICE (Measuring Impact under CERIF) project
              Impact indicators/measures: a survey of existing work

Workpackage 2 of this project comprises a survey of existing work undertaken on impact
indicators. For this purpose a team of four, Richard Gartner (King's College London), Mark Cox
(King's College London), Anna Clements (St. Andrews University) and Brigitte Joerg (DFKI)
examined a series of documents (listed below) which have already attempted to draw up sets of
indicators and measures of impact or have suggested strategies for doing so.

The indicators in these documents were then extracted and arranged into a hierarchy, which
is shown in the diagram overleaf.

The provenance of each node on the diagram is indicated by the source document from which it
was extracted as follows:-

   1. Research Excellence Framework: Second consultation on the assessment and funding of
      research - Appendix D - http://www.hefce.ac.uk/pubs/hefce/2009/09_38/

   2. Research Excellence Framework impact pilot exercise: Findings of the expert panels -

   3. Measuring the impact of research - report from Australian National University -

   4. REF impact pilots: Earth and Environmental Sciences -
      http://www.hefce.ac.uk/research/ref/impact/EarthSystems_EnvironmentalSciences.pdf -
      and English Language and Literature -

   5. Draft impact case studies previously undertaken at King's College London – unpublished

   6. Research Councils UK: Research Outcomes Project Invitation to Tender: Research
      Outcome Types - http://www.rcuk.ac.uk/documents/oocp/OutputTypes.pdf

In addition, we also examined the following documents from which we extracted no further
indicators/measures, but which proved very helpful in clarifying the issues involved:-

   1. The ENQUIRE project final report -
   2. Assessing Europe's University-based Research -
   3. Pure activities model (St. Andrews University) - unpublished

It should be noted initially that we made no attempt to add to the indicators and measures which
we extracted: at this stage we are merely surveying work already undertaken on defining these
and attempting to order them in a hierarchical manner.

We make two initial definitions which will underpin our work:-
   - impact indicators: these are the broader, essentially semantic, concepts which indicate
     that some impact has been achieved: they include, for instance, such concepts as
     'improved patient care', 'improved public services' and 'improved public awareness'
   - impact measures: these are measurable factors which provide evidence of whether a
     form of impact delineated by an impact indicator has occurred: these include such factors
     as 'audience figures', 'increased revenue' and 'website hits'.

Throughout the diagram we have used unformatted text to delineate impact indicators, and bold
text to do so for impact measures.

It will be seen that we attempt to provide a basic taxonomy for indicators and measures: at the
highest level, we differentiate between generic indicators/measures (which could feasibly apply to
any type of impact in the diagram), and those which are specific to given areas of activity. The
latter we divide at the highest level between the economic/commercial and
social/cultural/environmental, each of which is subdivided further. Any measures (shown in bold)
on these area-specific branches are assumed to be most relevant to their given parents (for
instance the lives saved measure is most relevant to the improved health outcomes indicator).
Where we feel a generic measure may be of relevance to a given area-specific indicator, we
indicate this by italics (for instance, we recommend using the generic measure survey results as
evidence for changes in attitudes to science).

This hierarchy of indicators and measures will form the basis for Workpackage 3, in which it
will be mapped to CERIF.

Richard Gartner
Mark Cox
Anna Clements
Brigitte Joerg
