

Joint funding bodies’ review of research assessment
Invitation to contribute

Response from University College London (UCL)

      General comment

1     Selectivity in research funding - through a form of research assessment
      which identifies and properly rewards research quality - is essential in
      today's higher education system. In general we believe that the RAE has
      been a thorough and rational assessment process, although the burden of
      the process has become excessive and, as HEIs have gained experience in
      preparing for it, the RAE has become less resistant to 'games-playing'.
      Research assessment should continue to be based on peer review and to
      reward excellent research through selective funding. The future assessment
      process needs also to be designed in such a way as to reduce the burden on
      HEIs preparing submissions (and on the panels which assess them) -
      although reconciling the need for a rigorous process with the desire for a
      'lighter touch' may be difficult to achieve. It should be possible to reduce the
      scope for manipulating the assessment process - so as to ensure that the
      judgments of panels are seen to be fair and meaningful and command the
      respect both of the higher education sector and of national and international
      research interests.

      Purpose of research assessment

2     Two main purposes of research assessment are:

         to reward internationally excellent research (both financially and in terms
          of the kudos currently attaching to RAE ratings);

         to help HEIs understand the extent to which they are achieving research
          excellence and recognise the areas where there is scope for them to
          enhance their research activity.

3     The most important characteristics of an assessment process are that it
      should be:

         rigorous and fair to individuals and institutions

         transparent

         informative

         conducive to nurturing the intellectual values of different subject areas.

       Definition of research 'excellence' or 'quality'

4      Research 'quality' or 'excellence' in this context should be defined in terms of
       intrinsic intellectual merit, impact and originality - as perceived by peer
       reviewers - rather than in terms of impact beyond the research community
       (although it is important to recognise that in some subject areas, e.g., certain
       clinical sciences, high impact may not be synonymous with high journal
       impact factors). There are now specific funds available to support the
       transfer of 'relevant' research to the business community, and this type of
       commercially beneficial research should be capable of attracting funding
       other than QR funding. It is not the business of 'quality' research
       assessment to comment on or financially support the applicability of
       research.

5      There is a real need to clarify the definition of 'international excellence' in
       future research assessments and one of the least successful aspects of RAE
       2001 was the involvement of international referees. If they are to be
       involved again, international referees will need much better guidelines and to
       be much more fully involved in the process than in 2001.

       Form of assessment: Expert review

6      As indicated above (see 1), we favour maintaining a research assessment
       system based on expert (peer) review.

7      Research assessment should take account of both retrospective and
       prospective elements of research activity but - as in RAEs hitherto - with a
       stronger emphasis on research achievements than on research plans. It is
       important nevertheless to ensure that less experienced researchers whose
       potential exceeds their achievements are properly accommodated by the
       assessment process. In principle, it would be desirable for research
       assessment to take a more holistic approach to research activity than does
       the RAE: the exercise, as presently designed, focuses so strongly on
       publications that vital elements of a healthy research infrastructure
       (supervision of students, running of research seminars, commenting on
       colleagues' work, etc) carry little or no real weight. In practice, however, we
       recognise and agree that a process designed to fund research excellence
       rather than to develop research culture will necessarily concentrate on
       achieved research outputs.

8      The present mix of objective data seems broadly appropriate, i.e.: research
       publications; research income; research assistant and research student
       numbers; research studentships; research environment and strategy
       statements. There may be a case for using data that is already collected for


       other purposes - and thereby reducing the data collection burden of the RAE
       as presently designed. (E.g., in the financial arena HESA data would seem
       an obvious source; and it would make sense for the RAE definition of
       research income (or expenditure) to be the same as the HESA definition.)

9      The initial assessment should be at the level of the individual - i.e.
       assessments should take account of the research quality of each member of
       staff returned as research active in a submission. The overall assessment
       should be expressed as a rating for the total research activity included in that
       submission (which may comprise the activity of a department, an institute,
       staff of different units whose research embraces a common theme, or some
       other combination).

10     There is no preferable alternative to organising the assessment around
       subjects or thematic areas. Given the qualitative differences between
       subject areas, we would see some difficulty in moving to a schedule of fewer,
       more broadly defined units of assessment. While this might reduce the
       administrative burden of RAE, there is a clear tension between such an
       approach and the real and reasonable concern felt by academic departments
       with an unusual (and especially an unusual interdisciplinary) research
       portfolio that the existing schedule of units of assessment is not capable of
       adequately assessing their particular areas of research activity.

11     If a peer review system of research assessment is to continue (and we
       believe that it should) it will be important to address prevailing concerns
       about the definition and operation of assessment panels, particularly in the
       following respects (see 12-15).

12     There is a perception of unreasonable inconsistency between assessment
       panels in terms of the proportions of top ratings awarded. For example, in
       RAE 2001 the percentages of submissions rated 5* by the panels listed
       below were:

       History                   8.4
       Geography                 9.8
       Economics                 10.0
       Law                       13.3
       English                   15.7
       Classics                  23.1
       German                    23.8
       Italian                   31.6

13     We are doubtful that this range reflects real differences in the amount of high
       quality research done in these subjects. In some biomedical areas too, the
       outcomes of RAE 2001 suggested real variations between panels in how


       stringently or otherwise they defined international excellence (see also 5). In
       future assessments, it may be helpful to appoint suitably qualified individuals
       to ensure that research in areas that could be returned to more than one unit
       of assessment receives consistent treatment by the different panels.

14     Different panels (according to their published working methods) read
       different proportions of the work submitted to RAE 2001. Where panels read
       selectively - e.g., only one publication per research active member of staff - it
       is quite unclear as to how the selection was made. In future, if panels are to
       read selectively, HEIs should be enabled to indicate a priority order of
       publications so that the output read is the work perceived by its author to be
       her or his strongest work.

15     In some units of assessment, there has been a striking correlation between
       those HEIs whose staff comprise the membership of an assessment panel
       and those HEIs which receive high ratings from that panel. There is bound
       to be some correlation (in general, it is appropriate for staff from high quality
       departments to feature strongly on assessment panels) - and we do not
       mean to suggest that assessment panels have conducted their work
       improperly. Nevertheless, the current extent of the perceived correlation
       between panel membership and a top rating will tend to undermine
       confidence in the process. The funding bodies would do well at least to
       analyse the results of recent RAEs in an attempt to acknowledge, even if this
       fails to allay, these concerns.

16     In summary, we believe that the principle of peer assessment of research
       commands widespread respect across the sector and should be retained.
       We have some concerns, however, about some aspects of the operation of
       assessment panels in RAE - especially the variation in transparency of
       assessment criteria between panels. There is evidence too that the specific
       subject focus of RAE panels to date has (in effect if not by intention) tended
       to disadvantage certain forms of interdisciplinary research.

       Method of assessment: Algorithm

17     We oppose a metric approach to research assessment, in principle and for
       practical reasons. A metric approach would certainly be wholly
       inappropriate in the arts and humanities. Peer-reviewed journals are not
       always represented in citation indices and types of publication which are
       highly respected in particular subject areas (e.g., chapters in edited books)
       would be missed in a purely metric system. Similar shortcomings apply to
       many of the other metrics (e.g., reliability of completion rates when numbers
       of research students are small, differences in availability of external funding


       in different subject areas). It would be extremely difficult to apply a metric
       system consistently and fairly across all disciplines.

       Method of assessment: Self-assessment

18     Although it could help a research unit, through self-examination, to identify
       ways of strengthening its research activity and strategy, we do not believe
       that a self-assessment system is workable or appropriate to what we see as
       the essential purposes of research assessment (see 2). The process would
       be conducive to inconsistency and lack of objectivity and might also be more
       burdensome than the present system. Reducing the number (and so
       broadening the definition) of units of assessment would make a self-
       assessment process even more difficult to implement effectively.

       Method of assessment: Historical ratings

19     Although we recommend (see 7), on grounds of objectivity, a predominantly
       retrospective approach to research assessment, based on achieved
       research outputs, we feel that a purely historical approach would tend to
       discourage research development and on that basis would be undesirable.
       While a system of historical ratings would have the merit of not needing to
       depend on speculation about an uncertain future, it would tend not to be
       sufficiently capable of acknowledging opportunities for and likely effects of
       planned change.

20     The present structure of QR funding could be said to be too 'historically'
       static. Although it is rightly designed to reward strong research activity
       rather than to help enhance less strong activity, there may be a case for the
       adjustment of QR year on year to account for significant changes in the
       number of research active HEFCE-funded staff in a department - or for a
       higher weighting of research assistants and research students relative to
       HEFCE-funded staff in order to make QR funding more sensitive to changes
       in volume.

       Cycle of assessment

21     The intervals between RAEs have gradually increased. In order to promote
       stability and encourage reasonably long-term research planning, this trend
       should continue. The serious delay that is occurring in properly funding the
       results of RAE 2001 for departments rated 5 and below (a delay which has
       inevitably reduced the RAE's standing in the sector) effectively strengthens
       the argument for delaying the next research assessment: this should take
       place no earlier than 2008. A rolling programme of assessment would be
       most unlikely to decrease overall workload.


       A single system of assessment?

22     All HEIs should be assessed in the same way. Having different assessment
       systems for different institutions would raise serious issues of comparability
       and consistency of assessment criteria.

23     Each subject or group of cognate subjects should be assessed in the same
       way although some variation between assessment criteria according to
       subject is inevitable. It will be important in any future assessment that
       assessment panels (as for RAE 2001) publish draft assessment criteria and
       working methods well in advance of the deadline for submissions. In order
       to promote confidence in the system, it will be no less important for panels to
       respond - to a greater extent than some did for RAE 2001 - to HEIs'
       comments on their draft criteria and working methods. The funding bodies
       should not permit fundamental differences of approach between panels
       where these differences cannot be publicly justified to the sector. If an
       assessment panel proposes to structure assessment abnormally, it should be
       required to explain such an approach (e.g., where a panel proposes to give
       a single rating to each research group within a submission in order to
       calculate the overall rating of the submission, when most other panels give a
       rating to each individual within each group in order to arrive at the overall
       rating).

24     If (as we believe) a prime purpose of research assessment is allocation of
       funding linked to research quality, it is not appropriate to allow HEIs such
       discretion as the current RAE allows. In particular, the fact that a
       department can be rated 5* while returning only a limited proportion of
       eligible staff as research active tends to devalue the RAE process. We
       believe that in future research assessments HEIs should be required to
       include as research active all academic staff contracted to carry out either
       research or research and teaching. We believe that HEIs should not be
       expected to include in the RAE return (either as research active or not
       selected) staff whose contracts require them only to teach. It is offensive to
       the staff concerned (and unfair to the academic department as a whole) to
       characterise teaching-only staff - as the RAE effectively does - as weak
       researchers or non-contributors.

25     All subjects and all institutions should be assessed at the same time and with
       the same frequency. Although the current system is resource-intensive, we
       are not convinced (see 21) that a rolling programme of assessment would
       lessen overall workload or even modify workload to any beneficial extent.


           Research assessment and teaching assessment

26         One aspect not captured in RAE 2001 was the way in which research may
           enhance teaching, which could be regarded as one measure (among others)
           of research excellence. However, we are not in favour of combined teaching
           and research assessment: the requirements of teaching and research differ
           enormously; and each activity needs adequate and dedicated funding.

           Et alia

27         In view of the excessive workload involved in sending to the HEFCs hard
           copies of research publications requested by the RAE 2001 panels, the
           possibility of having this element managed electronically in future – i.e. with
           all cited publications accessible on the Internet and available to the panels
           through provision of the relevant URL - needs to be seriously investigated.

28         In RAE 2001, there was a persisting (and unresolved) lack of clarity - both in
           the HEFCs' guidelines and in advice given by the HEFCs' RAE Team - about
           the eligibility for inclusion in research student numbers of students registered
           for taught graduate programmes with a substantial research component.
           This must be clarified in any future research assessment which takes
           account of research student numbers.


