Review of Research Assessment

The Royal Society of Edinburgh (RSE) is pleased to respond to the Scottish Higher Education Funding Council's invitation to contribute to the UK Funding Councils' review of research assessment. This response has been compiled by the General Secretary, Professor Andrew Miller, and the Research Officer, Dr Marc Rands, with the assistance of a number of Fellows with extensive research experience.

As an independent assessment of research quality, the Research Assessment Exercise (RAE) has provided a useful resource for universities and research funding bodies in making their own strategic decisions. However, while it has increased research productivity, it may also have skewed academic research patterns and strategies in favour of quickly publishable, more mainstream projects. It has also consumed far too much academics' time over the RAE year.

The specific questions identified in the Review paper are now addressed below:

Approaches to assessment

Expert review

(In which experts make a professional judgement on the performance of individuals or groupings, over the previous cycle, and/or their likely performance in the future)

Should the assessments be prospective, retrospective or a combination of the two?
Assessments should combine prospective and retrospective fact-based information, and prospective plans should be demonstrably feasible.

What objective data should assessors consider?
The expert review needs to be informed by as much objective data as possible, and should increasingly take account of the potential for fraud and misrepresentation when considering research submissions. Objective criteria could include:

  • number of peer-reviewed publications
  • number of citations over a given period, and journal impact factors (which could be useful substitutes for citations for more recently published papers)
  • public recognition through prizes and Fellowships
  • numbers and success rates of postdoctoral researchers as well as research students
  • external research income (care needs to be taken, however, to avoid this measure simply reflecting the cost of research rather than its relative quality)
  • research grants
  • some means of showing how the knowledge produced by research is useful, e.g. the number of spin-off companies and patents.

In terms of the inclusion of teaching in the assessment, while it would be difficult and time-consuming to assess both teaching and research, there could be a place for recognising the involvement of active researchers in teaching undergraduate and postgraduate courses. Higher education should be carried out in the context of research, and an understanding of the research approach is valuable in any graduate-level employment.

At what level should assessments be made – individuals, groups, departments, research institutes, or higher education institutions?

The RAE has changed quite appreciably over the years. In the early days, it was centred on the whole unit of assessment, with conclusions drawn about the overall quality of the department rather than attempting to drill down to the level of each individual. This changed in 1996, with far more emphasis on the assessment of individuals. The 1996 return allowed panels to deconstruct each submission, to assess each individual academic, and to derive an overall grade essentially from the aggregation of the individual assessments. This became highly explicit in the 2001 exercise, in which the wording describing the meaning of each grade was spelled out in terms of the performance of individuals. Whilst there is some evidence that panels used the RA5&6 submissions to discriminate at the boundaries, the overall exercise does seem, in many panels, to have been carried out in a mechanistic way, with each member of staff submitted being assessed as international, national or sub-national, and the overall grade then determined by the proportion of staff in each of these categories.

This approach, however, can lead to some absurdity: a department may submit, at least in principle, 75% international, 5% national and 20% sub-national staff, and end up with a 3a, where it would probably have been awarded a 4 on the basis of the RA5&6 documents. Two departments of similar size might thus find themselves funded very differently, with the lower-funded department having the larger number of internationally assessed academic staff. In reality, we should be funding international-quality academics wherever they are, and the fact that a submission can be deconstructed precisely to determine this means that no additional work is needed.
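
To make the arithmetic of this absurdity concrete, the following minimal sketch shows how a mechanistic, threshold-based aggregation of individual assessments can place a department with more internationally rated staff below one with fewer. The thresholds used are illustrative assumptions for the purpose of the example, not the actual RAE grading rules:

```python
# Hypothetical sketch only: the thresholds below are illustrative
# assumptions, not the actual RAE grading criteria.

def mechanistic_grade(international, national, sub_national):
    """Derive a departmental grade purely from the proportions of staff
    assessed in each category, as a mechanistic panel would."""
    # A top grade demands near-universal national excellence, so any
    # sizeable sub-national fraction blocks it...
    if international >= 0.50 and sub_national <= 0.05:
        return "5"
    if international + national >= 0.95:
        return "4"
    # ...while national excellence in over two-thirds of activity
    # still earns only a 3a.
    if international + national >= 2 / 3:
        return "3a"
    return "3b"

# Department A: 75% international staff, but 20% sub-national.
print(mechanistic_grade(0.75, 0.05, 0.20))  # -> 3a
# Department B: only 50% international staff, none sub-national.
print(mechanistic_grade(0.50, 0.50, 0.00))  # -> 5
# A has half as many again internationally rated staff as B, yet
# receives the lower grade and hence the lower funding.
```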

Two further advantages of basing assessment and funding on individuals are, first, that universities can avoid tactical games over which staff to submit and, second, that staff are not seriously demotivated by being left out of the submission altogether. A new scheme might therefore require all members of staff to be submitted, with no average departmental grade given: simply the number of staff assessed as international and, if relevant, national.

Is there an alternative to organising the assessment around subjects or thematic areas? If this is unavoidable, roughly how many should there be?

The present subject divisions mostly reflect organisational structures, so they could be kept, but the differences between disciplines must be understood and taken into account. For example, architecture is distinct from the technology-based Built Environment.

Algorithm

(The use of an algorithm to assess research quality using appropriate metrics, leaving no room for subjective assessment.)

It is not possible to assess quality solely by metrics. Expert review is an important element in the research assessment process. One specific role is to make allowances for the inevitable differences between the metrics of different research fields, especially those in the arts and humanities, which would be very difficult to correct for automatically. The use of an algorithm would also increase the level of 'game-playing' on the part of institutions to maximise the score given by the metrics. Metrics alone would also not allow any evaluation of future plans.

Self-assessment

(In which institutions, departments or individuals assess themselves.)

Although self-assessment does have a role in outlining strategy development, research assessment could not rely wholly on self-assessment, and it is not clear that it would add anything that expert review would not provide.

Historical ratings

(A policy that gives each institution a rating on the basis of its historical performance and/or the value of its research infrastructure.)

The RSE could not support the development of a system that implied only slow change, as many departments have risen from grade 3 to grade 5 with institutional investment. Such a system would also encourage complacency in those holding high grades, while frustrating and demoralising those seeking to improve.

Crosscutting themes

What should/could an assessment of the research base be used for?
An assessment of the research base should be used to promote excellence in research, to provide indications of status to students, sponsors and other potential stakeholders, and to determine the distribution of research funding across subjects and institutions.

How often should research be assessed?
Every five years would be appropriate.

What is excellence in research?
‘Excellence’ in research is defined by the level of contribution an individual researcher is making to his/her research field. ‘Excellence’ implies originality, accuracy, depth, imagination, enterprise, skilled and meticulous work, judgement and a flair for attacking rewarding problems successfully.

Should research assessment determine the proportion of the available funding directed towards each subject?
It has been commented that, by giving a bonus to subjects for which the national average rating is high, the funding formula effectively 'double-counts' good ratings. There should be no reason why a 5-rated individual in subject A should attract more funding than a 5-rated individual in subject B if the unit costs of the two subjects are the same.
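
A minimal numerical sketch of the double-counting effect being described, assuming a simple formula in which funding per researcher is the product of a unit cost, an individual quality weight, and a subject-level bonus tied to the national average rating. All weights and the bonus mechanism here are illustrative assumptions, not the actual funding formula:

```python
# Illustrative only: weights and the subject bonus are assumptions made
# to show the 'double-counting' effect, not the real funding formula.

GRADE_WEIGHT = {"3a": 1.0, "4": 2.0, "5": 3.0}  # hypothetical quality weights

def funding_per_researcher(grade, subject_avg_weight, unit_cost=1.0):
    quality = GRADE_WEIGHT[grade]       # the individual's rating, counted once...
    subject_bonus = subject_avg_weight  # ...and again via the subject's average
    return unit_cost * quality * subject_bonus

# Two 5-rated individuals in subjects with identical unit costs:
print(funding_per_researcher("5", subject_avg_weight=2.5))  # strong subject: 7.5
print(funding_per_researcher("5", subject_avg_weight=1.5))  # weaker subject: 4.5
# Identical individual ratings and costs, yet the first attracts two-thirds
# more funding, because the same good ratings also inflate the subject bonus.
```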

Should each institution be assessed in the same way?
Currently, institutions are not compared, but units of assessment in different institutions are. Direct comparisons should be made without allowance for the type of institution, as any departure from a 'same for all' approach would remove the essential comparative aspect of the exercise and its results. Research quality should be judged as directly as possible, and any adjustment for the type of institution should be made by some other route.

Should each subject or group of cognate subjects be assessed in the same way?
All subjects should be assessed within the same framework, to ensure consistency (especially if the results are used to distribute funding between subjects), but there should be scope to modify the types of output for assessment.

How much discretion should institutions have in putting together their submissions?
As noted above, there would be merit in considering a requirement for all members of a department to be submitted for assessment, provided this is undertaken in a framework that recognises that research is not expected of all submitted staff (for example, because some must deliver teaching and administration), and which bases funding simply on the number of international- and national-quality academics.

Priorities: what are the most important features of an assessment process?
Important features of an assessment process include simplicity, fairness, objectivity, impartiality, transparency, rigour and robustness. The rules for the assessment process should also be widely understood many years before the assessment is undertaken.

Additional Information

In responding to this consultation, the Society would like to draw attention to the following Royal Society of Edinburgh responses, which are of relevance to this subject: Research and the Knowledge Age (April 2000); Review of Research Policy and Funding (April 2001); and Research and Knowledge Transfer in Scotland (September 2002).
