Review of Research Assessment

The Royal Society of Edinburgh (RSE) is pleased to respond to the UK Funding Councils' consultation on the review of research assessment. This response has been compiled by the General Secretary, Professor Andrew Miller, and the Research Officer, Dr Marc Rands, with the assistance of a number of Fellows with extensive experience of the university research base.
This document marks a very substantial change in the basic thrust of the research assessment exercise, moving away, as it does, from an assessment of departments or units of assessment to the scoring of individuals. In principle, this is a welcome change; however, the proposed system is undoubtedly more complex than the existing model, and the challenge facing those responsible for implementing it is substantial.

In connection with the Review’s recognition of the importance of the exercise in supporting the resource allocation models of the Funding Councils, one of the criticisms of the 2001 Research Assessment Exercise (RAE) was the subsequent decision not to provide limited funds for Units of Assessment (UoAs) ranked 3 or less, in which there were, by definition, examples of international excellence. Many Higher Education Institutions (HEIs) had wasted resources in submitting staff for the RAE on what proved to be incorrect assumptions about the financial consequences of particular grades. With the move of research assessment from departments as a whole to individuals, the funding of research should also change, so that international-quality academics can be funded whatever the quality profile of their department. HEIs must also know in advance of the exercise whether the cost of submitting to assessment is worthwhile; otherwise public resources are wasted.

The specific recommendations identified in the Review paper are now addressed below:

Recommendation 1: Any system of research assessment designed to identify the best research must be based upon the judgement of experts who may, if they choose, employ performance indicators to inform their judgement.

The RSE strongly agrees with this recommendation. While the review needs to be informed by as much objective data as possible, the appropriateness of the different performance indicators needs to be judged by experts in each discipline. Specifically, in terms of research grant income, much high-quality research in the social sciences can be accomplished with relatively low levels of research income. Over-emphasis on research income may therefore favour research that involves high costs and neglect less expensive, but equally high-quality, research in other disciplines. With citation impact, all citations to any items published in a peer-reviewed journal, including letters, commentaries and so on, are included in the numerator, while only the count of major peer-reviewed elements in each journal is included in the denominator. This grossly inflates the citation impact of journals that encourage letters and short notes at the expense of rigorously conducted and reported, peer-reviewed papers, while underestimating the impact of journals that focus on the latter form of publication. Moreover, citation impact is also greatly influenced by citation cartels: groups of researchers who frequently cite each other’s work at the expense of high-quality research published by others. Finally, citation impact is strongly influenced by the size of a discipline or even topic area, as well as by publication practice, greatly undermining comparability across disciplines.

Recommendation 2:

  1. There should be a six-year cycle.
    The RSE agrees with this proposal.
  2. There should be a light-touch ‘mid-point monitoring’. This would be designed only to highlight significant changes in the volume of activity in each unit.
    The RSE agrees with this proposal and that the results of such monitoring of volume indicators should be able to reflect significant increases, as well as decreases, in research activity.
  3. The next assessment process should take place in 2007-8.
    The RSE agrees with this proposal, although preparations for pre-assessment of research competencies will need to begin soon to allow institutions sufficient time to plan for the new approach and to adopt the proposed changes fully.

Recommendation 3:

  1. There should be an institution-level assessment of research competences, undertaken approximately two years before the main assessment.
    The RSE agrees with this proposal. Some of the information conveyed in the current RA5 submission of the RAE would, under this proposal, be considered in a separate research competency submission some two years before the date of the RAE itself. In principle, this is fine. However, it is important that the Funding Councils do not make this competency assessment too burdensome, consuming large amounts of universities' time to little useful purpose. In this regard, we would strongly endorse recommendation 10, that the research strategy part of the RA5 be retained in the documents for the RAE. Inevitably, such strategies will be more difficult to write for larger units of assessment, since they will, of necessity, be less coherent. However, the research strategy is vital; it represents the only point in the document at which any future actions are indicated, and unless the RAE is to degenerate into simply an assessment of the past, this component should not only be present but be expanded in importance.
    In addition, as noted above, sufficient time needs to be allowed for institutions to plan and implement these proposals before the next assessment.
  2. The competences to be assessed should be institutional research strategy, development of researchers, equal opportunities, and dissemination beyond the peer group.
    The RSE agrees with the assessment of these competencies, as well as others such as knowledge transfer, although they should be based on clear performance indicators.
  3. An institution failing its assessment against any one of the competencies would be allowed to enter the next research assessment but would not receive funding on the basis of its performance in that assessment until it had demonstrated a satisfactory performance.
    In terms of this recommendation, the linkage between the competencies and eligibility to enter the next research assessment exercise is less clear; this may instead be an issue for the institution and the relevant Funding Council to tackle on a case-by-case basis.

Recommendation 4

  1. There should, in principle, be a multi-track assessment enabling the intensiveness of the assessment activity (and potentially the degree of risk) to be proportionate to the likely benefit.
    As the RSE mentioned in its submission to the Roberts Review, units of assessment in different institutions are currently compared. Direct comparisons should be made without allowance for the type of institution, as any departure from a 'same for all' approach would remove the essential comparative aspect of the exercise and its results.
    There also needs to be a way to address the small number of real research stars embedded in departments with relatively little research of quality. It is important that these pockets of excellence are fully assessed and funded appropriately wherever they are. Since, under the current proposal, the submission can be deconstructed precisely to identify such individuals, no additional work is needed.
  2. The least research intensive institutions should be considered separately from the remainder of the HE sector.
    As noted above, a non research intensive institution may have one spike of very high quality research activity and we should be funding international-quality academics wherever they are.
  3. The form of the assessment of the least research intensive institutions would be a matter for the relevant funding council.
    The RSE agrees with this proposal.
  4. The less competitive work in the remainder of institutions should be assessed by proxy measures against a threshold standard.
    As noted above, any departure from a 'same for all' approach would remove the essential comparative aspect of the exercise and its results. Research of national and international excellence should always be fully assessed by an expert review assessment subject to the wishes of the submitting HEI being taken into account as well as the requirements of the relevant Funding Council.
  5. The most competitive work should be assessed using an expert review assessment similar to the old Research Assessment Exercise.
    The RSE agrees with this proposal.

Recommendation 5

  1. The output of the Research Quality Assessment should be a ‘quality profile’ indicating the quantum of ‘one star’, ‘two star’ and ‘three star’ research in each submission. It will not be the role of the assessment to reduce this profile to summary metrics or grades.
    The RSE agrees with the proposal of a quality profile indicating the quantum of 'one star', 'two star' and 'three star' research in each submission.
  2. As a matter of principle, star ratings would not be given to named individuals, nor would the profile be published if the submission were sufficiently small that individual performance could be inferred from it.
    There are mixed views on this issue. If the staff were named, they would be in a position to insist that all the income generated be spent entirely on their own research, with the threat of otherwise moving to an institution that will undertake to do this. Universities will struggle not only to develop younger staff, who require support even though they will not have 'earned' it through the RAE, but also to set up new areas of research, for which no individual researcher can be funded directly. It could also lead to rankings within departments that will have a most corrosive impact on any form of collegiality. Alternatively, transparency and the Freedom of Information Act may require that named individuals know how their research efforts performed. The profiles of small units would also be needed where there is collaborative research between two institutions, so that the split of income can be determined.
  3. Panels would be given guidelines on expected proportions of three star, two star and one star ratings. These proportions should normally be the same for each unit of assessment. If a panel awarded grades which were more or less generous than anticipated in the guidelines, these grades would have to be confirmed through moderation.
    The RSE does not agree with this recommendation. The suggestion that guidelines should be given to inform the panel as to what proportions of each type of quality researcher might be present in each subject area nationally flies in the face of international benchmarking. It must be left to the panel to decide the absolute grading. This recommendation is suggested to avoid 'grade drift', but in avoiding this trap it has simply fallen into another, which is that of pre-judging the outcomes. How are such guidelines to be calibrated? Who has the expertise to decide beforehand what proportion of staff in each discipline, or set of disciplines corresponding to each unit of assessment, are international, or national in quality? Either the stars have some absolute meaning, or they are purely relative, in which case, the panels will have the unenviable job of ranking the entire research-active staff in each set of disciplines.

Recommendation 6

  1. There should be between 20 and 25 units of assessment panels supported by around 60 sub-panels. Panels and sub-panels should be supported by colleges of assessors with experience of working in designated multidisciplinary ‘thematic’ areas.
    The RSE agrees that the number of unit of assessment panels should be reduced to promote a broadening of inter-disciplinary edges and collaboration. There have long been concerns in the UK about the assessment of multidisciplinary, applied and practice-based research.

    There may, however, be a difficulty in some areas of identifying sufficient members of the colleges of assessors who have ‘experience of working in designated multidisciplinary thematic areas’. There may also be a perception problem if the Chair is very well known in one discipline. Those not in that discipline may feel there is a bias when in fact there may not be.

  2. Each panel should have a chair and a moderator. The role of the moderator would be to ensure consistency of practice across the sub-panels within the unit of assessment.
    The RSE agrees that each panel should have a chair and moderator and that within each, if possible, there should also be non-UK based researchers with experience of the UK research system.
  3. Each panel should include a number of non-UK based researchers with experience of the UK research system.
    The RSE agrees with this proposal.
  4. The moderators of adjacent panels should meet in five or six ‘super-panels’ whose role would be to ensure consistency of practice between panels. These ‘super-panels’ should be chaired by senior moderators who would be individuals with extensive experience in research.
    The RSE agrees with this proposal.

Recommendation 7

  1. The rule that each researcher may only submit up to four items of research output should be abolished. Research Quality Assessment panels should have the freedom to define their own limits on the number and/or size of research outputs associated with each researcher or group.
    The RSE agrees with this proposal, provided the requirements for any particular panel are known 2-3 years before submission. However, any decision regarding the number or type of research outputs should consider not only the needs of particular disciplines, but also the requirement to conduct long term projects, such as longitudinal studies, that may generate small numbers of substantial, high quality outputs at the end of a four or five year period. The six year cycle will be helpful in this regard. An upper limit, however, could be set to avoid requests for excessive numbers of submissions, and a shift to quantity rather than quality.
  2. Research Quality Assessment panels should ensure that their criteria statements enable them to guarantee that practice-based and applicable research are assessed according to criteria which reflect the characteristics of excellence in those types of research in those disciplines.
    The RSE strongly agrees with this recommendation. It will be important to establish firm and transparent performance indicators for these areas, well in advance of the assessment.

Recommendation 8

  1. The funding councils should work alongside the subject communities and the research councils to develop discipline-specific performance indicators.
    The RSE strongly agrees with this recommendation. However, it will be essential that the main panels comprise disciplines that share a view about the value of specific indicators.
  2. Performance against these indicators should be calculated a year prior to the exercise, and institutions advised of their performance relative to other institutions.
    The RSE agrees with this recommendation.
  3. The weight placed upon these indicators as well as their nature should be allowed to vary between panels.
    The RSE agrees that the weight placed against the indicators as well as their nature will need to vary between different panels to recognise the different profile of activities in different subject areas.

Recommendation 9

  1. Where an institution submits to Research Quality Assessment in a sub-unit of assessment all staff in that sub-unit should become ineligible for the Research Capacity Assessment, even if they are not included in the Research Quality Assessment submission.
    As noted above, any departure from a 'same for all' approach would remove the essential comparative aspect of the exercise and its results. Research of national and international excellence should always be fully assessed by an expert review assessment subject to the wishes of the submitting HEI being taken into account as well as the requirements of the relevant Funding Council.
  2. The funding councils should establish and promote a facility for work to be submitted as the output of a group rather than an individual where appropriate.
    The RSE agrees that the scope for including groups, or cross institution submissions is useful.
  3. The funding councils should consider what measures could be taken to make joint submission more straightforward for institutions.
    The RSE agrees with this recommendation.
  4. Where an institution submits a sub-unit of assessment for Research Quality Assessment, no fewer than 80% of the qualified staff contracted to undertake research within the sub-unit of assessment must be included in the submission.
    The RSE agrees with this recommendation.
  5. All staff eligible to apply for grants from the research councils should be eligible for submission to Research Quality Assessment.
    The RSE agrees that all staff eligible to apply for grants from the Research Councils should be eligible for submission to Research Quality Assessment. This is also consistent with new European law regarding the status of contract researchers, and has significant impact on the sustainability of the research base and the concept of full economic costs.

Recommendation 10. Each panel should consider a research strategy statement outlining the institution’s plans for research at unit level.
The RSE agrees with this recommendation. Institutions should also be encouraged to develop much more robust long term strategic plans over a timescale consistent with estate planning and high profile international researcher recruitment.

Question 11  Burden for institutions. The review proposals have been designed to make the burden of assessment proportionate with the possibility of financial reward. Do you agree that this has been achieved?
Any competitive funding system should seek to provide fair and honest equality of opportunity, coupled with a choice for institutions about the extent to which they engage in the competitive process.

Question 12  Value of research assessment. What value do you place on the research assessment if the financial reward is likely to be small?
The freedom (and challenge) to seek to develop and improve is a strong institutional driver. However, it is up to each institution to balance the effort against the potential gain. Research assessments are also important for the status of institutions, influencing both staff and student recruitment.

Question 13  Equality of opportunity for all groups of staff. How successful do you consider that the proposals of the research assessment review are in this respect?
The proposals could systematically penalise staff in the institutions/departments that are excluded from ‘bidding’ for funding by being prevented from entering the RAE process.

Question 14  Overall approach of the review. Notwithstanding your views on any specific recommendations, and given the responses to the earlier ‘Invitation to contribute’, do you agree or disagree with the broad approach taken by the review to the question of research assessment?
As noted above, the move from an assessment of departments or units of assessment to the scoring of individuals is a welcome change; however, the proposed system is undoubtedly more complex than the existing model, and the departure from a 'same for all' approach removes the essential comparative aspect of the exercise and its results.

Additional Information
In responding to this consultation the Society would like to draw attention to the following Royal Society of Edinburgh responses which are of relevance to this subject: Research and the Knowledge Age (April 2000); Review of Research Policy and Funding (April 2001); Research and Knowledge Transfer in Scotland (September 2002); Review of Research Assessment (December 2002); The Future of Higher Education (May 2003) and the Sustainability of University Research (September 2003).

