Initial findings from an international research evaluation working group suggest that the organisations behind university world rankings merit some scrutiny themselves.
Looking at six of the largest and most influential world university rankings, members of the International Network of Research Management Societies (INORMS) Research Evaluation Working Group applied their five-step SCOPE process to assess the rankings against a set of community-developed criteria centred on good governance, transparency, measuring what matters, and rigour.
The group found that while most of the ranking organisations made some efforts towards good governance, there were clear weaknesses in terms of declaring conflicts of interest.
The rankers’ aims and methods were generally transparent, although this transparency did not always extend to data availability, the ability of others to replicate their results, or their finances.
Most rankings underperformed when it came to measuring what matters: all failed to tailor their offerings to different audiences, and all showed unfair bias towards some groups. Finally, university rankings, most frequently criticised for methodological invalidity, generally scored very poorly on the rigour of their methods.
Convenor of the INORMS Research Evaluation Working Group, Dr Lizzie Gadd, said: “There is clearly work to be done here, and we hope that our rating clearly highlights what needs to be done and by whom. The world university rankings currently fail to meet community expectations around fair, meaningful, and responsible evaluation. We hope that this work will provide ranking organisations, and those that rely on them for decision-making, an opportunity to reflect and reconsider their approach.”
For further information, see Rethinking the Rankings, a blogpost by Lizzie Gadd and Richard Holmes, on the ARMA website at https://arma.ac.uk/rethinking-the-rankings/
Contact Lizzie Gadd at email@example.com, Twitter: @lizziegadd
For further information on INORMS, visit www.inorms.net.