Rankings and Accountability in Higher Education



EDUCATION ON THE MOVE


UNESCO
Publishing
United Nations
Educational, Scientific and
Cultural Organization

Rankings
and Accountability
in Higher Education
Uses and Misuses

P.T.M. Marope, P.J. Wells and E. Hazelkorn (eds)


Published in 2013 by the United Nations Educational,
Scientific and Cultural Organization
7, place de Fontenoy, 75352 Paris 07 SP, France

© UNESCO 2013

All rights reserved

ISBN 978-92-3-001156-7

The designations employed and the presentation of material throughout this publication do not imply the expression
of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area or of
its authorities, or concerning the delimitation of its frontiers or boundaries.

The ideas and opinions expressed in this publication are those of the authors; they are not necessarily those of UNESCO
and do not commit the Organization.

Cover design: Corinne Hayworth


Cover credit: © Peter Jurik / Panther Media / GraphicObsession
Design and printing: UNESCO

Printed in France with the generous support of the National Commission of
the People’s Republic of China for UNESCO

Contents

Foreword  .....................................................................................................................................................   5
University Rankings: The Many Sides of the Debate 
Mmantsetsa Marope and Peter Wells  ...................................................................................   7

Part 1  Methodological Considerations


Chapter 1  The Academic Ranking of World Universities
and its future direction
Nian Cai Liu  ...................................................................................................................................   23
Chapter 2  An evolving methodology: the Times Higher Education
World University Rankings
Phil Baty  ...........................................................................................................................................   41
Chapter 3  Issues of transparency and applicability in global
university rankings
Ben Sowter  .......................................................................................................................................   55

Part 2  Implications and Applications


Chapter 4  World-class universities or world-class systems?
Rankings and higher education policy choices 
Ellen Hazelkorn  .............................................................................................................................   71
Chapter 5  Rankings and online learning: a disruptive combination
for higher education? 
John Daniel  ......................................................................................................................................   95
Chapter 6  Ranking higher education institutions: a critical perspective
Peter Scott  .....................................................................................................................................   113
Chapter 7  Rankings, new accountability tools and quality assurance
Judith Eaton  .................................................................................................................................   129

Part 3  International Perspectives


Chapter 8  An African perspective on rankings in higher education
Peter A. Okebukola  ....................................................................................................................   141
Chapter 9  Rankings and information on Japanese universities
Akiyoshi Yonezawa  ...................................................................................................................   171
Chapter 10  The national and institutional impact of university
rankings: the case of Malaysia 
Sharifah Hapsah  ........................................................................................................................   187

Chapter 11  What’s the use of rankings? 
Kevin Downing  ...........................................................................................................................   197
Chapter 12  A decade of international university rankings:
a critical perspective from Latin America 
Imanol Ordorika and Marion Lloyd  .................................................................................   209

Part 4  Alternative Approaches


Chapter 13  If ranking is the disease, is benchmarking the cure? 
Jamil Salmi  ...................................................................................................................................   235
Chapter 14  U-Multirank: a user-driven and multi-dimensional ranking
tool in global higher education and research 
Frans van Vught and Frank Ziegele  ..................................................................................   257
Chapter 15  Towards an international assessment of higher education
learning outcomes: The OECD-led AHELO initiative
Richard Yelland and Rodrigo Castañeda Valle  ............................................................   281

Notes on contributors  ..................................................................................................................  301


Foreword

This book is the first of a series of studies that will consider trends in education
today and challenges for tomorrow. This new series has been created to
respond to demand from policy-makers, educators and stakeholders alike
for state-of-the-art analyses of topical issues. As the world’s leading agency
on educational matters, UNESCO is at the forefront of the intellectual debate
on the future of learning, with an unparalleled capacity to bring together
trailblazers from across the globe.

University rankings are one such issue. Since the turn of the millennium
– and in particular following the release in 2003 of the Academic Ranking of
World Universities by Shanghai Jiao Tong University in China and the Times
Higher Education World University Rankings in 2004 – there has been a huge
increase in the attention paid to this topic in the mainstream media. In May
2012, Universitas 21 were the latest to join the throng with the launch of the
U21 Ranking of National Higher Education Systems in forty-eight countries.

UNESCO has followed closely the evolution of university rankings, having
previously published volumes on ranking methodologies (2005) and the
related issue of the ‘world-class university’ (2007 and 2009). The Organization
does not advocate the pursuit by universities of ‘world-class’ status or high
rankings as goals in themselves; rather, it aims primarily to build capacity for
responsible development, dissemination and informed use of rankings and
league tables, based on the recommendations of the 2009 UNESCO World
Conference on Higher Education.

With this in mind, UNESCO, together with the Organisation for Economic
Co-operation and Development (OECD) and the World Bank, organized
the ‘Global Forum on Rankings and Accountability in Higher Education:
Uses and Misuses’ at its Paris Headquarters on 16 and 17  May 2011. The
forum re-confirmed the fact that the advent of mass higher education and
proliferation of new institutional models in the sector over recent decades
has resulted in a welcome and unprecedented expansion of access and
choices in supply, while at the same time raising questions over the validity
and quality of provision. This has led in turn to many higher education
stakeholders –  including students, researchers, teachers, policy-makers
and funding agencies – floundering or becoming confused over the quality
of what is on offer.

Inspired by the Forum, the current volume brings together both proponents
and opponents of rankings, to reflect the wide range of views that exist
in the higher education community on this highly controversial topic. If
learners, institutions and policy-makers are to be responsible users of
ranking data and league table lists, it is vital that those compiling them
make perfectly clear what criteria they are using to devise them, how they
have weighted these criteria, and why they made these choices. It is hoped
that this information will enable stakeholders to make informed decisions
on higher education institutions.

UNESCO is very grateful to the authors who contributed to this thought-
provoking volume, which will help to improve mutual responsibility in
the use and values of higher education rankings and put an end to their
distortion and misuse. The editors’ efforts in pulling the different articles
together and ensuring that they form a coherent and stimulating whole are
also appreciated.

Qian Tang, Ph.D.


Assistant Director-General for Education

University Rankings:
The Many Sides of the Debate
Mmantsetsa Marope and Peter Wells

The emergence of university rankings


The practice of university rankings dates back to around 1900 with the
publication in England of Where We Get Our Best Men. This study examined
the backgrounds of the country’s most prominent and successful men of
the time, with particular reference to where each had studied, and as a
consequence provided a list of universities ranked by the number of distinguished
alumni they could lay claim to (Myers and Robe, 2009). Although the practice
was soon emulated in other countries, it was largely met with indifference
and attracted little debate outside cloistered academic corridors.
Other studies followed over the next eighty years – largely of graduate pro-
grammes – using a bewildering array of criteria, but again passing relatively
unnoticed by society at large.

This general indifference towards university rankings began to change in 1983
with the publication of ‘America’s Best Colleges’ by the US News and World
Report. For the first time information about undergraduate programmes
in America’s higher education institutions was made widely and publicly
available to the country’s high school population and their parents via a
widely read popular medium. A decade later, in 1993, the first ‘Times Good
University Guide’ was published in the United Kingdom, prompting – as
had happened previously in the United States – public debate as to which
institutions fared better or worse in the guide. The 1990s subsequently witnessed
diverse lists, league tables and rankings around the world, covering
everything from specialist subject schools to MBA programmes and private
institutions, provoking as a result increasing wrangling and scrambling for
positions on such lists, as well as scepticism from those institutions that
appeared or did not appear on them.

The tide of attention paid to university rankings, however, well and truly
swept over the sector a decade later in 2003 with the release of the Academic
Ranking of World Universities (ARWU) by Shanghai Jiao Tong University in
China and the Times Higher Education World University Rankings a year
later. The topic has rarely been out of higher education headlines and the
mainstream media ever since. At the same time, the topic has progressively
attracted more and more anti-ranking debates, initiatives and even ­bodies.
As is evident in this volume, the debate on whether or not universities
should be ranked has not abated.

To rank or not to rank?


The explosion of university rankings perhaps signals the reality that we live
in a compared and ranked world. The twenty-first century is increasingly
compared and ranked along a myriad of dimensions. Based on levels of
GDP, countries are designated as part of either the first, second or third
worlds, and are ranked as developed, developing or least-developed, based
on a complex cluster of indicators. They are accorded human development
rankings – low, medium, medium-high or high – and are ranked on income
as being low, middle, middle-high or high-income countries. They are also
ranked on their knowledge economy readiness, the purposeful use of ICT,
their levels of global competitiveness, perceived levels of corruption and
more. Comparisons and rankings go far beyond the macro level of ‘worlds’
and countries, to the meso level of institutions such as restaurants, schools,
hospitals, airports, banks and, of course, universities.

Universities are among our canonical twenty-first century institutions. In
and of themselves, they are standard setters for how other aspects of our
‘worlds, countries and institutions’ are compared and ranked. It therefore
seems inevitable that universities would themselves be subjected to
comparisons and rankings. However, being complex institutions and being
part of complex systems, it seems equally inevitable that comparisons and
rankings of universities would be nothing if not polemical.

Comparing and even ranking our ‘worlds, countries and institutions’ impels
the construction and use of common ‘yardsticks’ along whose gradations
these entities can be placed. Yet, unlike length, height and width, these ‘yard-
sticks’ are used to measure very complex, often multi-faceted, fast-changing,
contextually varied and even conceptually contentious phenomena. For
instance, what is ‘development’? How varied are conceptualizations of devel-
opment? How solid or shaky is the ground on which those who number our
worlds on the basis of their development stand and why are their number-
ings accepted? Whom and what purpose do their numberings serve? Do they
serve those with the power to number the worlds, countries and institutions,
those who are numbered, or both? And if the power-base for numbering
these entities was to shift, could the worlds, countries and institutions be
re-numbered? Clearly these same questions can be asked of any ranked list,
including those of higher education institutions.

By necessity, the use of common ‘yardsticks’ to compare and rank
anything – worlds, universities, hospitals, schools, restaurants, gymnasia,
student performance on examinations, individual intelligence quotients
and so on – simplifies what are otherwise complex and dynamic realities.
Yet these ‘yardsticks’ are not only pervasive, they are constantly used to
make daily choices and even complex decisions. Whether it is the fairly
risk-free choice of a restaurant, the potentially life-changing educational
choice of a child’s kindergarten or university, or a life-saving judgement
when choosing a hospital for a major operation, comparisons and rankings
centrally inform our daily decisions. As will become apparent in the following
chapters, comparisons and rankings substantially influence not only
individual decisions, but also collective decisions. Specific to universities,
the influence of comparisons and rankings goes beyond individuals’ choices
of universities to country policy, strategic and investment priorities, and
even to countries’ strategic positioning and the competitiveness of their
higher education institutions.

The question therefore seems to be less about whether or not universities
should be compared and ranked than about the manner in which this is undertaken.
Do the ‘yardsticks’ used to compare and rank universities fit the purpose?
And what is this purpose? Is it clearly delineated and communicated to
potential stakeholders? Where the latter is clear and transparent, it can be
hoped that an informed and discerning stakeholder-user will understand
the merits and limitations of the ‘yardsticks’, and can consequently benefit
from an appreciation of both. These are the questions this volume takes up.

Focus of the volume


This inaugural volume of UNESCO’s Education on the Move Series is a pro­duct
of the UNESCO ‘Global Forum on University Rankings: Their Uses and Misuses’,
held at UNESCO Headquarters in Paris on 16 and 17 May 2011. As a whole, the
series focuses on critical and even potentially controversial and polemical
issues in education. UNESCO has undertaken to lead global dialogues on
these issues with the express aim of advancing public understanding of these
issues and facilitating the informed use of knowledge generated from such
issues by diverse stakeholders.

University rankings feature prominently among the hotly debated issues in
education. Accordingly, UNESCO has followed closely the evolution of this
trend particularly over the last decade as it has developed momentum
and mounting discourse. UNESCO has previously published volumes on
ranking methodologies (2005) and the questionably related issue of the
‘world-class university’ (2007 and 2009). However, its role has never been
one of advocating any individual published national or global rankings.
Consistent with its functions as a neutral broker of knowledge and as a
clearing house of ideas, UNESCO’s primary aim has been to encourage
the responsible development, transparent articulation, communication,
dissemination and use of university rankings and of league tables, given
the recognition that such lists will continue to form part of the
twenty-first century higher education sphere.

This volume contributes to the development of stakeholders who are discerning
about the evolution, as well as the potential uses and misuses, of university
rankings. It brings together a selection of key voices and diverse perspectives
in the ongoing debate on the ranking of higher education institutions. The
volume addresses the following key questions:

• What are the key methodological considerations of university rankings?

• What are the merits and therefore potential usefulness of university
rankings?

• What are the limitations and therefore potential pitfalls in the use of
university rankings?

• What alternative instruments may complement university rankings?

• How best can diverse stakeholders benefit from university rankings and
other complementary instruments?

Overview of the volume

Methodological considerations of university rankings

The book is divided into four parts. Part I comprises three chapters that elab-
orate the methodological approaches used by three of the most prominent
‘ranking houses’. It adopts a critical introspective approach and presents not
only the methodologies, but also their evolution as well as their strengths
and shortcomings. Consistent with UNESCO’s intention to develop discern-
ing stakeholders, Part I informs the reader and/or user what they can and
cannot expect to get when they use the university rankings from these three
‘ranking houses’. The three explicitly highlight the limited coverage of their
‘world university rankings’ as they focus on about 200 (or 1 per cent) of the
nearly 17,000 world universities. Although varied in many respects, the 200
ranked universities have much in common:

[They publish] ‘world-class’ research carried out across national bor-
ders; they work with global industry; they teach from undergraduate
to doctoral level; and they compete in a global market for the top
students and academic talent (Baty, 2012).

The ranking houses also recognize that the scope of the currently main rank-
ings is thus limited:

different global rankings have different purposes and they only meas-
ure parts of universities’ activities. Bibliometric rankings focus on
research output, and ARWU emphasizes the research dimension of
universities also (Liu, 2012).

The fact that rankings only embrace 1 per cent of the world’s universities,
and that they focus on research and even then mostly scientific research,
has attracted immense criticism from diverse stakeholders (see Parts II, III
and IV). For their part, the ranking houses have been neither deaf nor
blind to this criticism. Part I presents their earnest efforts not only to admit
and explain the scope of their methodologies, but also to demonstrate a
willingness to progressively improve on them to better cover what is com-
monly known as the scope of university functions – research, teaching and
social responsibility.

The new Times Higher Education World University Rankings, first
published on 16 September 2010 and again on 6 October 2011
recognize a wider range of what global universities do… the THE
world university rankings seek to capture the full range of a global
university’s activities – research, teaching, knowledge transfer and
internationalisation… Perhaps the most dramatic innovation for
the world university rankings for 2010 and beyond is the set of five
indicators designed to give proper credit to the role of teaching in
universities with a collective weighting of 30 per cent (Baty, 2012).

While these methodological improvements speak well of the ranking houses
as learning institutions, critics point to this methodological evolution as a
source of longitudinal incomparability of rankings and therefore a weakness
in itself. Yet, for the user of rankings, the fact that more or less the same
universities appear more or less in the same ranked position suggests a fair
measure of stability even as methodologies evolve.

Refreshingly, the ranking houses also recognize that no matter how much
they expand the base of indicators considered in their methodologies, they
can never exhaustively cover the full range of the universities’ functions and
activities. By their very nature indicators are selective and not exhaustive. As
such, they responsibly caution that,

none of the current global ranking systems can provide a complete
view of universities; taking any single ranking as a standard to judge a
university’s overall performance is improper (Liu, 2012).

UNESCO commends this note of caution to the user of rankings. It sub-
scribes to the use of rankings in complementarity with other credible
sources of information on the quality of a university, such as presented
in Part III, including quality enhancement efforts, evidence of value addi-
tion to learners, quality assurance and universities’ evidence-based self-
reporting on their quality.

The merits and demerits of rankings

It can safely be said that the explosion of interest in rankings has been
outmatched by the volume of criticism from virtually all spheres, including
academics, universities, policy-makers, development agencies, education
service providers and students (Parts II and III). Diverse as these constitu-
ents may be, they mostly start with an acknowledgement that ‘love them
or hate them, rankings are here to stay’. On the positive side, rankings
address the growing demand for accessible, manageably packaged and
relatively simple information on the ‘quality of higher education institu-
tions’. This demand is greatly fuelled by the need to make informed choices
of universities, within a context of massification of higher education and
the widely growing diversity of providers. The growing base of stakehold-
ers also fuels this demand. Students use the information to choose where
to study, just as their parents and governments use it to place children
in the ‘best’ universities. Donors use rankings to best place their endow-
ments so as to realize the best potential value for their investments. The
private sector likewise uses them to identify promising partner institutions
in higher education, as do faculty when identifying research collaborators.
Policy-makers and universities themselves turn to rankings to learn more
about the strengths of their higher education institutions and to identify
potential areas for improvement. Governments also often use them to
gauge the global standing of their institutions and therefore their competi-
tiveness. However, as highlighted by Liu in Chapter 1, rankings are not and
should not be used as the sole source of information that guides decisions
pertaining to the quality of universities.

Simplifying the complex, dynamic and multi-faceted quality of higher
education institutions has been a consistent criticism of university
rankings. At the same time, the simplicity of rankings has promoted
the accountability of ‘ranking houses’, impelling them to explain their
methodologies with mature and critical introspection. While the
adequacy with which they do so could be arguable, it is unmistakable
that there is growing transparency on their part on what their rankings
can and cannot tell us. Simplicity has also sparked a healthy and much
needed national and global dialogue on the quality of higher education
institutions, how to best capture it and how to communicate it to stake-
holders. As exemplified in this very volume, serious debate is a good
source of knowledge creation and innovation. The range of complemen-
tary methodologies for assessing, comparing, communicating and even
improving the quality of universities exemplified in Part III of this book
speaks to this innovation.

Notably, rankings have also encouraged transparency of information and
accountability of these hallowed institutions, which hitherto have been
cloaked in exclusivity, academic freedoms and even restrictive prestige. More
and more, universities find themselves having to explain to the public their
performance on set criteria used by rankers and other quality monitoring
bodies. Rankings ‘have led to a revolution in the availability of data on higher
education institutions and intelligence to guide institutional and govern-
ment strategies for higher education’ (Sowter, 2012).

Several chapters here demonstrate the potential ‘pull-up factor’ on universi-
ties that appear lacking in some of the criteria used for rankings. On the other
hand, for those that ‘do well’, rankings can be a powerful incentive for sustain-
ing quality enhancement efforts. Other voices documented here reveal how diverse
universities and countries have used rankings to benchmark their institutions
and to inform the policy dialogue that drives improvements and even the
reforms of their overall higher education systems. Rankings can therefore be
indirect tools for driving excellence in higher education (Hapsah, 2012).

Critics argue that rankings can draw universities’ attention away from teach-
ing and social responsibility towards research or even scientific research.
However, ‘ranking houses’ acknowledge that they focus their attention on
research-focused universities, and thus are expanding their indicators to
take into account teaching. What perhaps is at issue here is whether there
should be rankings that emphasize other functions. Such developments
could facilitate the building of reliable indicators and databases on the
quality of teaching and social responsibility.

There have also been concerns that by applying a limited set of crite-
ria to world universities and given the strong desire to feature in the
top 200 universities, rankings could actually ‘McDonaldize’ higher educa-
tion institutions and render them irrelevant to their immediate contexts.
However, evidence equally shows that higher education institutions are
mature, sophisticated and complex enough to balance responsiveness
to the imperatives of globalization with responsiveness to the demands
of their immediate contexts (Downing, 2012). Invariably, universities are
found to use rankings as a supplementary rather than as a sole assess-
ment of their quality. This is in line with the complementarity of tools
advocated in this volume.

Rankings are said to reinforce the advantage enjoyed by the 200 best-
ranked institutions. These tend to be older (200+  years) established
institutions with 25,000 students or more, 2,500 faculty or more, and
with endowments of over US$1 billion and annual budgets of more than
US$2 billion. However, this is a distraction from the focus of rankings,
which emphasizes quality at the pinnacle and not so much the process of
getting there. Both are legitimate questions; however, rankings
should be critiqued on what they set out to do rather than on what the
critics want them to do. In any case, if characteristics of the top-ranked
universities do not come together to make for a ‘world-class’ higher
education institution, then the obvious question is ‘which characteristics
could possibly do’?

For the most part, rankings are criticized for how they are used rather
than on what they claim they do. Granted, unwise use of rankings is
a source of great concern, but the remedy to this challenge is public
education of users, as advocated in this book, and not the elimination
of rankings. The Malaysian experience with rankings demonstrates how,
with progressive understanding of the merits and demerits of rankings,
countries and by implication regions can adapt rankings to make them
responsive to their contexts:

As the issues surrounding rankings became clearer the government
has taken a more holistic view about ranking. The Minister of Higher
Education has expressly articulated that universities should not be
‘obsessed with ranking’ (Khaled Nordin, 2011)… Instead the govern-
ment is focusing more on making the education system ‘world class’ to
accommodate the increasing entrants to higher education. Under the
Economic Transformation Programme (PEMANDU), several initiatives
have been identified for improving the supply as well as demand side
to increase access and enhance quality towards making Malaysia a
global education hub. Consequently in implementing the Ninth Plan,
the selection of research universities was completed (Hapsah, 2012).

A further criticism of rankings is that they divert resources from build-
ing ‘world-class’ higher education systems towards building ‘world-class’
higher education institutions. This is yet another issue of usage rather than
of rankings per se. It is quite difficult to envisage the possibility of having
‘world-class’ higher education systems without ‘world-class’ higher educa-
tion institutions. The artificial partitioning of the two asks the right question
for the wrong reasons. The right question regarding how best we can have
a ‘world-class’ higher education system absolutely has to be asked. It is a
question with a powerful equity imperative that recognizes that all deserve
quality higher education. But the wrong reason that rankings should be
abolished because they encourage the building of world-class universities
and not world-class higher education systems simply separates the chicken
from the egg. A critical question that none of the critics ask is: How can
countries attain and sustain world-class universities and higher education
systems and do so with sustainable resource efficiency?

Lastly, since performance in rankings can have an impact on ability to gener-
ate funding and partnerships, there is a perverse incentive for universities
to inflate their performance in order to climb up the ranking ladder. This
is a legitimate concern and one that generally comes with any high-stakes
assessment mechanism. Verifying the validity of information that universi-
ties provide to ‘ranking houses’ is a challenge that needs urgent attention by
those who use that information to rank institutions.

Complementary instruments to rankings

As noted, an indirect contribution of rankings is that they have stimulated
complementary methodologies that share in common the effort to address
the above-outlined weaknesses. As with the rankings, they themselves are
not without their limitations. UNESCO presents these selected method-
ologies for assessing the quality of higher education institutions and/or
systems as tools that should complement rankings. It does not present
them as ‘cures’ to the maladies of rankings, but rather again as a bal-
anced presentation of these methods and an honest presentation of their
strengths and weaknesses.

One of the arguably more ambitious approaches outlined here is an
attempt by the OECD to draw international comparisons of the learning
outcomes of higher education graduates. This dimension certainly speaks
more to the capacity of institutions to contribute to national develop-
ment agendas and to the personal and social fulfilment of students after
graduation. The strength of such a study is the focus it could place on the
importance of developing contextually relevant and demanding higher
education learning outcomes.

Moving from the institutional to the systemic level, the World Bank proposes
a benchmarking approach to run a ‘health check’ on tertiary education
systems around the world. As with all benchmarking exercises, the purpose
of the approach is said to be not to create a list of winners and losers, but to
offer a way for national higher education systems to compare themselves
to others of similar design, disposition and context, and from this starting
point to develop strategies for improvements. By looking at the system as a
whole rather than its constituent institutions the suggestion is that policy-
makers can elaborate a long-term vision for their tertiary strategy. Such
a holistic-therapy approach to the health of a system does, however, run
the risk of bypassing fundamental shortcomings at the institutional patient
level – treatable conditions that still need to be addressed in concert with
other complementary quality assessment tools if the body system is to
function properly. If indeed benchmarking is a ‘cure’, the reader should
take it with the full knowledge of its potential side effects; as indeed most
cures tend to come with some!

Finally, with its stress on the ‘multi-dimensionality’ of tertiary education,
the U-Multirank project provides another transparency tool that is unasham-
edly user-driven and allows for a broader analysis of the diversity of tertiary
institutions – not only the research-intensive winners that dominate the
traditional ranked lists. Encapsulating the perspective that modern higher
education institutions are ‘predominantly multi-purpose, multiple-mission
organisations undertaking different mixes of activities (teaching and learn-
ing, research, knowledge exchange, regional engagement, and internation-
alisation)’ (van Vught and Ziegele, 2012), the U-Multirank project is another
welcome addition to the institutional comparison toolkit.

Invoking the mantra that ‘no one size fits all’, one must be conscious of the
fact that, admirable as these additional quality measuring techniques may
be, they, like ranking initiatives, should not be taken in isolation or consid-
ered definitive. Clearly their level of sophistication compared to the crude
rankings of the last century is at once impressive, even beguiling. Yet, they
still cannot lay claim to capturing every individual characteristic and nuance
of every individual institution they seek to compare.

The advent of massification in higher education has driven the modern
university to stand out from the crowd, to innovate, to be creative and to
offer something new and different. In short, they have been emboldened
not to compete, but to be unique. Ironically, it is an institution’s very abil-
ity to depart from the standards and norms measured by benchmarks and
rankings that is the true test of its status as a leading quality higher learning
provider for the twenty-first century.

Conclusion
As a neutral broker of knowledge, UNESCO’s role is not to endorse any of the
above ranking methodologies, the diverse perspectives on rankings or the
complementary approaches to them. What UNESCO does seek to do here is
to identify and explore the critical issues inherent to the ranking phenom-
enon and to give the microphone of debate to the various stakeholders so
that they may share their views on improving the generation and applica-
tion of university rankings.

To that end, UNESCO welcomes the clear convergence of opinion on what
ranking tables can and cannot tell us from both the users and compilers of
university rankings in this publication. The key challenge now is to ensure
that this message reaches the ultimate readers of rankings and league
tables – be they students, governments or institutional leaders – so that
they in turn may become better informed and more discerning users of
such transparency tools.

Ultimately, it matters little whether a stated comparative objective is to
‘rank’, ‘list’, ‘score’, ‘benchmark’ or ‘map’. If such initiatives, regardless of their
results or the controversies they provoke, raise the profile and importance
of addressing the need for quality monitoring and quality enhancement in
higher education, then they have indirectly proven their worth.

As noted above, comparisons and rankings are in the DNA of twenty-first century
life. The world is overflowing with ranked lists, from the ‘top 10 must-see
cities’ to the ‘top 5 grossing movies’, some of which are based on indisputable
facts while others are more nebulous and subjective in their opinions. It is
consequently vital to retain some perspective when interpreting such lists.
The bestselling movies do not necessarily win critics’ approval or industry
accolades. Similarly, billions of people will never visit the world’s ‘must-see’
tourist attractions. This does not belittle the information; it simply renders
it reflective rather than definitive. The same reflection is therefore called for
with modern university rankings.

The 15,000+ institutions around the world that have not, do not and will
not appear on any ‘top’ list of universities continue their noble pursuits of
educating and nurturing learners hungry for knowledge and skills; of con-
tributing to the development of human and social capital; and of undertak-
ing important research for sustainable futures. Obsessing about joining and
climbing a league table or becoming ‘world-class’ ignores the greater role,
purpose and mission of higher learning institutions. This once again points
to the central tenet of this volume in the plea for a responsible and informed
use of university rankings. In her opening address to the ‘Global Forum on
Rankings’, the Director-General of UNESCO offered a timely reminder:

University rankings are a hotly debated issue. They are viewed in very
different ways by rankers, students, employers, pre-university level
schools and the higher education community. It is good to see that
international rankings are diversifying and moving towards more
broadly balanced criteria and becoming multidimensional, as are
national rankings… While competition and international comparisons
can be positive trends, a key challenge for us in UNESCO is to continue
promoting the values of higher education and the three main mis-
sions of the university: research, teaching and community service.

The modern ranking era seems unwittingly to have been punctuated by a
decade-dependent series of key evolutions in 1983, 1993 and 2003. It is hoped
that this UNESCO volume will mark the maturing of university rankings in
2013, and further define a period of improved responsibility in the creation,
dissemination and application of higher education rankings.

References

Academic Ranking of World Universities – 2012: www.arwu.org/

America’s Best Colleges, US News and World Report: http://colleges.usnews.rankingsandreviews.com/best-colleges (Accessed 1 November 2012.)

Maclean, A. H. H. 1900. Where We Get Our Best Men. Some statistics showing their nationalities,
counties, towns, schools, universities, and other antecedents: 1837–1897. London.

Myers, L. and Robe, J. 2009. Report from the Center for College Affordability and Productivity:
www.centerforcollegeaffordability.org/uploads/College_Rankings_History.pdf (Accessed 20 August 2012.)

Times Higher Education World University Rankings 2011–12: www.timeshighereducation.co.uk/world-university-rankings/

Part 1

Methodological Considerations
Chapter 1

The Academic Ranking of
World Universities and
its future direction
Nian Cai Liu
History of Academic Ranking of World Universities
The Chinese dream of world-class universities

In order to meet the challenges of globalization and the knowledge-based
economy and accelerate China’s modernization, the Chinese leadership
placed its hopes in the higher education sector, and in particular in a number of
national research universities. At the 100th anniversary of Peking University
in May 1998, the then president of China issued a declaration that the
country would have several world-class universities. The result was the 985
Project, set up to build world-class universities in China. In the same year,
the Chinese government selected Shanghai Jiao Tong University to be among
the first group of nine universities to take part in the project. In fact, many
top Chinese universities at this time drew up strategic goals and timetables
for becoming world-class universities, and Shanghai Jiao Tong University
was no exception. As a professor and Vice-Dean of the School of Chemistry
and Chemical Engineering of the university, I became involved in the stra-
tegic planning process of developing Shanghai Jiao Tong University into a
world-class university and was later on appointed as Director of the Office of
Strategic Planning of the university.

I asked myself many questions during this process. What is the definition
of a ‘world-class university’? How many world-class universities should
there be globally? What are the positions of top Chinese universities
in world university rankings? How can top Chinese universities reduce
the gap between themselves and world-class universities? In order to
answer these questions we began to benchmark top Chinese universities
with world-class universities. This eventually resulted in a ranking of
world universities.

Positioning of Chinese universities

From 1999 to 2001, Dr Ying Cheng, two other colleagues and I worked on
the project to benchmark top Chinese universities with four groups of US
universities, from the very top to the less-known research universities,
according to a wide spectrum of indicators of academic or research
performance. According to our estimates, the positions of top Chinese universities
fell within the 200–300 bracket globally. The results of these comparisons
and analyses were used in the strategic planning process of Shanghai Jiao
Tong University. Eventually, a consultation report was written and provided
to the Ministry of Education of China.

The publication of the report resulted in numerous positive comments,
many of which invoked the possibility of undertaking a real ranking of world
universities. We also received encouragement from visitors and colleagues
from different parts of the world who, having learned about our study,
encouraged us to perform world rankings. They reminded us that univer-
sities, governments and other stakeholders in the rest of the world were
interested in the quantitative comparison of world universities.

Ranking of world universities

I decided to undertake the ranking project and the Academic Ranking of
World Universities (ARWU) was completed two years later in early 2003, and
published on our website in June of the same year.1

Ever since its publication, ARWU has attracted worldwide attention. Numerous
requests have been received asking us to provide a ranking of world universi-
ties by broad subject fields/schools/colleges and by subject fields/programmes/
departments. We have tried to respond to these requests. The Academic
Ranking of World Universities by Broad Subject Fields (ARWU-FIELD) and the
Academic Ranking of World Universities by Subject Fields (ARWU-SUBJECT)
were published in February 2007 and October 2009 respectively.

Unexpected impact

Although the initial purpose of ARWU was to ascertain the global stand-
ing of top Chinese universities in the world higher education system, it has
attracted a lot of attention from universities, governments and public media
worldwide. Mainstream media in almost all major countries has reported on
ARWU. Hundreds of universities have cited the ranking results in their cam-
pus news, annual reports and promotional brochures. A survey on higher
education published by The Economist referred to ARWU as ‘the most widely
used annual ranking of the world’s research universities’ (The Economist,
2005). Burton Bollag (2006), a reporter at Chronicle of Higher Education wrote
that ARWU ‘is considered the most influential international ranking’.

1 The ARWU website address www.arwu.org changed to www.shanghairanking.com in 2009.

One of the main factors behind the impact of ARWU is its globally sound
and transparent methodology. It uses a few carefully selected, objective cri-
teria and internationally comparable and verifiable data. The EC Research
Headlines reported, ‘The universities were carefully evaluated using several
indicators of research performance’ (EC Research Headlines, 2003). Chancellor
of Oxford University, Chris Patten, said ‘the methodology looks fairly solid… it
looks like a pretty good stab at a fair comparison’ (Patten, 2004).

ARWU has been widely cited and employed as a starting point for identify-
ing national strengths and weaknesses as well as for facilitating reform and
setting new initiatives (e.g. Destler, 2008). Martin Enserink (2007) referred to
ARWU and argued in his paper published in Science that ‘France’s poor show-
ing in the Shanghai ranking… helped trigger a national debate about higher
education that resulted in a new law… giving universities more freedom’.

Methodologies of ARWU

Ranking criteria and weights for ARWU

In total, more than 2,000 institutions have been scanned and about 1,200
institutions have actually been ranked. Universities are ranked by several
indicators of academic or research performance, including alumni and staff
winning Nobel Prizes and Fields Medals, highly cited researchers, papers
published in Nature and Science, papers indexed in major citation indices,
and the per capita academic performance of an institution. Table 1 shows the
indicators and weights for ARWU.

Table 1. Indicators and weights for ARWU

• Quality of education – Alumni: Alumni of an institution winning Nobel Prizes and Fields Medals (weight 10%)
• Quality of faculty – Award: Staff of an institution winning Nobel Prizes and Fields Medals (weight 20%)
• Quality of faculty – HiCi: Highly cited researchers in 21 broad subject categories (weight 20%)
• Research output – N&S: Papers published in Nature and Science* (weight 20%)
• Research output – PUB: Papers indexed in the Science Citation Index-Expanded and the Social Science Citation Index (weight 20%)
• Per capita performance – PCP: Per capita academic performance of an institution (weight 10%)
• Total: 100%

* For institutions specialized in humanities and social sciences, such as the London School of Economics, N&S is not
considered, and the weight of N&S is relocated to other indicators.

For each indicator, the highest scoring institution is assigned a score of 100,
and other institutions are calculated as a percentage of the top score. Scores
for each indicator are then weighted to arrive at a final overall score for an
institution; the institution with the highest overall score is again assigned 100,
and the others are expressed as a percentage of that top overall score. An
institution’s rank reflects the number of institutions that sit above it.
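
As an illustration only, the following minimal Python sketch shows how such normalization and weighted aggregation might be computed. The weights are those listed in Table 1; the function name, variable names and any sample data are hypothetical rather than part of the published ARWU methodology.

    # Hypothetical sketch of ARWU-style scoring: normalize each indicator so the
    # best performer scores 100, combine the indicator scores with the published
    # weights, then rescale so the top institution's overall score is also 100.
    WEIGHTS = {'Alumni': 0.10, 'Award': 0.20, 'HiCi': 0.20,
               'N&S': 0.20, 'PUB': 0.20, 'PCP': 0.10}

    def arwu_scores(raw):
        # raw: {institution: {indicator: raw value}}; assumes every indicator has
        # at least one institution with a non-zero value.
        best = {ind: max(vals[ind] for vals in raw.values()) for ind in WEIGHTS}
        totals = {}
        for name, vals in raw.items():
            indicator_scores = {ind: 100.0 * vals[ind] / best[ind] for ind in WEIGHTS}
            totals[name] = sum(WEIGHTS[ind] * indicator_scores[ind] for ind in WEIGHTS)
        top = max(totals.values())
        return {name: round(100.0 * total / top, 1) for name, total in totals.items()}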

Ranking criteria and weights for ARWU-FIELD

Five broad subject fields are ranked in ARWU-FIELD, including Natural
Sciences and Mathematics, Engineering/Technology and Computer Sciences,
Life and Agriculture Sciences, Clinical Medicine and Pharmacy, and Social
Sciences. Arts and Humanities were not ranked because of the technical
difficulties in finding internationally comparable indicators with reliable
data. Psychology and other cross-disciplinary fields were not included in
ARWU-FIELD because of their interdisciplinary complexity.

Similar to ARWU, institutions in each broad subject field are ranked
according to their academic or research performance. Ranking indicators
include alumni and staff winning Nobel Prizes and Fields Medals, highly
cited researchers, and articles indexed in the Science Citation Index-
Expanded and the Social Science Citation Index. Two new indicators were
introduced: the percentage of articles published in the top 20 per cent
of journals of each field, and engineering research expenditure.
Furthermore, the time span for calculating Award and Alumni indica-
tors  has been changed. Table  2 shows the indicators and weights for
ARWU-FIELD.

Ranking criteria and weights for ARWU-SUBJECT

Five subject fields are ranked in ARWU-SUBJECT, including Mathematics,
Physics, Chemistry, Computer Sciences and Economics/Business. Similar
to ARWU and ARWU-FIELD, institutions are ranked according to their
academic or research performance in each subject field. Ranking indica-
tors include alumni and staff winning Nobel Prizes, Fields Medals and
Turing Awards, highly cited researchers, papers indexed in the Science
Citation Index-Expanded and the Social Science Citation Index, and
the percentage of papers published in the top 20 per cent of journals
in each subject field. Table  3 shows the indicators and weights for
ARWU-SUBJECT.

Table 2. Indicators and weights for ARWU-FIELD

Alumni (weight 10%)
SCI: Alumni of an institution winning Fields Medals in Mathematics and Nobel Prizes in Chemistry and Physics since 1961
ENG: Not applicable
LIFE: Alumni of an institution winning Nobel Prizes in Physiology or Medicine since 1961
MED: Alumni of an institution winning Nobel Prizes in Physiology or Medicine since 1961
SOC: Alumni of an institution winning Nobel Prizes in Economics since 1961

Award (weight 15%)
SCI: Staff of an institution winning Fields Medals and Nobel Prizes in Chemistry and Physics since 1961
ENG: Not applicable
LIFE: Staff of an institution winning Nobel Prizes in Physiology or Medicine since 1961
MED: Staff of an institution winning Nobel Prizes in Physiology or Medicine since 1961
SOC: Staff of an institution winning Nobel Prizes in Economics since 1961

HiCi (weight 25%)
SCI: Highly cited researchers in five categories: Mathematics; Physics; Chemistry; Geosciences; Space Sciences
ENG: Highly cited researchers in three categories: Engineering; Computer Science; Materials Science
LIFE: Highly cited researchers in eight categories: Biology and Biochemistry; Molecular Biology and Genetics; Microbiology; Immunology; Neuroscience; Agricultural Sciences; Plant and Animal Science; Ecology/Environment
MED: Highly cited researchers in three categories: Clinical Medicine; Pharmacology; Social Sciences, General (partly)
SOC: Highly cited researchers in two categories: Social Sciences, General (partly); Economics/Business

PUB (weight 25%)
SCI: Papers indexed in the Science Citation Index-Expanded in SCI fields
ENG: Papers indexed in the Science Citation Index-Expanded in ENG fields
LIFE: Papers indexed in the Science Citation Index-Expanded in LIFE fields
MED: Papers indexed in the Science Citation Index-Expanded in MED fields
SOC: Papers indexed in the Social Science Citation Index in SOC fields

TOP (weight 25%)
All fields: Percentage of papers published in the top 20 per cent of journals of the field compared to that in all journals of the field

Fund (weight 25%)
ENG: Total engineering-related research expenditures
SCI, LIFE, MED, SOC: Not applicable

Note: SCI for Natural Sciences and Mathematics, ENG for Engineering/Technology and Computer Sciences,
LIFE for Life and Agriculture Sciences, MED for Clinical Medicine and Pharmacy, and SOC for Social
Sciences.

Table 3. Indicators and weights for ARWU-SUBJECT

Alumni (weight 10%)
Mathematics: Alumni of an institution winning Fields Medals in Mathematics since 1961
Physics: Alumni of an institution winning Nobel Prizes in Physics since 1961
Chemistry: Alumni of an institution winning Nobel Prizes in Chemistry since 1961
Computer Science: Alumni of an institution winning Turing Awards in Computer Science since 1961
Economics/Business: Alumni of an institution winning Nobel Prizes in Economics since 1961

Award (weight 15%)
Mathematics: Staff of an institution winning Fields Medals in Mathematics since 1961
Physics: Staff of an institution winning Nobel Prizes in Physics since 1961
Chemistry: Staff of an institution winning Nobel Prizes in Chemistry since 1961
Computer Science: Staff of an institution winning Turing Awards in Computer Science since 1961
Economics/Business: Staff of an institution winning Nobel Prizes in Economics since 1961

HiCi (weight 25%)
Mathematics: Highly cited researchers in the Mathematics category
Physics: Highly cited researchers in the Physics and Space Science category
Chemistry: Highly cited researchers in the Chemistry category
Computer Science: Highly cited researchers in the Computer Science category
Economics/Business: Highly cited researchers in the Economics/Business category

PUB (weight 25%)
Mathematics: Papers indexed in the Science Citation Index-Expanded in Mathematics fields
Physics: Papers indexed in the Science Citation Index-Expanded in Physics fields
Chemistry: Papers indexed in the Science Citation Index-Expanded in Chemistry fields
Computer Science: Papers indexed in the Science Citation Index-Expanded in Computer Science fields
Economics/Business: Papers indexed in the Social Science Citation Index in Economics/Business fields

TOP (weight 25%)
All subjects: Percentage of papers published in the top 20 per cent of journals of the subject field compared to that in all journals of that field


Definition of indicators

Alumni are the total number of alumni of an institution who have won
Nobel Prizes and Fields Medals. Alumni are defined as those who
have obtained bachelor, Master’s or doctoral degrees from the institu-
tion. Different weights are set according to the periods of obtaining
degrees: 100 per cent for degrees obtained in 2001–2010, 90 per cent
for degrees obtained in 1991–2000, 80 per cent for degrees obtained
in 1981–1990, and so on, and finally 10 per cent for degrees obtained in
1911–1920. If a person obtained more than one degree from an institu-
tion, the institution is considered once only.

Award refers to the total number of staff of an institution who have won
Nobel Prizes in Physics, Chemistry, Medicine and Economics and Fields
Medals in Mathematics. Staff are defined as those who were working at the
institution at the time of winning the prize. Different weights are set
according to the periods of winning the prizes: 100 per cent for win-
ners in 2001–2010, 90 per cent for winners in 1991–2000, 80 per cent
for winners in 1981–1990, 70 per cent for winners in 1971–1980, and
so on, and finally 10 per cent for winners in 1911–1920. If a winner is
affiliated with more than one institution, each institution is assigned
the reciprocal weighting. For Nobel Prizes, if more than one person
shares a prize, weights are set for winners according to their propor-
tion of the prize.

The calculation of the Award and Alumni indicators has been changed for
ARWU-FIELD and ARWU-SUBJECT, with only alumni or laureates post-
1961 considered. The weight is 100 per cent for 2001–2010, 80 per cent
for 1991–2000, 60 per cent for 1981–1990, 40 per cent for 1971–1980
and 20 per cent for 1961–1970. The Turing Award is used for the subject
ranking of Computer Science.
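
A minimal Python sketch of this decade-dependent weighting, given purely for illustration; the helper names are hypothetical, and counting a person with several degrees from one institution at their most recent degree year is an assumption rather than a rule stated here.

    def decade_weight(year, latest_decade_end=2010, step=0.10):
        # 100 per cent for the most recent decade (e.g. 2001-2010), decreasing by
        # `step` for each earlier decade. ARWU uses step=0.10 back to 1911-1920;
        # ARWU-FIELD and ARWU-SUBJECT use step=0.20 back to 1961-1970.
        decades_back = (latest_decade_end - year) // 10
        return max(1.0 - step * decades_back, 0.0)

    def alumni_score(degree_years_by_person):
        # degree_years_by_person: {person: [years of degrees from this institution]}
        # Each person is counted once per institution (assumed here: at the most
        # recent degree year).
        return sum(decade_weight(max(years))
                   for years in degree_years_by_person.values())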

HiCi is the total number of highly cited researchers in twenty-one subject
categories. These individuals are the most highly cited researchers
within each category. The definition of categories and detailed proce-
dures can be found at the website of Thomson ISI.

N&S is the total number of papers published in Nature and Science in the
last five years. To distinguish the order of author affiliation, a weight of
100 per cent is assigned for corresponding author affiliation, 50 per cent
for first author affiliation (second author affiliation if the first author



affiliation is the same as corresponding author affiliation), 25 per cent for
the next author affiliation, and 10 per cent for other author affiliations.
Only published articles and Proceedings papers are considered.
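A sketch of how the author-position weights above might be applied; the position labels and data layout are illustrative rather than part of the published methodology.

```python
# Illustrative sketch of the author-position weighting for Nature and Science papers.

POSITION_WEIGHTS = {
    "corresponding": 1.00,
    "first": 0.50,   # or second author if the first shares the corresponding affiliation
    "next": 0.25,
    "other": 0.10,
}

def ns_credit(author_credits, institution):
    """author_credits: list of (institution, position) pairs, one per paper authorship."""
    return sum(POSITION_WEIGHTS[pos]
               for inst, pos in author_credits if inst == institution)

# One corresponding-author paper and one 'other author' paper:
print(ns_credit([("Univ A", "corresponding"), ("Univ A", "other")], "Univ A"))  # 1.1
```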

PUB is the total number of papers indexed in the Science Citation Index-
Expanded and the Social Science Citation Index in the last year. Only
published articles and Proceedings papers are considered. When cal-
culating the total number of papers of an institution, a special weight
of two was introduced for papers indexed in the Social Science Citation
Index.
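The special weight of two for Social Science Citation Index papers amounts to a simple weighted count, for instance:

```python
# Illustrative: SSCI papers count double in the PUB total.
def pub_count(scie_papers, ssci_papers):
    return scie_papers + 2 * ssci_papers

print(pub_count(1000, 200))  # 1400
```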

PCP is the weighted scores of the above five indicators divided by the num-
ber of full-time equivalent academic staff. If the number of academic
staff for institutions of a country cannot be obtained, the weighted
scores of the above five indicators are used.
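A minimal sketch of the PCP calculation follows; the indicator weights shown are the commonly published ARWU overall weights and should be treated as an assumption here, since they are not restated in this passage.

```python
# Illustrative sketch of PCP: the weighted score of the other five indicators
# divided by full-time-equivalent academic staff, with a fallback when staff
# numbers are unavailable. Weights are assumed values, not taken from this text.

ASSUMED_WEIGHTS = {"Alumni": 0.10, "Award": 0.20, "HiCi": 0.20, "N&S": 0.20, "PUB": 0.20}

def pcp(indicator_scores, fte_academic_staff):
    weighted = sum(ASSUMED_WEIGHTS[k] * indicator_scores[k] for k in ASSUMED_WEIGHTS)
    if fte_academic_staff:              # staff data available
        return weighted / fte_academic_staff
    return weighted                     # otherwise use the weighted score itself

scores = {"Alumni": 40, "Award": 30, "HiCi": 50, "N&S": 45, "PUB": 70}
print(round(pcp(scores, 2000), 4))  # 0.0215
```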

TOP is the percentage of papers published in the top 20 per cent of journals


in each field or subject. The top 20 per cent of journals are defined as
their impact factors in the top 20 per cent of each ISI category accord-
ing to the Journal Citation Report. Papers in the top journals of each
ISI category are aggregated into relevant fields or subjects for the cal-
culation of TOP. Only published articles and Proceedings papers are
considered. If the number of papers of an institution is too small to
meet a minimum threshold, the TOP indicator is not calculated for the
institution and its weight is reallocated to other indicators.
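A sketch of the TOP calculation and the weight-reallocation rule follows. The minimum-paper threshold, the proportional redistribution and the Award and Alumni weights are assumptions for illustration; only the 25 per cent weights for HiCi, PUB and TOP appear in the table above.

```python
# Illustrative sketch of TOP and of reallocating its weight when an institution
# has too few papers. The threshold and the proportional redistribution are
# assumptions; the source only says the weight is reallocated to other indicators.

def top_share(papers_in_top20_journals, total_papers, min_papers=50):
    """Return the TOP percentage, or None if the paper count is below threshold."""
    if total_papers < min_papers:
        return None
    return 100.0 * papers_in_top20_journals / total_papers

def reallocate_weights(weights, missing):
    """Redistribute the weight of missing indicators across the remaining ones."""
    kept = {k: w for k, w in weights.items() if k not in missing}
    scale = sum(weights.values()) / sum(kept.values())
    return {k: w * scale for k, w in kept.items()}

assumed_weights = {"Alumni": 0.10, "Award": 0.15, "HiCi": 0.25, "PUB": 0.25, "TOP": 0.25}
print(top_share(8, 20))                                      # None (below threshold)
print(reallocate_weights(assumed_weights, missing={"TOP"}))  # TOP's 25% spread proportionally
```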

FUND is the total engineering-related research expenditures for the past


year. This indicator is only used for ENG ranking. If the data for all
institutions of a country cannot be obtained, the FUND indicator will
not be considered for those institutions and its weight will be reallocated
to other indicators.

Results and analysis

The list of the top 500 institutions for ARWU is published on the website.
Taking into consideration the significance of differences in the total score,
ARWU is published in groups of fifty institutions in the range of 100 to
200 and groups of 100 institutions in the range of 200 to 500. In the same
group, institutions are listed in alphabetical order. Table 4 shows the aver-
age performance of institutions in different ranking groups by indicator.



Table 4. Average performance of institutions by indicator
Alumni Award HiCi N&S PUB
Top 100 2.95 1.56 30.0 57.1 3 900

101–200 0.38 0.14 7.6 14.6 2 350

201–300 0.21 0.03 3.3 7.0 1 750

301–400 0.16 0.02 2.0 3.9 1 200

401–500 0.07 0.01 0.9 2.5 1 050

The list of top 100 institutions for ARWU-FIELD and ARWU-SUBJECT is


published on the website. Taking into consideration the significance of dif-
ferences in the total score, ARWU-FIELD and ARWU-SUBJECT are published
in groups of twenty-five institutions in the range of 51 to 100. In the same
group, institutions are listed in alphabetical order.
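A small sketch of these grouping rules follows; that institutions above the grouped ranges receive individual ranks is inferred from the ranges given, and the function names are illustrative.

```python
# Illustrative sketch of the publication banding described above.

def arwu_band(rank):
    """Overall ARWU: bands of 50 within 101-200 and bands of 100 within 201-500."""
    if rank <= 100:
        return str(rank)
    if rank <= 200:
        low = 101 + 50 * ((rank - 101) // 50)
        return f"{low}-{low + 49}"
    low = 201 + 100 * ((rank - 201) // 100)
    return f"{low}-{low + 99}"

def field_band(rank):
    """ARWU-FIELD / ARWU-SUBJECT: bands of 25 within 51-100."""
    if rank <= 50:
        return str(rank)
    low = 51 + 25 * ((rank - 51) // 25)
    return f"{low}-{low + 24}"

print(arwu_band(137), arwu_band(342), field_band(63))  # 101-150 301-400 51-75
```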

Phenomena of global university rankings

The boom in global university rankings

About a year and a half after the first publication of ARWU, the Times
Higher Education Supplement published its ‘World University Rankings’ in
November 2004. From 2005, the ranking was co-published every year by Times
Higher Education and Quacquarelli Symonds as the THE-QS World University
Rankings. The THE-QS ranking indicators include an international opinion
survey of academics and employers (40 per cent weight for academics
and 10 per cent weight for employers), student-faculty ratio (20 per cent),
citations per faculty member (20 per cent) and proportions of foreign faculty
and students (5  per  cent weight for each) (THE-QS, 2009). In 2010, Times
Higher Education terminated its collaboration with Quacquarelli Symonds
and both began to publish their own global ranking lists. While the new QS
ranking fully retained the methodology of previous THE-QS rankings, the
Times Higher Education ranking increased its number of indicators to thirteen
and Thomson Reuters became its data provider (Times Higher Education, 2010).

Bibliometric indicators have been widely used to measure research productivity
and performance of universities, and several global university rankings
were made using this approach. They include the ‘Performance Ranking of
Scientific Papers for World Universities’ published by the Higher Education



Evaluation and Accreditation Council of Taiwan since 2007 (Huang, 2007), the
‘Bibliometric Rankings of World Universities’ by Moed (2006), and ‘World Top
Universities’ by the Research Center for Chinese Science Evaluation of Wuhan
University (2006).

There have been other global university rankings. The ‘Ranking Web of World
Universities’ by the Cybermetrics Lab of CSIC (2004) uses a series of web
indicators to rank 16,000 universities worldwide. A French higher educa-
tion institution, École des Mines de Paris (2007), published the ‘Professional
Ranking of World Universities’ by calculating the number of alumni among
the Chief Executive Officers of the 500 leading worldwide companies. In
December 2011, the University Ranking by Academic Performance Center
of Middle East Technical University (2011) announced the world’s top 2,000
universities based on six indicators of research output. Up to now, more than
a dozen global university rankings have been published.

Methodological problems of global university rankings

Different global rankings have different purposes, and they only measure
parts of universities’ activities. Bibliometric rankings focus on research output,
while ARWU also emphasizes the research dimension of universities. These
systems do not assess well the fundamental role of universities – teaching
– and their contributions to society. Although the THE-QS ranking tries to
measure multi-faceted universities by combining indicators of different
activities, including some proxies of teaching quality, its practice largely failed
to convince others and the ranking was taken as a measure of reputation
and ‘not about teaching and only marginally about research’ (Marginson,
2007). Therefore, none of the current global ranking systems can provide
a complete view of universities. Taking any single ranking as a standard to
judge a university’s overall performance is improper.

For the moment none of the ranking indicators is perfect; while some seem
practically acceptable, others have serious flaws. The so-called ‘Academic
Peer Review’ used by THE-QS ranking might be the indicator most often
­criticized. First, it is an expert opinion survey rather than a typical peer
review in academic community; the respondents, even though they are
experts, can hardly make professional judgments on on such large entities
in their entirety. (Van Raan, 2007). Second, psychological effects such as the
‘halo effect’ (Woodhouse, 2008) and the ‘leniency effect’ (Van Dyke, 2008)
affect the results of the opinion survey, so that there is a bias towards well-
known universities and respondents’ universities.



Bibliometric indicators such as publications and citations are relatively
credible for measuring the research performance of large entities, but
problems and shortcomings still occur when they are used to compare
universities worldwide. Many global rankings use the Thomson citation indexes
as their bibliometric sources; therefore only publication output, and only
publications in indexed journals, are taken into account. This inevitably
leads to some bias against universities with strong humanities and social
sciences departments and universities from non-English-speaking countries.

Marginson (2007) criticized teaching-related indicators, such as the student-
faculty ratio and percentages of international faculty and students, mainly
because they cannot be used to adequately measure teaching quality. Some
indicators can be seen as proxies of teaching output, for example, num-
ber of alumni among CEOs of top 500 companies and number of alumni
who win Nobel Prizes or Fields Medals. But as the measured objects were
restricted within a tiny group, they say little about the general quality of
teaching output.

Some general criticisms of ranking practices also hold true for global rankings.
A common problem in global rankings is the arbitrary choice of indicator
weights. Another criticism is that the difference between scores for uni-
versities with different global ranks may be statistically insignificant.

Use of global university rankings

Global university rankings, although of interest to prospective students and


employers, receive most of their attention from governments and univer-
sities themselves. With the emergence of the knowledge-based economy,
research universities are expected to play a key role in building the core
competitiveness of countries. Therefore, national governments are eager to
know the strengths and weaknesses of national universities at the global
level – information that was not readily available prior to the emergence
of global rankings. Global rankings provide comparative information on
university performance in different countries, which helps governments to
ascertain the international standings of universities. While some nations
were satisfied with the global rankings of their universities, others began
to sense a crisis. As Jan Figel, the European Commissioner for Education,
said to the media, ‘If you look at the Shanghai index, we are the strongest
continent in terms of numbers and potential but we are also shifting into
a secondary position in terms of quality and attractiveness’ (Blair, 2007).
Nowadays there is a clear trend for more and more nations to declare their



ambition to have a certain number of universities among the top tier in
the world, regardless of their current standing. Furthermore, more and
more nations are using rankings as policy instruments for higher educa-
tion reform and even resource allocation.

Whether universities admit it or not, they care about rankings. For


better-placed universities, global rankings are effective tools for building
and maintaining reputations, both of which are important for attract-
ing talent and resources and gaining support from the general public.
Conversely, poor performance of universities (as compared with expec-
tations) and absences in global rankings may have a negative impact.
Because of the significant influence of global rankings, climbing up the
ladder has become a common desire of universities. In a survey of lead-
ers and senior managers of higher education institutions in forty-one
countries, Hazelkorn (2011) found that 82 per cent of respondents wanted
to improve their international position and 71  per  cent wanted to be
among the top quarter in the world. At the same time, over 56 per cent
of respondents said that their universities had established a formal
internal mechanism to monitor rankings and their own performance,
and 63  per  cent had already taken strategic, managerial or academic
action in response to rankings.

Future direction of Academic Ranking of World Universities
Updating rankings annually

As the first multi-indicator ranking of global universities, ARWU has provided


trustworthy performance information on universities in different countries
for eight years. Students have used ranking results to select places to study,
universities have used them to benchmark themselves against peers and to
set up strategic priorities, national policy-makers have used them to compare
education strengths and promote reforms, and researchers have used them
to select samples for various analyses and studies. In order to continue meet-
ing these needs, we will update ARWU, ARWU-FIELD and ARWU-SUBJECT
every year. In addition, we will keep changes in ranking methodology to a
minimum to allow comparison of performance of particular universities or
countries across years.



Improving the methodology

ARWU has tried to rank research universities in the world by academic or


research performance based on internationally comparable third-party data
that are verifiable by all. Nevertheless, there are still many methodological
and technical limitations. Methodological limitations include the balance
of research with teaching and service in ranking indicators and weights,
the inclusion of non-English publications, the selection of awards, and the
experience of award winners. Technical limitations exist in the definition of
institutions, data searching and cleanup of databases, and the attribution of
publications to institutions and broad subject fields. We have endeavoured
to study the above-mentioned limitations and improve our methodology.

In order to better consider the function of education within ARWU, we are


currently collecting data on the educational experiences of senior executives
in Fortune Global 500 corporations, as the number of senior executive alumni
could be a good indicator of educational outcome of institutions. To resolve
the field imbalance in statistics of international academic awards, we selected
a list of around eighty international academic awards and are working to
classify them according to academic prestige and degree of internationality.
Furthermore, we keep a close eye on the development of advanced ranking
techniques and new international databases, and feasibility studies are
carried out whenever possible.

Diversifying the ranking

We are also studying the possibility of providing more diversified ranking


lists, particularly rankings for different types of universities with different
functions, disciplinary characteristics, history, size and budget, as well as
other factors. These studies are not being done on the basis of new
methodology or new indicators, but through various classifications of uni-
versities. For instance, we have published a classification of ARWU top 500
universities by disciplinary characteristics, in which universities are classified
according to dominance in certain fields, such as engineering or medicine
(Cheng and Liu, 2006). These classifications allow separate lists of universi-
ties of the same type to be extracted from ARWU. Following the same idea,
we plan to develop classifications of universities from different perspectives
to enable a variety of comparisons among similar universities.

ARWU provides a list of 500 universities. This covers less than 5 per cent of
all 15,000 higher education institutions in the world (the number of higher



education institutions was reported in the World Higher Education Database
2011).2 Hence, 95 per cent of higher education providers, especially those in
less developed countries, are invisible in the ranking. To help remedy this,
we plan to develop regional university rankings such as rankings of universi-
ties in Eastern Europe, South America, Africa or China. These regional rank-
ings will not only adopt the indicators used in ARWU, but will also consider
other indicators relevant to the region that may reflect universities’ global
competitiveness, directly or indirectly.

Profiling research universities

Since January 2011, we have cooperated with the Global Research University
Profile (GRUP) project, which aims to develop a database compiling facts and
figures of around 1,200 global research universities ranked by ARWU annu-
ally. An online survey tool has been designed to collect the basic information
of universities such as number of academic staff, number of students, total
income, research income and so on. We sent survey invitations to 1,200 uni-
versities and promised to provide participating institutions with an analysis
report based on data collected from all respondent institutions. In the invita-
tion letter we also explain that their data may be used to develop customized
rankings. The number of universities participating in the survey has been
very encouraging so far. In addition to the survey, we have managed to obtain
data from national education statistics agencies in major countries, including
the National Center for Education Statistics in the United States; the Higher
Education Statistics Agency in the United Kingdom; and the Department of
Education, Employment and Workplace Relations in Australia.

Although the comparability and quality of the survey data may not be as good
as that of data obtained from third parties, more useful indicators can be
developed to meet increasing demand to compare global universities from
various perspectives. We plan to employ the survey data and third-party
data to design a web-based platform in which users will be able to select
from a large variety of indicators and weights to compare the universities
concerned. In addition, we will undertake in-depth analysis of the survey
data in order to describe the characteristics of world-class universities and
research universities in different countries and worldwide. We hope that the
results will enhance our understanding of world-class universities and will
be helpful when initiating or adjusting relevant policies.

2 For further information, see: www.unesco.org/iau/directories/index.html



Contributing to the optimal development of university ranking in
general

We have been undertaking theoretical research on rankings in general,


seeking to contribute to the understanding of rankings. We have also
actively participated in international societies related to ranking, such as the
International Observatory on Academic Ranking and Excellence (iREG).3 An
ongoing effort of this organization is to conduct audits of existing ranking
systems. It is expected that the audits will encourage rankers to compile and
publish rankings more responsibly, and will help users to judge the quality of
different rankings and to use them wisely to inform various decisions.

References

Blair, A. 2007. Asia threatens to knock British universities off the top table. The Times
(21 May 2007): www.timesonline.co.uk/tol/life_and_style/education/article1816777.ece
(Accessed 15 April 2011.)

Bollag, B. 2006. Group endorses principles for ranking universities. Chronicle of Higher
Education: http://chronicle.com/article/Group-Endorses-Principles-for/25703
(Accessed 15 April 2011.)

Cheng, Y. and Liu, N.C. 2006. A first approach to the classification of the top 500 world universities
by their disciplinary characteristics using scientometrics. Scientometrics, 68(1): 135–50.

Cybermetrics Lab of CSIC. 2004. Ranking Web of World Universities: www.webometrics.info/index.


html (Accessed 15 April 2011.)

Destler, B. 2008. A new relationship. Nature, 453(7197): 853–54.

École des Mines de Paris. 2007. Professional Ranking of World Universities: www.mines-paristech.fr/
Actualites/PR/Archives/2007/EMP-ranking.pdf (Accessed 15 April 2011.)

Economist, The. 2005. A world of opportunity. 376 (8443): 14–16.

Enserink, M. 2007. Who ranks the university rankers? Science, 317(5841): 1026–28.

European Commission Research Headlines. 2003. Chinese study ranks world’s top 500 universities.
(31 December 2003): http://ec.europa.eu/research/headlines/news/article_03_12_31_
en.html (Accessed 15 April 2011.)

Hazelkorn, E. 2011. Rankings and the Reshaping of Higher Education: The Battle for World-Class
Excellence. London: Palgrave Macmillan.

Huang, M.H. 2007. 2007 Performance Ranking of Scientific Papers for World Universities:
http://ranking.heeact.edu.tw/ (Accessed 15 April 2011.)

3 For further information, see: www.ireg-observatory.org



Marginson, S. 2007. Global university rankings: Implications in general and for Australia. Journal of
Higher Education Policy and Management, 29(2): 131–42.

Moed, H.F. 2006. Bibliometric Rankings of World Universities: www.cwts.nl/hm/bibl_rnk_wrld_


univ_full.pdf (Accessed 15 April 2011.)

Patten, C. 2004. Chris Patten’s speech. The Guardian. (5 February 2004): www.guardian.co.uk/
education/2004/feb/05/highereducation.tuitionfees1 (Accessed 15 April 2011.)

Research Center for Chinese Science Evaluation of Wuhan University. 2006. World Top Universities:
http://apps.lib.whu.edu.cn/newbook/sjdakypj2006.htm (Accessed 15 April 2011.)

Shanghai Ranking Consultancy. 2010. Ranking Methodology. Academic Ranking of World


Universities – 2010: www.arwu.org/ARWUMethodology2010.jsp (Accessed 15 April 2011.)

THE-QS. 2009. World University Rankings 2009. www.timeshighereducation.co.uk/hybrid.


asp?typeCode=438 (Accessed 15 April 2011.)

Times Higher Education. 2010. World University Rankings 2010-11: www.timeshighereducation.


co.uk/world-university-rankings/2010-2011/analysis-methodology.html
(Accessed 15 April 2011.)

University Ranking by Academic Performance Center of Middle East Technical University. 2011.
World Ranking: www.urapcenter.org/2010/index.php (Accessed 15 April 2011.)

Van Dyke, N. 2008. Self- and peer-assessment disparities in university ranking schemes. Higher
Education in Europe, 33(2/3): 285–93.

Van Raan, A.F.J. 2007. Challenges in the ranking of universities. J. Sadlak and N.C. Liu (eds) The
World-Class University and Ranking: Aiming beyond Status. Bucharest: UNESCO-CEPES,
pp. 87–121.

Woodhouse, D. 2008. University rankings meaningless. University World News (7 September 2008):


www.universityworldnews.com/article.php?story=20080904152335140 (Accessed
15 April 2011.)



Chapter 2

An evolving methodology:
the Times Higher Education
World University Rankings
Phil Baty
Historical overview
The first ever global university ranking, produced by Shanghai Jiao Tong
University, appeared in 2003. A year later, in 2004, the Times Higher Education
(THE) magazine published its first global university ranking, and has continued
to do so ever since. Although it was the second to appear, THE’s was the first
global ranking of universities to sample the views of academics across the world,
as well as to include the latest measures of research excellence and teaching capacity.

A lead article marking the inaugural publication of the World University


Rankings by the Times Higher Education (THE) magazine, then known as the
Times Higher Education Supplement, noted that a global ranking was ‘an idea
whose time has come’. Not only had the time come, it is set to stay, and Times
Higher Education foresees its own sustained contribution to this effort.

The publication emphasized that leading United Kingdom (UK) universities were
increasingly defining their success against global competitors, and noted that:

unlike domestic university league tables, this ranking does not set out
to steer students towards the best undergraduate education: it looks at
institutions in the round. Despite the importance of overseas students
to universities, international comparisons inevitably centre mainly on
research… the positions will… be used as ammunition by politicians and
vice chancellors in funding negotiating (Times Higher Education, 2004: 14).

The initial methodology used by THE was simple. There were five performance
indicators: a staff-student ratio (weighted at 20 per cent) designed to give a sense
of the ‘teaching capacity’ at each institution; an academic reputation survey
(weighted at 50 per cent); an indicator of research quality based on citations
(20  per  cent); and two measures of internationalization (worth 10  per  cent),
one looking at the proportion of international staff on campus, and the other
looking at the proportion of international students.
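As an illustration of how such a weighted composite combines, here is a minimal sketch using the five weights quoted above and assuming indicator scores already normalized to a 0-100 scale.

```python
# Illustrative sketch of the original five-indicator composite; scores are invented.

WEIGHTS_2004 = {
    "reputation": 0.50,
    "staff_student_ratio": 0.20,
    "citations": 0.20,
    "international_staff": 0.05,
    "international_students": 0.05,
}

def composite_score(indicator_scores):
    return sum(WEIGHTS_2004[name] * indicator_scores[name] for name in WEIGHTS_2004)

print(round(composite_score({
    "reputation": 80, "staff_student_ratio": 60, "citations": 70,
    "international_staff": 90, "international_students": 85,
}), 2))  # 74.75
```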

Evolution of the ranking methodology


The world university rankings proved to be a major success, gaining an
increasing global reach and influence. But with growing influence came grow-
ing scrutiny of the rankings and their methodology. Aware of increasing criti-
cism of the rankings methodology, during 2009, a new senior editorial team



at the Times Higher Education magazine carried out a comprehensive review of
the rankings. The review concluded that the rankings the magazine had been
publishing successfully since 2004 were no longer fit for the purposes being
assigned to them. Ann Mroz, then editor of Times Higher Education, explained
in an editorial in the magazine on 5 November 2009 that:

Global rankings have always been used by students to choose where


to study, by staff to look at career opportunities and by research
teams seeking new collaborative partners… But in recent years the
[world rankings] have become extraordinarily influential, used by
institutions to benchmark themselves against global competitors and
even by governments to set their national higher education agendas.

The responsibility weighs heavy on our shoulders. We are very much


aware that national policy and multimillion-pound decisions are
influenced by these rankings. We are also acutely aware of the criti-
cisms made of the methodology. Therefore, we feel we have a duty to
improve how we compile them.

Higher education is global. Times Higher Education is determined to


reflect that. Rankings are here to stay. But we believe universities
deserve a rigorous, robust and transparent set of rankings – a serious
tool for the sub-sector, not just an annual curiosity (Mroz, 2009: 5).

That month, the magazine ended its six-year relationship with its previous
rankings data supplier, QS – Quacquarelli Symonds, and set up a new partnership
with Thomson Reuters, one of the world’s leading data companies. Thomson Reuters
was engaged to build a new database of global, research-driven universities, and
to work with Times Higher Education to develop a new, more sophisticated way of
ranking universities. A new brand – the Times Higher Education World University
Rankings, powered by Thomson Reuters – was created. Times Higher Education would
take full responsibility for the rankings methodology, and undertook to rank the
institutions, while Thomson Reuters would collect, analyse and supply the data,
but would not itself publish a ranking.

Survey findings:
• 74 per cent of respondents said they believed that ‘institutions manipulate
their data to move up in ranking’.
• 71 per cent of respondents said that rankings ‘make institutions focus on
numerical comparisons rather than on educating students’.
• 70 per cent of respondents said that rankings use ‘methodologies and data’
that are ‘neither transparent nor reproducible’.

Key methodological concerns

In order to help develop the new ranking system, in early 2010, Times Higher
Education held a meeting of its expert editorial advisory board, to discuss
concerns about rankings. Three strong concerns were raised about the origi-
nal THE-QS methodology of world university rankings published between
2004 and 2009.

First was the heavy weight (20 per cent) that had been assigned to a staff-
student ratio as the only proxy for teaching quality in the old ranking system.
It was not seen as a particularly helpful or valid indicator of teaching quality,
and it was believed that data was easily manipulated.

Second was the quality and value of reputational surveys of academics and
employers, and concerns about the size and quality of the samples. There were
further concerns that excessive weight was given to results of such subjec-
tive reputational surveys, which made up 50 per cent of the overall rankings
indicators in the 2004–2009 ranking system.

Andrejs Rauhvargers later reiterated this concern in a June 2011 report from
the European University Association, entitled Global University Rankings and
their Impact. He noted that the reputation scores in the 2004–2009 ranking
system were based on ‘a rather small number of responses: 9,386 in 2009 and
6,534 in 2008; in actual fact, the 3,000 or so answers from 2009 were simply
added to those of 2008. The number of answers is pitifully small compared
to the 18,000 email addresses used’ (Rauhvargers, 2011: 28). Rauhvargers also
raised concerns that the lists of universities that survey respondents were
asked to select from were incomplete: ‘What are the criteria for leaving out a
great number of universities or whole countries?’ (Rauhvargers, 2011: 29)

The third concern was actually raised by the Times Higher Education editorial
board and related to the use of citations data to indicate research excellence.
Given the wide variety of publication habits, and therefore given the wide vari-
ety of citation volumes between different disciplines, Times Higher Education was
advised to normalize the citations data by subject. No normalization was car-
ried out under the 2004–2009 ranking system, meaning that institutions with
strengths in areas with typically lower citation volumes, such as engineering
and the social sciences, were at a serious disadvantage compared to those with
strengths in the life sciences, where citation levels tend to be much higher.

In another important step towards improving the method for developing a


new ranking system, Thomson Reuters carried out a global opinion survey



to find out what higher education professionals thought of existing ranking
systems. Specifically, the survey sought to establish the indicators they valued
and any concerns they may have had. The results, published by Thomson
Reuters in the report New Outlooks on Institutional Profiles, were illuminating.
In the survey, many respondents raised a number of concerns about the
existing rankings. A disconcerting finding was that a significant proportion
of respondents (74 per cent) suggested that world rankings create perverse
incentives for institutions that perceive themselves as unfavourably ranked:
respondents believed that such institutions manipulate their data in order to
move up the ranks. However indirect, this incentive casts serious doubt on the
integrity of the methodology and therefore on its results.

Perceived value of rankings

Despite the concerns outlined above, respondents overall expressed strong


support for the utility of university rankings. From a self-selected sample
of respondents, 40 per cent found analytic comparisons between academic
institutions to be either ‘extremely useful’ or ‘very useful’. A further 45 per cent
found them ‘somewhat useful’.

Times Higher Education took the survey as a clear indication that, despite
the inherent problems with reducing all the complex and often intangible
activities of a university into a single ranked table, the limitations of global
ranking systems are outweighed by their perceived general usefulness.

It needs to be acknowledged, however, that any ranking system will have
significant limitations. Ranking methods, no matter how good, cannot fully
capture all of the aspects that matter most in higher education. Particularly
challenging are those aspects that are hard to measure: for instance,
the life-transforming effect that a great lecturer can have on students’ lives,
or the extent to which free enquiry enhances our societies. Not only
are these aspects hard to measure, but the methodologies and indicators
used to capture them are also susceptible to subjective
judgement. Caution is also needed: if used carelessly,
rankings can impose uniformity on a sub-sector that thrives on diversity.
They can pervert university missions and distract policy-makers. For instance,
when very simple proxy indicators, such as a staff-student ratio, are given
too much weight in any ranking methodology, they can be manipulated for
unfair gain. However, when appropriately constructed and with a larger mix
of indicators and a focus on indicators that reflect real-world performance,
many of the pitfalls associated with university rankings can be minimized,

although never totally eliminated. The real risks associated with the
methodologies, scope and use of rankings place a lot of responsibility and
accountability on ranking houses, and almost compel the establishment of a
code of conduct for these hallowed houses.

What should ranking houses do?

The Times Higher Education holds that as long as rankers are responsible and
transparent, as long as they invest properly in serious research and sound
data, as long as they are frank about the limitations of the proxies they
employ, and as long as they help to educate their users and engage in open
debates, rankings can be a positive force in higher education.

University rankings can significantly enhance global understanding of


dramatic changes faced by the sub-sector. According to the OECD, 3.9 million
students are currently studying outside their home countries. The number is
predicted by many to increase to as many as 7 million in the next few years.
There are now at least 200 satellite campuses set up outside their parent
universities’ home countries, according to the Observatory on Borderless Higher
Education. Around 40 per cent of the millions of research papers published
in the last five years by Times Higher Education’s top 200 institutions were
co-authored with an international research partner.

We are entering a world of mass higher education and the traditional world
order is shifting. Massification of higher education is made possible by the
diversification not only of providers, but also of programmes and, possibly, their
quality. If based on defensible and clearly explained methodologies, rankings
can help to fill a crucial information gap. Often students as consumers in a
competitive global market need comparative information on the institutions
they may seek to study at. Faculty, who are increasingly mobile across national
borders, also need information to identify potential new research partners
and career opportunities. University leaders need benchmarking tools to
help forge institutional strategies. National governments need comparative
information to help determine higher education policies. Industry needs
information to establish where to invest in university research and innovation.
Carefully selected and appropriately suited indicators can provide rankings
that address information needs of such different clientele.

Beyond identifying methodological contentions and perceived uses of rankings,
the Thomson Reuters survey report, New Outlooks on Institutional Profiles, also
indicated which indicators would begin to address the information needs



of particular clientele. For instance, some 92  per  cent of survey respondents
noted that faculty output (as measured by research publications) was a ‘must
have’ or a ‘nice to have’ indicator. There was also strong support (91 per cent) for
a measure of faculty impact (research paper citations). Some 86 per cent wanted
faculty/student ratios as a proxy measure of the teaching environment. Another
84  per  cent supported the use of income from research grants. Surprisingly,
79 per cent of respondents supported the use of peer ‘reputation’ measures – the
controversial opinion polls that have provoked strong criticism.

Times Higher Education’s new methodology


In developing a new rankings system with Thomson Reuters, Times Higher
Education sought to directly address the concerns outlined above. This meant
going back to basics to consider how to capture as many characteristics as
possible of the global research-led university, across all of its core missions.

Methodological improvements to THE’s rankings have to be considered against


the reality that its world university rankings consider only a particular type
of university. In officially ranking just 200 institutions, THE focuses on about
1 per cent of the world’s higher education institutions. The world top 200 list
may incorporate institutions with different cultures, histories, sizes, shapes,
funding and governing structures, but they all share core characteristics: they
publish world-class research carried out across national borders, they work
with global industry, they teach from undergraduate to doctoral level, and
they compete in a global market for the top student and academic talent.

The rankings therefore cover only global research-driven universities. There


are many other different models of university, all of which can achieve excel-
lence in the context of their own aims and missions. Many different models
could be deemed absolutely successful in their own terms, but they would be
unlikely to find places at the top of the world university rankings.

Times Higher Education has data on many hundreds of institutions; however, its
official rankings list comprises only the first 200 placed universities. This is done
specifically to undermine the notion that everyone should aspire to the same
model. Times Higher Education recognizes that one of the strengths of the higher
education system is its diversity. It is keen to emphasize that it does not deem
it appropriate to judge every university on the same scale against the model set
by universities such as Harvard, Stanford, Oxford and Cambridge.
Not every institution can be a Harvard and not every institution would want

to be. Each institution of course will have its own mission and its own priori-
ties for development, serving in some cases a largely teaching-led role, and in
others focusing on local or national skills needs. Times Higher Education’s World
University Rankings examine only a globally competitive, research-led elite.

The Times Higher Education World University Rankings were finalized only
after ten months of open consultation, and the methodology was devised
with expert input from more than fifty leading figures from fifteen countries,
representing every continent.

The new Times Higher Education World University Rankings, first published on
16  September 2010, and again on 6 October 2011, recognize a wider range of
what global universities do. While the Academic Ranking of World Universities,
compiled by Shanghai Jiao Tong University, focuses only on research
performance, the Times Higher Education World University Rankings seek to
capture the full range of a global university’s activities – research, teaching,
knowledge transfer and internationalization.

Consistent with its aim to take a more holistic view of the mission of universi-
ties, the new THE rankings use thirteen separate indicators – more than any
other global system (Figure 1).

Figure 1. Times Higher Education World University Rankings



Times Higher Education World University Rankings place the most weight on a
range of research indicators. This is deemed an appropriate approach in a world
where governments invest heavily in developing the knowledge economy and
seek answers to global challenges such as climate change and food security.
The research indicators include reputation (assessed through an improved
professional academic reputation survey), volume (assessed through publication
in leading academic journals indexed by Thomson Reuters) and income.
However, the highest weighting is given to ‘research influence’ measured by
citations of published research by academics worldwide. Citations indicate
which research has stood out, been picked up and built on by other scholars,
and most importantly, has been shared among the global scholarly commu-
nity, thereby pushing further the boundaries of collective understanding – one
of the most fundamental roles of any research university.

For the 2011–12 world university rankings, THE examined more than 6 million
research publications, producing more than 50 million citations accumulated
over a six-year period (2005–2009). In response to strong criticism of the
2004–2009 methodology, data were fully normalized to reflect variations in
citation volume between different subject areas. As such, universities with
strong research in fields with lower global citation rates were not penalized.
In addition, citations per paper produced by each university were measured
against world average citation levels in each field.
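A minimal sketch of this kind of field normalization follows; the field baselines and paper counts are invented for illustration, and the exact formula used by Thomson Reuters is not specified in this chapter.

```python
# Illustrative sketch of field-normalized citation impact: each paper's citations
# are compared with the world average for its field, so low-citation fields such
# as engineering are not penalized. Baseline values below are invented.

WORLD_AVG_CITATIONS = {"engineering": 4.0, "life_sciences": 12.0}

def normalized_citation_impact(papers):
    """papers: list of (field, citation_count); 1.0 means exactly world average."""
    ratios = [cites / WORLD_AVG_CITATIONS[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

# Two engineering papers cited 4 and 8 times score the same normalized impact
# as two life-sciences papers cited 12 and 24 times:
print(normalized_citation_impact([("engineering", 4), ("engineering", 8)]))        # 1.5
print(normalized_citation_impact([("life_sciences", 12), ("life_sciences", 24)]))  # 1.5
```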

Of course, there remain concerns that scholars in the developing world


may find it harder to publish their work in the leading journals indexed
by Thomson Reuters, which for the most part are published in the English
language and are predominantly edited and published in the United States
and United Kingdom, where localized research networks exist. In addition to
normalizing the citations data for subject variations, THE therefore sought
to acknowledge excellence in research from institutions in developing
nations, where institutions may have research networks of their own but
fewer opportunities and less international exposure, and therefore lower citation rates.
Normalizing the data to reflect variations in citation volume between regions
is an important innovation, and one that goes a long way towards addressing
criticism that the rankings, based so heavily on bibliometrics, overly favour
the English-speaking world.

Times Higher Education judges knowledge transfer in terms of just one indica-
tor – research income earned from industry – but, in future years, this cate-
gory will be enhanced with other indicators. One proposal being considered,
at the time of going to press, is to take into account the number of research
papers a university publishes in partnership with an industrial partner.

Internationalization is recognized through data on the proportion of
international staff and students attracted to each institution – a sign of
how global an institution is in its outlook and, perhaps, reputation. The
ability of a university to attract the very best staff from across the world is
key to global success. The market for academic and administrative jobs is
international in scope, and this indicator suggests global competitiveness.
Similarly, the ability to attract students in a competitive global market-
place is a sign of an institution’s global competitiveness and its commit-
ment to globalization.

For the first time for the 2011–12 rankings, THE also added an indicator that
rewards a high proportion of internationally co-authored research papers.

Perhaps the most dramatic innovation for the world university rankings
for 2010 and beyond is the set of five indicators designed to give proper
credit to the role of teaching in universities, with a collective weighting
of 30 per cent. However, it should be clarified that the indicators do not
measure teaching ‘quality’. There are currently no recognized, globally
comparative data on teaching outputs, so fair global assessments of teach-
ing outputs cannot be made. What the Times Higher Education rankings do
is to look at the teaching ‘environment’ to give a sense of the kind of learn-
ing milieu in which students are likely to find themselves. Times Higher
Education takes a subjective view, based on expert advice and consulta-
tion, that the indicators of the teaching environment they have chosen are
indicative of a high quality environment.

The key indicator for this category draws on the results of an annual aca-
demic reputational survey carried out for the world university rankings
by Thomson Reuters. To meet criticisms of the reputation survey carried
out for the rankings between 2004 and 2009, Thomson Reuters brought
in a third-party professional polling company to conduct the survey. The
Academic Reputation Survey is distributed worldwide each spring. It is a
worldwide, invitation-only poll of experienced scholars, statistically repre-
sentative of global subject mix and geography. It examines the perceived
prestige of institutions in both research and teaching.

Respondents are asked only to pass judgement based on direct, personal


experience within their specific area of expertise. They are asked ‘action-
based’ questions, such as ‘Where would you send your best graduates for
the most stimulating postgraduate learning environment?’ to elicit more
meaningful responses. In 2010, the survey gathered 13,388 responses, with
a good balance of responses across the regions and the disciplines. In



2011, despite the fact that no one who completed the survey in 2010 was
invited to take part again, the survey attracted 17,500 responses, with an
excellent balance of replies.

Some 19 per cent of the 2011 respondents were from the social sciences,
with 20 per cent from engineering and technology, and the same propor-
tion from physical sciences. Seventeen  per  cent came from the ‘clinical,
pre-clinical and health’, while 16 per cent came from the life sciences. The
smallest number of responses came from the arts and humanities – just
7 per cent – and while this is a little disappointing, it still provides a statisti-
cally sound basis for comparisons.

There was also an excellent spread of responses from around the world,
facilitated by the fact that the survey was distributed in nine languages:
Arabic, Brazilian, Chinese, English, French, German, Japanese, Portuguese
and Spanish. The largest share of respondents, some 36 per cent, came
from North America, while 17 per cent came from Western Europe,
10  per  cent from Eastern Asia, 8  per  cent from Eastern Europe and
7 per cent from Oceania.

In addition to the reputation survey’s results on teaching, four further


indicators are used to provide information on a university’s teaching and
learning environment.

The rankings also measure staff-to-student ratios. This, as noted by Times


Higher Education’s editorial board, is admittedly a relatively crude proxy for
teaching quality. But the indicator hints at the level of personal attention
students may receive from faculty, and there was strong demand for it
among stakeholders, so it remains in the rankings, but receives a relatively
low weighting of just 4.5 per cent.

Times Higher Education also looks at the ratio of PhD to bachelor’s degrees
awarded, to give a sense of how knowledge-intensive the environment is,
as well as considering the number of doctorates awarded, scaled for size, to
indicate how committed institutions are to nurturing the next generation
of academics and providing strong supervision.

The last of the teaching indicators is a simple measure of institutional


income scaled against academic staff numbers. This figure, adjusted for
purchasing power parity so that all nations compete on a level playing field,
gives a broad sense of the general infrastructure and facilities available.
This is another major innovation in world rankings.
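A minimal sketch of an income-per-staff figure with a purchasing power parity adjustment follows; the conversion factors and figures are invented for illustration.

```python
# Illustrative sketch: institutional income converted with an assumed PPP factor
# and scaled by academic staff numbers.

ASSUMED_PPP_FACTOR = {"GBP": 1.45, "USD": 1.00}  # conversion to a common PPP unit

def income_per_staff(total_income, currency, academic_staff):
    return (total_income * ASSUMED_PPP_FACTOR[currency]) / academic_staff

print(round(income_per_staff(400_000_000, "GBP", 3_000)))  # roughly 193,333 PPP units per staff member
```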

Sector response to the new tables has been largely positive. There was
notable criticism, coming mainly from heads of institutions that took the
biggest hits under the new methodology, but many other comments have
been favourable.

The new methodology also attracted explicit praise. David Willetts, the UK gov-
ernment minister for universities and science, congratulated Times Higher
Education for revising its rankings methodology. Steve Smith, vice-chancellor
of the University of Exeter and the former president of Universities UK, which
represents all UK vice-chancellors, said that the new methodology – and
particularly its reduced dependence on subjective opinion and increased
reliance on more objective measures – ‘bolstered confidence in the evalua-
tion method’ (Smith, 2010: 43).

David Naylor, president of the University of Toronto, summed things up well.


He recognized that Times Higher Education

consulted widely to pinpoint weaknesses in other ranking systems


and in [our] previous approach… They brought in a new partner with
recognized expertise in data gathering and analysis. And they also
sought peer opinions on the education and learning environment at
scores of universities. These are welcome developments. (Beck and
Morrow, 2010: 1)

Future directions

Times Higher Education has registered notable progress in improving its


methodology. Going forward, it will continue to engage its critics and take
expert advice on further methodological modifications and innovations. A
key driver of innovation in rankings will be the pressure to provide more
disaggregated data to the user. This both reflects the inherent limitations of any single
composite ranking ‘score’ and recognizes the growing diversity of the users
of rankings, with a diverse range of needs. A key step in this direction is THE’s
free World University Rankings application for the iPhone and iPad, which
represents a major step forward in the field. Times Higher Education has of
course chosen its thirteen performance indicators and weightings carefully,
and only after lengthy consultation. But with the iPhone application, the
user can change weightings in five broad performance categories to suit their
individual needs. Such transparency with the rankings data not only helps
to provide more tailored information for the individual user, but also helps
to educate the user: it exposes the influence that the ranking compilers’



decisions on the weighting of different performance indicators can have on
any institution’s overall ranking position.
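A minimal sketch of the kind of user-driven re-weighting the application allows: the same category scores yield different overall results under different weighting choices. The category names, scores and weights below are illustrative rather than THE’s published values.

```python
# Illustrative sketch of user-adjustable weightings across five broad categories.

def overall_score(category_scores, weights):
    total_weight = sum(weights.values())
    return sum(weights[c] * category_scores[c] for c in weights) / total_weight

university = {"teaching": 70, "research": 85, "citations": 90,
              "industry_income": 40, "international_outlook": 60}

default_weights = {"teaching": 30, "research": 30, "citations": 30,
                   "industry_income": 2.5, "international_outlook": 7.5}
teaching_heavy = {"teaching": 60, "research": 15, "citations": 15,
                  "industry_income": 2.5, "international_outlook": 7.5}

print(round(overall_score(university, default_weights), 1))  # 79.0
print(round(overall_score(university, teaching_heavy), 1))   # 73.8
```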

More transparent, user-driven and multi-faceted approaches are, in my


view, the future of global university rankings.

References

Beck, S. and Morrow, A. 2010. Canadian universities ranked among world’s best. The Globe
and Mail, GlobeCampus supplement (16 September 2010). www.globecampus.
ca/in-the-news/article/canadian-universities-ranked-among-worlds-best/
(Accessed 30 September 2011.)

Mroz, A. 2009. Only the best for the best. Times Higher Education (5 November 2009): www.
timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=408968
(Accessed 30 September 2011.)

Observatory on Borderless Higher Education, 2012. http://www.obhe.ac.uk/documents/view_


details?id=894 (Accessed 1 November 2012.)

OECD (Organization for Economic Cooperation and Development) 2011.


Education at a Glance. Paris : OECD http://www.oecd.org/education/
highereducationandadultlearning/48631079.pdf (Accessed 1 November 2012.)

Rauhvargers, A. 2011. Global University Rankings and their Impact: EUA Report on Rankings 2011.
Brussels: European University Association.

Smith, S. 2010. Pride before the fall? Times Higher Education, Times Higher Education World
University Rankings supplement (16 September 2010).

Times Higher Education. 2004. (5 November 2004): www.timeshighereducation.co.uk/story.asp?sect


ioncode=26&storycode=192208 (Accessed 30 September 2011.)

Chapter 3

Issues of transparency
and applicability in global
university rankings
Ben Sowter
In 1911, one hundred years ago, the world was a dramatically different
place, and higher education was no exception. Fewer than 10 per cent of the
universities in existence today had been established. The population of the
planet was less than a quarter of what it is today (United Nations, 2004), and
a substantially smaller proportion of people went to university.
It had been only eight years since Orville and Wilbur Wright made the first
controlled, sustained and heavier-than-air human flight, and only ten since
the first Nobel Prizes were awarded (Nobel Foundation, n.d.).

Today, there are more than 20,000 higher education institutions (HEIs) in
the world and more than 3.3  million students are studying outside their
home country (OECD, 2010). In 2008 there were over 29  million flights
(OAG Aviation, n.d.) and 813 individuals have now been awarded a Nobel
Prize (Nobel Foundation, n.d.). With universities under increasing pressure
to accept more students and increase research productivity on increasingly
constricted budgets, student attraction at increasing fee levels is becoming
an ever more important priority for universities worldwide.

Demand and utility of university rankings


There is greater demand than ever for comparative information on inter-
national universities. This demand comes from institutions assessing their
competitive position, from governments that must ensure the quality of higher
education and research and rationalize resource allocations, and from students
seeking to make the best choice of university.

Such information has not always been broadly available and remains
unavailable in certain contexts, but there has been a drive for transpar-
ency among institutions over the last thirty years. University league
tables have been one of the most influential factors in driving the trans-
parency of information on higher education. League tables, regardless of
how sophisticated their underlying measures, are compelling because of
their allure of simplicity and facility to place institutions in a hierarchy
where one is presented as superior to the next. Simple and conclusive
statements can be inferred and used to attract headlines and contribute
to marketing messages.

The tendency and even preference for the simplification of otherwise com-
plex realities is not limited to higher education. The human mind seems



hard-wired to organize information into ordinal hierarchical lists (Eco,
2009). It is this way of thinking that has helped league tables, and more
recently, international league tables to cement their position in the higher
education landscape. The enormous interest in their published results
and the ambitions of institutions to feature well therein have provided the
incentive for universities to open up and feed data into both rankings and
central data systems and agencies, paving the way for a range of more
sophisticated tools to be considered that might never have been possible
without the influence of league tables.

International league tables, which first emerged in 2003, have increased in
number (see Table 1). Their influence has grown at least as quickly, attracting
an unanticipated public profile. Additionally, in certain contexts, international
rankings have served as an effective wake-up call to institutions and governments
in countries that may previously have had an inflated view of their own
performance and global impact. Rankings have also led to a revolution in the
availability of data on HEIs and of intelligence to guide institutional and
government strategies for higher education.

While it is clear that a range of stakeholders refer to international


league tables, the primary target audience of the QS World University
Rankings® is that of prospective international students in line with the
mission statement of the company – ‘To enable motivated people around
the world to achieve their potential by fostering international mobility,
educational achievement and career development’ (QS Quacquarelli
Symonds Ltd).

Table 1. Major global rankings by principal perceived or stated audience

Ranking | Compiler/publisher | First appeared | Principal audience
Academic Ranking of World Universities | Shanghai Ranking Consultancy | 2003 | (Chinese) university leadership
QS World University Rankings | QS Quacquarelli Symonds Ltd | 2004 | Prospective students
Ranking Web of World Universities | Webometrics | 2004 | University leadership and webmasters
Performance Ranking of Scientific Papers of World Universities | Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT) | 2007 | University leadership and research planners
Times Higher Education World University Rankings | Times Higher Education and Thomson Reuters | 2010 | Academics

The decisions facing prospective students have become immeasurably
more complex in the last twenty years. The World Wide Web was first
invented in 1989 and brought online in 1991, Google was founded in 1998,
Facebook in 2004 and Twitter in 2006. Today, the volume of information
which a prospective student has to sift through in order to make some
final choices is mind-boggling and in some cases overwhelming. There is
a growing need for simple-to-use but sophisticated tools to filter out the
noise and shortlist options that merit further research. This is the need
that the QS World University Rankings® and related tools and services
aim to meet.

Limitations of current university rankings


Current annual aggregated university rankings do not always adequately meet client demands, nor do they always lend themselves to optimal use. Key limitations include inadequate recognition of institutional diversity, a lack of discipline-level metrics, a narrow range and scope of measures, and limited allowance for user-driven results.

Institutional diversity

Universities differ greatly from one another. While the universities evalu-
ated in the QS World University Rankings® are at the top end of the world's 20,000+ institutions, and as a result are pursuing high performance in teaching and research, their characteristics can vary greatly. The University of Buenos Aires has over 300,000 students, while ENS Paris has around 2,000. In this aspect alone it is clear that the two institutions are dramatically different in terms of funding and facilities, even before their differences are studied in any more detail. In a global ranking context these differences are entirely overlooked.

The QS response to this issue has been to devise a devastatingly simple clas-
sification system based on three key metrics: size, as defined by full-time
equivalent student enrolments; focus, as defined by the number of broad
faculty areas in which they are active; and research intensity, as defined by
the total volume of papers published factored against the size and focus of
the institution.

Figure 1. Abbreviations, descriptions and thresholds for QS classifications

Size:
  XL  Very large   >= 30,000 students
  L   Large        >= 12,000 students
  M   Medium       >= 5,000 students
  S   Small        < 5,000 students

Focus:
  FC  Fully comprehensive   All 5 faculty areas + medical school
  CO  Comprehensive         All 5 faculty areas
  FO  Focused               > 2 faculty areas
  SP  Specialist            <= 2 faculty areas

Research intensity (thresholds relative to size and focus):
  VH  Very high
  HI  High
  MD  Moderate
  LO  Limited or none

Source: Sowter (2011c).

Naturally, the definition of 'research intensity' for a small institution focused principally on the Social Sciences has to be different from that for a large, fully comprehensive institution, which makes the thresholds for the research intensity metric somewhat more complex than the others.
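To make the mechanics of this classification concrete, the short sketch below assigns the size and focus labels from Figure 1 in code. It is an illustrative sketch only: the field names and calling convention are invented for the example, and the research intensity dimension is omitted precisely because its thresholds are calibrated relative to size and focus rather than being fixed.

```python
# Illustrative sketch of the QS size and focus classifications in Figure 1.
# Field names are hypothetical; only the published thresholds are reproduced.

def classify_size(fte_students: int) -> str:
    """Return the QS size band for a full-time-equivalent enrolment figure."""
    if fte_students >= 30_000:
        return "XL"  # Very large
    if fte_students >= 12_000:
        return "L"   # Large
    if fte_students >= 5_000:
        return "M"   # Medium
    return "S"       # Small

def classify_focus(faculty_areas: int, has_medical_school: bool) -> str:
    """Return the QS focus band from the number of broad faculty areas covered."""
    if faculty_areas >= 5:
        return "FC" if has_medical_school else "CO"  # Fully comprehensive / Comprehensive
    if faculty_areas > 2:
        return "FO"  # Focused
    return "SP"      # Specialist

# Example: a 28,000-student institution covering all five faculty areas
# but without a medical school would be classified L / CO.
print(classify_size(28_000), classify_focus(5, False))
```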

This concept will be extended in future and may consider aspects such as:

• institutional age,
• principal study mode,
• location/campus type (i.e. urban/suburban/rural),
• study levels offered/enrolment profile,
• institution status (i.e. public/private).

The key point about the classifications is that no quality judgement should be inferred from them. It is not necessarily better to be large rather than small or comprehensive rather than specialist, nor is it necessarily worse to conduct less research when research is not the key focus of the institution.

Discipline level metrics

From personal experience and focus groups it is clear that a large proportion
of prospective students address the question of institution choice already
equipped with a strong idea of the discipline in which they want to study.
Prior to 2011, global ranking compilers were not generating results at this
level of granularity – reducing the potential utility of the data being com-
piled. It is clear that there is a need for better data at the discipline level.
In response to this, QS is in the process of releasing tables at a narrower discipline level (Sowter, 2011a). By the time the first cycle is complete in June
2011, over twenty-five individual subject disciplines will have been released.
The full list at time of writing this chapter was:

• Engineering and Technology (Released 3 April 2011)
  • Computer Science
  • Engineering – Chemical
  • Engineering – Civil and Structural
  • Engineering – Electrical and Electronic
  • Engineering – Mechanical, Aeronautical and Manufacturing
• Life Sciences and Medicine (Released 3 May 2011)
  • Biological Sciences
  • Medicine
  • Psychology
• Natural Sciences (Released 19 May 2011)
  • Chemistry
  • Earth and Marine Sciences
  • Environmental Sciences
  • Mathematics
  • Metallurgy and Materials
  • Physics and Astronomy
• Arts and Humanities (Released 2 June 2011)
  • English Language and Literature
  • Geography and Area Studies
  • History
  • Linguistics
  • Modern Languages
  • Philosophy
• Social Sciences and Management (Released 22 June 2011)
  • Accounting and Finance
  • Business and Management Studies
  • Communication, Cultural and Media Studies
  • Economics and Econometrics
  • Education
  • Law
  • Politics and International Studies
  • Sociology
  • Statistics and Operational Research

The above list should facilitate a much richer tool for profiling institutions by discipline strength, as presented in Figure 2 below.

Figure 2. Subject profile for selected anonymous institutions
[Radar chart: scores from 0 to 100 on each of the subject disciplines listed above, grouped by the five broad faculty areas, plotted for a set of anonymous institutions to highlight their relative strengths and weaknesses by discipline.]
Source: Sowter.

Institutions and potentially students will be able to quickly identify the key strengths and potential weaknesses of an institution by discipline.
Prospective students might compare the profiles of different institutions to
ensure that the disciplines in which they are interested are well regarded at
their target institutions, while institutions themselves might utilize these
profiles to identify discipline areas that require some attention, or those that
might form the vanguard of their reputation. In the above example, two
weaker areas and four key strengths have been identified.

Range of measures

Global rankings and league tables, while extremely popular and accessible, have fundamental limitations that are not specific to particular methodologies but pervade all such exercises (HEFCE, 2008). In the main, these limitations are imposed by the lack of globally available and comparable data for the aspects of university performance that most merit measurement.

Every ranking exercise, therefore, has to make one or both of two compromises:

1. sacrifice the inclusion of certain subject institutions due to lack of data for intended measures,
2. sacrifice the inclusion of intended measures due to lack of data for certain subject institutions.

Additionally, the practicality of certain measures is limited by the transnational scope of the exercise, ruling out indicators that may work effectively in one country but simply do not when comparing across borders. Perhaps the best example is financial measures. Differences in fees and funding between institutions and from year to year are influenced by a range of factors beyond the institutions' control, including but not limited to:

• international exchange rates,
• relative economic strength,
• government funding policy,
• cultural and structural tradition.

In order to overcome the effects of these influences, any ordinal evaluation would have to apply a detailed and complex layer of statistical engineering to adjust for exchange rates, purchasing power and other less easily identified factors – such as the level of social inclination to donate to one's alma mater. The alternative is to ignore these factors altogether.
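As a purely illustrative indication of what even the simplest layer of such adjustment involves, the sketch below converts hypothetical institutional budgets into a common comparison unit using an assumed exchange rate and an assumed relative price level. All figures are invented, and a real adjustment would require carefully sourced, year-specific data while still leaving the harder-to-identify factors untouched.

```python
# Illustrative only: putting institutional budgets on a comparable footing.
# Exchange rates, price levels and budgets below are all assumed values.

budgets_local_millions = {
    "University A": ("GBP", 850),    # annual budget in local currency (millions)
    "University B": ("USD", 1_400),
}

usd_per_unit = {"GBP": 1.55, "USD": 1.00}   # assumed average exchange rates
price_level = {"GBP": 1.10, "USD": 1.00}    # assumed price level relative to the US

def comparable_budget(currency: str, amount: float) -> float:
    """Convert a local budget to US dollars, deflated by the relative price level."""
    return amount * usd_per_unit[currency] / price_level[currency]

for name, (currency, amount) in budgets_local_millions.items():
    print(name, round(comparable_budget(currency, amount)), "comparable USD millions")
```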

Indeed, one of the longest established rankings of HEIs is the Financial Times Global MBA Ranking. As Figure 3 highlights, there is a strong correlation between performance in that evaluation and the strength of the currency where the business school is located. In the ten-year period considered, the number of British business schools in the top 50 fluctuated between five and twelve, and there appears to be a marked relationship with the strength of the pound.

QS has always avoided including financial metrics in its overall ordinal rankings for these reasons, and it will be interesting to see what effects emerge in the second edition of Times Higher Education's ranking given its inclusion of four distinct financial measures. Nonetheless, it is clear that financial measures are strong indicators of certain aspects of university strength and, under different circumstances, perhaps ought to be considered.

Binary measures are also very difficult to include in a rankings context, as are aspects that have particularly low variance across the institution sample featured in our work. For example, at a national level, graduate employment rates may seem a very pertinent measure; but given that most global rankings deal with the top 5 per cent or less of global institutions, virtually all of the subject institutions do very well. The measure therefore provides very little discernment between participating institutions and turns out to be surprisingly volatile, owing more, once again, to the economic environment than to the effectiveness of the institution.

Figure 3. Correlation between exchange rate and UK business school performance in FT rankings
[Chart: for each year from 2002 to 2011, the number of UK business schools in the FT Global MBA top 50 is plotted against the average exchange rate of the pound for the preceding year.]
* Average exchange rates taken from the preceding year, since rankings measures tend to be retrospective.
Source: Sowter (2011b).

QS is in complete agreement that current aggregate global rankings do not present a sufficiently comprehensive picture of the performance of universities. Going further, QS believes that aggregate global rankings will never be able to provide a complete picture, regardless of how sophisticated data collection mechanisms may become. Indeed, in almost all cases, such rankings were intended for use as a guide to decision-making rather than as a substitute for it.

Many of these other important measures are best embraced in a context which:

• does not depend on every institution gathering and submitting data, but
instead only on those that wish to be evaluated;
• does not evaluate performance relative to the moving goalposts of others’
parallel progress, but against pre-set and well-understood standards;
• does not automatically favour large, comprehensive institutions, but
identifies the best in each niche category; and
• presents a range of grouped or banded outcomes rather than an ordinal list.

In response to this need, QS and its Academic Advisory Board have devised
QS Stars. This is a rating system akin to a ‘Michelin Guide’ for universities.
The first audits, using a range of well over twenty indicators, are complete
and awards have been made.

User-driven results

University selection decisions are deeply personal and potentially crucial. Rankings results, as published, are just one (perhaps expert) interpretation of the data – a little like a film critic stating that one film is better than another. The critic may know far more about film and may be able to justify his viewpoint with a near-scientific formula, but his ultimate proclamation may bear no resemblance to the viewpoint of any given audience member.

Figure 4. Concept screenshot from a QS World University Rankings® Scorecard

Source: Topuniversities (www.topuniversities.com).

Such is the case with university rankings. The expert panel assigning the criteria and weightings may even capture the average viewpoint of the audience quite accurately, but this does little to change the fact that any given reader
or user may consider employability rather more important than research
in considering their options. The Centre for Higher Education Development
(CHE) in Germany has pioneered this approach, providing a deeply sophis-
ticated tool for stakeholders to manipulate data and generate a rich user-
driven picture of the universities they might be considering. In 2011, data from the QS World University Rankings® was, for the first time, made available to such a system, in which users can select their own criteria and apply their own weights to generate a personalized, user-driven ranking.
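As a rough illustration of what such a user-driven ranking involves computationally, the sketch below re-orders a handful of institutions under weights chosen by the user. The institutions, indicator names and scores are entirely hypothetical; the point is simply that the same underlying indicator scores yield a different ordering for each set of user-supplied weights.

```python
# Minimal sketch of user-driven re-ranking: identical indicator scores,
# re-weighted according to a user's own priorities. All data are hypothetical.

indicator_scores = {
    # institution: score per indicator on a 0-100 scale
    "University A": {"research": 95, "teaching": 70, "employability": 60},
    "University B": {"research": 70, "teaching": 85, "employability": 90},
    "University C": {"research": 80, "teaching": 80, "employability": 75},
}

def personalized_ranking(weights: dict) -> list:
    """Order institutions by their weighted overall score under the given weights."""
    total = sum(weights.values())
    overall = {
        name: sum(scores[indicator] * w for indicator, w in weights.items()) / total
        for name, scores in indicator_scores.items()
    }
    return sorted(overall.items(), key=lambda item: item[1], reverse=True)

# A research-minded user and an employability-minded user see different orders.
print(personalized_ranking({"research": 0.6, "teaching": 0.2, "employability": 0.2}))
print(personalized_ranking({"research": 0.1, "teaching": 0.3, "employability": 0.6}))
```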

The path to more transparency


Rankings and league tables have served as a very real catalyst for the
transparency agenda in higher education. QS, and I suspect any other
organization involved in similar data gathering operations, has found
institutions both increasingly forthcoming and increasingly able to sup-
ply solid responses to questions that, to some, would have once seemed
unfathomable. Doing the basic research for the basic metrics has certainly
become easier.

Since 2003, when the Shanghai rankings first emerged, the level of quantita-
tive information available on an average university’s own website has seen
a remarkable improvement. The emergence of central and government-sponsored data collection exercises has accelerated. There has also been
greater acceptance among university leadership that measurement and
evaluation are early steps on a route to performance improvement against
their own individual missions and goals.

It seems a natural step that rankings and league tables themselves be subject
to similar scrutiny and be expected to provide open access to what is ‘under
the hood’. Transparency is not only about access to the data; it involves
detailed data definitions, complete access to the methodology and any sta-
tistical techniques, the data itself, the ability to search, filter and manipulate
the results, and the necessary health warnings highlighting appropriate use
and misuse along with potential confidence analysis. Arguably there is no
provider of global league tables currently providing the complete collec-
tion of information and tools required to represent ‘complete’ transparency.
In the case of QS, this is not down to any philosophical or commercial objection to transparency, but more to do with technical and resourcing constraints in preparing all the necessary material and keeping it fully up to date. At present QS publishes:

• final indicator scores for all indicators,
• complete detailed definitions of all requested data,
• demographic breakdowns of survey responses,
• extensive methodological documentation,
• means and standard deviations for each indicator (aiding reproducibility),
• specific weightings of indicators,
• statistical profiles (raw data) of each institution (to be reinstated on the website soon).

QS also publishes links to a wide variety of other global, international and domestic evaluations of universities, acknowledging that for different purposes and contexts other results may be more relevant and useful. It has also volunteered to be among the first providers subjected to the newly devised IREG audit process (IREG, 2011). A question that frequently arises in this area concerns the nature of the organizations that are, or should be, conducting this kind of work, and whether this has a fundamental influence on the transparency of results. Figure 5 gives an overview of the organization types of the major international rankings compilers: a mixture of private commercial organizations, government organizations and institutions themselves.

Figure 5. Organization types of major international rankings compilers

Ranking | Compiler/publisher | Organization type
Academic Ranking of World Universities | Shanghai Ranking Consultancy | Commercial
QS World University Rankings | QS Quacquarelli Symonds Ltd | Commercial
Professional Ranking of Global Higher Education Institutions | Mines ParisTech | Institution
Ranking Web of World Universities | Webometrics | State research institute
Performance Ranking of Scientific Papers of World Universities | Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT) | Government
Times Higher Education World University Rankings | Times Higher Education and Thomson Reuters | Commercial/Media
High Impact Universities | (Affiliated with) University of Western Australia | Institution
U-Multirank | CHERPA Alliance/European Commission | Government

Clearly there are a number of commercial and media organizations engaged in this activity, including QS. These organizations have arguably driven innovation more rapidly than a purely academic approach might have done. They are also arguably more independent, since institutions themselves and government organizations are likely to have, or to be perceived to have, a clear agenda to further the profile of their own institutions. QS is a business entirely grounded in higher education, and feels the pressure to be transparent and responsible – much more so than an institution, government agency or media organization.

Conclusion
In her closing remarks at the UNESCO Global Forum on Rankings and
Accountability in Higher Education, Stamenka Uvalic-Trumbic, the then Chief
of the UNESCO Section for Higher Education, laid out projections estimating
that global higher education will need to find space for almost 100 million
additional students by 2025. This is equivalent to opening a large compre-
hensive university every two weeks. In reality, the majority of that demand
will be met by increased capacity at existing universities and advancements
in online and distance learning.

Either way, there are some dramatically clear implications. For instance, government funding can only go so far in absorbing these additional students. More and more universities will increase their fees. Some countries where fees
have not previously been in evidence are beginning to introduce them. This
is a harsh economic reality. Introducing substantial financial liabilities to the
decision-making process will inevitably influence the behaviour of prospective
students. They will begin to look more like customers, demanding a certain
level of service, expecting a solid return on investment and reacting to good
deals. There will be increasing pressure for them to make the best possible
decision. They will require information that is easy to access and understand in
order to at least sift the options to a manageable level. Global league tables are
pioneering this on an international scale. A range of new tools in development
by QS and other providers will serve to further augment the picture currently
being painted by their results.

While many criticisms of league tables may be valid, there can be little question that they have been a key driver for transparency and accountability among institutions, and have paved the way for a rich and growing culture of performance evaluation in higher education. They also provide meaningful and useful input into student decision-making alongside other sources of information. With generations Y and Z increasingly demanding fast and convenient access to information, it seems inevitable that these mixed interpretations of university evaluation will converge over the next few years.

References

Eco, U. 2009. The Infinity of Lists: An Illustrated Essay. Paris: Rizzoli.

HEFCE (Higher Education Funding Council for England). 2008. Counting What is Measured or Measuring What Counts? League Tables and their Impact on Higher Education Institutions in England. London: HEFCE.

IREG (International Observatory on Academic Ranking and Excellence). 2011. IREG Ranking Audit (15 December 2011): www.ireg-observatory.org/index.php?option=com_content&task=view&id=187&Itemid=162 (Accessed 10 May 2012.)

Nobel Foundation. n.d.: http://www.nobelprize.org/nobel_prizes/ (Accessed 10 May 2012.)

OAG Aviation. n.d. OAG Aviation: www.oagaviation.com/ (Accessed 10 May 2012.)

OECD (Organisation for Economic Co-operation and Development). 2010. Education at a Glance. Paris: OECD.

QS Quacquarelli Symonds Ltd. n.d. QS Quacquarelli Symonds: www.qs.com (Accessed 10 May 2012.)

Sowter, B. 2011a. QS World University Rankings by Subject (April 2011): www.iu.qs.com/university-rankings/subject-tables/ (Accessed 5 May 2012.)

Sowter, B. 2011b. Financial Measures Tricky for International University Evaluations (2 June 2011): www.iu.qs.com/2011/06/02/financial-measures-tricky-for-international-university-rankings/ (Accessed 10 May 2012.)

Sowter, B. 2011c. QS Classifications (September 2011): www.iu.qs.com/university-rankings/qs-classifications/ (Accessed 10 May 2012.)

United Nations. 2004. World Population to 2300. New York: United Nations.

Part 2
Implications and Applications

Chapter 4
World-class universities or world-class systems? Rankings and higher education policy choices

Ellen Hazelkorn

In today’s world, it has become all too familiar for policy-makers and higher
education leaders to identify and define their ambitions and strategies in
terms of a favourable global ranking for their universities/university. But is
it always a good thing for a university to rise up the rankings and break into
the top 100? How much do we really know and understand about rank-
ings and what they measure? Do rankings raise standards by encouraging
competition or do they undermine the broader mission of universities to
provide education? Can rankings measure the quality of higher education?
Should students use rankings to help them choose where to study? Should
rankings be used to help decide education policies and the allocation of
scarce resources? Are rankings an appropriate guide for employers to use
when recruiting new employees? Should higher education policies aim to
develop world-class universities or to make the system world-class?

This chapter discusses the rising attention accorded to global rankings and
their implications for higher education. It is divided into five sections: the
first explores the growing importance accorded to rankings; the second
discusses what rankings measure; the third asks whether rankings measure
what counts; and the fourth reflects on how the use and abuse of rank-
ings is influencing policy choices. Finally, the fifth section addresses a key
policy question: should governments focus on building the capacity of a few
world-class universities or on the capacity of the higher education system
as a whole, in other words, building a world-class higher education system?

Growing attention to rankings


It is a common saying, but nonetheless true, that higher education is chang-
ing rapidly. There are probably four main drivers:

• First is the rapid creation of new knowledge and its application,
which has become a foundation for individual and social prosperity, be
it cultural or economic. People who complete a high-school education
tend to enjoy better health and quality of life than those who finish at
the minimum leaving age. Those completing a university degree can
look forward to a significantly greater gross earnings premium over their
lifetime compared with someone who only completes secondary school.
Graduates are also more likely to be engaged with their community
and participate in civil society. Successful societies are those with the capacity to ensure their citizens have the knowledge and skills to contribute to society throughout their lives, and in which new knowledge can be
developed and exploited for competitive and public advantage. Because
higher education institutions (HEIs) are the principal base for human
capital development, and new knowledge creation and dissemination,
investment and performance matters. For all these reasons, higher
education is now at the centre of policy-making.

• Second, the capacity to participate in 'world science' depends on the ability of countries to develop, attract and retain talent. But many countries face
demographic pressures. While the world population is increasing, the
population of more developed regions is dependent on net migration with
a converse impact on the developing world. Despite global population
growth, the availability of skilled labour is actually declining. In 2005, young
people represented 13.7 per cent of the population in developed countries
but their share is expected to fall to 10.5 per cent by 2050 (Bremner et
al., 2009, 2: 6). Together, these demographic dynamics present a major
challenge for all national strategies based on growing knowledge-intensive
industries. In response, governments around the world are introducing
policies to attract the most talented migrants and internationally mobile
students, especially postgraduate students in science and technology.

• Third, because higher education is considered an essential component of the productive economy, how higher education is governed and managed
has become a major policy issue. The quality of individual higher education institutions (HEIs) and of the system as a whole (e.g. teaching and learning excellence, research and knowledge creation, commercialization and knowledge transfer, graduate employability and academic productivity) provides a good indication of a country's ability to compete successfully
in the global economy. Accordingly, the trend for greater transparency
and accountability has been supplemented by an increasing need to
demonstrate value for money and (public) investor confidence.

• Fourth, students (and their parents) have become very savvy consumers,
especially as evidence continues to show that graduate outcomes and
lifestyle are strongly correlated with education qualifications and career
opportunities. Students are now much more focused on employability as
opposed to employment. They assess their choice of an institution and
education programmes as an opportunity-cost – balancing the cost of
tuition fee and/or cost of living and the career and salary opportunities. As
the traditional student market declines, competition for high achieving
students is rising. The balance of consumer power is shifting in favour of
discerning talented students.

In this environment, the arrival of higher education rankings is not surprising.
They may be perceived as an independent assessment of individual institutions,
meeting wider policy goals for greater transparency and accountability, and
assessing value for money and return on investment. Rankings are seen to pro-
vide a clue, for a wide range of stakeholders, about the quality of the educational
product. For students, they indicate the potential monetary or private benefits
that university attainment might provide vis-à-vis future occupation and salary
premium. For employers, they signal what can be expected from the graduates of
a particular HEI. For government and policy-makers they can indicate the level
of quality and international standards, and their impact on national economic
capacity and capability. For HEIs they provide a means to benchmark their own
performance. For the public, rankings provide valuable information about the
performance and productivity of HEIs in a simple and easily understood way.

National rankings have existed in many countries, most notably the United
States, for decades. Since 2003, with the publication of the Shanghai Jiao
Tong Academic Ranking of World Universities (ARWU), global rankings have
become very popular. Knowledge about and use of rankings has continued
apace in the aftermath of the 2008 Global Financial Crisis (GFC), reflecting
the realization that in a global knowledge economy, national pre-eminence is
no longer enough. Today, rankings exist in every part of the world. There are
eleven global rankings – albeit some are more popular than others (see Box 1).
Over sixty countries have introduced national rankings especially in emerging
economies (Hazelkorn, 2012b), and there are a number of regional, specialist
and professional rankings. While undergraduate, domestic students and their
parents were the initial target audience for rankings, today, they are used by
a myriad of stakeholders (e.g. governments and policy-makers; employers
and industrial partners; sponsors, philanthropists and private investors; aca-
demic partners and academic organizations; the media and public opinion).
Postgraduate students, especially those seeking to pursue a qualification in
another country, are the most common target audience and user.

Box 1. Main global rankings


• Academic Ranking of World Universities (ARWU) (Shanghai Jiao Tong University), 2003
• Webometrics (Spanish National Research Council), 2003
• World University Ranking (Times Higher Education/Quacquarelli Symonds), 2004–2009
• Performance Ranking of Scientific Papers for Research Universities (HEEACT), 2007
• Leiden Ranking (Centre for Science and Technology Studies, University of Leiden), 2008
• World’s Best Colleges and Universities (US News and World Report), 2008
• SCImago Institutional Rankings, 2009
• Global University Rankings, RatER (Rating of Educational Resources, Russia), 2009
• Top University Rankings (Quacquarelli Symonds), 2010
• World University Ranking (Times Higher Education/Thomson Reuters [THE-TR]), 2010
• U-Multirank (European Commission) 2011
Note: Date indicates date of origin.

What do rankings measure?
Rankings compare different HEIs using a range of indicators to measure dif-
ferent aspects of higher education (see Part I of this book). The choice of
indicators is decided by the promoters of each system, with each indicator
acting as a proxy for the real object. This is because there is often no direct
measurement; for example, there is no agreed way to measure the quality
of teaching and learning. Each indicator is considered independently, while
in reality there is an interactive element to them or at least collinearity; for
example, older well-endowed private universities are more likely to have
better faculty/student ratios and per student expenditure compared with
newer public institutions or institutions in developing countries. Each indi-
cator is also assigned a weight or percentage of the total score, with research
usually assigned the highest weight. A final score is aggregated to a single
digit and ranked sequentially. Rankings usually concentrate on whole insti-
tutions, although there is an increasing focus on sub-institutional rankings
at the field of science level (e.g. natural science, mathematics, engineering,
computer science, social sciences) or by discipline or profession (e.g. busi-
ness, law, medicine, graduate schools, etc.).
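A small worked example may help to show how such incommensurable proxies end up as a single figure. In the sketch below, raw proxy values measured in quite different units (citations per faculty member, students per member of staff, spending per student) are standardized against the group mean and standard deviation, given a direction (more citations treated as better, more students per staff member as worse), and then weighted and summed. The institutions, values and weights are entirely invented and do not reproduce any actual ranking's methodology.

```python
# Illustrative standardization and aggregation of proxy indicators.
# Institutions, raw values and weights are hypothetical.

from statistics import mean, stdev

raw = {
    "University X": {"citations_per_faculty": 12.0, "students_per_staff": 11.0, "spend_per_student": 42_000},
    "University Y": {"citations_per_faculty": 7.5,  "students_per_staff": 16.0, "spend_per_student": 28_000},
    "University Z": {"citations_per_faculty": 9.0,  "students_per_staff": 9.0,  "spend_per_student": 35_000},
}

weights = {"citations_per_faculty": 0.5, "students_per_staff": 0.2, "spend_per_student": 0.3}
direction = {"citations_per_faculty": 1, "students_per_staff": -1, "spend_per_student": 1}

def standardized(indicator: str) -> dict:
    """Convert one indicator's raw values into signed z-scores."""
    values = [v[indicator] for v in raw.values()]
    mu, sigma = mean(values), stdev(values)
    return {name: direction[indicator] * (v[indicator] - mu) / sigma for name, v in raw.items()}

z = {indicator: standardized(indicator) for indicator in weights}
overall = {name: sum(weights[i] * z[i][name] for i in weights) for name in raw}

# Collapse the multidimensional profile into one number per institution and rank it.
for rank, (name, score) in enumerate(sorted(overall.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(rank, name, round(score, 2))
```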

Regardless of ranking system, there has been considerable criticism of the methodology, the choice of indicators and weightings, the quality of
the data and its reliability as an international or institutional comparator
of performance, and whether it is possible to measure and compare
complex and diverse HEIs possessing different missions and contexts (see
Dill and Soo, 2005; Rauhvargers, 2011; Sadlak and Liu, 2007a; Saisana and
D’Hombres 2008; Usher and Medow, 2009; Usher and Savino, 2006; Usher
and Savino, 2007). Over the years, and in response to commentary and
analysis, various changes to the methodology have been made, but the
overarching criticisms remain.

Rankings use information from four main sources: independent third parties, such as government databases; bibliometric and citation data
gathered through proprietary, electronic or web-based sources; institu-
tional data; and student, peer, employer or other stakeholder surveys.
The absence of internationally meaningful and available data continues
to present a considerable problem for any reliable comparisons. Similarly,
the lack of consistency in data definition, sets, collection and reporting
makes it difficult to make simple and easy comparisons across jurisdic-
tions and between different rankings. National rankings are usually able
to capture data across a wide range of dimensions, while global rankings are inevitably more narrowly circumscribed. Peer or stakeholder surveys were issued in only a few languages until recently; however, THE-TR has now
expanded to nine languages. Webometrics measures the size and qual-
ity of university internet presence, but this can disadvantage developing
countries with poor internet connectivity.

The data sources are also susceptible to bias, self-perpetuating views of quality and allegations of 'gaming' – or manipulating the data in order
to influence the outcome. To get around these problems, measurements
usually consist of proxies. For example, research data is used to measure
academic quality; student entry levels or student selectivity gauge institutional selectivity; the faculty/student ratio measures educational quality;
and an institution’s budget measures the quality of the infrastructure
(e.g. the buildings and laboratories). In addition, different rankings assign
different weightings to the indicators, and thus a HEI’s position can
change considerably depending upon the weight ascribed to the particular
criteria. Aggregating the scores into a final rank ignores the fact that some institutions might score higher in some domains than in others, or vice versa.
This can lead to inconsistency across different rankings but also highlights
the arbitrariness of the weightings.

Rankings focus disproportionately on research. This is partly because research data are widely available, but more importantly it reflects the view
that research is the most important indicator of higher education quality.
Research is assessed on the basis of bibliometric and citation data usually
provided by Thomson Reuter’s Web of Science or Elsevier’s Scopus. However,
these data are most accurate only for bio- and medical sciences research;
they are less reliable for the arts, humanities and social science disciplines.
By focusing on research output as the primary measure of higher education
quality and productivity, rankings ignore the full breadth of higher education
activity, including: teaching and learning, the quality of the student experi-
ence or the ‘added value’ a HEI contributes to a student’s learning over and
beyond the student’s entry level. No attention is given to the social and
economic impact of knowledge and technology transfer, or the contribution
of regional or civic engagement or ‘third mission’ activities to communi-
ties and student learning outcomes – despite these aspects being a major
policy objective for many governments and the mission focus for many HEIs.
Nonetheless, research accounts for 100 per cent of the marks of the ARWU
compared with 62.5 per cent for THE-TR and 20 per cent for QS. ARWU also
collects information on publications in Nature or Science, although it is not clear
why these two journals have been singled out for such attention. Table 1
below provides a simple comparison of what rankings measure and what
they do not measure.

Table 1. What rankings measure

Rankings measure:
• Bio- and medical science research
• Publication in Nature and Science
• Student and faculty characteristics (e.g. productivity, entry criteria, faculty/student ratio)
• Internationalization
• Reputation – among peers, employers, students

Rankings do not measure:
• Teaching and learning, including 'added value', the impact of research on teaching
• Arts, Humanities and Social Science research
• Technology/knowledge transfer or impact and benefit of research
• Regional or civic engagement
• Student experience

Despite the huge diversity in national context and institutional missions, existing rankings compare complex HEIs using a common set of indicators.
Nonetheless, the results of major global rankings are often similar. According
to Usher and Medow (2009: 13), this commonality arises from the fact that
rankings measure socio-economic advantage and the benefits of age, size
and money, which help large institutions and countries. They attach greatest
importance to HEIs that are roughly 200 years old with approximately 25,000
students and 2,500 faculty, and an annual budget of around €2 billion plus
considerable endowment earnings (Sadlak and Liu, 2007b; Usher, 2006). These
HEIs operate highly selective entry criteria for students and faculty. Accordingly,
they have been able to amass significant competitive advantage. Of the world’s
more than 16,000 HEIs, research performance is concentrated in the top 500
and is virtually undetectable (on that index) beyond 2,000. Because age and
size matters, there is a super-league of approximately twenty-five universities,
usually with medical schools and in English-language countries, which tend to
dominate the top strata of all rankings (Sheil, 2009).

There are over 16,000 HEIs worldwide, according to the International Association of Universities (IAU). However, rankings generally publish data for
only a fraction of this number with some exceptions (e.g. QS publishes data
for 700, and Webometrics for over 2,000 HEIs). Nonetheless, statements by
politicians and policy-makers, university leaders, other HE stakeholders and
the media regularly focus on the achievements of the top 100. This represents
less than 1 per cent of the world’s higher education institutions.

Do rankings measure what counts?


Considerable attention has been paid to commenting on what rankings
measure and identifying methodological flaws. However, the key question
is: do rankings measure what counts or, to paraphrase Einstein, do they simply count what is easily measured? Because rankings, like other
performance indicators, can incentivize opinions, decisions and behaviour,
it is important to understand more fully what is measured and the possible
perverse incentives or unintended consequences that can be encouraged
by their usage (see Martin and Sauvageot, 2011). The following discussion
briefly examines six different dimensions (see Table 2; fuller discussion in
Hazelkorn, 2011a, chap. 2).

Table 2. Summary of advantages and disadvantages of commonly used indicators

Student entry levels
• Advantages: strong correlation between academic tests and future achievement, especially for literacy and mathematics.
• Disadvantages: no statistically significant relationship between 'learning and cognitive growth' and admissions selectivity.

Faculty/student ratio
• Advantages: assesses 'commitment to teaching'; a smaller ratio creates a better learning environment.
• Disadvantages: quality depends on interaction among many factors (e.g. faculty, pedagogy, laboratories and other facilities).

Resources
• Advantages: correlation between budget and quality of learning environment, programme choice and services.
• Disadvantages: no direct correlation between budget and usage, or between value, cost and efficiency.

Student satisfaction
• Advantages: used to understand quality of learning environment.
• Disadvantages: useful to help improve performance, but difficult to use for comparisons or ranking.

Education outputs
• Advantages: completion, graduation and employability measure educational success and failure; links education with careers, salaries and lifestyle.
• Disadvantages: lower socio-economic and ethnically disadvantaged groups or mature students can have different study patterns; employability and salary are linked to market forces.

Research
• Advantages: measures research and scholarly activity, impact and faculty productivity.
• Disadvantages: bibliometric and citation practices are inaccurate measures of research activity.

Reputation
• Advantages: value and regard as measured by academic peers or key stakeholders.
• Disadvantages: subject to rater bias, halo effect and 'gaming'.

Source: adapted from Hazelkorn (2011a: 60).

Measuring student entry

Many national rankings, such as the US News and World Report Best College
rankings (USN&WR), measure student entry levels on the basis that high
entry scores are a proxy for academic quality. This is based on the view
that student grades can be used to predict future achievement, and hence,
more high-achieving students equate with higher quality. But as Hawkins
(2008) says, ‘many colleges recruit great students and then graduate
great students [but is] that because of the institution, or the students?’
International evidence repeatedly shows that student-learning outcomes are attributable to many factors that influence prior learning. Kuh and
Pascarella (2004: 56) warn that failure to control for student pre-college
characteristics can lead to the conclusion that differences in reported stu-
dent experiences are institutional effects when, in fact, they may simply
be the result of differences in the characteristics of the students enrolled
at the different institutions. The US National Study of Student Learning
(NSSL) and the National Survey of Student Engagement (NSSE) ‘found no
statistically significant relationship between effective teaching practices
and admissions selectivity’ (Carey, 2006a) To get a more accurate picture
of the quality of teaching and learning, it would be better to assess ‘value
added’ – in other words, what an institution has contributed to a student’s
knowledge and skills rather than measuring students at entry. Ultimately,
entry scores simply reflect socio-economic advantage.

Measuring faculty/student ratio

Because measuring the quality of teaching and learning is highly complex, rankings such as the THE-QS, QS and U-Multirank use faculty/student ratio
as a proxy for teaching quality. A smaller ratio is viewed as equivalent to
better teaching on the basis that small classes create the optimum learning
environment. This is an issue of discussion at primary and secondary level,
but even here the OECD (2010: 72) has warned that: ‘While smaller classes
are often perceived as enabling a higher quality of education, evidence on
the impact of class size on student performance is mixed.’ Education quality
is influenced by the whole learning environment; for example, the balance
of quality across academics, seminars, laboratories, tutorials, and so on, as
well as different pedagogical formats and learning resources. If a university

hired full-time lecturers, at lower salaries, to do more of its undergraduate teaching and devoted the resources that it saved from doing so to increasing the average salaries of its tenure-track faculty would…
its students be disadvantaged by having a smaller share of their classes
taught by tenure and tenure-track faculty? (Ehrenberg, 2005: 32)

The faculty/student ratio also has very different meanings for public and private institutions and systems, and may say more about funding or efficiency levels. Class size in and of itself can be a hollow indicator, especially when used to measure the learning environment for high-achieving students. Ultimately, so simple an indicator does not tell us very much about what effect the faculty/student ratio has on actual teaching quality or the student experience (Brittingham, 2011).

Measuring resources

The level of expenditure or resources is often used as a proxy for the quality
of the learning environment. This is captured, inter alia, by the total amount
of the HEI budget or by the size of the library collection. USN&WR says that
‘generous per-student spending indicates that a college can offer a wide
variety of programmes and services’ (US News Staff, 2010); this is sometimes
interpreted as expenditure per student. For example, Aghion et al. (2007)
argue that there is a strong positive correlation between the university
budget per student and its research performance as demonstrated in the
ARWU ranking. However, many HEIs are competing on the basis of substan-
tial resources spent on dormitories, sports and leisure facilities, and so on; it
is not clear what impact these developments – worthy as they are – have on
the actual quality of the educational or learning experience. This approach
can also penalize ‘institutions that attempt to hold down their expenditures’
(Ehrenberg, 2005: 33) and it provides ‘little or no information about how
often and how beneficially students use these resources’ (Webster, 1986: 152).
For example, the costs associated with building a new library can be very significant for a developing country or a new HEI, and many institutions have switched to electronic access or to sharing resources with neighbouring institutions. There is a danger that looking simply at the budget ignores the
question of value vs. cost vs. efficiency (Badescu, 2010), and that the indica-
tor is essentially a measure of wealth (Carey, 2006b). Indeed, while many
policy observers look to the US, ‘if value for money is the most important
consideration, especially in an age of austerity, the American model might
well be the last one... [to] be emulating’ (Hotson, 2011).

Measuring education outputs

In recent years, performance and quality assessment have shifted from focusing on input factors to looking at outputs and outcomes. Rather
than simply comparing the number of students in a particular HEI or the
number of students entering the first year of a programme, emphasis
has turned increasingly to looking at successful completion or gradua-
tion rates, as determined by the appropriate timeframe (e.g. a BA degree
is usually completed in three/four years, a Master's in one/two years and a PhD in three/four years). Employability is also a focus of increasing attention. There is little doubt that these are critical issues, as they place a responsibility on HEIs to ensure that students successfully complete their
programme of study within a reasonable timeframe and can find sustain-
able employment afterwards.

But, as mentioned above, educational performance is influenced by myriad factors. This method may be disadvantageous to lower socio-
economic and ethnically disadvantaged groups or mature students whose
life or family circumstances disturb normal study patterns. These students
often take longer to complete, as they may need to work to supplement
their income or look after family or domestic matters. While HEIs that
seek to serve this particular student cohort can become dis-incentivized
by such indicators (Jones, 2009), institutions which serve a large num-
ber of wealthy students can win the numbers game when graduation
and retention rates are reported as averages among the entire student
body. Employability can be a reflection of wider economic factors, and not
necessarily a measure of educational quality. The US National Governors
Association Centre for Best Practice has cautioned against relying upon
methodologies that can inadvertently ‘exclude far too many students and
track too few student milestones’:

The most commonly used measure for public higher education funding formulas is total student enrolment. This measure creates
no incentive to see students through to completion… Alternatively,
strict graduation rate formulas can penalize schools that serve
disadvantaged students because these schools will inevitably have
lower graduation rates. Moreover, a singular emphasis on gradu-
ation can discourage open-enrolment policies, because skimming
top students will improve institutional performance despite exclud-
ing students who may benefit most from postsecondary education.
Graduation rate funding formulas may also pressure schools to
lower their graduation standards if they are desperate for funds
and are not meeting graduation targets (Limm, 2009).

Measuring research

Counting academic publications and citations is the most common method to assess academic work; the former measures productivity and the latter
measures quality. Rankings rely heavily upon Thomson Reuters and Scopus,
which collect publication and citation data for approximately 9,000 jour-
nal articles in Web of Science and 18,000 in Scopus, respectively. The main
beneficiaries of this practice are the bio- and medical sciences because
these disciplines publish frequently with multiple authors. In contrast, the
social sciences and humanities usually have single authors and publish
in a wide range of formats (e.g. monographs, policy reports, translations
and so on), whereas the arts produce major art works, compositions and media productions, and engineering produces conference proceedings and
prototypes. These latter outputs, in addition to electronic formats or open
source publications, are ignored by traditional bibliometric methods.

Bibliometric practices also disproportionately reward research that is published in English-language, international, peer-reviewed journals.
Although English is the lingua franca of business and the academy, it can
be an inhibitor. English-language articles and countries, which publish the
largest number of English-language journals, tend to benefit the most. It
also disadvantages the social sciences and humanities, which often address issues of national relevance and publish in the national language, though it can equally affect the sciences (e.g. environmental or agricultural science) for similar reasons.

Disparity across disciplines and world regions is further reflected in citation practices. Authors are most likely to reference other authors whom they
know or who are from their own country. Given an intrinsic tendency to
reference national colleagues or English-language publications, the reputa-
tional or halo factor means that certain authors are more likely to be quoted
than others. Altbach (2006) claims that non-English language research is
published and cited less often because researchers from US universities
tend to cite colleagues they know. It is also easier, says Altbach (2012: 29;
also Jones, 2009), ‘for native English speakers to get access to the top jour-
nals and publishers and to join the informal networks that establish the
pecking order in most scientific disciplines’. This may occur because of the
significance of their work or because of informal networks. This can affect
reputational surveys, which have become the chosen methodology of both the new QS and THE-TR rankings and assign weightings of 50 per cent and 33 per cent, respectively (THE-TR also publishes a reputation ranking). Because detailed
familiarity with a country or institution may in reality be imperfect, peer
reviewers ‘tend to rank high those departments of the same type, and with
the same emphases, as their own universities’ (Webster 1986: 44) or those
with whom they are most familiar (Hazelkorn, 2011a: 74–77). The pool of
peers has tended to be disproportionately weighted in favour of Anglophone countries and, while changes have been made to the peer selection process, participation levels remain limited (Usher, 2012).

There are other more consequential problems that arise from this method.
By focusing only on peer-reviewed articles in particular journals, it assumes
that journal quality is equivalent to article quality. Articles may be quoted
because of errors, not necessarily because of a breakthrough. This has led to
the controversial practice of ranking academic journals (Hazelkorn, 2011b).

Peer review, which is the cornerstone of academic practice, can also be a
conservative influence; new research fields, interdisciplinary research or
ideas that challenge orthodoxy can find it difficult to get published or be
published in high-impact journals.

Furthermore, using citations to measure 'impact' suggests that the relevance and benefit of research is simply a phenomenon of the academy, thereby ignoring the wider social and economic value of publicly funded research and innovation. In so doing, the full spectrum from knowledge creation to technology and knowledge transfer and exchange – across all disciplines – is ignored. Moreover, depending on the research project or the discipline, research findings and analysis may be published in a wide variety of formats or as prototypes, and their impact and benefit may be felt far beyond the academy. Table 3 contrasts what is captured by traditional bibliometric and citation practice with what is ignored.

Table 3. Indicative list of research output and impact

Captured by traditional bibliometric and citation practice:
• Output: journal articles
• Impact: peer esteem

Ignored by traditional bibliometric and citation practice:
• Outputs: book chapters; computer software and databases; conference publications; editing of major works; legal cases, maps; major works in production or exhibition and/or award-winning design; patents or plant breeding rights; policy documents or briefs; research or technical reports; technical drawings, designs or working models; translations; visual recordings
• Impacts: impact on teaching; improved productivity, reduced costs; improvement of environment and lifestyle; improving people's health and quality of life; increased employment; informed public debate; new approaches to social issues; new curricula; patents, licences; policy change; social innovation; stakeholder esteem; stimulating creativity

Measuring reputation

To assess how prominent stakeholders view individual HEIs, rankings often use reputational surveys of academic peers, students or industry stake-
holders. They usually ask respondents to identify the best universities either
from memory or from a pre-selected list. This method has led to the opinion
that reputational surveys are prone to being subjective, self-referential and
self-perpetuating (Rauhvargers, 2011: 65). They benefit older institutions
in developed countries and global cities with which there is an easy identification. Peer judgements may 'say little or nothing about the quality
of instruction, the degree of civility or humaneness, the degree to which
scholarly excitement is nurtured by student-faculty interaction, and so on’
(Lawrence and Green, 1980: 13). Over-estimation of a university ‘may be
related to good performance in the past, whereas underestimation may be
a problem for new institutions without long traditions’ (Becher and Trowler,
2001). Van Raan (2007: 95) similarly acknowledges that

Institutions with established reputations are strong in maintaining their position, for they simply have the best possibilities to attract the
best people, and this mechanism provides these renowned institu-
tions with a cumulative advantage to further reinforce their research
performance.

The real question is: can university presidents or any other stakeholders know enough about a wide range of other institutions around the world to score them fairly? In other words, rankings are a self-replicating
mechanism that reinforces the position of already known universities, rather
than those that are excellent.

In summary, there is no such thing as an objective ranking. The choice of


indicators and weightings assigned to them reflect the value judgements
or priorities of the different ranking organizations. More importantly, the
measurements are rarely direct but consist of proxies, either because the
issue is very complex or because there are no available data. Hence, the
evidence is never self-evident and does not reflect an incontestable truth.
Rather, rankings measure what is easy and predictable, and concentrate
on past performance, which benefits older HEIs at the expense of new
institutions. Quantification is used as a proxy for quality. Given all these
shortcomings, it should not be surprising that rankings cannot reliably
measure the quality of education.

Policy choices
Since the arrival of global rankings, it is not uncommon for governments
to gauge national global competitiveness and positioning within the world-
order in terms of the rank of their universities, or to attribute national ambi-
tions to a position in the rankings. The ongoing global economic crisis has
further highlighted the importance of ‘academic capital’ and investment as
critical indicators of competitiveness and global success. These developments
have sparked a debate about the need for higher education reform. Because
the price tag for achieving world-class status is so high, many governments
and HEIs are questioning their commitment to mass higher education as
funding comes under strain; others are concerned their universities may not
be elite or selective enough:

We want the best universities in the world.… How many universities


do we have? 83? We’re not going to divide the money by 83
(Nicolas Sarkozy, President, France, 2009).

The Higher Education Endowment Fund… [will] support the emergence


of world-class institutions;… We are trying to leapfrog universities
above the norm
(Julie Bishop, Federal Education, Science and Training Minister,
Australia, 2007).

Work [is underway] on establishing the country’s first ‘research-inten-


sive’ university… universities which earned a place in the top 500 rank-
ings… were entitled to financial support
(Jurin Laksanavisit, Education Minister, Thailand, 2009).

The price tag to get one Nigerian university into the global top 200 is
put at NGN 5.7 billion [€31 m] annually for at least ten years
(National Universities Commission, Nigeria).

Many governments have embarked on significant restructuring of their


higher education and research systems.

The world-class university has become the panacea for ensuring success
in the global economy, based on the characteristics of the top 20, 50 or
100 globally ranked universities. China, Finland, France, Germany, India,
Japan, Latvia, Malaysia, Russia, Singapore, South Korea, Spain, Taiwan
and Viet Nam – among many other countries – have launched initia-
tives to create world-class universities. Individual US states (e.g. Texas
and Kentucky) have similarly sought to build or boost flagship universi-
ties, elevating them to what is known as Tier One status, a reference
to USN&WR College Rankings. In contrast, countries such as Australia,
Ireland and Norway are emphasizing the importance of the system being
‘world class’.

There are two basic policy models.

1. The Neo-liberal model seeks to concentrate resources in a small number of


elite or world class universities. This is often referred to as the ‘Harvard-here’
model because it aims to replicate the experience of Harvard University or
the Ivy League (see Figure 1). This is to be achieved by encouraging greater
vertical or hierarchical (reputational) differentiation between HEIs, with
greater distinction between research (elite) universities and teaching
(mass) HEIs. Resource allocation may be linked to institutional profiling or
other classification tools informed by rankings.

Figure 1. The ‘Harvard here’ model

[Diagram: fields of study (Fields 1–4) run across the top and levels of provision
run down the side (PhDs and research intensive; Masters and some research;
Baccalaureates and scholarship; Diplomas and extension services). A single
institution (A1) sits at the PhD/research-intensive level, two (B1 and B2) at
Masters level, four (C1–C4) at Baccalaureate level and five (D1–D5) at the
diploma and extension level.]

Source: Gavin Moodie, pers. comm. 7 June 2009.

2. The Social-democratic model seeks to balance excellence and equity by


supporting the development of a world-class system of higher educa-
tion across a country. This is to be achieved by strengthening horizontal
(mission or functional) differentiation across a diverse portfolio of high-
performing HEIs, some of which may be globally or regionally focused.
Emphasis is on supporting ‘excellence’ wherever it occurs by encourag-
ing HEIs to each specialize in specific disciplines or knowledge domain
according to their expertise, competence, demand and/or mission (see
Figure 2). There is a strong emphasis on a close correlation between
teaching and research, and knowledge production, commercialization
and dissemination as components of an integrated process. Institutional
compacts or strategic dialogues may be used as a policy tool to enforce
mission specialization and differentiation.

Figure 2. Field or mission specialization model

[Diagram: fields of study (Fields 1–10) run across the top and the same levels
of provision run down the side (PhDs and research intensive; Masters and some
research; Baccalaureates and scholarship; Diplomas and extension services).
Five institutions (Institutions 1–5) each cover the full range of provision,
from diplomas to research-intensive PhDs, but specialize in particular fields.]

Source: Gavin Moodie, pers. comm. 7 June 2009.

Rankings have also had an influence on other aspects of government policy.


Some governments, such as the Czech Republic, Jordan, Macedonia and
Romania, are using rankings to help assess and/or classify HEIs within their
own countries. Article 159 of the Macedonia Law on Higher Education (2008)
grants automatic recognition to graduates of universities in the top 500 of the
THE-QS, ARWU or USN&WR rankings, without requiring them to go through a more
complex recognition process. Brazil, Chile, Kazakhstan, Mongolia, Qatar, Singapore and Saudi Arabia,
to name a few, restrict government scholarships for international study to
students admitted to top ranking universities (Salmi and Saroyan 2007);
Singapore’s Foreign Specialist Institute has similar criteria for institutional
collaboration. Dutch (2008) and Danish (2011) immigration laws grant special
recognition to foreign graduates of top-ranked universities (the top 150 and
top 20 respectively). Finally, several US states benchmark academic salaries
(Florida and Arizona) or fold rankings into performance measurement systems
(Indiana, Minnesota and Texas).

World-class universities or world-class


systems?
Rankings are influencing our perceptions of and decisions about higher edu-
cation policy in two major ways:

1. Rankings have highlighted the importance of quality and striving for


excellence in a competitive world. As a result, international or cross-
jurisdictional comparisons are likely to remain a constant feature of a
globalized world. As the Australian Federal Minister for Innovation,
Industry, Science and Research put it succinctly, it ‘isn’t enough to
just go around telling ourselves how good we are – we need to measure
ourselves objectively against the world’s best’ (Carr, 2009). Thus, rankings
have influenced the way we think about higher education, and have
raised our collective consciousness about the necessity for greater public
accountability and transparency and for demonstrating value for money
and return on public investment.

2. Rankings have highlighted the importance of investment in higher


education as a key factor determining sustainable social and economic
development in the knowledge economy. In the twenty-first century, the
capacity to compete globally is determined by the calibre of the higher
education system, its graduates and its contribution to ‘world science’;
talent and knowledge creation are the new oil. The indicators measure
attributes of socio-economic advantage, age and wealth; the results are
presented as a ‘league table’ or ‘academic world order’ which, in turn,
is used for global positioning and branding in order to attract capital,
talent and tourism. This is putting pressure on governments to increase
or at least maintain investment in higher education in order to ensure
national competitiveness.

Given this effect, many governments use rankings, inter alia, to classify and
accredit HEIs, allocate resources, drive change, assess student learning and
learning outcomes and/or evaluate faculty performance and productivity,
at the national and institutional level. They are used as an accountability or
transparency tool, especially in societies and institutions where this culture
and these practices are weak or immature.

Many myths are promulgated about the value of rankings for policy-making
or strategic decision-making. But rankings should be used cautiously – and
only as part of an overall quality assurance and assessment or benchmarking
system and not as a stand-alone evaluation tool. Four examples will suffice:

1. Rankings provide useful comparative information. It is often argued that


rankings provide useful comparative information about university per-
formance which facilitates student choice and policy-making. But HEIs
are complex organizations, providing education from undergraduate to
PhD level, conducting research, participating in outreach initiatives, and
being a source of innovation and entrepreneurship. For many countries,
they are a critical engine of nation-building, a regional, national and
global gateway attracting highly skilled talent and investment, actively
engaging with a diverse range of stakeholders through knowledge and
technology transfer, and underpinning the global competitiveness of
nations and regions… As a group, they sit within vastly different national
contexts, underpinned by different value systems, meeting the needs
of demographically, ethnically and culturally diverse populations, and
responding to complex and challenging political-economic environments
(Hazelkorn, 2011a: 78).

Publicly funded, private not-for-profit and for-profit HEIs operate in very


different financial circumstances, and with different levels of governance
and financial autonomy. There is wide variation in the students served by
these institutions. It is difficult to compare institutions – or indeed
academic departments – across different national contexts, or to measure
quality through quantification alone. But this is what rankings
purport to do.

2. Rankings provide good measures for research. Despite criticism about the
disproportionate focus on research, the choice of indicators is usually
considered meaningful or ‘plausible’. However, as discussed above, the
data primarily reflect basic research in the bio- and medical sciences.
As a consequence, some disciplines are valued as more important
than others, and research’s contribution to society and the economy
is seen primarily as something which occurs only within the academy.
In this way, rankings misrepresent the breadth and dynamism of the
research-innovation process and higher education’s role as part of the
innovation eco-system – what the European Union calls the ‘knowledge
triangle’ of education/learning, research/discovery and innovation/
engagement. This narrow conceptualization of research is helping to
drive a wedge between teaching and research at a time when policy-
makers and educators advocate the need for more research-informed
teaching (Hazelkorn, 2009).

3. Concentrating resources in a few world-class universities. There is a
strongly held international view that the policy priority should be to
concentrate resources in a few elite universities in order to ‘lift all boats’,
using a metaphor often associated with economic growth. This view
is based on the assumption that high-ranked HEIs are better quality
institutions than those which are either lower ranked or not ranked.
However, while top-ranked universities may produce the majority of
all peer-reviewed papers, those who publish in refereed journals do
not necessarily have the application of their knowledge as an objec-
tive. Nor is it obvious that this kind of investment will create sufficient,
patentable or transferable knowledge that can be exploited and used
by society. Concentrating research in a few institutions could reduce
the overall national research capacity with perverse ‘knock-on con-
sequences for regional economic performance and the capacity for
technology innovation’ (Adams and Gurney, 2010; Lambert, 2003: 6).
Furthermore, there is no evidence that more concentrated national
systems generate higher citation impact than those in which output
is more evenly distributed, because concentration is most relevant in
only four disciplines of ‘big science’: biological sciences, clinical medi-
cine, molecular biology/biochemistry and physics (Moed, 2006). The
key factor underpinning improved national research performance and
competitiveness is consistent investment.

4. Rankings measure quality. Most (global) rankings primarily measure


research, which is widely interpreted as being equivalent to educa-
tion quality. This has led to much confusion. The choice of indicators is
based on the opinion and values of the different ranking organizations,
influenced to a great extent by the available data. But the indicators
do not and cannot measure how good the teaching is, how well students
learn, or whether the facilities and resources are actually used by the students.
They take no account of how well a HEI fulfils its mission or contributes
to society. ‘Which university is best’ can be asked differently depending
upon who is asking the question, which question is being asked and for
what purpose. Is the user a student choosing a college/university in his/
her own country or abroad or a government seeking to make decisions
about resource allocation?

It is time to look at alternatives (see Hazelkorn, 2012a). Rankings encourage


us to emulate the achievements of a few elite ‘world-class universities’ as the
panacea for success in today’s competitive world. An alternative approach
says that what matters for sustainable social and economic prosperity is how
governments balance the needs of all their citizens by creating a ‘world-class
system’, characterized by:

• having a coherent portfolio of horizontally differentiated high-performing


and actively engaged institutions – providing a breadth of educational,
research and student experiences;
• having open and competitive education, offering the widest opportunity to the
greatest number of students;
• developing knowledge and skills that citizens need to contribute to society
throughout their lives, while attracting international talent;

• producing graduates able to succeed in the labour market, fuelling and
sustaining personal, social and economic development, and underpinning
civil society; and
• operating successfully in the global market, being international in
perspective and responsive to change.

A whole-of-system benchmarking methodology, using a sophisticated set of


quantitative and qualitative accountability and transparency instruments,
provides a better way to assess and ensure quality (see Salmi, 2012).
This method can be used to (i) highlight and accord parity of esteem to
diverse institutional profiles to facilitate public comparability, democratic
decision-making and institutional benchmarking; (ii) identify what matters
and assess those aspects of higher education, including improvements in
performance not simply absolute performance; and (iii) enable diverse
users and stakeholders to design fit-for-purpose indicators and scenarios
customized to individual stakeholder requirements – but this does make
international comparison difficult. Because any assessment system can
incentivize institutional and individual behaviour, it is vital that the choice
of indicators recognize, support and reward the full spectrum of higher
education endeavours across education/learning, research/discovery
and innovation/engagement. To be meaningful, comparisons should be
conducted at regular intervals. Critically, the collection and control of
data and verification of the processes should not be the remit of private/
commercial providers or self-appointed auditors; UNESCO might see this
as a useful role for itself, perhaps in collaboration with other international
organizations.

Rankings are only one form of comparison; they are popular today because of
their simplicity. However, their indicators of success are misleading. Rather
than using definitions of excellence designed by others for other purposes,
what matters most is whether HEIs fulfil the purpose and functions that
governments and society want them to fulfil.

References

Adams, J. and Gurney, K. 2010. Funding Selectivity, Concentration and Excellence – How Good is the
UK’s Research? London: Higher Education Policy Institute: www.hepi.ac.uk/455-1793/
Funding-selectivity-concentration-and-excellence – how-good-is-the-UK’s-research.html
(Accessed 7 June 2010.)

Aghion, P., Dewatripont, M., Hoxby, C., Mas-Colell, A. and Sapir, A. 2007. Why reform Europe’s
universities? Bruegel Policy Brief, September (4): www.bruegel.org/uploads/tx_
btbbreugel/pbf_040907_universities.pdf (Accessed 30 April 2010.)

Altbach, P.G. 2006. The dilemmas of ranking. International Higher Education, 42: www.bc.edu/
bc_org/avp/soe/cihe/newsletter/Number42/p2_Altbach.htm (Accessed 4 April 2010.)

Altbach, P.G. 2012. The globalization of college and university rankings. Change, January/
February: 26–31.

Badescu, M. 2010. Measuring Investment Efficiency in Education, EU Directorate-General Joint


Research Centre, Institute for the Protection and Security of the Citizen: http://
crell.jrc.ec.europa.eu/Publications/CRELL%20Research%20Papers/EUR%2022304_
Measuring%20Investment%20Efficiency%20in%20Education.pdf
(Accessed 10 April 2010.)

Becher, T. and Trowler, P.R. 2001. Academic Tribes and Territories (2nd edn). Buckingham, UK:
SRHE/Open University Press.

Bremner, J., Haub, C., Lee, M., Mather, M. and Zuehlke, E. 2009. World Population Prospects: Key
Findings from PRB’s 2009 World Population Data Sheet. Washington DC: Population
Reference Bureau, United Nations: www.prb.org/pdf09/64.3highlights.pdf
(Accessed 26 April 2010.)

Brittingham, B. 2011. Global Competition in Higher Education: Student Learning Outcomes/


Measurement and Quality. Presentation at the 2nd International Exhibition and
Conference on Higher Education, Riyadh, Saudi Arabia, 19–22 April 2011.

Carey, K. 2006a. Are our students learning? Washington Monthly, September,


www.washingtonmonthly.com/features/2006/0609.carey.html (Accessed 27 March 2010.)

Carey, K. 2006b. College Rankings Reformed: The Case for a New Order in Higher Education,
Education Sector Reports, September.

Carr, K. 2009. Speech by Federal Minister for Innovation, Industry, Science and Research to
Universities Australia Higher Education Conference, Australia: http://minister.innovation.
gov.au/Carr/Pages/UNIVERSITIESAUSTRALIAHIGHEREDUCATIONCONFERENCE2009.
aspx (Accessed 30 March 2010.)

Dill, D.D. and Soo, M. 2005. Academic quality, league tables and public policy: a cross-national
analysis of university ranking systems. Higher Education, 49(4): 495–537.

Ehrenberg, R.G. 2005. Method or madness? Inside the U.S. News & World Report college rankings.
Journal of College Admission. Fall.

Hawkins, D. 2008. Commentary: Don’t Use SATs to Rank College Quality, CNN:
http://edition.cnn.com/2008/US/10/17/hawkins.tests/index.html
(Accessed 27 March 2010.)

Hazelkorn, E. 2009. The Impact of Global Rankings on Higher Education Research and the
Production of Knowledge. Presentation to UNESCO Forum on Higher Education,
Research and Knowledge, Occasional Paper No. 16: http://unesdoc.unesco.org/
images/0018/001816/181653e.pdf (Accessed 3 May 2010.)

Hazelkorn, E. 2011a. Rankings and the Reshaping of Higher Education: The Battle for World-Class
Excellence. Houndmills, Basingstoke: Palgrave Macmillan.

Hazelkorn, E. 2011b. The futility of ranking academic journals. The Chronicle of Higher Education.
(20 April): http://chronicle.com/blogs/worldwise/the-futility-of-ranking-academic-
journals/28553 (Accessed 20 April 2011.)

Hazelkorn, E. 2012a. European ‘transparency instruments’: driving the modernisation of European


higher education. P. Scott, A. Curaj, L. Vlăsceanu and L. Wilson (eds) European Higher
Education at the Crossroads: Between the Bologna Process and National Reforms, Volume 1.
Dordrecht, Netherlands: Springer.

Hazelkorn, E. 2012b. Striving for excellence: rankings and emerging societies. D. Araya and
P. Marbert (eds) Emerging Societies. Routledge.

Hotson, H. 2011. Don’t look to the Ivy League. London Review of Books. (19 May).

Jones, D.A. 2009. Are graduation and retention rates the right measures? Chronicle of Higher
Education. (21 August): http://chronicle.com/blogPost/Are-GraduationRetention/7774/
(Accessed 30 March 2010.)

Kuh, G.D. and Pascarella, E.T. 2004. What does institution selectivity tell us about educational
quality? Change, September/October: 52–58.

Lambert, R. 2003. Review of Business-University Collaboration, Final Report. London: HMSO:


www.hm-treasury.gov.uk/d/lambert_review_final_450.pdf (Accessed 6 June 2010.)

Lawrence, J.K. and Green, K.C. 1980. A Question of Quality: The Higher Education Ratings Game.
Report No. 5. Washington DC: American Association of Higher Education.

Limm, D. 2009. Measuring student achievement at postsecondary institutions. Issue Brief,


National Governors Association Centre for Best Practice: www.nga.org/Files/
pdf/0911measuringachievement.pdf (Accessed 2 April 2010.)

Martin, M. and Sauvageot, C. 2011. Constructing an Indicator System or Scorecard for Higher
Education. A Practical Guide. Paris: UNESCO International Institute for Educational
Planning (IIEP).

Moed, H.F. 2006. Bibliometric Rankings of World Universities. Leiden, Netherlands: CWTS, University
of Leiden.

Moodie, G. 2009. Personal correspondence (7 June 2009).

OECD (Organisation for Economic Co-operation and Development). 2010. Education at a Glance.
Paris: OECD.

Rauhvargers, A. 2011. Global University Rankings and Their Impact. Brussels: European University
Association.

Sadlak, J. and Liu, N.C. (eds). 2007a. The World-Class University and Ranking: Aiming Beyond Status.
Bucharest: Cluj University Press.

Sadlak, J. and Liu, N.C. 2007b. Introduction to the topic: expectations and realities of world-class
university status and ranking practices. J. Sadlak and N.C. Liu (eds) The World-Class
University and Ranking: Aiming Beyond Status. Bucharest: Cluj University Press, pp. 17–24.

Saisana, M. and D’Hombres, B. 2008. Higher Education Rankings: Robustness Issues and Critical
Assessment. How Much Confidence Can We Have in Higher Education Rankings? Centre
for Research on Lifelong Learning (CRELL), European Communities, Luxembourg:
http://crell.jrc.ec.europa.eu/Publications/CRELL%20Research%20Papers/EUR23487.pdf
(Accessed 20 November 2010.)

Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses. Higher
Education Management and Policy, 19(2): 31–68.

Sheil, T. 2009. Moving Beyond University Rankings: Developing World Class University Systems.
Presentation to 3rd International Symposium on University Rankings, University of
Leiden.

Usher, A. 2006. Can our schools become world-class? Globe and Mail (30 October 2006):
www.theglobeandmail.com/servlet/story/RTGAM.20061030.URCworldclassp28/
BNStory/univreport06/home (Accessed 11 October 2009.)

Usher, A. 2012. The Times Higher Education Research Rankings (19 March 2012):
http://higheredstrategy.com/the-times-higher-education-research-rankings/
(Accessed 1 April 2012.)

Usher, A. and Medow, J. 2009. A global survey of university rankings and league tables. B.M. Kehm
and B. Stensaker (eds) University Rankings, Diversity, and the New Landscape of Higher
Education. Rotterdam: Sense Publishers, pp. 3–18.

Usher, A. and Savino, M. 2006. A World of Difference: A Global Survey of University League Tables.
Toronto: Educational Policy Institute: www.educationalpolicy.org/pdf/world-of-
difference-200602162.pdf (Accessed 19 January 2009.)

Usher, A. and Savino, M. 2007. A global survey of rankings and league tables. College and University
Ranking Systems – Global Perspectives American Challenges. Washington DC: Institute of
Higher Education Policy.

US News Staff. 2010. How U.S. news calculates the college rankings. US News and World Report
(17 August 2010): http://www.usnews.com/education/articles/2010/08/17/how-us-news-
calculates-the-college-rankings?PageNr=4 (Accessed 12 July 2010.)

van Raan, A.F.J. 2007. Challenges in the ranking of universities. J. Sadlak and N.C. Liu (eds) The
World-Class University and Ranking: Aiming Beyond Status. Bucharest: Cluj University
Press, pp. 87–12.

Webster, D.S.A. 1986. Academic Quality Rankings of American Colleges and Universities. Springfield,
Mass.: Charles C. Thomas.

Chapter 5

Rankings and online learning: a disruptive combination for higher education?

John Daniel
Introduction

Taking, as a starting point, 1530, when the Lutheran Church was


founded, some 66 institutions that existed then still exist today in the
Western World in recognizable form: the Catholic Church, the Lutheran
Church, the parliaments of Iceland and the Isle of Man, and 62 universi-
ties… They have experienced wars, revolutions, depressions, and indus-
trial transformations, and have come out less changed than almost any
other segment of their societies (Carnegie Commission, 1968).

The aphorism above, coined by Clark Kerr’s Carnegie Commission in 1968,


should make us hesitate to predict substantive change in universities. Forecasts
of radical change in higher education have often turned out to be myths. Here
are six:

• First, eighteen years ago the management guru Peter Drucker predicted
that in thirty years the big university campuses would be relics – yet twelve
years before his deadline many seem as vibrant as ever and few appear
to be on their last legs.
• Second – which may partly explain the continuing ebullience of the sector
– enrolment growth has been consistently underestimated, particularly as
concerns women. The desire for access to higher education among most of
the world’s population is stronger than ever.
• Third, when higher education was declared to be a tradable commodity
under the General Agreement on Trade in Services (GATS) a decade ago
there was panic in academia about imminent commercialization – yet most
of the world’s universities are still public institutions with an educational
ethos.
• Fourth, some observers claimed that today’s young students are a new
breed of digital natives who would create a generational divide in study
habits – yet recent research on thousands of students of all ages finds no
such divide.
• Fifth, the hype around the dotcom frenzy in 1999–2000 claimed that all
education would soon go online – yet to date it seems that universities
have absorbed the virtual world rather than allowing it to absorb them.
• Sixth, despite the efforts of some governments to restrict the research function
to a limited number of institutions, most universities continue to conduct
research and aspire to expand it.

This list appears to support the notion that higher education develops by
evolution rather than revolution. However, it is too soon to dismiss all these
forecasts as myths. We shall identify two key drivers of change and argue that
their combined effect could be to split higher education.

In 1971 Richard Nixon asked the Chinese leader Zhou Enlai what he thought
had been the impact of the French revolution. Zhou replied that it was too
early to tell. Most commentators assumed that the leaders were referring
to the storming of the Bastille in 1789 and seized on the story as a tell-
ing illustration of China’s talent for long-term thinking. Nixon’s interpreter,
however, insists that Zhou was actually referring to the much more recent
1968 student uprising in France (les événements de mai 1968), which makes
more sense (McGregor, 2011).

When the University of Paris erupted in 1968 the protests inspired some
of the most memorable slogans and graffiti of the mid-twentieth century,
although the students were much less succinct about the reforms they actu-
ally sought. In 1971 it was indeed much too early for Zhou to provide Nixon
with an impact analysis. Four decades later, however, higher education has
changed considerably, although not in the ways that the students cam-
paigned for, nor as a result of their actions. That is because the two drivers of
change in contemporary higher education on which we shall focus, rankings
and online learning, were absent in 1968.

Online learning
The first driver of change is the internet and the online world that it has
created. A year after the Paris riots, when he launched the UK Open University
in 1969, the founding Chancellor, Lord Geoffrey Crowther, said that:

The world is caught in a communications revolution, the effects of


which will go beyond those of the industrial revolution of two cen-
turies ago. Then the great advance was the invention of machines to
multiply the potency of men’s muscles. Now the great new advance is
the invention of machines to multiply the potency of men’s minds. As
the steam engine was to the first revolution, so the computer is to the
second. It has been said that the addiction of the traditional university
to the lecture room is a sign of its inability to adjust to the develop-
ment of the printing press. That, of course, is unjust. But at least no
such reproach will be levelled at the Open University in the commu-
nications revolution. Every new form of human communication will

Chapter 5. Rankings and online learning: a disruptive combination for higher education? 97
be examined to see how it can be used to raise and broaden the level
of human understanding. (Northcott, 1976)

Lord Crowther could scarcely have imagined where this communications


revolution would take us four decades later. Online and mobile communica-
tions (ICT) have lessened the significance of national borders and disrupted
many business models. When re-fuelling their vehicles, buying books or
arranging travel, consumers are opting increasingly for self-service models
made possible by ICT. This trend is now appearing in universities.

In his report 2011 Outlook for Online Learning and Distance Education, Bates
(2011) identified three key trends in US higher education. It is fair to assume
that other countries will follow similar paths as connectivity improves.

The first trend is the rapid growth of online learning. Enrolment in fully
online (distance) courses in the United States expanded by 21  per  cent
between 2009 and 2010 compared to a 2 per cent expansion in campus-
based enrolments.

Bates’ second finding is that, despite this growth, institutional goals for
online learning in public sector higher education are short on ambition.
He argues that the intelligent use of technology could help higher educa-
tion to accommodate more students, improve learning outcomes, provide
more flexible access and do all this at less cost. Instead, he found that costs
are rising because investment in technology and staff is increasing with-
out replacing other activities. There is no evidence of improved learning
outcomes, and some institutions fail to meet best quality standards for
online learning. In general, the traditional US public higher educa-
tion sector seems to have little heart for online learning. Many institutions
charge higher fees to online students, even though the costs of serving
them are presumably lower, suggesting that they would like to discourage
this development.

A third finding should stimulate the public sector to take the rapidly grow-
ing demand for online learning more seriously. The US for-profit sector
has a much higher proportion of the total online market (32  per  cent)
compared to its share of the overall higher education market (7 per cent).
Seven of the ten US institutions with the highest online enrolments are
for-profits. For-profits are better placed to expand online because they
do not have to worry about resistance from academic staff, nor about
exploiting their earlier investment in campus facilities. Furthermore, the
for-profits use a team approach to the development of online learning
courses and student support, whereas many public institutions simply
rely on individual academics to create and support online versions of
their classroom courses. Bates calls this the ‘Lone Ranger’ model and
argues that it is less likely to produce sustainable online learning of qual-
ity than the team approach.

Finally, he notes that over 80 per cent of US students are expected to be


taking courses online in 2014, up from 44  per  cent in 2009. Clearly, the
providers that are already established in this mode of delivery (i.e. the for-
profits) will have the advantage.

Indeed, a UK Report, Collaborate to Compete: Seizing the Opportunity for Online


Learning for UK Higher Education, explicitly recommends that public higher
education institutions should link up with for-profit companies in order not
to get left behind in offering online learning (HEFCE, 2011). This is already a
growing trend in the United States. For example, Best Associates, a Dallas-
based merchant bank with various investments in education, operates an
Academic Partnerships programme with a steadily growing number of state
universities. The basis of the model is to help these institutions offer high-
demand and socially important programmes (e.g. M.Ed., B.Sc. Nursing) online
at scale. The public institution sets the fees, of which it retains 20–30 per cent
with the rest going to Best Associates. The system can operate successfully
with much lower fees than these institutions would normally charge. Some
have reduced their fees substantially, but others have not.

Bates concludes his report by alerting institutions to a growing market that


is not well served by campus-based education. In his view, public colleges
and universities are not moving into online distance learning fast enough
to meet the demand. ‘If public institutions do not step up to the plate, then
the corporate for-profit sector will’. With access to broadband internet con-
nections spreading rapidly this statement may well have global validity and
indicates how online learning could disrupt higher education systems. Will
they split over the coming years into a public sector focused on research
and a for-profit sector doing most of the teaching through online learn-
ing? If so, does it matter? Some governments would like to see higher
education divide itself into research universities and teaching institutions.
Extrapolating the trends Bates has identified suggests that their wish may
come true, with the added difference that most research will take place
in publicly supported institutions, while most teaching will be done by
for-profit enterprises.

Rankings: compounding the problem
At a time when public universities should be getting organized to expand
online teaching of quality in response to student demand, they are falling
for the temptation to expend energy and resources on gaining higher places
in university rankings, which is probably a more congenial goal for most
university presidents and faculty.

The papers presented at UNESCO’s May 2011 Forum on Rankings, which are
the substance of this book, suggest that rankings have reached the stage,
both nationally and internationally, of encouraging a thousand flowers to
bloom. This may blunt the disruptive effect of rankings on higher education
systems because the emergence of rankings based on a wide range of criteria
helps different types of institutions within diverse higher education systems
compare themselves usefully with their peers.

Nevertheless, at present the rankings with most traction in the public


mind are based on research performance. Ben Wildavsky’s readable book,
The Great Brain Race: How Global Universities are Reshaping the World, is
a good example (Wildavsky, 2010). Wildavsky is writing primarily about
the 3  per  cent of the world’s 17,000 higher education institutions that
figure in contemporary global rankings. These rankings, such as those
from Shanghai’s Jiao Tong University, are essentially about performance
in research. In response to the question ‘where is teaching in the inter-
national rankings?’ the American higher education scholar Philip Altbach
replies, ‘In a word – nowhere’.

Wildavsky cites Jamil Salmi’s book, The Challenge of Establishing World Class
Universities, which analyses what makes for a top university (Salmi, 2009).
Here again, the designation refers to only a tiny fraction of the world’s uni-
versities, but some countries are lavishing funds on favoured institutions
in a probably futile attempt to get them into the list of the top 100 – or
top 300 – research universities. Having identified the trend, Salmi is now
sounding a warning note. He now writes of Nine Common Errors in Building
a World Class University and cautions those focusing on boosting one or two
institutions not to neglect ‘full alignment with the national tertiary educa-
tion strategy and to avoid distortions in resource allocation patterns within
the sector’ (Salmi, 2010).

This is happening at a time when the demand for higher learning is bur-
geoning in much of the world. Thirty per cent of the global population is
under fifteen and generally accepted forecasts suggest that, in round figures,
the current worldwide enrolment in tertiary education will grow from
150 million now to 250 million by 2025. Simple arithmetic on these forecasts
indicates that the world will need to create four sizeable (30,000 students)
new universities every week for the next fifteen years or adopt alternative
approaches.
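
A rough check of this arithmetic (a sketch only, assuming the additional 100 million students arrive evenly over the fifteen years and that a ‘sizeable’ university enrols 30,000 students):

\[
\frac{(250-150)\times 10^{6}}{30\,000} \approx 3\,300 \text{ new universities}, \qquad
\frac{3\,300}{15 \times 52 \text{ weeks}} \approx 4 \text{ per week.}
\]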

Sadly, however, universities are much less eager to be ranked on the quality
of their teaching than on the quality of their research. Between 1995 and
2004 the UK’s Quality Assurance Agency conducted assessments of teaching
quality, discipline by discipline, in all universities. A number of disciplines
were assessed nationwide each year on six dimensions, giving a maximum
score of 24 per discipline. The press was not slow to construct rankings based
on each university’s aggregate score and these evolved annually as more
disciplines were assessed. Table 1 shows the nine most highly ranked institu-
tions in 2004 when the teaching assessment process was terminated, alleg-
edly because the major research universities, unhappy with their standing
in this type of ranking, lobbied at the highest political level for its abolition.
The placing of the Open University just above Oxford is a testimony to the
potential of open, distance and technology-mediated learning to offer qual-
ity teaching and a sign of changing times.

A promising feature of UNESCO’s May 2011 forum on rankings was the


evidence it produced of a broadening of approaches to their construction,
aimed both at beefing up the methodology of existing rankings and develop-
ing measures of how well institutions implement their declared missions.
The latter is important because of the continuing growth and diversification
of the student body. As the cybernetic Principle of Requisite Variety (Ashby,
1956) states, an effective system – including an effective higher education
system – must be able to deploy a variety of responses that matches the
demands made on it.

Institutions should be encouraged to develop their own niches in this


complex landscape. As Ernie Boyer wrote in his seminal book Scholarship
Reconsidered (Boyer, 1990):

We need a climate in which colleges and universities are less imita-


tive, taking pride in their uniqueness. It’s time to end the suffocating
practice in which colleges and universities measure themselves far
too frequently by external status rather than by values determined by
their own distinctive mission.

Table 1. Britain’s top nine universities

Quality rankings of teaching, based on all subject assessments 1995–2004

1 Cambridge 96%
2 Loughborough 95%
3 London School of Economics 88%
4 York 88%
5 The Open University 87%
6 Oxford 86%
7 Imperial College 82%
8 University College London 77%
9 Essex 77%

…and tops for student satisfaction

Source: The Sunday Times University Guide 2004.

The cost of higher education


We have argued, however, that at a time when students are opting for online
learning in larger numbers, public institutions are failing to adapt their mis-
sions to respond adequately to this trend. There are a number of reasons for
this, one being a tendency to worry more about research rankings.

Another factor, which will compound the disruptive combination of rank-


ings and online learning, is the likelihood of radical differentiation in the
costs of higher education. Technology-mediated learning has a very different
cost structure from classroom instruction and for-profit providers are better
placed to take advantage of it. Fee structures seem ripe for disruption.

The United States is an extreme case, but since 1986 college fees there have
risen by 467 per cent compared to inflation of 107 per cent in the economy
overall. The impact of the post-2008 recession on US household incomes,
combined with public concern about the heavy debt burdens on students
and graduates, is finally putting downward pressure on fee levels and
creating incentives to offer less expensive options. Consumers have begun
to notice and resist these rising fees, which inspired Robert Archibald and
David Feldman (2010) to justify high fees in their book Why Does College Cost
So Much?
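
To put those two percentages side by side (a back-of-the-envelope calculation, assuming both figures are cumulative increases over the same period), fees have risen to 5.67 times their 1986 level while general prices have risen to 2.07 times, so in real terms fees have roughly

\[
\frac{1 + 4.67}{1 + 1.07} = \frac{5.67}{2.07} \approx 2.7
\]

times their 1986 value – that is, they have nearly tripled after adjusting for inflation.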

These American economists write only about the US experience, but the
principles and arguments they evoke have broad relevance. They situate
the higher education enterprise in the context of the wider economy and
make some careful comparisons with the evolution of prices in a range
of other industries over more than fifty years. In real terms the prices of
manufactures have gone down; those of many services, such as hairdress-
ing, have stayed roughly constant; whereas the prices of personal services
by professionals with high training requirements have risen in real terms.
They cite academics, dentists, horn players and stockbrokers as examples
in this last category.

Are such comparisons valid? William G. Bowen and W.J. Baumol examined
the link between the high prices of certain services and the cost of training
the professionals who deliver them in a number of papers on the economics
of the performing arts (Baumol and Bowen, 1965).
was that salaries in these and similar areas are pushed up, even if their
productivity remains static, by productivity-linked salary increases in other
sectors of the economy. Archibald and Feldman adopt this reasoning as
the basis for their book, dismissing the possibility of using technology to
increase productivity in higher education.

William Bowen (2011) himself, however, is not so sure. In his foreword to


Unlocking the Gates: How and Why Leading Universities are Opening Up Access
to Their Courses by Taylor Walsh (2011), he says that he is rethinking his scep-
ticism about the potential of new technologies to improve productivity in
higher education.

It is not surprising that the price of dentistry rises by more than inflation
because, despite the use of increasingly sophisticated equipment, it remains
a personal service with little scope for automation. Horn players (as examples
of orchestral musicians) are a more debatable case. They are unquestionably
a rare and specialized breed, but their productivity has increased dramati-
cally in recent decades simply because most people now listen to horn play-
ers, with equal or greater enjoyment and at much lower cost, on iPods and
CDs instead of going to concert halls. The most interesting comparison is
with stockbrokers. Their prices went up more rapidly than those of higher
education until the 1980s and then fell steadily to a relatively much lower

Chapter 5. Rankings and online learning: a disruptive combination for higher education? 103
level. This was because brokerage services went online, giving the individual
client much more control.

That is surely a more valid comparator for higher education. Technology


now allows institutions to deliver much of the content of their programmes
through media and to give students more control as distance learners. This
can cut costs dramatically without loss of effectiveness.

The goal of most governments is to widen access to education while improv-


ing its quality and reducing its cost. Visualizing this challenge as a triangle of
vectors makes the simple point that with conventional classroom teaching
there is little scope to alter these vectors advantageously because improving
one vector will worsen the others (Daniel, 2010: 51). Pack more students
into the class and quality will be perceived to suffer. Try to improve quality
by providing more learning materials or better teachers and the cost will go
up. Cutting costs may endanger both access and quality. We call this the ‘iron
triangle’. It has constrained the expansion of education throughout history
and has created in the public mind an insidious link between the quality of
education and its exclusiveness. If this were the end of the story, Archibald
and Feldman’s conclusion that the cost of higher education must rise inexo-
rably would be correct.

However, technology is able to stretch this triangle to achieve the revolution


of wider access, higher quality and lower cost. Traditional distance education
institutions, often called open universities, have been doing this for years.
Not only do they enrol millions of students but, as noted above, some also
achieve high ratings for the quality of their teaching. This revolution of pro-
viding high-quality teaching to large numbers at low cost was achieved with
the traditional distance learning technologies of the industrial era (print,
audio, video and stand-alone computers). It was based on the principles
of industrial production, which were identified two centuries ago by Adam
Smith as division of labour, specialization, economies of scale, and the use
of machines and media.

Today’s new generation of digital technology is characterized by the con-


cepts of networks, connectedness, collaboration and community. As well
as increasing the economies of scale, since digital material costs almost
nothing to distribute, this technology also speeds up and intensifies the
interactions possible between students and their teachers. The result
has been to take technology-mediated learning far beyond the confines
of the open universities. Most traditional campus universities, at least in
countries that have a basic IT infrastructure, are now dabbling in distance
education online and students are seeking out this form of teaching in
larger and larger numbers, as noted earlier. Moreover, forecasts that digital
technology would create a generation gap in higher education, with young
‘digital natives’ seeking out online learning while older students avoided
it, were simply wrong.

Research by the UK Open University on its own highly diverse student


body concludes that while there are clear differences between older and
younger people in their use of technology, there is no evidence of a clear
break between the two separate populations (Jones and Hosein, 2010). The
research was conducted on an age-stratified, gender-balanced cohort of
7,000 students aged between 21 and 100. The results showed that while
there are differences in attitudes to and familiarity with digital technology,
they are not lined up on each side of any kind of well-defined discontinu-
ity. The change is gradual, age group to age group. There is no coherent
‘net generation’.

However, one extremely important discovery was a correlation – independent


of age – between attitudes to technology and approaches to studying.
Students who more readily use technology for their studies are more likely
than others to be deeply engaged with their work: ‘Those students who
had more positive attitudes to technology were more likely to adopt a deep
approach to studying, more likely to adopt a strategic approach to studying
and less likely to adopt a surface approach to studying.’

This evidence that, at any age, a good attitude to technology correlates with
good study habits is also important in giving the lie to the view that online
learning tends to trivialize learning. Instead, as we argued earlier, the intel-
ligent use of technology can improve the quality of learning.

Changing corporate structures


Technology-mediated learning can reduce costs and stimulate good study
habits in students of all ages. Yet we have shown that public sector campus
universities are not exploiting this opportunity well, partly because chasing
higher places in research rankings has more appeal. A key question is whether
we are seeing the emergence of a new business model that will substantially
change the pattern of corporate structures within higher education systems
as the for-profit sector steadily expands.

Again, the United States is the best place to see trends emerging. Although
US tuition fees have risen faster than inflation for decades, there are signs
that the situation has reached a tipping point. The fees bubble will not
suddenly burst, but lower-cost alternatives to the current model of high-
fee programmes will steadily take market share. Already some US states
(e.g.  Texas) are pressuring institutions to cut costs and fees, some major
public institutions (e.g. the University of California) are finally taking online
learning seriously, and models such as Best’s Academic Partnerships will
gain ground. The success of the Western Governors University, which was
viewed as a rather peculiar initiative when it was created in the late 1990s,
is an indicator of how things can change. This institution, which charges
fees of US$5,000 per annum and makes no demands on public funds, is
attracting increasing numbers of students. The for-profit sector has ample
room to cut fees and still make good profits. Currently this sector makes
high profits because it operates a lower-cost model of provision, but can
set fees comparable to those of the public sector with its higher cost base.
As the public sector starts to cut fees the for-profit sector will be able to
lead a downward trend.

What are the implications of an expanded role for the for-profit sector? In
countries where tax codes and charitable status are clearly defined the dis-
tinction between private for-profit and private not-for-profit provision is
easy to make. In most of the world, however, the distinction is not so clear
and we shall use the term ‘private’ to designate both types of institutions. All
private providers try to make a surplus and appear much the same on the
ground, especially in developing countries. The private sector can be either
homegrown or international. Note here that all providers, public or private,
become private for-profit providers once they spread their wings outside
their country of origin and offer programmes across borders. A public uni-
versity is a private, for-profit provider when operating in another country,
even though it may not initially repatriate its profits. Developing countries
sometimes claim that the ethical standards of public institutions operating
outside their home jurisdictions can be lower than those of avowedly com-
mercial providers.

But even without the impact of online learning, the growth of private
provision is essential to the expansion of postsecondary education in
many countries. No government can fund all the post-secondary educa-
tion its citizens want, so the choice is between either a public-sector
monopoly giving inadequate provision or meeting the demand through
a diversity of public and private institutions. This is changing patterns of
provision.

In the Middle East, for example, rapidly increasing demand is driving the
expansion of the private sector. Egypt needs 100 new universities and stu-
dent numbers will double by 2030. In the United Arab Emirates the propor-
tion of students in private HEIs jumped from 23 per cent to 60 per cent in
2012. A major Indian private institution works primarily through partner-
ships (e.g. with 180 universities in China). There, and in Africa, where it has
a partnership with a well-known UK university, partner universities confer
awards based on the Indian institution’s courses and materials. In Africa the
cost of delivery is low and a blended model is most successful.

A country such as Malaysia, which encourages private provision, has many


lively homegrown post-secondary education businesses, the best of which
also conduct research. Thailand and Viet Nam have private campuses of
foreign public universities as well as local commercial providers. In Kenya,
some private providers have international links, both to secure capital
and to gain credibility by association with foreign institutions. This role
of the private sector in expanding equitable provision is a question that
exercises many developing country governments, even when their exist-
ing public systems, catering as they usually do to a small proportion of the
population drawn largely from the urban elite, can hardly be described as
equitable.

Post-secondary education must take up the challenge of serving the 4 bil-


lion people at the bottom of the world economic pyramid. As C.K. Prahalad
(2004) demonstrated in the case of other businesses, to serve such people
post-secondary education will require ‘radical innovations in technology and
business models’, aspiring to ‘an ideal of highly distributed small scale opera-
tions married to world-scale capabilities’. The likeliest candidate for a new
business model is the combination of increasing connectivity and open edu-
cational resources (OER), which are content, software and tools developed
on an open source model.

The key question is whether the private sector can be regulated without
strangling it. Is it possible to develop some common principles of
accountability and transparency for all providers of higher education?
Quality assurance (QA) is a relatively recent concern in higher education in
some countries. The issue is whether public and private institutions should
be treated the same for QA purposes. Is the distinction between good and
bad simply the dichotomy of corporate or public ownership? Ownership is
important for the tax authorities but is not, in principle, relevant to quality.
There are good and bad actors in the public sector too, and there should be the same quality thresholds for all.

Legitimate for-profit institutions welcome strong quality assurance frame-
works, but ask that they be applied fairly across the whole higher education
sector. Legitimate areas for regulation are the avoidance of excessive student
loans, ground rules for acquiring accredited institutions, and processes for
eliminating bad actors. The main plea is for a level playing field.

A disruptive model in the public sector: the Open Educational Resource University
We end with an example of a model being developed in the public sector that combines online learning with lower costs and new corporate structures. This is the Open Educational Resource University that is being explored by a group of public universities from several countries. Open Educational Resources, or OER, are materials used to support education that may be freely accessed, reused, modified and shared by anyone (Butcher, 2011). They may well be the most radical technology-based tool poised to disrupt higher education. How might they help to widen access and cut costs?

Some institutions already have policies that encourage the use of OER so that each teacher does not have to re-invent the wheel in each of their courses. Once academics in the Education Faculty at the Asia eUniversity in Malaysia have agreed on course curriculum outlines they do not need to develop original learning materials – good-quality OER for all the topics they require are already on the Web, and they simply adapt them to their precise needs. Likewise, Canada's Athabasca University will not approve development of a course until the proposing department has shown that it has done a thorough search for relevant openly licensed material that can be used as a starting point. But some would go much further. Paul Stacey (2011), of Canada's BCcampus, has outlined the concept of The University of Open. He points out that the combination of open source software, open access publishing, open educational resources and the general trend to open government creates the potential for a new paradigm in higher education. In February 2011 the Open Education Resource Foundation convened a meeting in New Zealand to operationalize the Open Educational Resource University, a concept developed from this thinking.

The idea is that students find their own content as OER; get tutoring from
a global network of volunteers; are assessed, for a fee, by a participating institution; and earn a credible credential. Such a system would reduce
the cost of higher education dramatically and clearly has echoes of the
University of London External System that innovated radically 150 years ago
when it declared that all that mattered was performance in examinations,
not how knowledge was acquired. That programme has produced five Nobel
Laureates.

As regards the first step in this ladder, open educational resources are
unquestionably being used. Literally millions of informal learners and stu-
dents are using the open educational resources put out by MIT, the UK Open
University and others to find better and clearer teaching than they are get-
ting in the universities where they are registered. Thirty-two small states of
the Commonwealth are working together within a network called the Virtual
University for Small States of the Commonwealth to develop open educational
resources that they can all adapt and use (Daniel and West, 2009).

The interest is considerable. The UKOU's OpenLearn site has 11 million users and hundreds of courses can be downloaded as interactive eBooks. Furthermore, with 300,000 downloads per week, the UKOU alone accounts for 10 per cent of all downloads from iTunesU. And we must not forget
the worldwide viewing audience of hundreds of millions for OU/BBC TV
programmes.

Martin Bean (2010), the UKOU vice-chancellor, argues that the task of
universities today is to provide paths or steps from this informal cloud
of learning towards formal study for those who wish to take them. Good
paths will provide continuity of technology because millions of people
around the world first encounter higher education institutions such as
the UKOU through iTunesU, YouTube, TV broadcasts or the resources on
various university websites. The thousands who then elect to enrol as stu-
dents in these institutions will find themselves studying in similar digital
environments.

What are the implications of this concept? The institutions best equipped to make a success of the Open Educational Resource University are probably institutions in the public sector that already operate successfully in parts of
this space and award reputable credentials. Such institutions must also have
the right mindset. It would be difficult for a university that has put scarcity
at the centre of its business model suddenly to embrace openness. In the
coming years some universities will have to ask themselves whether they
can sustain a model based on high fees and restricted access as other parts
of the sector cut fees and widen access.

To examine how the OERU would work we can juxtapose Martin Bean’s
remark about leading learners step by step from the informal cloud of
learning to formal study with Jim Taylor’s representation of the steps in the
Open Educational Resource University. The first step, namely access to open
educational resource learning materials, is increasingly solid. The pool of OER
is growing fast and it is progressively easier to find and retrieve them. The
solidity of the top step, credible credentials, depends on the involvement of
existing, reputable, accredited institutions that resonate with this approach.

What about the three intermediate steps? For the first, student support,
distance-teaching institutions already have the skills necessary. They manage
extensive networks of tutors or mentors. SUNY’s Empire State College has
unique skills for this task given that students will often not be working with
material created by the institution, but with OER they have discovered for
themselves. Its unusual mentoring model is well suited to this.

James Taylor (2011), one of the leading planners of the OERU, envisages the
emergence of a body rather like Médecins sans Frontières or Engineers without
Borders, which he calls Academic Volunteers International. That may work
in some places, but having students buy support on a pay-as-you-go basis
would also work and might make for a more sustainable model. Furthermore,
social software is greatly enriching the possibilities for student support and
interaction. For example, the UKOU’s OpenLearn website is not just a reposi-
tory of OER, but also a hive of activity involving many groups of learners.
Digital technology is breathing new life into the notion of a community of
scholars, and social software gives students the opportunity to create aca-
demic communities that take us well beyond the rather behaviourist approaches that give some online learning a bad name. Some of this
social learning activity involves various forms of informal assessment that
can be most helpful in preparing students for the formal kind.

When we come to step three, assessment, it seems to us that payment is essential. However, this is well-travelled territory. It takes us back 150 years
to the University of London External model with the difference, again, that
some assessments would have to be designed for curricula developed by the
student, not the institution. With credible assessment by reputable institu-
tions the next step, the granting and transfer of credit, is straightforward and
leads to the top step of credentials.

The discussions around the Open Educational Resource University assume that it will not be a new stand-alone accredited institution, but rather an
umbrella organization for a network of participating institutions with longstanding reputations and accreditation. Indeed, no established institu-
tion is likely to adopt the Open Educational Resource University model for
its core operations in the foreseeable future since the revenues – as well,
of course, as the costs – would be much lower than they are used to. It will be necessary to test the waters, and USQ (the University of Southern Queensland), which has a strong track record in open, distance and blended learning, intends to do so by offering studies on this model initially as part of its community service function. That
seems a sensible approach.

Conclusion
We have argued that higher education systems will experience major dis-
ruptions in the coming years. The major cause will be the growing demand
from students to learn online. Public-sector institutions are not responding
adequately to this trend, partly because the long tradition of faculty indi-
vidualism in teaching is not well suited to this mode of learning and partly
because many universities place greater priority on improving their research
rankings. This is creating a major opportunity for the private for-profit sector
to expand its role in teaching. The cost structure of online learning creates
a new business model that will put substantial downward pressure on fees,
causing further challenges to a public sector that has acquired the habit of
hiking fees faster than inflation. Some public institutions are fighting back
with a very low-cost model, the Open Educational Resource University,
although it is too early to judge its appeal to either students or institutions.

References

Archibald, R.B. and Feldman, D.H. 2010. Why Does College Cost So Much? Oxford: Oxford University
Press.

Ashby, W.R. 1956. An Introduction to Cybernetics. London: Chapman & Hall.

Bates, A.W. 2011. 2011 Outlook for Online Learning and Distance Education, Sudbury: Contact
North: http://search.contactnorth.ca/en/data/files/download/Jan2011/2011%20Outlook.
pdf (Accessed April 2012.)

Baumol, W.J. and Bowen, W.G. 1965. On the performing arts: the anatomy of their economic
problems, The American Economic Review, 55(1/2): 495–502.

Bean, M. 2010. Informal Learning: Friend or Foe? www.Cloudworks.ac.uk/cloud/view/2902
(Accessed April 2012.)

Bowen, W.G. 2011. Foreword, T. Walsh, Unlocking the Gates: How and Why Leading Universities are Opening Up Access to Their Courses. Princeton, NJ: Princeton University Press, pp. vii–xvi.

Boyer, E. 1990. Scholarship Reconsidered: Priorities of the Professoriate. New York: Carnegie
Foundation for the Advancement of Teaching.

Butcher, N. 2011. A Basic Guide to Open Educational Resources. Vancouver/Paris: Commonwealth of Learning/UNESCO: www.col.org/PublicationDocuments/Basic-Guide-To-OER.pdf (Accessed April 2012.)

Carnegie Commission on Higher Education. 1968. Quality and Equality: New Levels of Federal
Responsibility for Higher Education. New York: McGraw-Hill.

Daniel, J.S. 2010. Mega-Schools, Technology and Teachers: Achieving Education for All. New York/London: Routledge.

Daniel, J.S. and West, P. 2009. The virtual university for small states of the Commonwealth, Open
Learning: The Journal of Open and Distance Learning on OERs, 24(2): 67–76.

HEFCE (Higher Education Funding Council for England). 2011. Collaborate to Compete: Seizing
the Opportunity for Online Learning for UK Higher Education, Report of the Task Force on
Online Learning, London: www.hefce.ac.uk/pubs/hefce/2011/11_01/
(Accessed April 2012.)

Jones, C. and Hosein, A. 2010. Profiling university students’ use of technology: where is the
net generation divide? The International Journal of Technology Knowledge and Society,
6(3): 43–58.

McGregor, R. 2011. Zhou’s Cryptic Caution Lost in Translation, Financial Times (10 June 2011):
www.ft.com/intl/cms/s/0/74916db6-938d-11e0-922e-00144feab49a.html#axzz1jI1U7N91
(Accessed April 2012.)

Northcott, P. 1976. The Institute of Educational Technology, the Open University: structure and
operations, 1969-1975. Innovations in Education and Training International, 13(4): 11–24.

Prahalad, C.K. 2004. The Fortune at the Bottom of the Pyramid. Wharton School Publishing.

Salmi, J. 2009. The Challenge of Establishing World-Class Universities. Washington DC: World Bank.

Salmi, J. 2010. Nine Common Errors when Building a New World-Class University, Inside Higher
Education, CIHE, August.

Stacey, P. 2011. Musings on the EdTech frontier: the University of Open: http://edtechfrontier.
com/2011/01/04/the-university-of-open/ (Accessed April 2012.)

Taylor, J. 2011. Towards an OER University: Free Learning for All Students Worldwide:
http://wikieducator.org/Towards_an_OER_university:_Free_learning_for_all_students_
worldwide

Walsh, T. 2011. Unlocking the Gates: How and Why Leading Universities are Opening Up Access to Their Courses. Princeton, NJ: Princeton University Press.

Wildavsky, B. 2010. The Great Brain Race: How Global Universities are Reshaping the World. Princeton, NJ: Princeton University Press.



Chapter 6

Ranking higher education institutions: a critical perspective

Peter Scott
Introduction
The ranking of higher education institutions has become a ubiquitous, even
explosive, phenomenon; some would add it is also a perverse one. According
to one source there are now six international rankings (including the Academic Ranking of World Universities provided by the Shanghai Ranking Consultancy at Shanghai Jiao Tong University, the Times Higher Education ranking, now in partnership with
Thomson Reuters, and the QS ranking – to name the three most influential). In
addition, twenty-four countries have their own national rankings (nine in China,
seven in the United States, three each in Chile, Germany and Romania, and
two apiece in Spain, Switzerland and the United Kingdom) (Shanghai Ranking
Consultancy, 2011). Almost certainly this is an underestimate, as new rankings are
constantly being devised. Nor does this total include the large numbers of other
rankings that seek to measure particular aspects of comparative performance
–  for example, ‘green’ universities or ‘safe’ universities.

Twenty-five years ago there were almost no – formal – rankings. Of course, there
were national statistics about, and well-established hierarchies within, most
national higher education systems – and, less categorically, informal ratings of
leading universities in a global (or, at any rate, North Atlantic) context. These
hierarchies have now been made explicit by what is best described as a ‘rankings
industry’ in which newspapers and other publishers, national policy-makers and
institutional leaders are all complicit – and which opponents, whether tradition-
alists who believe in the ‘privacy’ of universities or radicals who believe rankings
privilege the already privileged, have been largely powerless to resist (Brown,
2006). In the process traditional hierarchies have been both reinforced, as these radicals fear, and subverted, as traditionalists fear, by the exponential growth of rankings that emphasize different aspects of performance and rely on different methodologies. Rankings are certainly 'fun'
–  unlike funding perhaps, the other big topic that agitates higher education
policy-makers (Altbach, 2010). But it is only recently that they have become the
object of serious attention and attempts have been made to identify their origins
and drivers, their impact on national policy-making and institutional manage-
ment, and their evolution and typology (Dill and Soo, 2005; Hazelkorn, 2011;
Sadlak and Liu, 2007; Salmi and Saroyan, 2007).

Origins and drivers


There have been many reasons for the rapid development of rankings, often
described as ‘league tables’ (the analogy with football is far from accidental):



• Some are commercial. Mass participation has made higher education a matter
of popular interest to an extent unimaginable in the more selective elite
university systems of the past. As a result, a public appetite, even curiosity,
has been created which commercial publishers have been eager to satisfy
(and stimulate). But it is about more than commerce. Closely linked is a new
social and cultural phenomenon, an intrusive and disturbing mediatization
of nearly all aspects of political, professional (and private) life (Lundby, 2009).

• A second, closely related, set of reasons reflects more profoundly the
phenomenon of mass higher education of which wider participation is
only one aspect (Scott, 1995). In mass systems the balance shifts from the
‘private world’ of science, scholarship and elite undergraduate education
to the ‘public world’ of more pervasive social engagement, not simply in
terms of who is considered eligible to benefit from higher education (mass
participation), but also of more accessible forms of knowledge production
(impact, advocacy and multiple forms of translation and application)
(Gibbons et al., 1994). This makes higher education everyone’s business.

• A third set of reasons reflects the erosion of those qualities of trust and
hierarchy that characterized those elite university systems. Paradoxically
perhaps, it is the erosion of the formerly near-unchallengeable consensus
about which were the ‘best’ (and, by extension, the ‘less good’) universities
that has fuelled the appetite for formal rankings. Read in this way, rankings
are related to unease about standards, to volatility of missions, and to the
potential at any rate of establishing new criteria of quality and excellence. It
may not be a coincidence that their popularity has grown at the same time as the idea of the 'risk society' has developed.

• A fourth set of reasons concerns the global trend, although much stronger in
some countries than others, to promote the ‘market’ in higher education at
the expense of older notions of public service, social purpose or academic
solidarity. One effect of the market has been to encourage greater competition
among, between and within universities; another has been to place greater
emphasis on marketing techniques, including ‘playing’ the league tables.

• A final set of, admittedly, more speculative reasons may even hint at
suggestive links between university rankings and ‘celebrity culture’. The
development of rankings – in health outcomes, environmental impacts,
customer satisfaction and (almost) everything else in addition to higher education – ostensibly designed to drive up performance, disconcertingly
mirrors the obsession with ‘winners’ and ‘losers’, through merit or chance,
in the mass media.



These constitute a rich and potent, and confusing, mix of reasons: commercial
exploitation, mediatization more generally, the impact of massification and
the erosion of the once ‘private’ domain of the university, the insecurity (and
erosion?) of the academic elite, the intrusion of the ‘market’ into public policy,
the playful ephemera of post-industrial consumerism and post-modern info-
tainment. There are, of course, significant inconsistencies – and, therefore,
tensions – between these different drivers. As a result there has been a prolif-
eration of different rankings, mitigated to some extent by attempts to produce
global ‘meta’ rankings. But this proliferation has also tended to reduce the cat-
egorical effects of rankings in a constant battle between clarity and complexity.

Impact on strategy and management


The impact of rankings on both national policy-making and institutional
strategies and behaviour has also been far-reaching (King and Locke, 2008).

National policy-making

A jumbled rhetoric about the growth of a knowledge society – the inexorable forces of globalization, the need for a highly skilled workforce, the efficacy of applied research, the 'race to the top', and the rest – has gripped the imagination of national politicians, both right and left. One of the indicators of success in this high-tech Darwinian struggle for survival and mastery has
been taken to be the number of ‘world-class’ universities in each country
– as measured, of course, by international rankings. However, the assumed
correlation between ‘world-class’ universities and global economic success
has been accepted without significant critical interrogation. As a result, two
inconvenient facts have been generally overlooked:

1. The first is that, the greater the international reach of a university, the
more its ‘products’ – whether highly skilled graduates or research out-
puts – become globally available, so undermining any purely national
advantage. Global or ‘world-leading’ universities belong by definition
to everyone; the scientific and social capital they generate cannot easily
be monopolized. The advantages conferred on a nation by having more
than its ‘share’ of such universities probably contributes more to national
socio-political prestige than to economic effectiveness. In other words it
is a reputational game.



2. The second is that the correspondence between academic prestige and
economic success is typically weak. For example, in the United Kingdom
where the ‘world-class’ university discourse is especially influential
(because the United Kingdom has a disproportionate share of highly
ranked universities), economic success, at any rate until the 2008 bank-
ing crisis, depended largely on the growth of financial services, although
business and management were (and are) not among the United
Kingdom’s most highly ranked subjects. Only in biosciences has there
been a reasonably convincing correlation, although the existence of a
very large public customer in the shape of the National Health Service
provides at least as convincing an explanation. Equally, economic growth, especially in high-technology manufacturing, has been rapid in countries that do not possess large numbers of highly ranked universities.

Institutional strategies – and behaviours

Institutional strategy and behaviour have also been significantly affected by rankings:

1. Firstly, rankings have contributed to a shift in the balance of authority within colleges and universities. Their influence has tended to strengthen
the corporate ‘core’, where competition between universities is consid-
ered to be normal, at the expense of the academic ‘periphery’, where
older habits of solidarity have persisted. Also, the crude reductionism
inherent in rankings encounters fewer objections among institutional
managers than among academics, who in addition may be queasy about
the robustness of the methodologies employed.

2. Secondly, rankings have affected institutional strategies, often deeply. In many cases universities have adopted as an explicit target the goal
of ‘improving’ their positions in the league tables. This has had two
results. The first is that, even if they do not recognize it, institutions
that adopt such goals have sacrificed a significant degree of their auton-
omy because such targets are positional rather than absolute (so their
‘success’ is dependent on the success, or failure, of other institutions).
More fundamentally such targets are externally derived rather than
internally generated. The second result is that ‘planning’ has tended
to take on a new character. Instead of being concerned with develop-
ing longer-term academic visions, its focus has shifted to short-term
‘brand’ management – potentially with the same doleful consequences
for academic freedom.



Evolution and typology
The development of rankings has typically progressed through three
phases or generations: (i) primitive league tables reporting ‘public opinion’
in universities and/or listing the outcomes of policy interventions and
funding allocations; (ii) more systematic rankings based on more detailed
data published by ministries and other governmental agencies, as more
coherent systems of higher education were established (and also the more
detailed management information systems created within institutions);
and, (iii) a proliferation of rankings in a more fragmented and quasi-market
environment.

First-generation ranking: polls and lists

The first real league tables were created in the late 1970s and 1980s by journal-
ists and without sanction from the leaders of higher education systems or
institutions. Often the higher education establishment was actively opposed to
these embryonic ‘league tables’, which were regarded as a vulgar intrusion into
their, ideally private, affairs. The few academics that helped with the construc-
tion of these early rankings were treated, at best, as mavericks and, at worst, as
collaborators. During this first phase rankings took two main forms:

• The first were in effect a version of the public opinion polls that, not
coincidentally, were becoming more generally popular at the same time.
The methodology was to survey the, inevitably subjective, opinions of
heads of department and other subject specialists, secondary school
teachers and the like. There was little attempt to collect more objective
or empirical data. In essence the compilers of league tables were merely
attempting to capture the tacit knowledge about ‘reputation’ and silent
hierarchies about the performance of universities that already existed.

• The second form taken by rankings in effect was a response to the increasing
transparency of policy instruments: funding allocations initially and
predominantly, but also the first tentative measurements of institutional
position and performance on which more formulaic funding allocations
depended, and, of course, quality assessment indicators (although assessing
threshold rather than comparative quality was of limited use to the
compilers of rankings). Newspapers reported these new policy instruments,
collating their results into simple ‘lists’. It is possible to regard these ‘lists’ as
one aspect of the emergence of systems of higher education that implied
some ordering of the institutions these systems comprised.



In this first phase, therefore, the development of rankings was in response
to external trends – the more widespread use of polling data more gen-
erally, and in the particular context of higher education the evolution of
more transparent policy instruments. But the initiative remained firmly with
journalists; neither national policy-makers nor institutional leaders actively
encouraged the creation of league tables.

Second-generation ranking: transparency, accountability and management information systems

In the second phase the first of these trends, the capture of tacit knowledge
and the codification of silent hierarchies that already existed, became less
important. In that sense rankings ceased to be – however remotely and
perversely – collegial in their sources. ‘Opinion polls’ among deans and
heads of department, or among secondary school teachers, disappeared or
became only minor elements in the more elaborate rankings that began to
appear in the 1990s. The main reason for this change was the availability of
other data that apparently were more ‘objective’. These data still included
the results of peer-review processes. But these were now mediated through
formal systems of assessment of both teaching and, especially, research.

However, the second trend, the creation of higher education systems (including the growth of accountability systems) and the growth of management information systems within universities, became more significant. This systematization and sophistication, of course, were themselves responses to the larger phenomenon of massification, not only in the scale of higher education but also in its scope and complexity. They were also part of an even wider phenomenon – the revolution in information and communication technologies (in both the scale of data collection and the ease of data manipulation).

These trends provided the raw material for the construction of new kinds of
ranking. Firstly, explicit policy frameworks were established covering most or
all types of higher education institutions, not simply universities. Because of
their scale and breadth, these frameworks could no longer rely on informal
knowledge networks but required ever more elaborate and transparent systems.
Secondly, there was an inexorable shift from transparency to accountability.
Even without the elaborate assessment regimes that accompanied the
growth of the so-called ‘audit society’, transparent policy instruments made
it possible to make comparisons and create lists. Finally, the size, complexity
and heterogeneity of higher education institutions required the development of sophisticated management information systems that generated increasing
amounts of data (which, of course, was necessary to satisfy accountability
requirements, but was even more necessary for institutional management).

As a result rankings took on a new character. They no longer had to be compiled from scratch. Media rankings relied less on any form of active jour-
nalism, explicitly directed towards collecting views and data about relative
performance, and more on simply generating lists or league tables, derived
from national or institutional data that, crucially, had been collected for
other purposes. Variety was produced by choosing which of this data to use
or by weighting it in different ways. Rankings also proliferated. Firstly, more
accurate international comparisons became possible, as international sta-
tistics (if not systems) converged. Secondly, advocacy and activist groups
– for example, trade unions or subject associations – produced rankings to
strengthen their arguments. Policy advocacy now had to be ‘evidence-based’.
Thirdly, universities constructed benchmark groups of comparator institu-
tions as planning tools to measure their performance. As a result rankings
were internalized in institutional planning systems.

Third-generation ranking: measuring and strengthening market performance

In the third and contemporary phase, all these trends – transparent policy
instruments, accountability regimes and management information systems –
have intensified. As a result the source material for constructing rankings
has become even more wide-ranging. But these have been supplemented by
two phenomena that are closely linked:

• The first is the shift towards the market. Tuition fees have been introduced
and increased, usually justified by the alleged need to share costs
between taxpayers and users of higher education. Competition between
institutions has been positively encouraged, as has greater differentiation
between missions. Higher fees have led to pressure on institutions to
publish more detailed ‘public information’ (to assist student choice), while
greater competition has encouraged them to devote more attention and
resources to marketing (both the advertising of academic ‘products’ and
more active ‘brand’ management). However, it is important to recognize
that in most countries this shift towards the market has been politically
mandated, so there has been no decline in assessment and accountability
systems that grew up in the context of the coordination of public systems
of higher education.



• The second phenomenon is the intense mediatization of policy-
making, institutional management and, indeed, personal identity and
self-realization. This can be observed in a range of different contexts:
‘instant’ 24-7 politics, celebrity culture and consumer relationships. Some
attribute this phenomenon to the increasing power of the mass media as a result of the ICT revolution, in which traditional forms of publication have been supplemented by so-called 'social networking'; others prefer more structural, even philosophical, explanations such as the 'abolition' of time and space and the emergence of the 'extended present'. But, whatever explanation is preferred, higher education is clearly an active arena of
mediatization. ‘Social networking’ has become a key tool of marketing for
many institutions as Facebook pages and Twitter sites have proliferated.
For publishers high-profile rankings have become profitable products,
just as transparency and accountability tools (and, in particular, research
assessment) have increased the profitability of scientific publishing.

As a result of both phenomena, marketization and mediatization, rankings now occupy a central role in the consciousness of higher education, as policy targets and as talking points. Also, a paradoxical effect of marketization is that
in some cases rankings have been ‘nationalized’, in the sense that universi-
ties are now required to publish comparative data to help students choose
between institutions and courses.

Critiques of rankings
The rise of rankings has not gone unchallenged. Earlier criticisms that rank-
ings were ‘intrusive’ have become less persuasive. The scale and cost of mass
systems, and their engagement in processes of social change and economic
development, have made it impossible any longer to regard higher educa-
tion as a ‘private’ domain ruled by scientists, scholars and teachers. At the
same time the pressure on all kinds of organization, private as well as public,
to demonstrate their accountability to those who buy their products or use
their services has increased, despite doubts about the long-term impact of
this ‘audit culture’ on the independence of civil society institutions from both
the state and the market.

However, other critiques have become more persuasive as the influence of rankings has increased. These can be grouped under two headings: technical
and methodological critiques and more fundamental critiques that chal-
lenge the concepts and principles underpinning rankings.



Methodology

In the first group a number of shortcomings have been identified. These include reliance on readily available rather than relevant data, use of input
data rather than data about outcomes, a lack of transparency (particularly
when individual indicators are aggregated), and frequent changes in
methodology that make consistent and sustained judgments difficult if
not impossible. To remedy some of these shortcomings an International
Rankings Expert Group (IREG) developed a number of criteria – the so-called
‘Berlin Principles’ – in 2006, and also established an Observatory on
Academic Ranking and Excellence. The role of the Observatory is to audit (on
a voluntary basis) individual rankings (IREG, 2010).

The criteria established by the Berlin Principles attempt to address many of the methodological objections that have been made to rankings. They include
the need to make the purpose of rankings explicit; whenever possible, to use
outcome measures rather than input data; to ensure that the methodology is
fully transparent (particularly with regard to the weighting of composite indi-
cators); to rely only on data that is ‘authorized, auditable and verifiable’; and,
as far as possible, to maintain a consistent methodology. However, in practice,
the compilers of rankings are forced to make a number of compromises:

• Firstly, most of the data on which rankings rely are not collected for the
purposes of compiling rankings. Often they are linked to the distribution of
resources. In other words their essence is not the crude ‘scores’ – the only
element used in rankings – but the complex algorithms used to allocate
resources. The same thing happens at the institutional level. For example, in
the case of research assessment in the United Kingdom, some universities
seek to maximize reputation and others income.

• Secondly, input data are easier to obtain and are probably more reliable
than outcome measures, some of which are under at least the partial
control of the institutions that are being ranked. Inputs and outcomes are
clearly linked. More generously funded universities are not only able to
offer their students a high-quality experience, they also tend to be the most
privileged in terms of their historical prestige and student profiles. As a
result employment rates among their graduates are likely to be superior.

• Thirdly, the most relevant data are rarely the most auditable and
verifiable. Journalists, in particular, are used to ‘going with what we’ve
got’. The shift from active journalism – for example, by organizing opinion
polls among academic leaders – to passive reporting of published data has made it more difficult to obtain relevant data. The most systematic
bias, of course, is against teaching, the primary activity of (nearly) all
universities, and in favour of research – the result of the dearth of
reliable (and, in particular, comparative) data about the former and the
wealth of such data about the latter.

• Finally, although transparency about the weighting of various elements within composite indicators certainly can help to expose the values attached to them by the compilers of rankings, it does not remove the element of subjectivity. So different weightings produce different results because judgments are being made, tacitly or deliberately, about the relative worth of different activities (the sketch below illustrates how easily a change of weights can reverse an ordering). Paradoxically, this was easier to justify when higher education systems were much smaller and institutional missions less divergent – although in practice there had been little demand for rankings – than in mass systems in which institutions have multiple missions, and in which also the demand for rankings is greatly increased (not least because of this diversity).
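To make the point concrete, here is a minimal sketch, using entirely hypothetical institutions, indicator scores and weights (none of them drawn from any actual ranking), of how a change of weights alone can reverse an ordering:

    # Hypothetical normalized scores (0-100) on two indicators; purely illustrative.
    scores = {
        "University A": {"research": 90, "teaching": 60},
        "University B": {"research": 70, "teaching": 85},
    }

    def rank(weights):
        """Order institutions by their weighted composite score."""
        composite = {
            name: sum(weights[indicator] * value for indicator, value in values.items())
            for name, values in scores.items()
        }
        return sorted(composite.items(), key=lambda item: item[1], reverse=True)

    # A research-heavy weighting puts University A first...
    print(rank({"research": 0.7, "teaching": 0.3}))
    # ...while a teaching-heavy weighting puts University B first.
    print(rank({"research": 0.3, "teaching": 0.7}))

Nothing in the underlying data changes between the two calls; only the judgment about the relative worth of the two activities does.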

Principles

This paradox illustrates one of the more fundamental objections to rankings – the greater the appetite for rankings the more difficult they are to
undertake. But three other, equally fundamental, questions must also be
asked. The first is whether university performance should be measured
(but also ranked) at all. The second is whether it can be measured suc-
cessfully without perverse feedback loops being created. The third is what
aspects of performance should be measured.

The first question is generally dismissed on the grounds that higher educa-
tion cannot escape some measure of accountability, and for such account-
ability to be effective there must be appropriate tools. However, a number
of issues need to be addressed before this first question can be dismissed out of hand. One is the precise degree of accountability. Open societies
depend on vigorous and independent intermediary institutions, such as
universities. Democratic states, therefore, must set limits to the extent of
accountability, even when it is carried out to ensure that the ‘will of the
people’ is respected. In other words, surveillance can be excessive – espe-
cially so in the case of institutions such as universities that embody values
of critical enquiry and open-ended research. A second issue is the nature
of accountability. While it may be necessary to measure performance, it
may nevertheless be undesirable to rank it according to an absolute (and reductionist?) scale. In some important respects rankings may reduce
accountability because fitness-for-purpose often has to be sacrificed to
one-size-fits-all.

The second question is harder to dismiss. It is a truism that no system of measurement can be effects-free. However carefully such systems are
designed, and however ‘neutral’ their original intentions, they affect behav-
iour. Even a simple and essential requirement such as setting the timeframe
within which performance is to be measured changes the ‘rules of the game’.
For example, a five-year timeframe for research assessment, especially
when it is combined with a minimum level of productivity, will encourage
the production of short articles at the expense of longer books. In the wider
context of rankings, especially when these institutional targets are set in
terms of improved rankings, the temptation to indulge in ‘game playing’ can
be considerable. Outright fraud has not been unknown – for example, over-
stated completion rates or unemployed graduates who ‘disappear’. But in a
less extreme register there can be other examples. For example, universities
may be tempted to discourage the recruitment of students with lower entry
qualifications if ‘grade points’, or other proxies for academic preparation,
contribute to higher rankings. Examples can approach the absurd. One Ivy
League university, faced with a league table of the proportion of alumni
who ‘give’ to their universities, simply assumed that all graduates who had
not ‘given’ in the previous five years had died. Of course, there was no cor-
responding league table of mortality among graduates.

The third question is the most difficult of all. To produce valid comparisons,
which excite newspaper readers or inform higher education applicants,
institutions must be ranked according to common criteria. Rankings cannot be produced from a mass of incommensurable data. But to be accurate (and fair) rankings must take into account the different missions of institutions, especially within mass systems enrolling up to and in excess of half
the relevant age population. The criteria based on the Berlin Principles
include the need to take account of these different missions and goals,
acknowledgement of ‘linguistic, cultural, economic and historical contexts’
and recognition that quality is multi-dimensional and multi-perspective.

This trade-off between accessibility and accuracy cannot be fully resolved. The only workable solution is to assume that some missions and goals must take precedence over others. For example, the major global rankings such as the Shanghai and Times Higher Education (THE) rankings place great weight on measures of research performance (with an inevitable bias towards blue-skies fundamental research at the expense of applied or translational research) and on international activity, such as the number of international staff and students. They place less weight on both teaching and a range of 'social' indicators (such as access and engagement). In other words these rankings adopt, in an uncritical manner, and also reinforce, a particular notion of global 'success'. Some would argue
that this notion of global academic ‘success’ has been developed within
the wider context of a worldview that is essentially neo-liberal in its ideo-
logical preferences and hegemonic in its geopolitical framework.

The future of rankings


The popularity of rankings is such that these critiques have so far had
limited impact. Instead, the higher education policy (and research) com-
munity has adopted an ambivalent attitude to rankings. It has generally
accepted that rankings are here to stay, especially if national systems
continue to evolve into quasi-markets under the influence of globaliza-
tion. The challenge, therefore, is to improve rankings. Many of the con-
tributors to the International Forum on Ranking and Accountability in
Higher Education: Uses and Misuses, organized by UNESCO in cooperation
with the OECD’s Institutional Management in Higher Education (IMHE)
programme and the World Bank (Paris, May 2011), adopted this broad,
middle-ground position. So far this has proved to be the most popular
and credible attempt to fashion ‘fourth-generation ranking’. The European
University Association (EUA) has also produced a sober and balanced
report on rankings (Rauhvargers, 2011).

Meanwhile the ‘rankings industry’ races ahead, fuelled by the commer-


cial ambitions of publishers, the ‘student choice’ policies adopted by
many governments and the rivalry of aspirational university leaders. The
methodologies used in the major global rankings have been improved,
in the sense that they have become more transparent and more accurate
data sources have been tapped. But, for reasons that have already been
explored, there are limits to how much rankings can be ‘improved’ if they
are to retain their cutting edge, and their ability to communicate simple
‘league table’ messages. At the heart of rankings must remain the essential
belief in ‘winners’ and ‘losers’, the conviction that higher education is a
competitive and positional good. That cannot change.



Those who are reluctant to accept this viewpoint and hold to the idea of
higher education as an absolute good (and a good that is public and social
as well as private and individual) need to think beyond simply ‘improving’
rankings. For them ‘fourth-generation ranking’ must abandon its essential
purpose, to provide rankings that, however many elements they may
contain, can be reduced to a single figure. But as rankings cannot simply be
abolished, they can only be superseded by superior measures of university
performance that combine in equal measure two different objectives:
excellence and difference. One of the most interesting experiments in
developing a credible alternative to one-dimensional rankings is the U-Map
project – the European classification of universities (van Vught, 2009; van
Vught et al., 2010). It is a map, not a ranking. Performance is assessed over
six domains: teaching and learning, student profile, research performance,
knowledge exchange, international orientation and regional engagement.
The result is a profile, not a position in a league table.
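As a purely illustrative sketch (it does not reproduce U-Map's actual bands, data or methodology), the contrast between a profile and a single-figure ranking can be put as follows:

    # Illustrative only: the bands and the crude reduction below are invented,
    # not taken from U-Map.
    profile = {
        "teaching and learning": "high",
        "student profile": "medium",
        "research performance": "high",
        "knowledge exchange": "medium",
        "international orientation": "low",
        "regional engagement": "high",
    }

    # A ranking collapses every dimension into one comparable number...
    band_value = {"low": 1, "medium": 2, "high": 3}
    single_score = sum(band_value[band] for band in profile.values())
    print(single_score)  # one figure, ready for a league table

    # ...whereas a profile keeps each dimension visible and incommensurable.
    for dimension, band in profile.items():
        print(f"{dimension}: {band}")

The information discarded by the first step is precisely what the profile-based representation is designed to preserve.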

The U-Map approach is not without its difficulties. On the one hand, it
adopts some of the same success criteria as the Shanghai and THE rank-
ings: research performance and international orientation. Within the cur-
rently dominant paradigm of the knowledge society and the free-market
economy there is perhaps no alternative, although the internal contradic-
tions of that paradigm (not simply in terms of equity but also of efficiency)
and the challenges to it posed by new social movements may soon create
a space in which alternatives can be conceived. On the other hand, U-Map
does not produce ‘lists’ that can be translated into newspaper headlines,
‘best buy’ guides or political initiatives to boost the number of ‘world
leading’ universities. Nevertheless, U-Map offers a more sophisticated
and nuanced approach to assessing success in modern higher education
systems composed of institutions with multiple missions and of different
types of institution.

The alternative strategy available to critics of rankings is to encourage the proliferation of rankings with different methodologies, different weight-
ings and different orientations. Although no single ranking can ever be
satisfactory, a plurality of rankings may begin to capture the diversity of
twenty-first-century higher education. By striving for serial exactitude
across an ever-wider range of domains, fuzziness may be achieved. This
would be a bold gamble. But it might just be the only available strategy if
more sophisticated tools for describing, and assessing the success of, indi-
vidual universities such as U-Map fail to establish themselves as 'fourth-generation ranking'.



References

Altbach, P. 2010. 'The State of Ranking', Inside Higher Ed: www.insidehighered.com/views/2010/11/11/altbach (Accessed 1 November 2012.)

Brown, R. 2006. ‘League tables – do we have to live with them?’, Perspectives, 10.2: 33–38.

Dill, D. and Soo, M. 2005. ‘Academic quality, league tables and public policy: a cross-national
analysis of university rankings systems’, Higher Education, 49, 495–533.

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. and Trow, M. 1994. The New
Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies.
London: Sage.

Hazelkorn, E. 2011. Rankings and the Reshaping of Higher Education: The Battle for World-Class
Excellence. London: Palgrave Macmillan.

IREG (International Rankings Expert Group) – Observatory on Academic Ranking and Excellence
(2010), IREG-Audit: Purpose, Criteria and Procedure.
Brussels: IREG: www.ireg-observatory.org (Accessed 1 November 2012.)

King, R. and Locke, W. 2008. Counting What is Measured or Measuring What is Counted? League
Tables and their Impact on Higher Education Institutions in England (Issues Paper):
Bristol: Higher Education Funding Council for England.

Lundby, K. (ed.). 2009. Mediatization: Concept, Change and Consequences. New York: Peter Lang.

Rauhvargers, A. 2011. Global University Rankings and their Impact. Brussels: European University
Association.

Sadlak, J. and Liu, N.C. 2007. Introducing the topic: expectations and realities of world-class
university status and rankings practices, J. Sadlak and N.C. Liu (eds) The World-Class
University and Rankings: Aiming Beyond Status. Bucharest: Cluj University Press.

Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses, Higher Education Management and Policy, 19.2: 31–68.

Scott, P. 1995. The Meanings of Mass Higher Education. Buckingham: Open University Press.

Shanghai Ranking Consultancy. 2011: www.shanghairankings.com/resources.html (Accessed 1 November 2012.)

van Vught, F. 2009. Mapping the Higher Education Landscape: Towards a European Classification of
Higher Education. Dordrecht, Netherlands: Springer.

van Vught, F., Kaiser, F., File, J., Gaethgens, C., Peter, R. and Westerheijden, D. 2010. U-Map: the
European Classification of Higher Education Institutions. Enschede: Centre for Higher
Education Policy Studies (University of Twente): www.u-map.eu
(Accessed 1 November 2012.)



Chapter 7

Rankings, new accountability tools and quality assurance

Judith Eaton
Quality assurance and accreditation in higher education have experienced a
major expansion over the past twenty years.1 This growth has accompanied the
expansion or ‘massification’ of higher education as more and more countries
around the world focus on access to colleges and universities as vital to the
future success of societies.

1 There are many definitions of these terms. For the purposes here, they refer to external quality review of higher education that typically involves self- and peer review culminating in judgments about both threshold quality and needed improvement.

Quality assurance and accreditation are about establishing, maintaining and enhancing the quality of the academic work of higher education institutions.
They are the oldest forms of quality review of colleges and universities, address-
ing, for example, teaching and learning, curriculum, faculty performance, advis-
ing and counselling, and research. Typically, national, regional or international
bodies that are government-based or non-governmental carry out this work.

Until recently, quality assurance efforts around the world shared a small number
of key characteristics. They focused on either entire colleges or universities or on
specific programmes such as engineering or law or medicine. Quality assurance
is peer-based, with academics reviewing academics, relying on self-evaluation. It
is standards-based, with academics setting expectations of institutional or pro-
gramme performance. It is, in general, a formative evaluation, concentrating on
how to strengthen academic performance rather than making an up-or-down
judgment. Standards are primarily qualitative, calling for professional judgment.
It is also evidence-based, with institutions or programmes expected to provide
information about the extent to which standards are met.

During the past ten years, however, the demands on higher education to provide
evidence of quality have gone beyond these familiar characteristics. Increasingly,
the focus has been on direct accountability to the public, with diminishing
investment in relying exclusively on an academically driven, peer-based system.
Societies are increasingly reluctant to rely only on the professionals in the field
to make judgments about quality. This is accompanied by diminished interest in
quality improvement.

New accountability tools


Elected officials in governments around the world emphasize public accountability by calling for better evidence about what students learn, greater transparency, and more information about the return on their investment
in colleges and universities. As larger and larger numbers of students enrol
in colleges and universities and as substantial tuition becomes a fact of life
for more and more of these students, higher education is viewed as a con-
sumer good, with a degree or credential or a job as essential outcomes for
all students.

The emphasis on accountability is driven by several factors, all of which reinforce a pragmatic, utilitarian approach to higher education. They include an
international economy that places emphasis on jobs requiring at least some
post-secondary experience, the continued growth and dominance of technol-
ogy, international competitiveness, national economies with limited funds for
higher education and, in some countries, rising skepticism about the effective-
ness of all social institutions, including higher education. For some countries,
the growth of the for-profit higher education sector is a factor as well.

One major result of the emphasis on public accountability has been the development of new tools to judge quality, which are external to the
academy. The public no longer needs to rely solely on the professionals and
practices within higher education to judge its effectiveness.

Rankings are one of these new accountability tools – the most prominent
and the most controversial. ‘Rankings’ refer to a hierarchical ordering and
comparing of the performance, effectiveness and characteristics of higher
education institutions based on specifically chosen indicators. The indica-
tors may differ significantly and can include, for example, research, funding,
endowment and student characteristics. They are the means by which, for
instance, US News and World Report or the Shanghai Rankings determine
the rank at which institutions find themselves. More than fifty countries now
use rankings, accompanied by ten international and some regional rank-
ings. The number of ranking systems is likely to increase, expanding to larger
numbers of countries and regions and, according to some experts, becoming
a standard worldwide accountability tool. Ranking systems may be devel-
oped by government, by private bodies or the media.

Other external accountability tools have emerged as well, including online interactive data sets, qualifications frameworks, regional quality standards
and international quality standards. Online, interactive data sets allow
students or the public to align and compare various features of institutions
such as graduation rates or retention. The European classification system,
U-Map, is one example, featuring a multi-dimensional approach to such
comparisons based on specific indicators such as student characteristics, degree levels and research expenditures. The feasibility project for U-Map
has just been completed. Another such system is the College Navigator
provided by the US  Department of Education. Prospective students can
compare colleges and universities to learn more about the availability of
federal student aid, enrolments, tuition, majors and admissions practices.
A key feature of these systems is the opportunity for students or the pub-
lic to develop an individualized data set to be used in making judgments
about colleges and universities.

Qualifications frameworks serve to articulate expectations of what students are to learn, arrayed by levels of education. They can be developed for all
education, from primary grades to graduate education, or for specific levels
such as the baccalaureate. They can also be used to set common expecta-
tions across institutions for degrees, for example, what students who earn
the baccalaureate are expected to know and do. Qualifications frameworks
may function at a national, regional or international level. Approximately
seventy countries have established such frameworks and a number of
regional frameworks have been developed as well. Europe has established
both a regional framework and national frameworks. Australia, Hong Kong,
New Zealand and South Africa, as well as many European nations, have
country-based frameworks.

International standards for student learning are the newest of the exter-
nal accountability tools. The Organisation for Economic Co-operation and
Development has been developing international indicators of student
achievement, the Assessment of Higher Education Learning Outcomes,
which may be a prototype for perhaps standardizing international judgment
of what students learn and can do.

The challenge of the new tools


Rankings and the other accountability tools pose a significant challenge to
traditional quality assurance and accreditation. Developed outside higher
education, they are often viewed as a disruptive technology. They do not
incorporate any of the fundamental features of the traditional approach:
self-evaluation, peer review and predominantly qualitative judgment do not
dominate. Given that their primary purpose is public accountability, they
tend not to focus on quality improvement. They build on the consumerist,
utilitarian view of higher education described above.

As these tools shift judgment of academic quality away from higher educa-
tion towards external actors, they serve purposes different from traditional
quality assurance and accreditation. Where the tools are developed and
implemented by government, they strengthen government oversight of
higher education. They are also sometimes not consistent with efforts to
develop an institutional ‘culture of quality’ that has been part of the interna-
tional discussion of quality assurance for some time.

The United States and the new tools


In the United States, the development of new tools such as rankings, quali-
fications frameworks or interactive data sets has rested with the business
community or government – not, as might be expected, with the eighty-
five non-governmental institutional and programmatic accrediting organi-
zations or higher education generally. US accreditation, focusing on both
quality assurance and quality improvement, has a strong investment in the
traditional and well-tested practices of peer review, qualitative and forma-
tive evaluation, and the exercise of professional judgment in determining
academic quality. To date, this community has not chosen, for the most part,
to make a major investment in the new tools, consistent with its commitment
to professionals reviewing other professionals or academics reviewing other
academics. They are free to make this choice, given the non-governmental
status of accrediting organizations and the longstanding US tradition of gov-
ernance of higher education as vested in boards of trustees of individual
colleges and universities, not federal or state government.

While these tools have not been developed within accreditation and higher
education, there is, nonetheless, considerable discussion and debate sur-
rounding them. The Council for Higher Education Accreditation (CHEA), a
non-governmental, national institutional membership organization of 3,000
degree-granting colleges and universities, has worked over the past several
years to bring US colleagues together with academics from a range of other
countries to explore the new accountability tools and the work of quality
assurance in other countries. Accrediting organizations themselves have
been part of many international discussions with regard to, for example,
rankings and qualifications frameworks.

That these tools are emerging from the business or governmental sector is
not, however, discouraging to the public that is using them with increasing
frequency to make judgments about the quality of higher education and
making key decisions such as what college or university to attend. Rankings
in particular are popular and available through a number of commercial
enterprises including US News and World Report, Barron’s, and The Princeton
Review.
Federal and state governments are part of the development of the new
tools, along with private foundations. At present, this interest focuses on quali-
fications frameworks such as the Lumina Foundation’s Degree Qualifications
Profile and interactive datasets that provide for user-driven comparability such
as the College Navigator on the US Department of Education website men-
tioned above and the Education Trust’s College Results Online.

The new tools and traditional quality assurance
What should professionals engaged in quality assurance and accreditation
do about the accountability tools? Should they ignore the new tools? Make
some accommodation in response to the calls for greater accountability and
the new tools and, if so, how? Attempt to dominate the new tools? Allow the
new tools to replace traditional quality assurance?

Some within quality assurance, accreditation and higher education have chosen to ignore the new accountability tools, seeing them as inadequate
measures of academic quality and raising serious questions about the
methodologies employed in, for example, the various rankings that are
available. These people continue to advocate for traditional quality assurance
as the preferred means to judge higher education. They see preservation
of quality improvement and formative evaluation as essential. They are
eager to maintain the traditional academic leadership role of colleges and
universities, in contrast to a stronger role for external actors.

Others have opted for more of an adaptive role for accreditation and qual-
ity assurance. They acknowledge that the calls for public accountability –
evidence of student learning and greater transparency both for institutions
and quality assurance – require a robust response in today’s world. A key
question here is whether to accommodate the new tools or to address public
accountability through traditional processes augmented by, for example,
greater transparency and additional attention to student learning. At the
same time, a strong commitment to the desirable elements of the traditional
quality assurance model calling for peer review and professional judgment
needs to be maintained.

Yet others working in higher education have engaged with the tools, seeing
them as useful additions to traditional practice. For example, in the United
States, the Lumina Foundation’s Degree Qualifications Profile, mentioned
above, is a framework to develop and judge expectations of what students
are learning at the baccalaureate, masters and doctoral levels. The Profile
has attracted some accrediting organizations, associations and institutions
that are piloting its application to address student learning and account-
ability. Europe, as indicated above, has developed both regional and national
frameworks, as have a number of other countries. These frameworks are
addressed either as part of ongoing quality assurance practice or in tandem
with traditional efforts. Governments often prescribe them.

Those interested in some alignment between traditional quality assurance and the new accountability tools also talk about carefully distinguishing the
tasks that are to be accomplished through these different practices and tools.
The European Association for Quality Assurance in Higher Education (ENQA)
has initiated a discussion about differentiating tools based on specific pur-
poses that tools are intended to serve. The same approach may not be effec-
tive to judge quality, assure accountability and encourage improvement. We
may need separate tools for these various purposes. Perhaps, for example,
traditional quality assurance is best when focused on improvement and the
new tools are more appropriate when addressing accountability.

For some critics, especially outside higher education, there is interest in dismantling traditional quality assurance entirely and replacing these tradi-
tional practices with the new accountability tools, especially if government
backs the tools. In the United States, one hears that US News and World
Report is the de facto judge of the quality of higher education, not accredita-
tion. There is considerable emphasis on government-driven requirements
for additional transparency and evidence of student learning.

Summary
The likelihood of these external accountability tools, especially rankings, dis-
appearing is low. There is much that the public now wants to routinely know
about higher education compared with the past. Society now believes that it
should have significant information about what colleges and universities are
doing; how teaching, learning and research are carried out; what we mean
when we use the term ‘quality’ and, above all, how well we are performing.

In addition, these tools are quickly becoming essential to the increasing internationalization of higher education. International rankings allow for
judgments of institutions across borders. Qualifications frameworks can
be used to align expectations with regard to degrees for entire geographic
regions. International data sets can provide comparisons not only within
countries, but also across countries. The work on international student
learning outcomes indicators is intended to be applicable around the globe.
For some, higher education is the last unregulated global industry – and
the new accountability tools provide what is considered to be the needed
regulatory framework.

Whatever the response to the new accountability tools and their implications
for traditional quality assurance and accreditation, future conversations will
centre on the role of traditional practices and these tools:

• What counts as appropriate accountability – responsiveness to the public about student learning and institutional performance?

• What are various ways in which traditional quality assurance and new
accountability tools may be aligned to enhance service to students and
society?

• What is needed to sustain the value of formative evaluation through peer review and its emphasis on quality improvement?

• How do we preserve the unique and highly successful academic leadership role for individual colleges and universities?

The new accountability tools have already had a profound impact on higher
education. Their impact will continue to be felt as these conversations take
place and lead to additional change within colleges and universities.

References

About the U-Map project. n.d.: www.u-map.eu/about.doc/ (Accessed 1 November 2012.)

Altbach, P.G., Reisberg, L. and Rumbley, L.E. 2009. Trends in Global Higher Education: Tracking an
Academic Revolution. A Report Prepared for the UNESCO 2009 World Conference on
Higher Education. Chestnut Hill, MA: Boston College Center for International Higher
Education.

ENQA (European Association for Quality Assurance in Higher Education). 2011, March 4.
ENQA position paper on transparency tools.

Hazelkorn, E. 2011, March 13. Questions abound as the college-rankings race goes global.
The Chronicle of Higher Education: http://chronicle.com/article/Questions-Abound-as-
the/12669/ (Accessed 1 November 2012.)

Lewis, R. 2009. Qualifications Frameworks. Presentation to the Council for Higher Education
Accreditation 2009 International Seminar: www.chea.org/Research/index.asp
(Accessed 1 November 2012.)

Lumina Foundation. n.d. The degree qualifications profile: www.luminafoundation.org/tag/dqp (Accessed 1 November 2012.)

OECD (Organisation for Economic Co-operation and Development). n.d. Testing Student and
University Performance Globally: OECD’s AHELO: www.oecd.org

The Princeton Review. n.d. College Rankings: www.princetonreview.com/college-rankings.aspx (Accessed 1 November 2012.)

The Education Trust. n.d. College Results Online: www.edtrust.org

U-Multirank: A multi-dimensional global university ranking: www.u-multirank.eu

United States Department of Education. National Center for Education Statistics. College
navigator: http://nces.ed.gov.collegenavigator/ (Accessed 1 November 2012.)

US News and World Report: www.usnews.com/rankings

Part 3

International
Perspectives
Chapter 8

An African perspective
on rankings in
higher education
Peter A. Okebukola
Introduction
The African higher education system has grown significantly over the past
twenty years in response to demand for admission spaces by secondary
school leavers. From about 700 universities, polytechnics, colleges of educa-
tion and other post-secondary institutions classified within the higher edu-
cation group in the early 1990s, the system now has well over 2,300 such
institutions. The growth of the system with respect to enrolment is judged to
be one of the fastest in the world (UIS, 2010).

This impressive performance on access has failed to be matched by improvement in quality (Materu, 2007; Okebukola and Shabani, 2007; World Bank,
2008). As a way of separating the good from the bad, stakeholders, especially potential students, employers and parents, have turned to the ranking of these institutions to provide a basis for selection. The first ports of call are typically global ranking league tables such as Webometrics, the Times Higher Education (THE) World University Rankings and the Academic Ranking of World Universities (ARWU), commonly called the Shanghai Ranking (Salmi, 2011). These rankings
are regularly updated and readily available in the public domain, hence indi-
viduals or groups desiring relative standing of their national institutions find
them to be an easily accessible resource. Unfortunately, these global ranking
schemes provide little help for the locals, especially potential undergraduates,
since over 90 per cent of the higher education institutions in Africa are not
captured in the top leagues (Salmi and Soroyan, 2007). A sprinkling of universi-
ties in Africa shows up in the top 500 of all global league tables. For instance,
in the 2010 ARWU only three universities, all from South Africa, were listed in
the world’s top 500 and only two in the 2011–2012 THE best 400. Even when
regional and national tables are extracted from the global data set, many insti-
tutions at the national level are not ranked. This presents a need for national
and African regional schemes to fill this void. This chapter presents an example
of a national (ranking) and a regional (rating) system developing in response
to this need.

Since the 1960s, ranking of universities in Africa has been conjectural rather
than empirical. Two indicators have typically featured. These are the age of
the institution and employers’ perceptions of the quality of graduates. As
reported by Taiwo (1981), in the minds of Kenyans, the University of Nairobi
(established 1956) should be better in quality of training than Kenyatta
University (established in 1965). The same order of ranking emerges when
employers rank these universities on the assumption that graduates of
University of Nairobi should be better than graduates of other universities
in Kenya. Nairobi graduates may have been tried and tested and adjudged
good in quality, which may colour and sustain their perception over time. In
Nigeria the University of Ibadan, established in 1948, is generally perceived
to be better than other universities established after it. Regionally, there has
been a pervasive perception that the ‘first generation’, post-colonial univer-
sities such as Makerere (1922), Ibadan (1948) and Legon (1948) are better than
those established after them. While there are complex variables implicated
in the perceived high ranking of these institutions, such as the quality of
facilities and staff, strict compliance with standards to match top-rate uni-
versities in Europe, quality of leadership, as well as quality and quantity of
students, the rankings were not based on verifiable data.

From the early 2000s, conjectural ranking began to yield to the empirical. Global
rankings provided a template for more transparent and more objective data
collection, analysis and reporting. They also provided a menu of indicators that
can be adapted or adopted for local context. The first Times Higher Education
ranking in 2004, which showed that the big names of the African higher education system, according to the conjectural ranking, were not listed in the Times league tables,
jolted stakeholders. Governments, university managers, students and parents
reacted angrily. The call to improve quality and hence global ranking was thick
in the air. This call has persisted and has been a major driver for improving
the delivery of higher education in the region. The next section of this chapter
presents a national example of ranking of higher education institutions, while
the section that follows this describes a regional rating scheme. The concluding
section positions Africa within a global context and suggests ways by which
African universities can achieve better ranking on global league tables.

A national example: the Nigerian experience
In September 2001, Nigeria, through the National Universities Commission
(NUC), initiated steps towards a national ranking of its universities. There were
three major drivers for this effort. The first was a desire among the popula-
tion to know more about the relative standing (performance) of the universities
and their programmes in order to guide career choices by prospective students.
Second, the government wanted a transparent and objective mechanism for
identifying centres of excellence that could benefit from preferential funding.
Third, the NUC, whose mandate includes the orderly development of universi-
ties, needed a basis for advising government on programmes and universities
that should be strengthened to address projected human resource needs of the
country. Coincidentally, consultations on a World Bank facility for improving the
Nigerian university system were about to be concluded, and the league table of
universities and programmes was to be a key factor in implementing the pro-
ject. Taken together, the atmosphere was ripe for a university-ranking scheme.
The national programme accreditation exercise of 2000 provided data derived
through an objective and transparent methodology for drawing up the league
tables. Since 2001, annual university rankings by programmes and institutions
have been conducted. By 2004 and 2005, additional indicators were included in
the data to align the national ranking with three global ranking schemes: ARWU,
THE, and Webometrics (Okebukola, 2006; 2010).

The ranking indicators were as follows (a brief computational sketch of several of them appears after the list):

1. Percentage of academic programmes of the university with full accreditation status: This indicator measures the overall academic standing of the uni-
versity. It is computed by dividing the number of academic programmes
of the university with full accreditation status by the total number of
programmes offered by the university and expressing this as a percent-
age. It will be recalled that the first two ranking exercises of Nigerian
universities used only programme accreditation data.

2. Compliance with carrying capacity (measured by the degree of deviation from carrying capacity): This indicator measures how well enrolment of the
university matches available human and material resources. Universities
that over-enrol (exceed carrying capacity) are penalized on this measure.
It is computed as:

(Deviation from carrying capacity ÷ Carrying capacity) × 100%

3. Proportion of the academic staff of the university at professorial level: This indicator is an assessment of the quality of academic staff in the univer-
sity. The full professorial category is selected as it constitutes the zenith
of academic staff quality in a university. It is calculated by dividing the
number of full professors in the university by the total number of aca-
demic staff and expressing this as a percentage.

4. Foreign content (staff): proportion of the academic staff of the university who
are non-Nigerians: This indicator is designed to measure how well the
university is able to attract expatriate staff. The indicator is important
in a globalizing world and within the context of a university being an
institution with a universal framework of operations. It is computed by
dividing the number of non-Nigerian teaching staff by the total number
of academic staff in the university and expressing this as a percentage.

5. Foreign content (students): proportion of the students of the university who are
non-Nigerians: This indicator measures how well the university is able to
attract foreign students. As stated for the staff component, the indicator
is important in a globalizing world and within the context of a university
being a universal institution where students from all over the world are
free to enrol. It is derived as the percentage of the quotient obtained by
dividing the number of non-Nigerian students in the university by the
total number of students.

6. Proportion of staff of the university with outstanding academic achievements: These achievements include Nobel Prize winners, National Merit
Awardees and Fellows of Academies (e.g. Academy of science, Academy
of Letters, Academy of Education and Academy of the Social Sciences).
The indicator gives the standing of the staff of the university when
normed with colleagues at national and international levels. Further, it
measures how well the university is able to stimulate and retain quality
staff. It is computed by dividing the number of staff with such academic
achievements by the total number of academic staff and expressing the
quotient as a percentage.

7. Internally generated revenue: This measures the ability of the university to generate funds from non-governmental/proprietor sources. It is derived as the amount of revenue generated internally divided by the total revenue of the university, multiplied by 100.

8. Research output: A very important measure of the esteem and relevance of a university, this indicator provides information on how well the staff
of the university are able to contribute to knowledge through research.
Only research published through international outlets and indexed in
acclaimed abstracts and indexes is to be counted. For the 2004 ranking,
only books and journal articles that are published in outlets with edito-
rial offices in Australia, Europe, India, Japan, New Zealand and North
America will be accepted. Nigerian publications with proof of abstracting
or indexing in world-renowned abstracting and indexing services will be
accepted. This measure is computed as the total number of such publica-
tions contributed by staff of the university in 2004 up to a maximum
of 100. Proofs of the publications are to be submitted at the time of filing
data for the university.

9. Student completion rate: A measure of the internal efficiency of the uni-
versity, student completion rate in 2004 is calculated by dividing the
number of students of the university who graduated in 2004 (for the
cohort that enrolled in 1999/2000) by the total number of students in the
graduating class in 2004. The quotient is expressed as a percentage.

10. Ph.D. graduate output for the year: This indicator combines the postgradu-
ate standing of a university with the internal efficiency of postgraduate
education. It is computed by dividing the number of PhDs graduated in
2004 by the total number of postgraduate students in that year and mul-
tiplying by 100.

11. Stability of university calendar: It is in an atmosphere of peace and stability that good quality teaching, learning and research can prevail. When the
university calendar is stable, foreign staff can fit the schedule of their
parent university to a target local university and be able to offer services
including contributions to research. In addition, stability guarantees local
staff a long vacation period that can be used to engage in research activi-
ties in a target foreign university. Exciting vacation courses for students
can be run during such periods. This indicator is computed as follows:

((12 – No. of months of closure) ÷ 12) × 100%

12. Student to PC Ratio: In an ICT-enabled higher education world, the student-to-PC ratio becomes important. This indicator is computed as:

(Total no. of computers available to students ÷ Total number of students) × 1,000

Computers available to students in commercial internet cafes are not counted.
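
To make the arithmetic behind these indicators concrete, the short Python sketch below computes several of them for a single hypothetical university. All input figures are invented for illustration, and no aggregation into an overall rank is attempted, since the methodology described above does not specify weights.

# Illustrative computation of several NUC ranking indicators (Python).
# All figures are hypothetical; they do not describe any real university.

def pct(part, whole):
    """Express part/whole as a percentage."""
    return 100.0 * part / whole

u = {
    "accredited_programmes": 38, "total_programmes": 45,
    "enrolment": 21000, "carrying_capacity": 18000,
    "full_professors": 120, "academic_staff": 900,
    "foreign_staff": 27, "foreign_students": 350,
    "internal_revenue": 1.2e9, "total_revenue": 6.0e9,  # currency units
    "months_closed": 1, "student_computers": 2500,
}

indicators = {
    # Indicator 1: share of programmes with full accreditation
    "accreditation (%)": pct(u["accredited_programmes"], u["total_programmes"]),
    # Indicator 2: deviation from carrying capacity (over-enrolment is penalized)
    "carrying-capacity deviation (%)": pct(abs(u["enrolment"] - u["carrying_capacity"]),
                                           u["carrying_capacity"]),
    # Indicator 3: proportion of academic staff at full professorial level
    "professorial staff (%)": pct(u["full_professors"], u["academic_staff"]),
    # Indicators 4 and 5: foreign content of staff and students
    "foreign staff (%)": pct(u["foreign_staff"], u["academic_staff"]),
    "foreign students (%)": pct(u["foreign_students"], u["enrolment"]),
    # Indicator 7: internally generated revenue
    "internal revenue (%)": pct(u["internal_revenue"], u["total_revenue"]),
    # Indicator 11: stability of the university calendar
    "calendar stability (%)": pct(12 - u["months_closed"], 12),
    # Indicator 12: computers available per 1,000 students
    "PCs per 1,000 students": 1000.0 * u["student_computers"] / u["enrolment"],
}

for name, value in indicators.items():
    print(f"{name:32s}{value:8.1f}")

Each indicator is essentially a ratio expressed as a percentage (or, for indicator 12, a rate per 1,000 students), so the raw counts returned by universities feed directly into the league table.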

By 2009, the NUC indicated its intention to revise the indicators for ranking
as shown in Table 1 (Okebukola, 2010).

Table 1. Proposed NUC ranking indicators
Common
1 Academic peer review
2 Employer review
3 Faculty/student ratio
4 Citations per faculty
5 Retention: six-year graduation rate and first-year student retention rate
6 Graduation rate performance: difference between expected and actual graduation rate
7 Proportion of international staff
8 Proportion of international students
9 Web impact factor
10 Alumni holding a post of chief executive officer or equivalent in one of the 500 leading international companies

Unique
1 Percentage of academic programmes of the university with full accreditation status
2 Proportion of academic staff of the university at full professorial level

Regional effort: the African Quality Rating Mechanism
The African Quality Rating Mechanism (AQRM) was instituted to ensure
that the performance of higher education institutions in Africa is comparable against a set of criteria that takes into account the unique context
and challenges of higher education delivery on the continent. Higher
education has been identified as a major area of focus in the African
Union (AU) Plan of Action for the Second Decade of Education for Africa
(2006-2015), with quality as an area essential for revitalization of higher
education in the region. The AU Commission has developed a framework
for Harmonization of Higher Education Programmes in Africa, with the
specific purpose of establishing harmonized higher education systems
across Africa, while strengthening the capacity of higher education insti-
tutions to meet the many tertiary education needs of African countries
(AUC, 2008; Oyewole, 2010). This occurs mainly through innovative forms
of collaboration and ensuring that the quality of higher education is
systematically improved against common, agreed benchmarks of excel-
lence that facilitate the mobility of graduates and of academics across
the continent. In connection with this, the AQRM is also envisioned to
enhance higher education institutions’ effective delivery of programmes
across the continent and to allow for a more objective measure of their
performance.

AQRM uses clusters of eleven indicators (standards) at the institutional and
programme levels summarized in Table  2 and presented in detail in the
Appendix.

Table 2. AQRM standards/clusters of indicators

Standard    No. of rating items
1 Institutional governance and management 9
2 Infrastructure 8
3 Finance 7
4 Teaching and learning 8
5 Research, publications and innovations 8
6 Community/societal engagement 8
7 Programme planning and management 8
8 Curriculum development 7
9 Teaching and learning (in relation to curriculum) 7
10 Assessment 6
11 Programme results 4

The AQRM was piloted in 2010 in institutions that fall within Regional
Economic Communities (RECs). An AQRM survey questionnaire, an eighty-
item instrument with fifteen parts, was used for collecting data. Items
in parts  1 to 13 cover demographic features of institutions and detailed
data on students, staff, facilities and processes. Part 14 is in two parts, the
first of which requires self-rating of faculty/college characteristics such
as management, infrastructure, recruitment, admission and selection,
research output, learning materials, curriculum and assessment. The
second requires that the programmes of the institution be ranked from
first to fifth. Part  15 is the institutional self-rating. The entire institution
is to be rated on a three-point scale covering excellent performance,
satisfactory performance and unsatisfactory performance on the eleven
clusters of standards.

Each of the thirty-four participating institutions conducted a self-assessment on the items that constitute the eleven standards. The average performance
was then rated on a three-point scale (unsatisfactory performance=1;
satisfactory performance=2; excellent performance=3). The results of this are
yet to be released. It is unclear if the AU will take AQRM beyond the pilot
stage, but hopes are high in Africa and the rest of the world that AQRM will
evolve into a respectable international rating scheme.
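
As a rough illustration of the self-assessment arithmetic described above, the following Python sketch averages invented item ratings within three of the eleven standards and maps each average back onto the three-point scale. The item scores and the rule of rounding to the nearest scale point are assumptions for illustration only; the AU documentation may prescribe a different aggregation.

# Sketch of AQRM-style self-rating: average item scores within each standard
# and map the average back to the three-point verbal scale.
# Item scores are invented; rounding to the nearest scale point is an assumption.

LABELS = {1: "unsatisfactory performance", 2: "satisfactory performance", 3: "excellent performance"}

self_assessment = {
    "Institutional governance and management": [3, 2, 2, 3, 2, 2, 3, 2, 2],  # 9 items
    "Infrastructure":                           [2, 1, 2, 2, 1, 2, 2, 1],    # 8 items
    "Research, publications and innovations":   [2, 2, 3, 2, 2, 2, 3, 2],    # 8 items
}

for standard, items in self_assessment.items():
    mean = sum(items) / len(items)
    print(f"{standard:42s} mean={mean:.2f} -> {LABELS[round(mean)]}")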

AQRM versus major global ranking
schemes
There are a number of convergences and divergences in the intention,
methodology and reporting of AQRM and the three global rankings selected for
comparative review in this paper: ARWU, THE and Webometrics. A key goal of
the AQRM is the fostering of quality in higher education institutions in Africa.
This is congruent with the implied goal of global world rankings, which is the
heightening of the quest for quality through competition. The Plan of Action
of the African Union Second Decade of Education envisions that with AQRM
in place, African higher education institutions will begin to march forward in
improving their performance on the eleven standards. Such improvements
are expected to translate into overall quality improvement of their institutions.

Congruence between AQRM and three global rankings can also be seen in the
indicators. Four of the clusters of indicators that are common to the three
global rankings feature directly or indirectly in AQRM. These are research and
publications, teaching and learning, infrastructure and community/social
engagement. The other two – governance and management, and finances  –
are not directly measured by the global rankings. The programme-level
criteria of AQRM do not directly match the indicator clusters of ARWU,
THE and Webometrics, except of course for teaching and learning, which
is indirectly related. It is safe to assume, therefore, that the measures on
AQRM are proximal to the measures on the three global ranking schemes.
It can be predicted that a well-performing institution on AQRM will have a
respectable rank on global league tables if a rigorous verification process is
applied to the AQRM methodology.

In spite of this similarity in indicators, the nature of the measurements is different. AQRM is mainly criterion-referenced. An institution does not
assess itself against others, but against a set of criteria. As an example, in
applying AQRM, the University of Cape Town in South Africa assesses itself
on the criterion of governance and management and comes to a judgement
as to whether it rates its own performance as excellent, satisfactory
or unsatisfactory. The University of Cape Town has no data to make a
comparison with the University of Lagos in Nigeria to rank itself higher or
lower than the University of Lagos on this measure. In this self-assessment
mode, it is assumed that the university will be truthful to itself and provide
an honest and verifiable score on the different AQRM indicators. Conversely,
the global rankings are largely norm-referenced, comparing performance of
one university to the others.
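
The distinction can be illustrated with a toy example, sketched below in Python with invented scores and thresholds: the criterion-referenced path labels each institution against fixed cut-offs, while the norm-referenced path simply orders institutions against one another.

# Toy contrast of criterion-referenced (AQRM-style) versus norm-referenced
# (league-table-style) judgments. Scores and thresholds are invented.

scores = {"University A": 78, "University B": 64, "University C": 91}

# Criterion-referenced: each institution is judged against fixed cut-offs.
def label(score):
    if score >= 85:
        return "excellent"
    if score >= 60:
        return "satisfactory"
    return "unsatisfactory"

for name, score in scores.items():
    print(f"{name}: {label(score)}")

# Norm-referenced: institutions are ordered relative to one another.
for rank, (name, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {name} ({score})")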

AQRM requires institutions to develop an improvement plan following the
assessment process. This aspect of the methodology of AQRM is not a feature
of global ranking schemes, whose main intention is to publish league tables
and expect users to make whatever use of them they deem fit. AQRM, how-
ever, requires performance improvement over a specified time based on the
self-assessment scores. It is expected that AQRM will be complemented by
strong internal quality assurance mechanisms to monitor the implementa-
tion of the improvement plan.

Perception and effects of ranking


It is difficult to provide an Africa-wide view of the perceptions of stakeholders
on ranking of higher education institutions without a regional survey on the
subject. However, three data sources permit a fair view on the subject. One of
these is the study by the African Union reported by Oyewole (2010) in which
subjects especially from the university community were surveyed across Africa.
The others are the Nigerian study by Okebukola (2006) and a recent regional sur-
vey of newspaper reports on university rankings reported in Okebukola (2011).

Overall, these studies show tremendous enthusiasm by the general public for the ‘rot in the higher education system to be exposed’ as a result of the
enduring poor performance in the global league tables and a ‘firm basis for
improved funding for the universities’. As findings in Okebukola (2006) and
Okebukola (2011) show, labour employers were quite excited about the rank-
ing, as they seek ways of selecting graduates from the best-ranked schools in
the midst of the graduate glut. Parents and potential students found ranking
helpful in the selection of institutions and were quite happy to turn to league
tables showing universities with very good rankings in the programmes
desired for study.

Perhaps the group that is divergent in its perception of ranking is higher education managers and teachers. While some who appear favoured by good
placement on ranking tables felt comfortable with ranking, others who are
not so favoured have harsh words for ranking. Yet there is a third group
made up predominantly of staff unions that use ranking results to back up
requests for improvement in working conditions.

Some details of the Nigerian survey showed that over 68 per cent of students
seeking admission into Nigerian universities between 2003 and 2006 and
84  per  cent of their parents were guided in the selection of their courses
by the NUC rankings, published in newspapers. This, of course, is after the
variables of proximity of the institution to the home and type of university
had been considered. About 76  per  cent of labour employers sur-
veyed combined the national ranking with global league tables in shortlisting
applicants for employment interviews. About 69 per cent of vice-chancellors
made reference to the ranking of their universities or their programmes in
their annual reports and convocation speeches.

The effect of ranking has been largely positive. A striking effect is the
improvement in funding to universities to improve facilities for teaching,
learning and research. In Nigeria, this increase is over 30 per cent over a ten-
year period. In Africa, national quality assurance agencies have increased in
number from ten in 2003 to twenty-two in 2012, in response, among other things, to the desire to improve quality and hence bolster global ranking or
regional rating. The third effect is the slow but steady increase in the quality
of delivery of higher education in the region. While this improvement cannot
be attributed solely to ranking, it is obvious that the competition induced by
ranking is spurring efforts at improving quality.

It is instructive to examine the issue of rankings, education policies and resource allocation. The overarching goal of any education policy is to
foster learning, which will ensure that national goals and objectives are
met. Beyond this broad statement, there are several determinants of
education policy of which ranking could be a minuscule element. Yet it can
also be played up to a mega level depending on the pervading needs of the
society. The parameters for deciding education policies include national
philosophy, socio-cultural, economic and political contexts, and the desire
to remain competitive in a globalized world. Such policies have general
and specific variants. General policies provide the framework for steering
the national agenda, while specific variants within this general framework
target sub-systems to guide institutional goals. The location of ranking as
a stimulus for setting education policy cannot be universally determined, as it depends on the interplay of national idiosyncrasies in terms of what gets priority in setting the education agenda.

Ranking can be a strong determinant of educational policy insofar as the goal is to engender competition and act as a catalyst for improvements in quality.
The theory of competition on which ranking rests implies that competing
elements strive to improve in order to be the leader in the field. Thus, if the
system-wide or institutional goal is to stimulate improvement in quality,
ranking comes in as one of several pathways.

In Nigeria, the National Policy on Education (Federal Government of
Nigeria, 2006) aims for an egalitarian society where education plays a
pivotal role. It calls for ‘supporting education institutions to make them
internationally competitive’. One of the strands of the spirit of this ‘inter-
national competitiveness’ is ranking, where institutions and their pro-
grammes are compared among themselves at the national and global level
and, as a consequence, improve their delivery with international standards
in mind. Such national policies are shaped by public opinion, which in the
last ten years has swayed in the direction of demanding that educational
institutions in Nigeria, especially universities, improve their standing
in global league tables. Institutions are responding by enacting policies
that will lead to improvement in their teaching and research activities
aligned to the indicators in the national and global ranking schemes. For
instance, the University of Ilorin, the best Nigerian ranked institution
on the 2010 Webometrics table, has been driven by a 2007 institutional
policy of improving the research and publications activities of its staff.
The University of Benin and the Obafemi Awolowo University, which had
respectable rankings from 2006 to 2008 but slipped thereafter, also took
steps through institutional policy enactments to bolster their research
standing to be elevated in global league tables. In sum, the decision to use
ranking to shape education policies is taken based on local circumstances,
with national and institutional visions often serving as a guide.

One of the major influencing factors for education policies is the national
vision. In 2009, Nigeria signed on to Vision 20-2020 with the aspiration
to be one of the twenty leading world economies by 2020. Botswana,
Ghana, Kenya, Lesotho, Namibia and South Africa are other examples
where national visions are set to guide development. The common thread
through these national vision statements is ranking – as the thrust to
emerge among top-ranked economies by a set target date. Consequently,
education policies deriving from these visions look to another form of
ranking – of universities.

National and institutional desire to elevate standing on league tables will be realized through financial allocation mechanisms in one of two direc-
tions: lower ranked institutions can be financially supported to improve
their delivery process, whereas higher ranked institutions are financially
supported to evolve into centres of excellence.

Achieving respectable ranking on existing
global ranking schemes
In spite of the development of AQRM, African higher education institutions
will remain on the radar for data collection and ranking by other global rank-
ing schemes. Indeed, most heads of higher education institutions attending
a September 2011 event on quality assurance in Bamako would want to cite
their rankings on such global league tables as the Webometrics, Times Higher
Education and Academic Ranking of World Universities as a measure of their
global, rather than regional, ranking/rating. Some expressed preference
for renaming the Quality Rating Mechanism without the ‘African’ to convey
the universality of the application of the rating system. While it is apparent
that such higher education managers are unaware of the philosophy and
usability of AQRM, it is clear that in the spirit of inclusivity, African higher
education institutions should continue to strive to attain respectable rank-
ing on global ranking schemes.

As stated earlier, this chapter looks at three of the major global ranking
schemes: the Academic Ranking of World Universities (ARWU), Times Higher
Education ranking and Webometrics ranking. The Academic Ranking of
World Universities focuses on academic or research performance (Liu, 2011).
Ranking indicators include alumni and staff winning Nobel Prizes and Fields
Medals, highly cited researchers in twenty-one broad subject categories,
articles published in Nature and Science, articles indexed in Science Citation
Index-Expanded and Social Science Citation Index, and academic perfor-
mance with respect to the size of an institution.

The Times Higher Education–QS World University Rankings employ thirteen performance indicators designed to capture the full range of university
activities, from teaching to research to knowledge transfer. These thirteen
elements are brought together under five headline categories: Teaching –
the learning environment (worth 30 per cent of the overall ranking score);
Research – volume, income and reputation (worth 30 per cent); Citations –
research influence (worth 32.5 per cent); Industry income – innovation (worth
2.5 per cent); and International mix – staff and students (worth 5 per cent).
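
To make the weighting explicit, the Python sketch below combines hypothetical category scores, each on a 0–100 scale, using the percentages just quoted. Only the weights come from the text; the published rankings normalize raw indicator data in ways not reproduced here.

# Weighted combination of the five THE headline categories.
# The weights are the percentages quoted above; the scores are hypothetical.

weights = {
    "Teaching": 30.0,
    "Research": 30.0,
    "Citations": 32.5,
    "Industry income": 2.5,
    "International mix": 5.0,
}

scores = {  # hypothetical institutional scores, 0-100
    "Teaching": 55.0,
    "Research": 48.0,
    "Citations": 62.0,
    "Industry income": 40.0,
    "International mix": 70.0,
}

overall = sum(weights[c] * scores[c] for c in weights) / 100.0
print(f"Overall score: {overall:.2f} out of 100")  # 55.55 for these figures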

The Webometrics Ranking of World Universities applies four indicators obtained from the quantitative results provided by the main search engines:

1. Size: refers to the number of pages recovered from four engines (Google,
Yahoo!, Live Search and Exalead).

2. Visibility: refers to the total number of unique external links received
(inlinks) by a site, which can only be confidently obtained from Yahoo!
Search, Live Search and Exalead.

3. Rich files: after evaluation of their relevance to academic and publication activities and considering the volume of the different file formats,
the following are selected: Adobe Acrobat (.pdf ), Adobe PostScript (.ps),
Microsoft Word (.doc) and Microsoft PowerPoint (.ppt); these data are
extracted using Google, Yahoo! Search, Live Search and Exalead.

4. Scholar: Google Scholar provides the number of papers and citations for
each academic domain; these results from the Scholar database repre-
sent papers, reports and other academic items.

One of the stiffest indicators for African universities on the ARWU scheme
is alumni and staff winning Nobel Prizes and Fields Medals. The environ-
ment for conducting groundbreaking research is largely lacking, hence
steps should be taken to elevate the facilities, especially research labora-
tories, to a level that will permit contributions with the potential to win a
Nobel Prize. While facilities represent one side of the coin, the other side is
the capacity of African researchers to undertake top-quality research and
sustain this over time, as is often characteristic of Nobel-winning studies.
Significant efforts should be invested in capacity building of researchers
and fostering partnerships with renowned researchers outside Africa.
Tutelage under Nobel Prize winners is another pathway. Training graduates
from African higher education institutions under the wings of Nobel Prize
winners will foster cultivation of the research methodologies, attitudes and
values needed to be a prize winner. The Association of African Universities
(AAU) and national quality assurance agencies need to undertake a study
of the institutional location of Nobel prizewinners and seek partnership
with such institutions and centres where the laureates are serving. Bright
graduates, preferably first-class degree holders, can be carefully selected
to undertake postgraduate education in such centres. We should begin
to phase out the vogue of partnerships with little-known universities and focus instead on one or two outstanding universities and programmes where
Nobel Prize winners serve.

Another step which can add up to ultimately spawning Nobel laureates over a long-term period is to admit the best from the secondary school
system. Admitting the cream of products from the secondary school sys-
tem will enhance the chances of good quality graduates, who in turn will
deploy their sharp intellects to win the Nobel Prize someday. There is
also the need to encourage scholars in African universities to target global
problems. Many Nobel Prizes are won for work that addresses problems facing the
entire human race rather than a subset of humanity. Vice-Chancellors
should encourage their staff to think global while seeking research
problems.

Researchers in African universities should be encouraged to network with their colleagues outside their countries and the Africa region. Since
staff cannot nominate themselves for a Nobel Prize, they should make
their work known to others. They should be encouraged and sponsored
to attend conferences and write articles in newspapers and magazines
to promote public understanding of their technical work. The more they
make their work known, the better their chances of earning a nomination,
especially if the work attracts the attention of a Nobel Prize nominator.

There is also a need to foster collaboration with American universities. Although the Nobel award is not country-subjective, it has been shown
that working in a US laboratory statistically improves chances of winning
the prize. Prior to 2006, the Nobel Foundation had honoured 758 indi-
viduals and 18 organizations and almost 300 of those recipients have been
American or have worked in the United States. Vice-Chancellors may wish
to be preferentially selective in favour of US universities while looking for
academic and cultural exchanges. It should be stressed that this recom-
mendation does not in any way limit a university’s scope of such linkages.

How can African universities achieve high standing on cited research?
The proportion of highly cited African researchers is unimpressive, translating into low scores on this indicator. To boost scores, there is a need
for research capacity-building. African scholars have the potential to be
top-rate researchers and to contribute hugely to citable literature if their
research skills are continually upgraded. This underscores the need for
constant research capacity-building conducted at the level of the univer-
sity and as a collective at the national level. While trusting the ability of
local senior academics to lead such capacity-building efforts, injection
of renowned and highly cited researchers from other countries will be
a productive venture. The better model of research-capacity building is
programme/faculty based, the other being university-based. This demands
that staff in the department receive training in their disciplines as a homo-
geneous unit. Commonalities in problem identification, research method-
ology, data gathering and analysis, and report writing are shared and form
the basis of such training.

The list of journals indexed in databases should be communicated to all staff. Some staff are unaware of which journals are indexed in Science
Citation and Social Science Citation indexes. The university librarian should
extract the list relevant to each department/faculty and forward to heads
of department and deans of faculty for wide dissemination to their staff.
Since this list is also available on the Web, staff should be informed of
the site to visit to extract the list relevant to their discipline and area of
research. Staff should then be encouraged to consider such journals as
first choice when seeking publication outlets for their research. Incentives
should be given to staff whose publications appear in journals indexed
in Science Citation and Social Science Citation indexes including financial
rewards for every article published, as practised by Covenant University,
Nigeria, as well as financial support for further research. Research mentor-
ing by senior colleagues who are active in research should be encouraged
by vice-chancellors.

Teacher/student ratio is one of the indicators in the THE ranking. It is assumed, for instance, that the staff/student ratio tells a story about the quality of teaching: where classes are small or of the right size and staff strength is commensurate, teaching is expected to be of good quality. The
hurdle to scale on teacher/student ratio is low since many universities in
countries in Africa with well-established quality assurance agencies have
endeavoured to keep within prescribed teacher/student ratio minimum
standards for most programmes, in order to stay on the side of full accredi-
tation. The professional bodies for medicine, engineering and law also keep
teacher/student ratios in check through enforcement of their respective
minimum standards on enrolment. It is important to stress the danger
to which some universities in francophone countries, such as Mali, are
exposed over gross over-enrolment in programmes in the social sciences.
The reverse is largely true for many private universities, where subscrip-
tion level by students is still generally low. In sum, the first thing to do is
to keep teacher/student ratios well reined in within minimum standards.

Institutional income scaled against academic staff numbers is assumed to give a broad sense of the general infrastructure and facilities available to
students and staff. The overall picture in the African university system is
grim. Most state and private universities are in dire financial straits and
inability to meet financial needs is a recurring theme. Low institutional income translates, in the view of the THE ranking scheme, into an inability to provide adequate resources for teaching and learning; hence institutional income is taken as a proxy for teaching.

The final category of THE ranking looks at diversity on campus – a sign of how global an institution is in its outlook. The ability of a university to
attract the very best staff from across the world is key to its global success.
Times Higher Education assigns a 60 per cent weighting within this category to the ratio of international to domestic staff, which thus makes up 3 per cent of the overall score (60 per cent of the 5 per cent category weight). The
market for academic and administrative jobs is international in scope, and
this indicator suggests global competitiveness. The other indicator in this
category is based on the ratio of international to domestic students. Again,
this is a sign of an institution’s global competitiveness and its commitment
to globalization.

African higher education institutions can improve the scores on the diver-
sity indicator through improvement in their salaries and work environ-
ments with the aim of attracting international staff. In a market-driven
economy, international staff gravitate to where maximum benefit can be derived in terms of salary and other conditions of service.
Salaries of university staff should be made internationally competitive.
Work environments including facilities for quality teaching and research
should be significantly improved. Special accommodation facilities should
be provided with due attention paid to security and regular supply of water
and electricity.

There is also a need to improve hostel conditions to attract international students. Hostel facilities in many universities are not conducive to hosting foreign students, especially those from Europe and North America. Implementation of a national ‘Operation Fix the Hostels’ is planned so that by 2013 most of the hostels will be in better shape for habitation by foreign students, with security guaranteed. Vice-chancellors should attend marketing fairs in countries across Africa and other parts of the world to publicize their
universities and their programmes to potential foreign students.

Webometrics ranking has some special demands (Aguillo 2008; 2010). Isidro Aguillo, head of the Webometrics laboratory, offers the following tips,
which can be shared among African universities:

• URL naming: Each institution should choose a unique institutional
domain that can be used by all the websites of the institution. It is very
important to avoid changing the institutional domain as it can generate
confusion and has a devastating effect on visibility values. Alternative or
mirror domains should be disregarded even when they redirect to the
preferred one. Use of well-known acronyms is fine, but the institution
should consider including descriptive words, like the name of the city,
in the domain name.

• Contents – create: A large web presence is made possible only with the
effort of a large group of authors. The best way to ensure this is to
allow a large proportion of staff, researchers or graduate students to
be potential authors. A distributed system of authoring can operate at
several levels:

–– A central organization can be responsible for the design guidelines and institutional information.

–– Libraries, documentation centres and similar services can be responsible for large databases, including bibliographic ones but also large repositories (theses, pre-prints and reports).

–– Individual persons or teams should maintain their own websites, enriching them with self-archiving practices.

–– Hosting external resources can be interesting for third parties and increase visibility: conference websites, software repositories,
scientific societies and their publications, especially electronic
journals.

• Contents – convert: Important resources are available in non-electronic formats that can be easily converted to web pages. Most universities have a
long record of activities that can be published in historical websites. Other
resources are also candidates for conversion, including past activities
reports or pictures collections.

• Interlinking: The Web is a hypertextual corpus with links connecting pages. If your contents are not known (bad design, limited information or minor-
ity language), and the size is scarce or of low quality, the site probably will
receive few links from other sites. Measuring and classifying the links from
others can be insightful. You should expect links from your ‘natural’ partners:
institutions from your locality or region, web directories from similar organi-
zations, portals covering your topics, and colleagues’ or partners’ personal pages.
Your pages should make an impact in your common language community.
Check for orphaned pages (i.e. pages not linked from another).

• Language, especially English: The Web audience is truly global, so you should
not think locally. Language versions, especially in English, are mandatory
not only for the main pages, but also for selected sections and especially
for scientific documents.

• Rich and media files: Although html is the standard format of webpages,
sometimes it is better to use rich file formats like Adobe Acrobat pdf or MS
Word doc, as they allow a better distribution of documents. PostScript is
a popular format in certain areas (physics, engineering, mathematics), but
it can be difficult to open, so it is recommended to provide an alternative
version in pdf format.

Bandwidth is growing exponentially, so it is a good investment to archive all media materials produced in web repositories. Collections of videos,
interviews, presentations, animated graphs and even digital pictures could
be very useful in the long term.

• Search engine-friendly designs: Avoid cumbersome navigation menus based on Flash, Java or JavaScript that can block robot access. Deeply nested directories or complex interlinking can block robots too. Databases and even highly dynamic pages can be invisible to some search engines, so provide directories or static pages instead, or at least as an alternative.

• Popularity and statistics: The number of visits is important, but it is just as important to monitor the origin and distribution of visits and the reasons for reaching your websites. Most current log analysers offer a great diversity of tables and graphs showing relevant demographic and geographic data, but make sure that there is an option to show referrers – the webpages from which the visit arrives – or the search term or phrase used if the visit came from a search engine (a minimal sketch of such an analysis follows this list). The most popular pages or directories are also relevant.

• Archiving and persistence: It should be mandatory to maintain a copy of old or outdated material on the site. Relevant information is sometimes lost when the site is redesigned or simply updated, and there is no easy way to recover the vanished pages.

• Standards for enriching sites: The use of meaningful titles and descriptive metatags can increase the visibility of pages. There are standards like Dublin Core that can be used to add authoring information, keywords and other data about the websites (a brief check script is sketched after this list).
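
To illustrate the interlinking check above, the following minimal sketch flags candidate orphaned pages. It assumes a plain Python 3 environment and a file named known_pages.txt listing the URLs an institution believes it has published (exported, for example, from its content management system); the file name and URLs are illustrative, not part of the Webometrics methodology.

```python
"""Flag candidate orphaned pages on an institutional website (a sketch)."""
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag


class LinkCollector(HTMLParser):
    """Collect the hyperlink targets found in one HTML page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links and drop #fragments.
                    self.links.add(urldefrag(urljoin(self.base_url, value))[0])


def outgoing_links(url):
    """Return the set of URLs that a given page links to."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
    except OSError:
        return set()
    collector = LinkCollector(url)
    collector.feed(html)
    return collector.links


def find_orphans(known_pages):
    """Report known pages that no other known page links to."""
    linked_to = set()
    for page in known_pages:
        # Links from a page to itself do not count as incoming links.
        linked_to |= outgoing_links(page) - {page}
    return [page for page in known_pages if page not in linked_to]


if __name__ == "__main__":
    with open("known_pages.txt") as handle:
        pages = [line.strip() for line in handle if line.strip()]
    for orphan in find_orphans(pages):
        print("Orphan candidate:", orphan)
```

Any page reported as an orphan candidate should either be linked from a relevant section of the site or be retired deliberately.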
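
Similarly, the referrer monitoring suggested under 'Popularity and statistics' can be approximated with a short log-analysis script. The sketch below assumes the widely used Apache/Nginx 'combined' access-log format and a file named access.log; the file name, the ten-entry cut-off and the reliance on a 'q' query parameter for search phrases are assumptions for illustration only.

```python
"""Summarize referrers and search phrases from a web server access log (a sketch)."""
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

# combined format: host ident user [time] "request" status bytes "referer" "user-agent"
COMBINED = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+ '
    r'"(?P<referer>[^"]*)" "[^"]*"'
)

referrers = Counter()
search_phrases = Counter()

with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = COMBINED.match(line)
        if not match:
            continue
        referer = match.group("referer")
        if referer in ("", "-"):
            continue  # direct visit, no referrer recorded
        parsed = urlparse(referer)
        referrers[parsed.netloc] += 1
        # Many search engines pass the query as the 'q' parameter.
        for phrase in parse_qs(parsed.query).get("q", []):
            search_phrases[phrase.lower()] += 1

print("Top referring sites:")
for site, count in referrers.most_common(10):
    print(f"  {count:6d}  {site}")

print("Top search phrases:")
for phrase, count in search_phrases.most_common(10):
    print(f"  {count:6d}  {phrase}")
```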
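
Finally, use of the Dublin Core convention mentioned under 'Standards for enriching sites' can be spot-checked page by page. The sketch below assumes the common practice of exposing Dublin Core elements as meta tags of the form name="DC.title"; the example URL is a placeholder to be replaced with a page on the institution's own site.

```python
"""Report which Dublin Core elements a page declares in its meta tags (a sketch)."""
import urllib.request
from html.parser import HTMLParser

# The fifteen elements of the Dublin Core element set.
DC_ELEMENTS = {
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
}


class DublinCoreScanner(HTMLParser):
    """Record any <meta name="DC.xxx" content="..."> tags in a page."""

    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attributes = dict(attrs)
        name = (attributes.get("name") or "").lower()
        if name.startswith("dc."):
            self.found[name[3:]] = attributes.get("content", "")


def check_page(url):
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    scanner = DublinCoreScanner()
    scanner.feed(html)
    present = set(scanner.found) & DC_ELEMENTS
    missing = DC_ELEMENTS - present
    print(f"{url}: {len(present)} Dublin Core elements declared")
    if missing:
        print("  consider adding:", ", ".join(sorted(missing)))


if __name__ == "__main__":
    check_page("https://www.example.edu/")  # placeholder: replace with a page on your site
```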

Conclusion
This chapter reviewed developments in higher education ranking in Africa with a special focus on Nigeria and the African Quality Rating Mechanism (AQRM). It examined the potential impact of ranking on improving the quality of delivery of university education and highlighted the efforts made towards a regional rating of higher education institutions. Suggestions were also offered on how African higher education institutions can take steps to improve their ranking on global league tables.

The Africa regional effort at rating higher education institutions through the AQRM is poised to become a potent mechanism for fostering quality. There is an ongoing effort to strengthen the African Higher Education and Research Space (AHERS), in which the AQRM will play a role. AHERS offers members of the higher education community in Africa an opportunity to interact seamlessly in pursuit of their teaching, research and service functions. The emphasis on research within the 'space' underlines the importance placed on bringing African scholars together to find research-based solutions to the problems inhibiting Africa's development. AHERS is intended to permit unhindered collaboration among students and staff of higher education institutions in Africa, regardless of linguistic and other barriers.

In the early 1960s, opportunities existed for students and teachers to cross national boundaries within Africa to participate in teaching, learning and research. The University of Ibadan in Nigeria and the University of Ghana, both in West Africa, had active collaboration in teaching and research with universities in East and Central Africa. Between 1970 and 1984, there was a sizeable flow of students from the University of Nairobi, the University of Tanzania and the University of Cameroun to the University of Ibadan, especially for postgraduate degrees. Teachers in these institutions collaborated actively in research. Universities in francophone Africa, especially in Côte d'Ivoire, Mali and Senegal, have a fairly long history of collaboration in teaching and research. These interactions did not, however, take place within a formal regional framework. The recent initiative by the Association for the Development of Education in Africa (ADEA), through its Working Group on Higher Education (WGHE), and the African Union Commission is to formalize a framework for these interactions at the regional level and to strengthen existing pockets of national and sub-regional 'spaces'.

Also worthy of mention in relation to the rating of higher education institutions in Africa is the establishment of the Africa Regional Quality Assurance Framework (ARQAF). ARQAF is being designed around three key elements: benchmarks/minimum standards, a regional accreditation mechanism and the strengthening of institutional quality assurance. The basis of measurement in the quality assurance process is the degree of deviation from a set of minimum standards. This implies that consensus should be reached on what the minimum standards should be for every academic programme and for the operations of the entire institution. Consensus needs to be built by relevant professional bodies and experts in the various disciplines, the result of which will be the regional minimum standards and benchmarks. These will serve as the lodestar guiding regional ratings and rankings.

The ranking of higher education institutions, especially universities, in Africa has a ten-year history (Okebukola, 2011). Within that decade, methodologies have improved and the need to adapt to the African context has been stressed. The outlook is that many more national efforts will emerge in the coming years with the growing number of national quality assurance agencies. Resistance to ranking will not totally disappear, but the queue behind the adherents will likely lengthen. Expectations are high that before the close of the next two decades, African higher education institutions will rise to the top of global league tables if the current quality improvement process is sustained.

References

Aguillo, I. 2008. Webometric Ranking of World Universities: Introduction, methodology, and future developments. Madrid: Webometrics Lab.

Aguillo, I. 2010. Comparing university rankings. Madrid: Webometrics Lab.

AU (African Union Commission). 2008. Developing an African Quality Rating Mechanism. Addis
Ababa: AU Press.

Federal Government of Nigeria. 2006. National Policy on Education. Lagos: NERDC Press.

Liu, N.C. 2011. Academic Ranking of World Universities: Methodologies and Problems. Contribution
to the panel discussion on ‘The demand for transparency: what do the rankings actually
tell us?’ UNESCO Global Forum on University Rankings, Paris, 16–17 May.

Materu, P. 2007. Higher Education Quality Assurance in Sub-Saharan Africa: Status, Challenges,
Opportunities, and Promising Practices. World Bank Working Paper, No. 124.
Washington DC: World Bank.

Okebukola, P.A.O. 2006. Accreditation as Indicator for Ranking. Paper presented at the World Bank
conference on ranking of higher education institutions, Paris, 23–24 March.

Okebukola, P.A.O. 2010. Trends in Academic Rankings in the Nigerian University System and the
emergence of the African Quality Rating Mechanism. Paper presented at the 5th Meeting
of the International Rankings Expert Group (IREG-5), The Academic Rankings: From
Popularity to Reliability and Relevance, Berlin, 6–8 October.

Okebukola, P.A.O. 2011. Nigerian Universities and World Ranking: Issues, Strategies and Forward
Planning. Paper presented at the 2011 Conference of the Association of Vice-Chancellors
of Nigerian Universities, Covenant University, Ota, 27–30 June.

Okebukola, P.A.O. and Shabani, J. 2007. Quality assurance in higher education: perspectives from
sub-Saharan Africa. GUNI (ed.) State of the World Report on Quality Assurance in Higher
Education, pp. 46–59.

Oyewole, O. 2010. African Quality Rating Mechanism: The Process, Prospects and Risks. Keynote
address presented at the Fourth International Conference on Quality Assurance on
Higher Education in Africa, Bamako, Mali, 5–7 October.

Salmi, J. 2011. If Ranking is the Disease, is Benchmarking the Cure? Keynote address presented at the
UNESCO Global Forum on University Rankings, Paris, 16–17 May.

Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses. Higher
Education Management and Policy, 19(2): 24–62.

Taiwo, C.O. 1981. The Nigerian Education System: Past, Present and Future. Lagos: Thomas Nelson.

UIS (UNESCO Institute for Statistics). 2010. Trends in Tertiary Education: Sub-Saharan Africa. UIS
Factsheet No. 10. December.

World Bank. 2008. Accelerating Catch-up: Tertiary Education for Growth in Sub-Saharan Africa.
Washington DC: World Bank.

Appendix 1. Details of rating items for the African Quality Rating Mechanism

Institutional-level criteria

Governance and management

1. The institution has a clearly stated mission and values with specific goals and priorities.

2. The institution has specific strategies in place for monitoring achievement of institutional goals and identifying problem areas.

3. Clear accountability structures for responsible officers are in place.

4. Staff, students and external stakeholders, where appropriate, are represented on governance structures. Governance structures are representative in terms of gender.

5. The institution has developed quality assurance policies and procedures.

6. Appropriate mechanisms are in place to monitor staff in line with performance agreements with relevant authorities.

7. The institution has put a management information system in place to manage student and staff data, and to track student performance.

8. The institution has specific policies in place to ensure and support diversity of staff and students, in particular representation of women and people with disabilities.

9. The institution has a policy and standard procedures in place to ensure staff and student welfare.

Infrastructure

1. The institution has sufficient lecturing spaces to accommodate student numbers, taking the institutional mode of delivery into account.

2. The institution provides sufficient learning/studying space for students, including access to electronic learning resources, as required for the institutional mode of delivery.

3. Academic and administrative staff have access to computer resources and the internet.

4. Students have access to computer resources and the internet at a level appropriate to the demands of the institutional mode of delivery.

5. The institution has sufficient laboratory facilities to accommodate students in science programmes, taking the institutional mode of delivery into account.

6. Laboratory equipment is up to date and well maintained.

7. The institution invests in maintaining an up-to-date library to support academic learning and ensures that appropriate access mechanisms are available depending on the mode of delivery.

8. The institution makes provision for managing and maintaining utilities and ensuring that appropriate safety measures are in place.

Finances

1. The institution has access to sufficient financial resources to achieve its goals in line with its budget and student unit cost.

2. The institution has procedures in place to attract funding, including from industry and the corporate sector.

3. Clearly specified budgetary procedures are in place to ensure allocation of resources reflecting the vision, mission and goals of the institution.

4. Financial and budgetary procedures are known and adhered to by the institution.

5. The institution provides financial support to deserving students (institutional bursaries and/or scholarships).

Teaching and learning

1. The institution encourages and rewards teaching and learning innovation.

2. The institution has procedures in place to support induction to teaching, pedagogy, counselling and the upgrading of staff teaching and learning skills through continuing education and/or lifelong learning.

3. Students have sufficient opportunity to engage with staff members in small groups, individually or via electronic platforms.

4. Student–staff ratios and academic staff average workloads are in line with acceptable norms for the particular mode of delivery, and are such that the necessary student feedback can be provided.

5. The institution has policies/procedures in place to inform the development, implementation and assessment of programmes offered by the institution, and these policies take account of ways in which higher education can contribute to socio-economic development.

6. The institution has developed a policy or criteria for staff recruitment, deployment, development, succession planning and a system of mentorship and/or apprenticeship.

7. Student-support services, including academic support and required counselling services, are provided in line with the institutional mode of delivery.

8. The institution has mechanisms in place to support students to become independent learners, in line with the institutional mode of delivery.

Research, publications and innovation

1. The institution has a research policy and publications policy, strategy and agenda. The research policy includes a focus (among others) on research supporting African socio-economic development.

2. The institution has a policy and/or strategy on innovation, intellectual property ownership and technology foresight.

3. The institution has demonstrated success in attracting research grants from national or international sources and in partnership with industry.

4. The institution has procedures in place to support academic staff to develop and enhance their research skills, including collaborative research and publication.

5. Staff and students publish their research in accredited academic journals and apply for patents (where relevant).

6. Researchers are encouraged and supported to present their research at national and international conferences.

7. Researchers are encouraged and facilitated, using a research and development budget, to engage in research relevant to the resolution of African problems and the creation of economic and development opportunities.

8. The institution encourages and rewards research whose results are used by society.

Community/societal engagement

1. The institution has a policy and procedure in place to engage with the local community or society in general.

2. The institution encourages departments and staff to develop and implement strategies for community engagement.

3. Students are required to engage with communities through their academic work.

4. The institution has forged partnerships with other education sub-sectors to enhance the quality of education in the country and region.

5. The institution provides access to an increasingly diverse range of students, taking account of additional support needs.

6. The institution disseminates information on its community-engagement activities to the local community.

7. The institution offers relevant short courses to the community/broader society based on identified needs and supporting identified economic opportunities.

Programme-level criteria

Programme planning and management

1. The programme is aligned with the overall institutional mission and vision.

2. The programme meets national accreditation criteria.

3. The institution allocates sufficient resources to support the programme.

4. There is a programme coordinator(s) responsible for managing and ensuring the quality of the programme.

5. The mode of delivery takes account of the needs and challenges of all targeted students.

6. Staff teaching on the programme have the appropriate type and level of qualification.

7. The programme is regularly subjected to internal and external review in a participatory manner to reflect developments in the area of study.

8. Programme planning includes a strategy for the use of technology in a manner appropriate to the programme, facilities available and target students.

Curriculum development

1. The curriculum clearly specifies target learners and learning outcomes/competencies for each module/course and for the programme as a whole.

2. The curriculum is regularly updated to take account of new knowledge and learning needs to support African development.

3. Modules/courses are coherently planned and provide a sequenced learning pathway for students towards attainment of a qualification.

4. The curriculum includes an appropriate balance of theoretical, practical and experiential knowledge and skills (where applicable), as well as core and elective areas.

5. The curriculum has been developed to maximize student career pathways, opportunities for articulation with other relevant qualifications, and employment prospects.

6. Curriculum development has been informed by thorough research and consultation with relevant stakeholders (for example, employers).

7. The curriculum reflects positive African values, gender sensitivity and the needs of society.

Teaching and learning

1. A clear strategy is in place to identify the learning materials needed to support programme delivery.

2. Learning materials are clearly presented and include reference to the learning aims and outcomes, as well as an indication of study time.

3. The language level of the learning materials is appropriate for the targeted students.

4. The learning materials have been designed with the purpose of engaging students intellectually, ethically and practically.

5. The range of learning materials used in the programme is integrated and students are guided through their use.

6. Programme review procedures include materials review and improvement.

7. Innovative teaching and learning materials are provided for students.

Assessment

1. Clear information about mode of assessment is provided for all courses/modules making up the programme.

2. Assessment is used as an integral part of the teaching and learning process and seeks to ensure that students have mastered specific outcomes.

3. The level of challenge of assessments is appropriate to the specific programme and targeted students.

4. A variety of assessment methods are used in the programme.

5. Staff qualified in assessment have been identified and trained to provide competent assessment.

Programme results

1. Student progress is monitored throughout the programme and early warning is provided for students at risk.

2. Completion rates per cohort conform to established norms for the subject area and mode of delivery, and strategies to increase completion rates are in place.

3. Quality student feedback is provided.

4. Expert peers and/or professional bodies review the relevance and quality of learning achieved by students.

Chapter 9

Rankings and information on Japanese universities

Akiyoshi Yonezawa
Introduction
As a country whose higher education is categorized as being based on a 'Confucian' model (Marginson, Kaur and Sawir, 2011), or as a typical country suffering from 'diploma disease' (Dore, 1997), Japan has a long history of paying enormous attention to university rankings. From the establishment of a modern higher education system in the latter half of the nineteenth century, the Japanese higher education system has had a hierarchical structure. After the Second World War, large Japanese enterprises developed a sophisticated career promotion system within companies through an emphasis on on-the-job training. Under these circumstances, new university graduates would commonly be recruited just after graduation and, rather than already possessing specific expert knowledge and skills, would be expected to undergo further training after entering a company. By the end of the 1970s, the brand name of the university from which a student graduated had become a decisive factor in the career advancement of Japanese youth. Mock-examination services were developed and a selectivity score called hensachi, a standardized score that indicates the academic achievement of admitted students, became widely available. This student selectivity score has been utilized as a primary indicator in all university rankings in Japan up to the present day.

However, by the end of the 1980s universities and academics had become
more sensitive to their global positioning in research performance. Japanese
universities had become fully involved in global competition in research,
especially in the fields of science, technology, engineering and mathematics
(STEM). Based on the strong economy and success in production technol-
ogy at that time, Japan witnessed a ten-fold increase in the acceptance of
international students, which went from 10,428 in 1983 to 109,508 in 2003.
The change in position of the University of Tokyo (from 67th in 1988 to
43rd in 1998) in the Gourman Report, a US-based university ranking that
started in 1967, was referred to in the Diet.1 At the end of the 1990s, a Hong
Kong-based magazine, Asiaweek, published university rankings in Asia. The
attitudes towards this ranking varied among Japanese universities as well
as among Asian universities. It was said that Chinese and Taiwanese uni-
versities once resisted collaborating with Asiaweek. The University of Tokyo
also refused to be ranked, having recognized that university rankings did
not express the exact value of university activities (Yonezawa, Nakatsui and
Kobayashi, 2002). The majority of universities, especially those with a strong

1 The Committee of Education and Science, the House of Councillors, the National Diet, 25 April 2002:
http://kokkai.ndl.go.jp/SENTAKU/sangiin/154/0061/15404250061008a.html

172 Rankings and Accountability in Higher Education: Uses and Misuses


faculty in science and engineering, paid more attention to their position in
international rankings.

Interest in university rankings in Japan occurs for two completely different reasons. The first is that a university ranking can influence a student's choice of university, especially at the undergraduate level. This is quite important because student selectivity acts as a major factor in student recruitment. The second is that a university ranking is considered to demonstrate a university's research capacity according to global standards. Having a high ranking can assist in attracting funds for research activities from the government, foundations and private enterprises, as well as attracting talent from all over the world.

Based on its strong economy, Japan has enjoyed a relatively independent higher education market, albeit one with a language barrier. Until recently, Japanese universities have relied on domestic sources for both tuition fees and research funds. However, the shrinking student market, resulting from the decline of the youth population, and the rapidly increasing level of regional and global competition in attracting research funds are now changing the attitudes of Japanese universities towards both domestic and international university rankings. Building on earlier work (Yonezawa, Nakatsui and Kobayashi, 2002), this chapter introduces the recent development of university rankings in Japan and the changing perspectives of Japanese universities towards domestic and international university rankings. Particular attention is paid to the drastic increase in information on prospective universities and the integration of 'university rankings' into a variety of information sources.

University rankings and student choice


From the 1980s to the beginning of the 1990s, Japanese universities found themselves in high demand due to the government's control of total student numbers. Prior to the enrolment of the second generation of baby boomers in the latter half of the 1980s, the government loosened its control over total student numbers. By the end of the twentieth century, many less prestigious private universities had become de facto open entry. Under these conditions, students were given more freedom to choose universities based on the content of academic programmes and study life, rather than on the single factor of student selectivity on the part of a university.

The general public also viewed this change in the relationship between universities and students – namely, from one-sided selection by the universities to a two-way, mutual choice between universities and students – in positive terms (Arai and Hashimoto, 2005). The various media started to provide information on universities more actively through a wide range of data and indicators. Kawaijuku, a leading company in educational service provision that mainly supports primary and secondary school students and graduates in preparing for entrance examinations for universities and high schools, pointed out the emergence of a significant number of universities that did not select students (Taki, 2000). After that, Kawaijuku strengthened its original surveys and analyses of the academic programmes and study life of prospective universities, initially for the purpose of providing guidance to its students, and then to publish these results to promote university reforms.

Asahi Shimbun, a top newspaper company, started to publish an annual book called Asahi University Rankings in 1994 (see Appendix), which included approximately eighty indicators. Asahi Shimbun clarified its mission as widening the perspective on universities on behalf of students and their parents. The indicators utilized in Asahi University Rankings cover everything from academic performance in relation to publications and citations, to student life, including information on opportunities for volunteer activities. Asahi University Rankings does not, as a matter of policy, provide a comprehensive ranking, partly because of the already strong influence of the student selectivity score. However, it has succeeded in including a wide range of universities: many universities can find themselves ranked on at least one indicator.

Asahi Shimbun has also tried to provide visual and detailed information on specific universities and academic fields in its book series entitled AERA Mook. For example, the AERA Mook on Tokyo Woman's Christian University is a 130-page volume with various visual images, containing interviews and portrayals of its faculty, students and alumni, covering everything from advanced academic content to the contents of a student's lunchbox. In 2011, Asahi Shimbun implemented a survey to obtain detailed information from universities in collaboration with Kawaijuku, which also provides data on entrance examinations for Gakken, another publisher. Such collaboration among ranking bodies, media and information providers has occurred frequently in the history of university rankings in Japan.

Recruit Ltd. has been another important player in the history of university rankings in Japan. The main task of this company has been to provide information on job placement for students. The company has developed into a comprehensive service provider for universities, schools and the labour market. Recruit Ltd. has published a journal entitled College Management, which provides information to university managers and administrators. To provide a more in-depth view of various universities, it implemented student satisfaction surveys with various indicators in 1997, 1999 and 2001. This survey was transformed into a student survey on their recognition of the 'brand power' of universities in 2003. Since then, Recruit Ltd. has published its rankings of universities' competitiveness in terms of developing brand image, based on high-school student surveys, every two years.

Yomiuri Shimbun, the largest newspaper company in Japan, has been implementing another type of university ranking, entitled Daigaku no Jitsuryoku (The Real Power of Universities) (see Appendix), since 2008 (Yomiuri Shimbun, 2011). In a series of newspaper articles, Yomiuri Shimbun has published reports on the ongoing reforms at various types of universities in Japan and abroad. The development of the rankings paralleled this series of articles. Yomiuri Shimbun's rankings focus more on the quality of education provision, teaching improvement, curriculum design, and so on. The data and rankings are then published in combination with the articles on various types of universities.

The emergence of these rankings, combined with various quantitative and qualitative data and information, indicates that efforts to improve educational quality began to have a greater impact on student choice, in a context in which many universities are almost open entry. Yomiuri Shimbun's ranking, in particular, has attracted the attention of universities and the general public by surveying and publishing the dropout rates of prospective universities. Historically, the dropout rate among Japanese universities has in general been quite low. However, open enrolment and increased efforts in quality assurance led to the emergence of a certain number of universities whose dropout rates were relatively high. These universities were, in general, reluctant to reveal these statistics, since most parents tend to take it for granted that almost all students, once enrolled, will graduate. At the same time, Yomiuri Shimbun also requires universities to share their data on remedial and first-year education, which is meaningful for determining the effectiveness of university education. Overall, the rankings by Yomiuri Shimbun not only represent the general public's view of universities, but also provide a provocative view in terms of insights into the quality of university education.

University rankings for research performance
For universities involved in international competition, world university rankings are a highly important tool through which international recognition can be obtained. In the case of Japan, the most internationally competitive aspects of higher education lie in research activities in the fields of natural science and engineering, as well as, needless to say, studies of Japanese society, culture and business. Among top universities in the fields of natural science and engineering, and to some degree in the social sciences, it is becoming common for international students at the graduate level to have the option of participating in study programmes taught in English. In other words, Japan's top research universities are inevitably involved in the globalized market for students and researchers.

In Japan, university presidents, especially at top universities with long histories, tend to be appointed based on the votes of the faculty. These presidents are thus recognized as representatives of the faculty members, the majority of whom are from the fields of natural and medical sciences or engineering. At the same time, for the vice-presidents in charge of research and international affairs, ranking position is highly influential when it comes to attracting external resources, research collaboration and talented students from all over the world.

Japanese national (public) universities still rely heavily on the budget of the national government for their operational expenditures and basic research project funds, and on external research funds from industries based in Japan. It is therefore crucial, especially for competitive national universities, to attract the attention of the government in order to secure investment in their academic activities. Universities have also strategically utilized world university rankings to attract attention from the government and domestic society. Tohoku University published an action plan in 2007 revealing its ambition to be a 'world leading' university ranked among the top 30 in the world. Hitotsubashi University, a top national university in the fields of economics, business studies and other social sciences, also places a great deal of emphasis on world rankings and attempted to carry out a benchmarking exercise with the London School of Economics and Political Science. In a survey implemented by Tohoku University in 2008, 47 per cent of national university managers responded that they referred to world university rankings as an indicator when managing their universities (Yonezawa, Akiba and Hirouchi, 2009). In reality, however, fewer than 10 per cent of national universities are ranked among the top 200 in any world university ranking.

Private universities face more difficulty in being ranked, especially in rankings based on research performance. To date, only Keio University, which has a very strong school of medical sciences, and Waseda University, which has an internationally competitive school of natural sciences and engineering, have been ranked in a comprehensive ranking such as the QS ranking, which places a heavy emphasis on a reputation survey. At least in terms of selectivity by students, these two private universities are equally competitive with the top tier of national universities, with Waseda University attracting the largest number of international students among Japanese universities. Especially in the fields of social sciences and applied sciences, these universities have also been successful in attracting high-level researchers, and feel the necessity to attract highly talented international faculty and students. In strengthening international exchange with other universities, private universities, including Waseda and Keio, have paid a great deal of attention to world university rankings.

At the same time, especially for university managers and staff with expert knowledge in the fields of science and technology, the methodology utilized in current world rankings appears rough, incomplete and biased. Japanese university leaders and their staff have been ambivalent about the reliability and validity of world university rankings, and have continued to request further improvement of the methodology. For example, when the indicator of reputation among employers was introduced in the QS rankings in 2005, Yoshihisa Murasawa, a specially appointed professor at the University of Tokyo, found that only two employers had responded to QS and that the questionnaires had not been distributed among Japanese companies (Kobayashi, 2007).

When Times Higher Education introduced a new ranking methodology in collaboration with Thomson Reuters in 2010, Japanese universities experienced a significant fall in rank. Research University 11 (RU11), the consortium of the top 11 national and private universities in Japan,2 requested improvements to the ranking methodology, specifically pointing out that the newly introduced 'regional modification', originally aimed at the 'fair treatment' of universities in developing countries, worked against Japanese universities. Some of those requests were factored into the methodological amendments to the ranking in 2011. As a result, the University of Tokyo regained its top position in Asia, while the positions of Japanese universities, including the University of Tokyo, went down along with those of other Asian universities. At the same time, Times Higher Education also released ranking results based solely on its 2011 reputation survey, in which the University of Tokyo was ranked eighth.

2 The RU11 member universities are nine national universities (Hokkaido University, Tohoku University, Tsukuba University, the University of Tokyo, Tokyo Institute of Technology, Nagoya University, Kyoto University, Osaka University, Kyushu University) and two private universities (Keio University and Waseda University).

Researchers in the field of higher education have also shown an interest in world university rankings – some as a tool for assessment (Kobayashi, Cao and Shi, 2007) and others as a social phenomenon (Yonezawa, Nakatsui and Kobayashi, 2002). There has also been participation in more progressive approaches: the University of Tokyo, for example, decided to take part in U-Map, the European benchmarking initiative. At the same time, Kobayashi (2011) stressed the effectiveness of benchmarking through institutional research activities, rather than merely relying on university ranking results.

Rankings on university finance


Another category of university rankings has also developed, namely rankings on university finance. Especially in the last decade, university finance has received substantial attention not only from university managers, but also from various industries and business professionals. Firstly, the emergence of oversupply in the student market for Japanese higher education raised questions about the sustainability of less prestigious private universities in particular. Banks lending money to these private universities must pay attention to their financial stability. However, the financial data of private universities, as non-profit school corporations, were not open to the public, unlike those of for-profit stock companies. Secondly, universities, especially the top universities of both the public and private sectors, need to attract more investment from the industrial and business world in order to strengthen their educational and research profiles in the face of fierce global competition. In terms of financial capacity, there is still a big gap between the top national universities, which are heavily subsidized by the national government in Japan, and the leading universities in the United States. Meanwhile, top private universities in Japan have set their tuition fees at quite a modest level because they have to compete in the domestic student market with top national universities that charge low tuition fees. This places a significant limit on their ability to improve the quality of educational activities in order to attract high-level international students. Therefore, in addition to making continuous efforts to identify internationally viable tuition fee levels, they need to demonstrate a robust financial condition. Thirdly, some private universities lost assets through failed financial investments in the economic recession of 2008. Universities have now become a major industry, and their economic behaviour inevitably draws attention from the general public.

Two business journals, Toyo Keizai and Diamond, have also published university rankings as special issues. Toyo Keizai (see Appendix) issued its ranking based on an employers' survey in 1996, and began publishing periodic special issues entitled 'Truly strong universities' in 2001. In addition to the employers' review, Toyo Keizai listed a ranking based on the financial performance of major universities, along with details of their financial and management profiles. The Toyo Keizai ranking has developed into a comprehensive ranking that now includes performance with regard to finance, management innovation, research, education and employability. The Diamond ranking, on the other hand, has focused more on employability, based on the opinions of the directors of human resource management divisions. Diamond also published a comprehensive ranking in 2003 based on student selectivity, education and research performance, and the careers of graduates. However, Diamond has not issued any periodic rankings since 2006.

National government and university rankings
The Japanese government has also taken an ambivalent attitude towards rankings. Before the emergence of world university rankings, the Japanese government had been basically critical of hierarchical stratification based on student selectivity scores. The introduction of a national standardized entrance examination in 1979 also accelerated an overall tendency for students to choose a university based solely on student selectivity scores. The First Report of the Provisional Council of Educational Reform (1985), a government committee on education established under the Prime Minister, tried to show alternatives to such over-reliance on the paper-based entrance examination and to implement an admission system that would permit the acceptance of students with various talents (Monbusho, 1992).

In the 1990s, a campaign promoted by the leaders of top national universities for further investment in university education and research was, overall, viewed positively by the government, in line with its policy of strengthening investment in the knowledge economy. The enactment of the Science and Technology Basic Act in 1995 made this policy direction a decisive one. Moreover, the ranking of the University of Tokyo in the Gourman Report was referred to in the Diet on three occasions in the 1990s (1991, 1994 and 1998) as an indicator of the need to examine the low international prestige of Japanese universities and the necessity of investing in science, technology and university education.

In 2001, Atsuko Toyama, then Minister of Education, Culture, Sports, Science and Technology, revealed her idea of fostering around thirty 'world-class' universities, while making no mention of rankings themselves. The emergence of the more widely acknowledged world university rankings, such as the World University Rankings by the Times Higher Education Supplement and the Academic Ranking of World Universities (ARWU) by Shanghai Jiaotong University, certainly drew the attention of the Japanese government towards such rankings in a more direct way.

Heizo Takenaka, who served as Minister of Internal Affairs and Communications from 2005 to 2006, began to argue after his retirement from politics that the University of Tokyo should be privatized, or made financially independent of the national government, on the grounds that almost all of the top US universities are private. The Liberal Democratic Party (LDP), which lost its ruling position in 2009, had set up a project team in 2006 to improve the ranking positions of Japanese universities. In 2007, the Central Council for Education (CCE) of MEXT (the Ministry of Education, Culture, Sports, Science and Technology) also set up a working group to discuss the provision of quality information on top Japanese universities to the world.

At the same time, changes in ranking position have been utilized as a political tool both for budgetary requests and for budgetary cuts. The Democratic Party of Japan (DPJ) became the ruling party in 2009 and introduced an expenditure review, open to the general public, to examine the effectiveness of prospective budgetary items. At this point, MEXT provided ranking data to argue the necessity of further investment in national universities. The examiners, however, tried to interpret the same data as evidence of the inefficiency of public spending on universities and academic matters.

The government has also moved to demand information disclosure on the part of universities. The National Institution for Academic Degrees and University Evaluation (NIAD-UE) developed a database of information on the national universities. From April 2011, the government officially required all universities to disclose information on basic indicators. At the same time, the Japanese government actively provides information on Japanese universities to the UNESCO Portal to Recognized Higher Education Institutions.

Conclusion: a new era of regional collaboration in Asia?
University rankings have certainly had a large influence on Japanese higher education, both at the national policy level and at the institutional level. At the same time, it is widely recognized that the existing rankings are clearly insufficient for assessing the highly complex activities and performance of contemporary universities. With the adoption of policies concentrating public investment in top universities across Asian countries, as well as the rapid economic development of many Asian countries other than Japan, the gap in ranking positions between Japanese universities and other Asian universities has nearly disappeared over the last decade or two. At present, Japanese universities are faced with the challenge of developing forms of collaboration with Asian universities as equal partners in education and research. The Collective Action for Mobility Program of University Students (CAMPUS Asia) is a symbolic project being established by Japanese universities, government and industries in cooperation with Japan's closest neighbours, China and Korea. Stimulated by the efforts of neighbouring countries to develop world-class higher education systems, Japanese universities will continue to strive to enhance their global status. Although rankings remain problematic, they continue to influence the institutional behaviour of Japan's top universities.

References

Arai, K. and Hashimoto, A. (eds). 2005. Koko to Daigaku no Setsuzoku [High School to College
Articulation]. Tokyo: Tamagawa University Press [in Japanese].

Dore, R. 1997. The Diploma Disease: Education, Qualification and Development (2nd edn). London:
Institute of Education.

Kobayashi, M., Cao, Y. and Shi, P. 2007. Comparison of Global University Rankings. Tokyo: Center
for Research and Development of Higher Education, University of Tokyo.

Kobayashi, M. (ed.) 2011. Daigaku Benchmarking ni yoru Daigaku Hyoka no Jissho teki Kenkyu
[Positive Research on University Evaluation through Benchmarking]. Tokyo: Center for
Research and Development of Higher Education, University of Tokyo [in Japanese].

Kobayashi, T. 2007. Nippon no daigaku [Japan’s Universities]. Tokyo: Kodansha [in Japanese].

Marginson, S., Kaur, S. and Sawir, E. (eds). 2011. Higher Education in the Asia-Pacific. Dordrecht,
Netherlands: Springer.

Monbusho. 1992. Gakusei Hyakunen Shi. [One Hundred Year History of Japan’s School System].
Tokyo: Monbusho [in Japanese].

Provisional Council of Educational Reform. 1985. Dai Ichiji Toshin [First Report]. Tokyo: Cabinet
Office [in Japanese].

Taki, N. 2000. 2000 Nen no Nyushi Jokyo no Sokatsu. [Summary of University Admission in 2000].
IDE 421: 61–66 [in Japanese].

Yonezawa, A., Nakatsui, I. and Kobayashi, T. 2002. University rankings in Japan. Higher Education
in Europe, 27(4): 373–82.

Yonezawa, A., Akiba, H. and Hirouchi, D. 2009. Japanese university leaders’ perceptions of
internationalization: the role of government in review and support. Journal of Studies in
International Education, 13: 125–42.

Appendix

Index 1. Indicators and topics that appeared in Asahi University Rankings 2012

Information disclosure
Yield rate in admission
Dropout rate
Review by university presidents
Review by high schools
Administrative staff
World university rankings
Tuition fees
Learning environments
Newly established universities
Local public universities
Women’s universities
Religious universities
Small-sized universities
University libraries
University repository
International volunteer
Contests
Study abroad programmes
Rate of proceeding to a graduate programme
Number of doctoral degrees issued
Share of female students
Share and number of international students
Universities from which the university presidents graduated
Newly recruited faculty members
Average age of faculty members
Share of faculty members who are alumni
Share of faculty members with doctoral degrees
Share and number of female faculty members
Share and number of international faculty members
Share of adult students
Participation in parents meetings
Student life
Appearance in fashion magazines
Job placement support staff
Internship

Pass rates in examinations for civil servants and teachers
Professional qualifications (lawyers, accountants, etc.)
Highly cited articles
Citation index
Articles listed in Scopus
Articles in Chemical Abstracts
Articles in Nature and Science
Articles in international journals of economics
Academic awards
Patents
Research grants from the national government
External research funds
Research grants from foundations
Governmental subsidies to private universities
Salaries of faculty members
Appearance in media
Movies and TV drama shootings on campuses
Members of governmental councils
Alumni recruiters at companies
Alumni among Diet members
Alumni among the presidents of enterprises
Alumni among sport players
Alumni among novelists
Alumni among female TV announcers
Alumni members
Number of applicants
Applicants/admitted students
Transition students
Returnee students from overseas
Students from the local province
Participants of open campus events
Hensachi (student selectivity score)

Index 2. Indicators appearing in Daigaku no Jitsuryoku by Yomiuri Shimbun

Students/student quota set by the government
Faculty/other academic staff
Full-time academic staff/part-time academic staff
Admitted students in various admission processes
Dropout rate

Graduate rate
Remedial education programmes
Remedial classes on Japanese language
Remedial classes on English language
Classes for TOEIC examination
Compulsory class of mathematics
Project-based learning
Group work
Classes with debate sessions
Class size of first-year student seminars
Class size of language education
Class size of seminars
Minor or sub-major system and credits
Job placement support by alumni associations
Capacity of dormitories
Notice of academic grades to parents
Requirement of undergraduate thesis
Self-evaluation score

Index 3. Indicators appearing in Toyo Keizai

Expenditure for education and research per student
Property of the library (books, journals, multimedia)
Research grant from the Japan Society for the Promotion of Science
Student-faculty ratio
Ratio of students who acquired jobs upon graduation
Number of senior managers of stock companies at the graduate level
Annual income of 30-year-olds among alumni
Ratio of increase/decrease of applicants
Ordinary income ratio
Share of external funds (except tuition fees and governmental support for
operational expenditure) within the total income
Capital adequacy ratio

Chapter 10

The national and institutional impact of university rankings: the case of Malaysia

Sharifah Hapsah
Introduction
When universities were first ranked globally in 2004 by QS-THES, Malaysia had two universities in the top 200. The response from one university was extreme jubilation, with billboards on campus announcing its arrival in the league of the world's top universities.

However, in September 2005, when the second world university rankings (WUR) were announced, the news was like a bombshell. The two top universities had slipped by almost 100 places, one dropping out of the top 200 list altogether. It soon transpired that in the 2004 ranking, Malaysian students of Chinese and Indian descent had been classified as 'international'. When the mistake was corrected in the subsequent year, the ranking dropped. This incident serves to remind us that the criteria and methodology used in rankings have limitations and are often controversial. Ranking organizations continuously revise their ranking systems; hence year-on-year comparisons are not an accurate reflection of a university's performance.

Notwithstanding the methodological improvements, the drop in the ranking of the country's premier university was traumatic for all. The leader of the opposition in Parliament called for a Royal Commission of Inquiry. Seminars were held at which QS was invited to explain the ranking system. Letters on the pros and cons of ranking were published incessantly in the media.

Rankings continue to make headlines, and are closely scrutinized by students, politicians, institutional leaders, policy-makers and employers. In fact, ranking organizations are themselves surprised at how an innocuous consumer product has rapidly become a global intelligence information business, widely misconceived as a definitive list of quality and evoking intense competition between institutions.

Governmental responses
The initial strong governmental reaction was not unexpected. At that time the country was implementing the Ninth Development Plan, under which research universities were to be created to contribute to the transformation of the country into a knowledge-based economy. The government recognized that universities, through education, research and development, play a key role in driving economic growth and global competitiveness by creating, applying and spreading new ideas and technologies, and by producing a skilled workforce (World Bank, 2002). The Ninth Plan was specific in identifying areas such as information and communication technologies (ICT), biotechnology and new materials, where the application of such knowledge could result in accelerated growth and more efficient ways of producing goods and services and delivering them more effectively, at lower cost, to a greater number of people. Not having a university in the top 200 list was unacceptable.

Until recently, many such initiatives had 'ranking' overtones. This was visible in the references to ranking in the National Higher Education Action Plan (2007–2010) accompanying the National Higher Education Strategic Plan: Beyond 2020, launched in 2007. While the Strategic Plan, through seven strategic thrusts, aimed at substantially transforming higher education institutions in the country and making them comparable to the best in the world, the Action Plan details critical implementation mechanisms that include five critical agenda projects to catalyse systemic change. One of these is the creation of an APEX university as the means towards achieving world-class status.

As the issues surrounding ranking have become clearer, the government has taken a more holistic view of ranking. The Minister of Higher Education has expressly stated that universities should not be 'obsessed with ranking' (Khaled Nordin, 2011). The government realizes, too, that the fiscal requirements may go far beyond national budgets. For example, the top 20 universities in the QS World University Rankings have, on average, about 2,500 academic faculty members, are able to attract and retain top personnel (high selectivity), and have approximately US$1 billion in endowments and a US$2 billion annual budget.

Instead, the government is focusing more on making the education system 'world class' to accommodate the increasing number of entrants to higher education. Under the Economic Transformation Programme (PEMANDU), several initiatives have been identified for improving the supply as well as the demand side to increase access and enhance quality, with a view to making Malaysia a global education hub. During implementation of the Ninth Plan, the selection of research universities was completed. Following a rigorous evaluation process, four were selected and each was given additional funding and a set of key performance indicators to achieve. Those designated as research universities were able to plan their research programmes strategically and carry out activities that will ultimately raise their profile among peers, increase citations in high-impact journals, and attract reputable international faculty and quality students, which will in turn increase graduate employability.

The Ministry of Higher Education, recognizing that good leadership was cru-
cial in improving the performance of universities, established the Academy for
Academic Leadership (AKEPT). Management courses are conducted for senior
academicians, and a search committee was established to vet candidates for
senior management positions in universities. In 2010 a good university gov-
ernance guide was developed and used in the audit for institutional autonomy.
Good governance and autonomy are perceived as crucial elements of com-
petitiveness, dynamism and ability to face the challenges of a rapidly changing
world, and hence all research universities have subsequently been audited.

Many of the initiatives that have been put in place are aimed at restructuring
higher education for national competitiveness in the global economy and increasing the country's share of scientific advancement. In the long term this will
strengthen the performance of Malaysian universities in ranking indicators,
and while resources are not being allocated purely for improving ranking per
se, indirectly ranking has become a tool for driving excellence.

Institutional response
The National University of Malaysia (UKM) has always maintained that
rankings are here to stay. They are used not only in education, but also to
compare anything from business competitiveness to innovation, corruption,
web attractiveness and even the world's richest people. Today there are over fifty
national and ten global rankings of educational institutions, including the
European Union’s planned U-Multirank.

Despite the criticisms there is much to be learned from global ranking. Comparative quantitative data on publications and citations, for example, can
be a driving force for a university to examine its research quality, and hence
design appropriate strategies and actions for continuous quality enhance-
ments in building a research culture and the foundation for a great university.

However, lessons from comparative data are only useful if they are utilized
to devise institutional changes that will ensure a genuine and sustainable
improvement in the quality of universities over the medium to long term.
Malaysia is very much aware that the citation ratio, which indicates the quality
and strength of the research culture at its universities, is far from satisfactory.
According to the UK Royal Society (Economist, 2011), the United Kingdom and the United States accounted for 38.5 per cent of global citations in 2004–2008; Canada, China, France, Germany, Italy and Japan shared a further 29.1 per cent; and Malaysia had only a negligible portion of the remaining 32.4 per cent.

At UKM, shortfalls in the citations ratio have been addressed in strategies that are embedded in a comprehensive transformation plan with a goal that
balances international aspirations with the national mission. Thus, strategies
were devised to reinforce the national mission to promote the national lan-
guage as an academic language and the university’s role in nation-building,
and to balance these with strategies that will internationalize UKM as a lead-
ing research university by 2018.

Using the metaphor of a tern flying high in a balanced and focused way
towards its transformative goal, we devised a UKM Knowledge Ecosystem to
incorporate all strategies comprehensively. The backbone of the bird repre-
sents the core processes of education, research and service, each containing
both international and national dimensions. Also in the backbone are the
elements of an efficient delivery system.

An important strategy in the transformation plan is to nurture a vibrant research culture by growing interdisciplinary research groups and research
centres. Acknowledging it cannot do everything and be known for every-
thing, UKM has focused on eight areas of national and global importance
and impact in terms of attracting researchers, internal and external grants,
collaboration from academic institutions and industry partners, publications
in high-impact journals and socio-economic benefits. An important niche
is the challenge of nation-building, which examines issues such as ethnic
relations, national unity and globalization. The other areas are medical and
health technology, sustainable regional development, renewable energy,
nanotechnology and advanced materials, biodiversity for biotechnology,
visual informatics and climate change.

The wings of our bird represent cross-cutting driver projects that push us
forward faster. The right wing represents specific projects that support our
mission as a national university – making the Malay language attractive
globally and contributing to the development of a united nation. The left
represents projects that advance UKM on the global stage.

An example of a left-wing project is the citation leap, which is aimed at improving our citation ratio. Paper-writing workshops are held regularly and
a mentoring system has been instituted. Citations are monitored monthly
with financial and career advancement as incentives, and a target for all our
journals to be indexed by 2018 has been set. Currently five are indexed by Scopus, and one, indexed both by Scopus and ISI Web of Science, has received
an impact factor. Getting UKM’s journals indexed is a way to international-
ize our national language. It is heartening to note that we are beginning to
receive citations for articles published in the Malay language.

Other driver projects are aimed at enhancing internationalization through global outreach for long and short-term international students. Better
employability is enhanced through outcome-based curricula with an empha-
sis on experiential learning, entrepreneurship, English-language proficiency
and industrial attachment. Members of academia are encouraged to work with
the private sector on consultancy and research projects so as to obtain external
funding, as well as to leverage the expertise available in the private sector.

At the tail end of the bird is the transformation machinery, containing pro-
jects, structures and processes that will help us manage change in a stable
manner. Key performance indicators are identified in six pillars of excellence.
Targets are set and monitored for good governance, leadership and succes-
sion planning, talent management, teaching and learning, research and
innovation, as well as community engagement.

The university has subjected itself to an institutional audit and has been
granted self-accreditation status and a reaffirmation of its research univer-
sity status. It recently submitted to a good governance audit as an account-
ability measure for greater autonomy. With autonomy it is expected that the
road to excellence will be expedited.

Shortcomings of university rankings


The ranking process is kept simple because few indicators of quality in higher
education translate reliably across borders. From the six indicators used by QS, a single weighted score is derived to give a snapshot of an institution's position relative to others across the different aspects of quality. However, the indicators
used in ranking have to be interpreted with care. Internationalization as
measured by international students and staff does not necessarily equate to
quality, a factor much valued in a university of repute.

Validity issues also still dominate every discussion about the measures used
to assess research and teaching quality. How do we account for the dramatic differences in citation volumes between disciplines, and the tendency for
researchers to ‘cite each other’? Citations have increased tremendously since the World University Rankings came into being: the Royal Society (Economist, 2011) reports that citations in 2004–2008 grew by 55 per cent compared with 1999–2003, whereas the number of published papers grew by just 33 per cent.

Further, does the staff–student ratio reflect good culturally defined teaching-learning methodology? The university provides the intellectual
and ethical environment for enriching students’ learning experiences, where
young minds are inspired and free to create and innovate, unfettered and
unhampered by fear, anxiety or constricting mores, and imbued with a deep
sense of social responsibility. Fostering entrepreneurship skills is another
area of focus of UKM to prepare graduates to launch start-up companies or,
if employed, be able to enhance the productivity of their companies through
innovation. Yet, what universities and teachers do to inculcate these values
and skills is not translated into ranking indicators, and certainly cannot be
measured by simple staff–student ratios. Teaching quality must be measured
by students’ learning experience. We need a better indicator.

With regard to the choice of indicators, this could be a never-ending debate. For example, should we just depend on employers' perceptions to gauge employ-
ability when more and more universities are now stressing entrepreneurship
and self-employment as a measure of their graduates’ success? Using inter-
national students as an indicator disadvantages many developing countries
where universities are expected to fulfil unmet local demands.

There is also heavy reliance on the qualitative peer review and recruiter
survey, which together comprise 50 per cent of the scores. Such judgments
are known to be influenced by factors such as legacy and traditions, which
may confer advantages to older institutions with wide subject coverage.

There is also concern regarding the commercial interest in ranking and how
educational budgets might be diverted to playing the ranking game, such
as participating in promotional tours and buying advertisement space on
ranking websites. Conversely, there are factors that bring global recognition
to a university, but which may not be considered in ranking. UKM has part-
nered with QS (Third QS World-Class Seminar, 2008) to showcase how UKM’s
research in the geology and biodiversity of the flora and fauna of Langkawi
and its collaboration with the local authority has culminated in the island
being declared the world’s 52nd Geopark in the UNESCO Global Geopark
Network, and the first in Southeast Asia. Langkawi's Geoforest park is now a favourite destination for sustainable ecotourism and one that has brought
greater economic opportunities for local people. Through their work on the
Geopark, UKM’s researchers have been accorded international recognition
by the Global Geopark Bureau. UKM is the secretariat for the Asia-Pacific
Global Geopark Network (APGGN).

The university's contribution to sustainable development is a highly meaningful indicator, but one that is seldom used. UKM has numerous projects
that promote social harmony, environmental conservation, entrepreneur-
ship, gender equality, poverty eradication and sustainable as well as inclusive
development that generally lead to a better quality of life of communities.
Yusuf and Nabeshima (2007) have also reported such findings. These meas-
ures are not captured in international rankings although they may also bring
universities to global prominence. This omission should not deter universi-
ties from pursuing their mission to transfer knowledge that can create value
in producing innovations that directly drive sustainable wealth generation
and societal development at the local, national and global levels.

Transparency and misuse of rankings


Ranking is not a perfect measure of a university’s actual worth. It also does
not address the needs of individual stakeholders, such as students who are
looking for specific information to help them select an institution for their
programme of study.

Likewise, in the rush to enhance international reputation some universities recruit international staff and students with scant regard for their qualifica-
tions. Such an approach is not only short-sighted and counterproductive for
institutional capacity-building, but may jeopardize the nurturing of a true
academic culture and endanger the mission of the university itself.

The future of world university rankings


Rankings should not be used to make judgments about who is best.
Universities should be judged on how best they fit the purpose of their
establishment and public funding. Even in the same country, universities differ in their missions. At UKM, for example, our work is not only con-
fined to producing leaders, research output and science that is expected of
a ‘world-class’ university, but also in nation-building and promoting Malay
as a scientific language as befitting our mandate as a national university. In
addition, we are expected to play our role in the national innovation system
by initiating changes that create new value in financial and social returns to
our stakeholders.

Thus, national ranking systems such as SETARA in Malaysia are evolving based on additional measures which are weighted according to the type
of university being evaluated. A research university for example, will have
50  per  cent of total scores devoted to research output and the quality of
academic staff. Indicators measure competitiveness in securing research
grants and endowments, ability to transfer technology to the marketplace
for wealth generation, and effectiveness in transferring knowledge for policy
formulation or community gains.
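
To make the idea of mission-weighted scoring concrete, the short Python sketch below shows how an evaluator might apply different weight profiles to the same set of indicator scores depending on the type of institution. Apart from the 50 per cent weight on research output and staff quality for research universities mentioned above, all profile names, weights and scores are illustrative assumptions rather than SETARA's published scheme.

    # Illustrative sketch of mission-differentiated weighting; not SETARA's actual scheme.
    WEIGHT_PROFILES = {
        # Research universities: half of the total score on research output and staff quality.
        "research": {"research_output": 0.30, "staff_quality": 0.20,
                     "teaching": 0.25, "knowledge_transfer": 0.15, "community": 0.10},
        # A hypothetical profile for teaching-focused institutions.
        "teaching": {"research_output": 0.10, "staff_quality": 0.15,
                     "teaching": 0.45, "knowledge_transfer": 0.15, "community": 0.15},
    }

    def composite_score(indicator_scores, institution_type):
        """Weighted sum of indicator scores, each assumed to be on a 0-100 scale."""
        weights = WEIGHT_PROFILES[institution_type]
        return sum(weights[name] * indicator_scores[name] for name in weights)

    # The same raw scores produce different composites under different missions.
    scores = {"research_output": 80, "staff_quality": 70, "teaching": 60,
              "knowledge_transfer": 55, "community": 65}
    print(composite_score(scores, "research"))  # 67.75
    print(composite_score(scores, "teaching"))  # 63.5

The point of the sketch is simply that the same institution can score quite differently once the weights are tied to its designated mission rather than to a single 'world-class' template.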

Conclusion
Rankings are here to stay but this does not mean that governments should
initiate policies targeted at creating ‘world-class’ universities as a panacea for
success in a global economy. Instead they should focus on making their edu-
cation system world class. International rankings do provide useful compara-
tive data that can be a driving force for a university to examine its research
and teaching quality, and hence design appropriate strategies and actions
for continuous quality enhancements in building a research culture and the
foundation for a great university. It is even more useful if ranking methodol-
ogy evaluates the deeper contextual level to enable forward planning for
institutional changes that will ensure a genuine and sustainable improve-
ment in the quality of universities in the medium to long term. A university’s
worth is more than the criteria used in ranking. Although innovations in
stakeholder engagement are currently missing in ranking, the omission
should not deter us from continuing to value these activities. The genuine test
of a university’s mettle is how it continuously anticipates and leads change
through innovations that create new value and give social, environmental
and financial returns for the university, the nation and region. We need to
devise better indicators and methods for assessing the impact on business
innovation, socio-cultural promotion and environmental development of a
region. In the words of Einstein, ‘Not all that counts can be counted’.

References

Economic Transformation Programme. 2010: http://etp.pemandu.gov.my

Nordin, K. 2011. Berita Harian, Daily News (11 September 2011).

Economist, The. 2011. Global science research: paper tigers (29 March 2011): www.economist.com/
blogs/dailychart/2011 (Accessed 1 June 2011.)

Ministry of Higher Education. 2007. National Higher Education Strategic Plan: Beyond 2020.
Malaysia.

Ministry of Higher Education. 2007. National Higher Education Action Plan: 2007-2010. Malaysia,
p. 36.

Third QS World-Class Seminar. 2008. Boosting a University's Global Recognition by Championing Sustainable Regional Development, Langkawi, Malaysia. UKM seminar brochure.

World Bank. 2002. Constructing Knowledge Societies. Washington DC.

Yusuf, S. and Nabeshima, K. 2007. How Universities Promote Economic Growth. Washington DC:
World Bank.



Chapter 11

What’s the use of rankings?


Kevin Downing
The value of rankings to the ‘consumer’
Regardless of their controversial nature, global university rankings are
now a reality, are already exerting substantial influence on the long-term
development of higher education across the world, and are likely here
to stay (Marginson and van der Wende, 2007). Three ranking systems
are currently in positions of relative global dominance in the English-
speaking world. The oldest, by one year, is the ranking prepared by Shanghai Jiao Tong University (SJTU), first issued in 2003; the QS World University Rankings, published by Quacquarelli Symonds (QS) and now in their eighth year, followed in 2004.
In 2010, Times Higher Education also launched a world university rank-
ing system, having separated from seven years of collaboration with QS,
to produce the third global university-ranking offering. These rankings
recognize the growing impact of the global environment on higher edu-
cation systems and institutions, and the importance placed on some
means of identifying institutional excellence by prospective ‘consumers’.
Some of these consumers have the advantage of government-funded or
subsidized opportunities to access higher education, while others will
be investing their own funds to obtain the best education possible for
themselves or, more likely, their offspring.

In almost every walk of life we can make informed choices because we are provided with appropriate ways of assessing the quality of what we
purchase and consequently narrowing down the choice of products we
wish to investigate further. However, for government-funded higher edu-
cation, the currency consumers use is not always money, but the quality
of secondary education and subsequent achievements (usually via final
secondary exit examination grades), and it is only natural that these con-
sumers, and their parents, want to make the right choices from among a
bewilderingly large and globally diverse group of offerings. Very broadly,
the advent of rankings has enabled these individuals to access information
about an institution as a whole that will assist with that choice. While it
might not provide information about the particular strengths and weak-
nesses of the disciplines and departments encompassed within any given
higher education institution (HEI), at undergraduate level it is often the
reputation and ranking of the HEI that will encourage further investigation.
In fact, outside of academic circles (and in some cases inside as well) the
strengths and weaknesses of particular departments or disciplines within an institution are often ignored in favour of recognizing that someone has
a degree from a widely acknowledged and traditionally prestigious institu-
tion. Academics, students, their parents and employers recognize this, and
as students become more globally mobile, the reputation of any HEI in
terms of its standing or ranking comparative to others, will continue to
grow in importance.

Flaws in ranking practices


Taking the QS rankings as an example of a more holistic ranking than
its Shanghai Jiao Tong counterpart, which will be regarded by some as
limited in scope by its focus on research, it is relatively easy to criticize
the ranking process in terms of both the criteria used and the relative
weightings of these. For example, 40 per cent of the QS ranking is based
upon a reputational survey of international academic opinion and the
results from these criteria probably roughly indicate the existing market
position of the institution, rather than its particular merits. In terms of
indicators of internationalization, 5 per cent of the ranking is based upon
the proportion of international students and 5 per cent on the proportion
of international staff. As such, Marginson (2007) is right to point out that
this is probably, in many cases, an indicator of the success of a university’s
marketing division, rather than its researchers. This criticism is further
supported by the fact that only 20 per cent of the QS ranking comes from
research papers and citations. The remaining 30  per  cent of the ranking
score is made up of faculty-student ratio (20 per cent) and employer review
(10  per  cent). Accepting that faculty-student ratio is not a particularly
sophisticated indicator of learning and teaching quality, it is nonetheless
an attempt in a large and wide-ranging survey to obtain some measure of
the contact students might have with their academic mentors. Employer
review is also a reasonable recognition of something that academics are
often too ready to forget: that the majority of their students will probably
be seeking employment after graduation rather than aspiring to careers as
academics. Therefore, these criteria relate more to graduate employability
and work-readiness rather than academic strength, and in particular the
ability to work effectively in a multi-cultural team, to deliver presentations,
and to manage people and projects.
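
As a purely illustrative aside (this is a minimal sketch of a weighted composite, not QS's actual normalization procedure), the following Python fragment shows how the six weights discussed above can be combined into a single score once each indicator has been scaled to a 0-100 range; the example institution's figures are assumptions.

    # QS indicator weights as discussed above; the scaled example values are assumed.
    QS_WEIGHTS = {
        "academic_reputation": 0.40,
        "employer_review": 0.10,
        "faculty_student_ratio": 0.20,
        "citations_per_faculty": 0.20,
        "international_students": 0.05,
        "international_faculty": 0.05,
    }

    def composite(scaled_scores):
        """Weighted sum of indicator scores already scaled to 0-100."""
        return sum(QS_WEIGHTS[name] * scaled_scores[name] for name in QS_WEIGHTS)

    # A hypothetical institution that is strong on reputation but weak on citations.
    example = {"academic_reputation": 85, "employer_review": 70,
               "faculty_student_ratio": 60, "citations_per_faculty": 40,
               "international_students": 90, "international_faculty": 75}
    print(composite(example))  # 69.25

Because reputation carries 40 per cent of the weight, a strong survey result can mask a weak citation record, which is precisely the concern raised above.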



Commoditization vs. healthy competition
To some these global rankings are an indicator that higher education is
being turned into a commodity, with a menu of ‘fast’ options emerging
from the sectorization of institutions both within their own countries and
globally. However, it is important to realize that this sectorization of institu-
tions from high-end research intensive universities, to those who special-
ize largely in learning and teaching without much emphasis on research
output, has been encouraged by governments around the world for many
years, and long before the advent of the major global ranking systems dis-
cussed in this chapter. In some ways, the ranking systems can help those,
often younger, institutions with a rapidly developing research base dem-
onstrate that they are evolving and changing in ways which require their
governments and funding bodies to reassess their identified national role. In
fact, this is the area where it could be expected that rankings will continue
to exert positive influence. For example, if the same institutions remain in
the top 100 or so, year after year, with few newcomers, that would suggest
that either the ranking system does not have sufficient discriminative valid-
ity, or that universities are complacent about their global role and practice.
We live in societies where competition is generally regarded as a necessity
in order to drive progress, and to continuously improve both the quality of
products and the efficiency with which they are produced. Is higher educa-
tion so different or remote from the real world that we are justified in
arguing that we should not be subject to these universal forces? Of course
not. In fact, research has been driven by competition for hundreds of years
and humanity has nonetheless managed to innovate and thrive.

The uses of rankings for higher education institutions (HEIs)
Having established that they are probably here to stay, and considered just a
few of the failings of rankings and some of the possible negative influences
they could exert on global higher education, it is now time to turn to the
positive aspects of rankings for HEIs and suggest some strategic actions that
ambitious universities might take to improve and evidence the quality of the
learning experience for their students, increase the quality of their research
output, attract top researchers and potentially improve funding streams. At
an institutional level, rankings can help focus the minds of faculty on the core business of teaching, research and knowledge transfer, particularly if senior
management identifies a clear set of goals in relation to the ranking criteria.
At its most basic level, this can simply involve recognition that, at a particular
stage in an HEI’s development, it is no longer an issue of researchers pro-
ducing papers, but more a question of the quality of the papers produced
and the journals in which they are published. This can lead to institutions
and departments/disciplines targeting a particular segment of journals for
particular academic grades to publish in. This ensures that faculty members
are provided with a clear idea of what is expected of them in relation to
their grade, and what some of the criteria related to research and publication
might be for promotion.

Institutional rank aside, examining ranking criteria can help an institution focus on some crucial areas of practice, and identify appropriate benchmarks
in line with their institutional aspirations. For example, the QS rankings
identify faculty–student ratio as a very crude indicator of teaching quality
and this has sparked debate at many universities about how to break down
this very broad indicator into something that can be of direct benefit to the
learning environment. Many HEIs currently rely largely on student feedback
questionnaires to provide evidence of quality teaching, and compare scores
across departments and disciplines. These typically provoke considerable debate within HEIs, with those faculty who achieve good student feedback
ratings generally extolling the virtues of the system, and those who do not
coming up with a range of, often legitimate, reasons why they are inaccurate
or simply a measure of a teacher’s popularity. At City University of Hong
Kong, the debate about the validity of student feedback questionnaires has
continued for many years with little hope of consensus. However, the advent
of evidence-based competitive bidding for government funding in the Hong
Kong sector, and the early realization that we were, like it or not, being
ranked by independent external bodies, provided additional impetus for
a radical and wide-ranging look at not only how we assess the quality of
our learning environment, but also how we encourage continuous devel-
opments and improvements which benefit our students. While there are
clearly a number of qualitative indicators that can be and are used to assess
quality, quantitative factors remain a useful tool in terms of institutional
management because they provide ‘hard’ data by which to assess progress
towards strategic goals. One glaring factor to emerge from this debate was a
general lack of clear and reasonably objective performance indicators upon
which the majority of faculty could agree, and which might be used to chan-
nel funding for learning and teaching improvements to areas of potential or
proven excellence.



Benchmarking and performance
indicators
The rankings have provided a timely catalyst for HEIs to identify and engage
in comprehensive benchmarking exercises against institutions, sometimes
with a higher ranking overall or on selected criteria, providing some fas-
cinating insights into how global peers have tackled certain key issues.
Consequently, many HEIs are beginning to develop their own systems for assessing the quality of learning and teaching at a departmental level, which incorporate the best of the observed global practices, while ensuring these
meet particular local and regional requirements. Theoretically this poses a
problem for some who suggest that this might lead to a future lack of dif-
ferentiation in higher education systems around the world as they copy best
practices from one another. In practice, it can be argued that this is unlikely
because universities will always interpret best practice in terms of their local
and regional requirements and contexts. For example, many universities will
have a strong community role that is central to their performance assess-
ment, and this will inevitably differ from one location to another.

The use of more comprehensive benchmarking, encouraged by the various rankings criteria, provides a starting point for evidence-based institutional
improvements, and a more thorough understanding of an institution’s role
against a wider backdrop of similar institutions elsewhere in the world. It
also encourages those HEIs that do not typically give evidence of their per-
formance in certain key areas of practice to consider not only who they are
within their local and regional context, but also how they might demonstrate
that they are developing and improving. Within institutions, this requires
encouraging faculty to both collaborate and compete with each other to help
the institution achieve a level of excellence and adhere to its strategic goals.

This approach involves identifying clear, agreed quantitative performance indicators for the core areas of business (e.g. research, learning and teaching,
knowledge transfer, community, etc.). The example given here is for learn-
ing and teaching, assessed at departmental level and within colleges and
schools, and involves establishing an annual panel of college/school man-
agement together with external experts to consider any anomalous data or
representations from departments (Figure 1).



Figure 1. The process from benchmarking to performance indicators

External Benchmarking: use ranking criteria to identify appropriate benchmarks in line with institutional aspirations; benchmark against 'best practice' and learn from peer institutions.

College/School Level: establish a panel of management and external experts to consider anomalous data or representations from departments; strategy can then be developed to address issues of accountability and improve quality.

Departmental Level: annual assessment based on quantitative performance indicators for learning and teaching, research, and knowledge transfer.

Source: author.

The example performance indicators for learning and teaching (Figure 2) suggested in this chapter are based on many years of feedback from students
and alumni, and the broad requirements of governments in terms of moni-
toring the progress of HEIs within their jurisdiction. Many are also aligned
with some rankings criteria. They include three indices: the Input Quality
Index, the Output Quality Index, and the Staffing and Resources Index. Each
index contains a number of performance indicators, which form a basis for
assessing the annual performance of departments compared to their inter-
nal and external benchmarks.



Figure 2. Example performance indicators for learning and teaching

Input Quality Index: average entry A-Level score; average entry English score; % international students; % self-financed students.

Staffing and Resources Index: % faculty to total academic staff; % international faculty; number of students per faculty; % faculty with PhD or professional accreditation.

Output Quality Index: % outbound exchange students; % students with internship experience; % graduates with FT employment (within six months of completion).

Source: author.

For example, one of the indicators for the Input Quality Index is the per-
centage of international students studying full time in that department (this
is data increasingly required by governments for competitive bidding and
funding allocation purposes), and is one indicator of international outlook.
Clearly, some departments will not, by the very nature of their programmes,
be able to attract large numbers of international students, and this is a fac-
tor which can be considered by the annual review panel. The Input Quality
Index also provides a baseline for longitudinal measurement of a selection
of performance indicators via the gap between input data and data from
the Output Quality Index. A typical indicator for output quality might be
the percentage of students with an internship or placement experience
over thirty days in total. This might be an important factor in terms of
the academic direction of those institutions where employers of graduates
have indicated that they increasingly want graduates who leave university
with transferable 'functioning' knowledge rather than just 'declarative' or 'procedural' knowledge (Biggs, 1999). Many students who have engaged
in positive internship opportunities have indicated that the experience
contributed significantly to their learning at university. Therefore, the dif-
ferences between the Input Quality Index performance indicators and the
Output Quality Index performance indicators can provide a quantitative
measure of value-added during the period of undergraduate study against
a selected range of important criteria designed to improve and demon-
strate the developing quality of the learning experience.
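
A minimal sketch of this value-added calculation is given below; the criteria and figures are hypothetical, not City University of Hong Kong data, but they show how the gap between entry and exit measurements can be expressed for a cohort.

    # Hypothetical entry and exit measurements for one undergraduate cohort (0-100 scales).
    at_entry = {"english_proficiency": 55, "work_readiness": 20, "global_exposure": 10}
    at_exit = {"english_proficiency": 78, "work_readiness": 65, "global_exposure": 40}

    def value_added(entry, exit_profile):
        """Difference between exit and entry measurements for each tracked criterion."""
        return {criterion: exit_profile[criterion] - entry[criterion] for criterion in entry}

    print(value_added(at_entry, at_exit))
    # {'english_proficiency': 23, 'work_readiness': 45, 'global_exposure': 30}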

The Staffing and Resources Index contains data related to staff–student ratio,
staff grades, IT provision, and the percentage of international staff attracted
to work with a particular HEI, as well as other indicators. A serious look at
this data from my own institution’s viewpoint has allowed us, among other
initiatives, to make significant cuts in self-financed programmes over the
past three years, freeing up more time for faculty to spend with students
and engage in research-related work. It has also helped us begin to create
the right staffing profile within departments in terms of grades and the
ratio between academic faculty and administrative staff.

All three indices are made up of a number of performance indicators, which can be adapted or changed to align with the strategic direction
of a particular university or department, or as result of feedback from
stakeholders. Interestingly, although the indices are clearly aligned with
the QS criteria, this was not an intentional process when designing the
indicators, and although the QS rankings were a key catalyst for seriously
re-examining how to demonstrate the quality of learning environments (as
well as research and knowledge transfer performance), the major drivers
in the case of the City University of Hong Kong were feedback from our
students, alumni and employers, and moves from the University Grants Committee in Hong Kong to introduce evidence-based competitive bidding
amongst local HEIs.

Within HEIs, the identification of appropriate performance indicators for core tasks in line with strategy allows for better management of per-
formance at departmental level. Institutional research offices can then
prepare annual ‘growth charts’ with selected indicators, which allow
departments within a discipline or college to be compared in terms of the
chosen criteria. Example charts for learning and teaching are provided as
Figures 3 and 4, although additional criteria are also available for research
and grant income as well as knowledge transfer, community contributions
and administration.



Figure 3. Example growth chart (department X)
(The chart plots each indicator against Threshold (one star), Towards Excellence (two star) and Excellence (three star) bands; values shown are current performance, with the transition delta in parentheses.)

Input Quality Index
  Average Entry A-Level Score: 15.8 (transition delta 0.2)
  Average Entry English Score: 3 (transition delta 0.5)
  % International Students: 18% (transition delta 2%)
  % Self-financed Students: 0% (transition delta 0%)

Staffing and Resources Index
  % Faculty (A to I Grade) to Total Academic Staff: 62% (transition delta 18%)
  % International Faculty (FTE): 51% (transition delta 0%)
  Number of Students per Faculty: 9 (transition delta -2)
  % Faculty with PhD or Professional Accreditation: 91% (transition delta 9%)

Output Quality Index
  % Outbound Exchange Students: 17% (transition delta 3%)
  % Students with Internship Experience: 53% (transition delta 17%)
  % Graduates with FT Employment (within 6 months of completion): 97.5% (transition delta 2.5%)

Source: author.

Figure 4. Example growth chart (department Y)
(Values shown are current performance, with the transition delta in parentheses, as in Figure 3.)

Input Quality Index
  Average Entry A-Level Score: 13.8 (transition delta 0.2)
  Average Entry English Score: 1.6 (transition delta 0.9)
  % International Students: 3% (transition delta 7%)
  % Self-financed Students: 39% (transition delta 1%)

Staffing and Resources Index
  % Faculty (A to I Grade) to Total Academic Staff: 30% (transition delta 10%)
  % International Faculty (FTE): 31% (transition delta 9%)
  Number of Students per Faculty: 14 (transition delta -1)
  % Faculty with PhD or Professional Accreditation: 60% (transition delta 20%)

Output Quality Index
  % Outbound Exchange Students: 0% (transition delta 15%)
  % Students with Internship Experience: 12.5% (transition delta 17.5%)
  % Graduates with FT Employment (within 6 months of completion): 9.3% (transition delta 2%)

Source: author.



These charts can then be used as a basis for more evidence-based allocation
of funding at annual budget hearings and funding allocation meetings. They
can also be used to compare the performance of departments (and potentially
their leadership and faculty) in line with the defined institutional and depart-
mental/college based strategies. A comparison of Figures 3 and 4 above reveals
the gap in performance between department X and department Y, which can
alert senior management to departments where performance is not optimized,
so that appropriate steps can be taken to identify and rectify the problem(s).
Essentially, departments can then be graded within the institution as zero,
one, two or three-star in terms of the core area being assessed – in this case
learning and teaching. It might be the case that one department does not
perform well on some indicators, but this might be as a result of its discipline
or role. For example, one might expect that a local social work department
might not attract many international students, or that a largely learning and
teaching focused department with a strong community role might not be too
interested in outbound student exchange numbers. While I have not shared
the research, knowledge transfer, community and administration indicators
that I have developed here, the reader will recognize that it is possible that
some departments will be three-star for learning and teaching, and perhaps
one-star for research or knowledge transfer. There might be good strategic
reasons for this within the institution, so the performance of that department
might be regarded as exemplary despite a lower score than other departments
in the same college in terms of research or knowledge transfer. Equally, some
disciplines might not be suited to some indicators for a range of reasons, in
which case indicators can be adapted and weighted accordingly. Consequently,
this model is entirely flexible and can be fitted to a wide range of disciplines
and contexts. Therefore, by adapting and using the performance indica-
tors wisely and fairly, universities can ensure that they stay on their chosen
strategic course, and, perhaps just as important in today’s metrics-conscious
environment, they can provide evidence of their progress.
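
The sketch below, in Python, illustrates one way such a star grading might be operationalized; the band thresholds, the choice of indicators and the whole-star averaging rule are assumptions made for the example, not a scheme actually used at any institution.

    # Assumed star bands for three learning and teaching indicators:
    # (threshold, towards excellence, excellence) cut-off values.
    BANDS = {
        "% International Students": (5, 15, 25),
        "% Students with Internship Experience": (20, 40, 60),
        "% Graduates in FT Employment": (70, 85, 95),
    }

    def stars(indicator, value):
        """Return 0-3 stars for one indicator, one star per band reached."""
        return sum(value >= cut for cut in BANDS[indicator])

    def department_grade(values):
        """Average star rating across indicators, rounded down to a whole star."""
        return sum(stars(name, value) for name, value in values.items()) // len(values)

    dept_x = {"% International Students": 18,
              "% Students with Internship Experience": 53,
              "% Graduates in FT Employment": 97.5}
    print(department_grade(dept_x))  # 2 (a two-star grade for learning and teaching)

Indicators that do not suit a given discipline can simply be dropped or given different cut-offs, which is the flexibility described above.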

Conclusion
This chapter has considered an often-neglected aspect of the new rankings
culture, namely the benefits individual institutions can gain from the rank-
ing concept. A fairly pragmatic view has been taken which acknowledges that
rankings are here to stay, and have in fact been with us long before the advent
of the Shanghai Jiao Tong or QS rankings. Are rankings propelling us towards
the commoditization of HEIs and their offerings, or merely providing at least
some comparative measures of an institution's global standing and a catalyst for further healthy competition? Whatever the answer to this question, there
can be no doubt that the notion of a ‘world-class university’ is becoming ever
more important to governments, employers, investors, alumni, students,
parents and institutions themselves and, without some sort of measurement,
it is difficult to identify which universities may qualify today, and how those
institutions with real ambition might qualify tomorrow. Reputation alone is a
recipe for stagnation and avoidance of healthy competition, and encourages
potentially biased self-justification.

All rankings inevitably invite criticism (Downing, 2012), and it is often easier to concentrate on what is wrong with them than to try to identify how they might be used to bring about practical, positive strategic change that will benefit all
stakeholders, not least the ultimate product of our endeavours – the quality of
our graduates and our research output. The author believes that rankings have
encouraged many institutions to take a new approach that concentrates on ana-
lysing and identifying appropriate performance indicators (in consultation with
all stakeholders), which provide evidence of improvements to the core activities
of learning and teaching, research and knowledge transfer. Consequently, rank-
ings have helped create a global environment where it is now easier to make
and provide evidence of real and positive qualitative improvements in these
areas. If the result of these improvements is a significant rise in the institution’s
score on one or more of the ranking criteria then that should be regarded as
a bonus. Rankings do provide reasonable, broadly comparative measures of
an institution’s global standing and can be used to help foster healthy com-
petition among the best universities. They are also useful self-evaluation tools
that enable universities to appropriately benchmark and bring about positive
strategic change, which ultimately benefits all stakeholders, not least students.

References

Biggs, J. 1999. Teaching for Quality Learning at University. London: The Society for Research into Higher Education/Open University Press.

Downing, K. 2012. Do rankings drive global aspirations at the expense of regional development?
In: M. Stiasny and T. Gore (eds), Going Global: The Landscape for Policy Makers and
Practitioners in Tertiary Education. Bingley, UK: Emerald Group, pp. 31–39.

Marginson, S. 2007. Global university rankings: Implications in general and for Australia. Journal of
Higher Education Policy and Management, 29(2): 131–42.

Marginson, S. and van der Wende, M. 2007. To rank or be ranked: The impact of global rankings in
higher education. Journal of Studies in International Education, 11(3/4): 306–29.



Chapter 12

A decade of international
university rankings:
a critical perspective from
Latin America
Imanol Ordorika and Marion Lloyd
A decade ago, education researchers at Shanghai Jiao Tong University set
out to determine how far Chinese institutions lagged behind the world's
top research universities in terms of scientific production (Liu and Cheng,
2005). The result was the Academic Ranking of World Universities (ARWU,
2003),1 the first hierarchical classification of universities on a global scale.
Despite the relatively narrow focus of the ranking methodology, the results
were widely viewed as a reflection of the quality of an individual institution,
or at least, the closest possible approximation. Other international univer-
sity rankings quickly followed, creating a ripple effect with far-reaching
consequences for higher education institutions worldwide.

While similar classification systems and league tables have existed on a national or regional scale for several decades in the English-speaking
world (Turner, 2005; Webster, 1986), the impact of international rankings
has been particularly significant, both on individual institutions and on
national higher education systems as a whole. By comparing institu-
tions as far afield as Shanghai, Cape Town and New York, the rankings
project the universities beyond their local and regional contexts, expos-
ing them to unprecedented scrutiny. In the context of globalization and
dwindling government funding for higher education, universities already
face increasing pressure to compete for resources and students. In their
efforts to stand out, university administrators frequently seize on inter-
national rankings as ‘evidence’ of the superior quality of their institution.
Meanwhile, government officials, higher education experts and the media
employ these classification systems to defend or criticize higher-education
policies (Ordorika and Rodríguez, 2008; 2010). In some cases, interna-
tional rankings have been used to determine the amount of state subsi-
dies public institutions receive, as well as to influence students’ decisions
about which university to attend and how much tuition they are willing to
pay. They also impact decision-making and strategic planning on the part
of administrators, as they seek to emulate the highest-ranked universi-
ties. In Denmark, rankings even play a role in immigration policy, with graduates of highly ranked universities receiving extra points in applying for work or residency permits.2

1 The Academic Ranking of World Universities (ARWU) has been produced annually since 2003 by the Institute of Higher Education at Jiao Tong University in Shanghai. It compares 1,200 universities worldwide and classifies 500 on the basis of their scientific production, taking into account the following criteria: the number of Nobel Prize and Fields Medal winners among the university's alumni and staff; the number of highly cited researchers in twenty-one subject categories; articles published in the journals Science and Nature, and the number of publications listed in Thomson Reuters (ISI) Web of Knowledge (ISI WoK), one of two competing bibliometric databases of peer-reviewed scientific journals; and per capita scientific production, based on the previous indicators.

In short, the impact of international rankings can hardly be overstated. This is because, beyond their scope, purpose or limitations, they are viewed by many as objective measures of institutions' quality, and the similarities in the order of the different rankings only serve to legitimize the results.
But is this uncritical view really justified? The answer is a categorical no.
In reality, as we argue in this chapter, the rankings are heavily biased
towards a sole model of higher education: the elite, US research univer-
sity, of which Harvard is the premier example. Furthermore, the myriad
problems and limitations of the rankings, such as lack of transparency in
their methodology, bias towards the English language, and their homog-
enizing influence, often far outweigh their potential benefits (Berry, 1999;
Bowden, 2000; Federkeil, 2008a; Florian, 2007; Ishikawa, 2009; Jaienski,
2009; Ordorika et  al., 2009; Provan and Abercromby, 2000; Van Raan,
2005; Ying and Jingao, 2009).

Such is the case in Latin America which, despite a 500-year tradition of higher
education, has fewer than a dozen universities represented among the top
500 in the main rankings. The shortage of funding for higher education and
research, in particular, is partly to blame for the region’s limited presence. But
there is another explanation: the rankings do not take into account the full
range of roles and functions of Latin American universities, which extend far
beyond teaching and research. Public universities, in particular, have played
a vital role in building the state institutions of their respective countries
and in solving their nations’ most pressing problems, to say nothing of the
wide array of community service and cultural programmes that they offer
(Ordorika and Pusser, 2007; Ordorika and Rodríguez, 2010). The largest pub-
lic universities act as what Ordorika and Pusser have termed ‘state-building
universities’ (2007), a concept that has no equivalent in the English-speaking
world (Ordorika and Pusser, 2007). However, the rankings do not take into
account the huge social and cultural impact of these institutions of higher
education in Latin America and elsewhere. Instead, such universities often
feel pressure to change in order to improve their standing in the rankings, in

2 Denmark classifies candidates for work and residency permits according to a point system, which takes
into account the candidate’s level of education, among other factors. In evaluating post-secondary
degrees, it relies on the results of the QS World University Rankings, produced by the British-based
educational services company, Quacquarelli Symonds. Graduates of universities ranked among the top
100 universities receive 15 points (out of a total of 100); graduates of institutions in the top 200 receive
10 points; and those in the top 400, 5 points, according the following government immigration website:
www.nyidanmark.dk/en-us/coming_to_dk/work/greencard-scheme/greencard-scheme.htm

Chapter 12. A decade of international university rankings:


a critical perspective from Latin America 211
the process sacrificing their individual and national character as institutions
(IESALC, 2011; Ordorika and Rodríguez, 2008; 2010).

Such a homogenizing influence is only one of several negative effects of the rankings, which we examine in further detail in this chapter. We begin by dis-
cussing the context in which rankings emerged almost a decade ago, before
consolidating their influence, primarily within government and university
policy offices and the media. We also discuss the principal rankings, on the
national, regional and international level, and the diversity among them. We
then go on to analyse the limitations of the ranking methodologies, before
examining their effects, with particular focus on the Latin American context.

The context behind the rankings


The popularity of rankings is partly a reflection of the increasingly pervasive
‘culture of accountability’ in policy agendas, as well as societal demands for
access to information in both the public and private spheres. In this con-
text, higher education institutions have faced growing pressures to develop
instruments to measure, classify and track their performance in academic
and administrative areas, resulting in evaluation dynamics with wide-ranging
goals (Bolseguí and Fuguet, 2006; Elliott, 2002; Power, 1997). These include
transparency and accountability with regard to finances, particularly in the
case of publicly funded institutions; the implementation of formulas for
improving and guaranteeing quality; public accounting of goals and results;
and government control over the performance of individual institutions or
a system as a whole, among others (Acosta, 2000; Borgue and Bingham,
2003; Díaz Barriga, Barrón Tirado and Díaz Barriga Arceo, 2008; Ewell, 1999;
Mendoza, 2002; Palomba and Banta, 1999; Rowley, Lujan and Dolence, 1997;
Villaseñor, 2003). Among the range of mechanisms for achieving account-
ability, comparative evaluation has gained in prominence, to the degree that
it offers reference points for contrasting achievements and improvements by
different institutions or within university systems. In that context, rankings
and league tables have become increasingly popular and their results are
frequently taken into account in designing university policies (Merisotis and
Sadlak, 2005; Marginson, 2007). In the logic of the rankings, there is a need to
reestablish the principle of academic hierarchy, which has been undermined
by the massification and indiscriminate dissemination of knowledge via the
internet. Proponents of rankings argue that it is in the interest of higher education insti-
tutions, national governments, editorial companies, scientific communities and other relevant actors to agree on classification criteria that are based on
common ideals and academic values, in order to compete within the global
knowledge economy (Ordorika and Rodríguez, 2008).

The methodology also responds to demands, established from a market perspective, to classify and arrange hierarchically the multiplicity of insti-
tutions that coexist within an increasingly diversified and stratified world
of education services (Brennan, 2001; Cuenin, 1987; Dill, 2006; Elliott,
2002; Kogan, 1989; Marginson and Ordorika, 2010; Puiggrós and Krotsch,
1994; Strathern, 2000).

The rankings reflect the evolving battle on a global level for control over the
flow of knowledge: the system of knowledge prestige, exemplified by the
rankings, tends to reproduce the status quo, in which universities that have
traditionally dominated in the production of scientific knowledge ratify their
position in the global hierarchy, and a minority of emerging institutions
attempt, and occasionally succeed, in establishing a competitive presence
(IESALC, 2011; Marginson and Ordorika, 2010). ‘Rankings reflect prestige and
power; and rankings confirm, entrench and reproduce prestige and power’
(Marginson, 2009: 13). The pressure to follow the leader results in an expen-
sive ‘academic arms race’ for prestige, measured mostly in terms of research
production in the sciences, medicine and engineering (Dill, 2006).

The pernicious effect of this competitive pursuit of academic prestige is that it is a highly costly, zero-sum game, in which most institutions
as well as society will be the losers, and which diverts resources as
well as administrative and faculty attention away from the collective
actions within universities necessary to actually improve student
learning (Dill, 2006: 6).

In such a context, other university priorities, such as community outreach and extension programmes, or even research in the humanities and social
sciences, tend to fall by the wayside.

The diversity of rankings


There are currently a wide variety of ranking-style classification systems at the
inter­national, regional and national levels. The international rankings with
the greatest impact in Latin America are ARWU, the Times Higher Education World University Rankings (THE),3 the QS World University Rankings,4
Webometrics5 and SCImago Institutions Rankings (SIR).6 The European
Union7 and the University of Leiden,8 which in recent years has begun pro-
ducing its own international ranking as well, stand out among the regional
systems. There are also national classification systems in several countries.
In the United States, the most well-known of these are the one produced
by US News and World Report 9 and The Top American Research Universities.10
In the United Kingdom, several newspapers (The Times,11 The Independent12 and The Guardian13) publish occasional guides to the best universities and programmes based on ranking indicators.

3 The Times Higher Education ranking was originally published by the higher education supplement of the
Times newspaper, one of Britain's leading dailies. From 2004 to 2009, the THE rankings were compiled
by Quacquarelli Symonds, a private educational services company based in London. The ranking
classifies the universities throughout the world on the basis of a combination of indicators related to
scientific production, as well as the opinions of academic peers and employers.
4 Starting in 2004, Quacquarelli Symonds began producing international rankings of universities for the
Times Higher Education Supplement (THE). However, in 2009, QS ended its agreement with THE and
began producing its own rankings, using the methodology it previously employed for THE. Since 2009, it
has produced annual versions of the Ranking of World Universities, as well as expanding its production
to include rankings by region and by academic area. The most recent are the QS Ranking of Latin
American Universities and the QS World University Rankings by Subject, both of which were introduced
for the first time in 2011. The latter ranking classifies universities on the basis of their performance in
five areas: engineering, biomedicine, natural sciences, social sciences, and arts and humanities.
5 The Webometrics Ranking of World Universities has been produced since 2004 by Cybermetrics Lab
(CCHS), a research group belonging to the High Council for Scientific Research (Consejo Superior
de Investigación Científica) (CSIC) in Spain. Webometrics classifies more than 4,000 universities
throughout the world on the basis of the presence of their webpages on the internet.
6 Since 2009, the SCImago Research Group, a Spanish consortium of research centers and universities
– including the High Council for Scientific Research (CSIC) and various Spanish universities – has
produced several international and regional rankings. They include the SIR World Report, which
classifies more than 3,000 universities and research centres from throughout the world based on their
scientific production, and the Ibero-American Ranking, which classifies more than 1,400 institutions
in the region on the basis of the following indicators: scientific production, based on publications in
peer-reviewed scientific journals; international collaborations; normalized impact and publication
rate, among others. SCImago obtains its data from SciVerse Scopus, one of the two main bibliometric
databases at the international level.
7 The ranking of the scientific production of twenty-two universities in European Union countries was
compiled in 2003 and 2004 as part of the Third European Report on Science & Technology Indicators,
prepared by the Directorate General for Science and Research of the European Commission.
8 The Leiden Ranking, produced by Leiden University's Centre for Science and Technology Studies
(CWTS) is based exclusively on bibliometric indicators. It began by listing the top 100 European
universities according to the number of articles and other scientific publications included in
international bibliometric databases. The ranking later expanded its reach to include universities
worldwide.
9 The US News and World Report College and University ranking is the leading classification of colleges and
universities in the United States and one of the earliest such systems in the world, with the first edition
published in 1983 (Dill, 2006). It is based on qualitative information and diverse opinions obtained
through surveys of university professors and administrators. See: www.usnews.com/rankings
10 The Top American Research Universities, compiled by the Center for Measuring University Performance,
has been published annually since 2000. The university performance report is based on data on
publications, citations, awards and institutional finances. See: http://mup.asu.edu/research.html
11 See Good Universities Guide, at: www.gooduniguide.com.au/
12 See The Complete University Guide, at: www.thecompleteuniversityguide.co.uk/
13 See The Guardian University Guide, at: http://education.guardian.co.uk/universityguide2005



programmes based on ranking indicators. In Canada, the most prestigious
is the Maclean’s universities guide, produced by the magazine of the same
name;14 in Australia, The Good Universities Guide,15 and in Germany, the rank-
ing produced by the Center for the Development of Higher Education (CHE),16
which includes classifications for Germany, Switzerland and Austria. In Chile,
El Mercurio newspaper publishes the General Panorama of the Country's Best
Universities.17 In Brazil, the publisher Abril produces the Student's Guide18
series, which includes a university ranking. It also awards the annual Best
University Prizes, with sponsorship from Banco Real,19 a leading bank. It is
worth noting that the vast majority of classification lists have been developed
either by newspaper or magazine publishers or by independent consulting
firms. However, an increasing number of academic bodies, comprised of
specialists in evaluation techniques, are starting to generate and disseminate
their own such instruments20 (Ordorika and Rodríguez, 2008; 2010).

One area in which institutional evaluation practices converge with the rank-
ings is in the use of the results from student exams, as well as information
related to the fulfillment of other parameters and performance indicators.
One such instrument is the National Student Performance Exam (ENADE),
administered by the National Institute of Educational Research and Studies
(INEP) in Brazil; others are the State Higher Education Quality Exams (ECAES),
administered by the Colombian Institute for the Support of Higher Education
(ICFES) (Ordorika and Rodríguez, 2008; 2010).

The explicit objective of these general exams is to provide education
authorities (both in government and within the institutions) with elements to
facilitate decision-making. The results of the tests applied to institutions
and programmes are also made available to the public as part of a culture
of accountability. The public dissemination of the evaluations is part of an

14 It is published in the OnCampus supplement, accessible at: http://oncampus.macleans.ca/education/


category/rankings/
15 Published by Hobsons, a publisher and educational and labour services consulting company. See: www.
gooduniguide.com.au/
16 The CHE describes itself as a think-tank dedicated to promoting development and advocating new ideas
and concepts to be applied to educational systems and institutions. It provides consulting and training
services, as well as publishing a yearly university ranking. See: www.che-ranking.de/cms/
17 See: www.emol.com/especiales/infografias/ranking_universidad/index.htm
18 See: http://guiadoestudante.abril.com.br/
19 See: www.melhoresuniversidades.com.br
20 For example, the group of academics at the Graduate School of Education, at Shanghai Jiao Tong
University, charged with producing the Academic Ranking of World Universities (ARWU); the Research
Group SCImago, comprised of researchers at universities in Spain; and the Map of Higher Education in
Latin America and the Caribbean, which is in the process of being developed by a team of specialists at
IESALC-UNESCO.

effort to promote competitiveness among institutions and programmes.
Although the results of the ENADE (Brazil) and ECAES (Colombia) exams are
not presented in the form of institutional rankings, they tend to be taken as
such by the media and public opinion (Ordorika and Rodríguez, 2008).

Other institutional evaluations, in particular programme accreditation
systems, also offer possibilities for hierarchical classifications.
Given the tendency within countries to adopt the international accreditation
protocols for higher education, the results of these evaluation processes also
tend to form part of the criteria included in the rankings (Buelsa et al., 2009;
Rodríguez, 2004).

The information generated by the mechanisms for institutional evaluation
(student exams, processes of evaluation and accreditation of institutions and
programmes, evaluation of the academic staff) is used by the rankings to
strengthen their degree of objectivity. However, as we argue in this chapter,
many critics question the use of rankings as instruments for determining,
based on a limited range of indicators, the quality of universities. There is
also criticism surrounding the undesirable effects of basing public policy
decisions and institutional reforms on the results of rankings.

Methodological basis of rankings: problems and perspectives
University rankings are distinguished essentially on the basis of their
methodologies: those that base their analysis on the quantitative evaluation
of knowledge production, employing indicators such as the number of pub-
lications and citations, among other comparative data (Dill and Soo, 2005);
and those that rely on surveys of institutional image and reputation: evalu-
ations of academic peers or of the consumers of educational services, such
as students, parents and employers (Ackerman, Gross and Vigneron, 2009;
Beyer and Snipper, 1974; Cave et  al., 1997; Federkeil, 2008b). Increasingly,
there is a tendency by rankings to make use of both methodologies, with
some combination of quantitative and qualitative indicators (Filip, 2004;
Usher and Savino, 2006).
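
To make the construction of such composite scores concrete, the sketch below (Python, purely illustrative) combines a few quantitative and qualitative indicators into a single weighted score per institution. The institutions, indicator names, weights and figures are all hypothetical and do not reproduce the formula of any actual ranking.

# Illustrative sketch only: a generic weighted composite score of the kind
# described above, not the formula of any real ranking. All names, weights
# and figures are hypothetical.

def minmax_normalize(values):
    """Rescale a list of raw indicator values to a 0-100 range."""
    lo, hi = min(values), max(values)
    return [100 * (v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(institutions, indicators, weights):
    """Combine normalized indicators into one weighted score per institution."""
    normalized = {name: minmax_normalize([indicators[inst][name] for inst in institutions])
                  for name in weights}
    return {inst: sum(weights[name] * normalized[name][k] for name in weights)
            for k, inst in enumerate(institutions)}

institutions = ["Univ A", "Univ B", "Univ C"]
indicators = {
    "Univ A": {"publications": 5200, "citations": 61000, "reputation": 72},
    "Univ B": {"publications": 3100, "citations": 40000, "reputation": 85},
    "Univ C": {"publications": 1800, "citations": 12000, "reputation": 48},
}
weights = {"publications": 0.4, "citations": 0.4, "reputation": 0.2}  # hypothetical weights

for name, score in sorted(composite_scores(institutions, indicators, weights).items(),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.1f}")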

As previously mentioned, these classification systems tend to serve as key
reference points in the design of public policies and institutional reforms.



At the same time, they have become a recurrent topic in the media, leading
to a distorted perception that equates an institution's position in the rankings
with a complete picture of its quality, encompassing all aspects of its
performance (Espeland and Sauder, 2007; Hazelkorn,
2007; Marginson, 2009; Marginson and Van der Wende, 2006; Roberts and
Thomson, 2007; Salmi and Saroyan, 2007; Siganos, 2008; Thakur, 2008).

This situation has sparked intense debates, studies, analyses and criticisms
regarding the limits and risks of the hierarchical classification systems. Among
controversial aspects of comparing institutions of higher education are: the
selection and relative weight of the indicators; the reliability of the informa-
tion; and the construction of numeric grades on which the hierarchies are
based. There has also been criticism surrounding the homogenizing nature of
the rankings, the predominance of the English language, and the reductionist
manner in which a single evaluation of the quality of an institution, which is
in turn based solely on its scientific production, is taken as definitive (Berry,
1999; Bowden, 2000; Federkeil, 2008a; Florian, 2007; Ishikawa, 2009; Jaienski,
2009; Ordorika, Lozano Espinosa and Rodríguez Gómez, 2009; Provan and
Abercromby, 2000; Van Raan, 2005; Ying and Jingao, 2009).

The commercial orientation of many of the rankings – and of THE and QS
in particular – has also sparked concerns, due to the potential for profit
motives to sway the results (Ordorika and Rodríguez, 2010). For example,
QS and other commercial rankings offer consulting services to universities
with the promise of improving their standing in the ranking. This creates a
potential conflict of interest, as the ranking organization may feel obligated
to elevate its client in the following year's ranking to justify the cost of its
consulting services. Since many of the rankings do not provide access to
the information used in ordering the universities, there is potential leeway
for tampering with the results to favour one university over another. Other
profit-making activities associated with rankings are: the sale of advertise-
ments both in print and on the ranking organization’s webpage, particularly
around the time the annual results are released; charging a fee for access
to the full list of universities and related information; promoting their own
data providers; and the creation or sale of specialized information services
(Ordorika and Rodríguez, 2010).

In order to be profitable, rankings must generate expectations regarding
their results. One way of doing this is to change the order of the universi-
ties from year to year, at times, in the case of the lower-ranked institu-
tions, even moving them by 100 or more spots in the hierarchy (Ordorika
and Rodríguez, 2010). In the case of the first QS Latin America University
Rankings, the order of universities in the region did not correspond to their
respective positions in the same year’s QS World Ranking, a phenomenon
which resulted in a flurry of media reports highlighting the unexpected
winners – and thus, heightened exposure for QS. We examine the Latin
American presence in the rankings in more detail in the section on the
region’s university tradition.

The shift in ranking methodologies from year to year could be expected to
produce small variations. But the degree of volatility is such that it calls
into question the very justification for the rankings: the need for objective
measurement systems that policy-makers can take at face value in orient-
ing their institutional or national strategies. So far, the critiques of the
rankings on the part of academics, both at the national and international
level, have yet to acquire the critical mass needed to provoke changes in
the methodologies applied, nor have they succeeded in limiting the pro-
liferation of rankings. On the contrary, all signs seem to indicate that the
rankings are establishing themselves as key actors in institutional reform
processes, given their current use on the part of public policy designers, as
well as the increasing demand for information regarding the performance
of institutions or programmes (Altbach, 2006; Cyrenne and Grant, 2009;
Hazelkorn, 2008; Sanoff, 1998).

However, while the criticisms of the rankings have had little practical
impact, they have generated a space for constructive discussion of the ben-
efits and limitations of the classification systems. In this regard, there are
numerous proposals that seek to define adequate standards and practices,
in the interest of improving the transparency, reliability and objectivity of
existing university rankings. Such proposals would benefit both the rank-
ings administrators and their users (Carey, 2006; Clarke, 2002; Diamond
and Graham, 2000; Goldstein and Myers, 1996; Salmi and Saroyan, 2007;
Sanoff, 1998; Vaughn, 2002; Van der Wende, 2009). The most well-known of
these initiatives is the one proposed by the International Ranking Experts
Group (IREG).21

During their second meeting on rankings in Berlin, in May 2006, the
group of specialists that form part of IREG released a report entitled
Berlin Principles on Ranking of Higher Education Institutions. Subsequently,

21 The IREG was established in 2004 as part of the Follow-up Meeting for the Round Table entitled ‘Tertiary
Education Institutions: Ranking and League Table Methodologies.’ The meeting was jointly sponsored by
the UNESCO European Centre for Higher Education (CEPES) and the Institute for Higher Education
Policy (IHEP).



the IREG has concentrated its efforts on organizing the International
Observatory on Academic Ranking and Excellence,22 which disseminates
information on the main national and international rankings, as well as
the activities conducted by the working group. Some of the suggested
practices are starting to be adopted by the most influential global rank-
ings and, in general, the principles have focused the current debate on
future perspectives for the classification models (Cheng and Liu, 2008;
McCormick, 2008).

The Latin American perspective


In May 2011, university presidents and administrators from throughout
Latin America and the Caribbean gathered in Buenos Aires for a UNESCO-
sponsored conference on higher education and drafted a joint declaration
in opposition to the rankings.23 The document cites the following limita-
tions and negative effects of the rankings: (a) the lack of clarity regarding
the selection criteria by which institutions are evaluated; (b) the failure
of the rankings to specify the numeric distance between institutions, or
to reveal the actual indicators used to compute the results; (c)  the use
of a limited number of indicators to determine the overall quality of the
institutions; (d) the undesirable effects of the rankings’ dissemination by
the media, and in particular, the pressure exerted on institutions to make
changes within the logic of the rankings, rather than based on their own
institutional goals; (e) the totalizing nature of the rankings, which equate
numeric indicators with the universities’ merit as institutions; (f ) the risk
to university autonomy posed by the pressure on institutions to focus
solely on those areas measured by the rankings; (g) the resulting distor-
tion of university budget priorities; and (h) the fact that the rankings are
based on a sole ideal of a university, with the implicit assumption that all
universities should transform themselves in accordance with that model
(IESALC, 2011).

22 See: www.ireg-observatory.org/
23 The conference, the Fourth Meeting of University Networks and Councils of Rectors of Latin America
and the Caribbean, was sponsored by UNESCO´s International Institute for Higher Education in Latin
America and the Caribbean (IESALC). An English translation of the document, Position of Latin America
and the Caribbean with regard to the Higher Education Rankings, is available on the IESALC website:
www.iesalc.UNESCO.org.ve/dmdocuments/posicion_alc_ante_rankings_en.pdf

The logic and methodology of the rankings also run counter to international
declarations on higher education, in particular the two definitions ratified
by the UNESCO-sponsored World Conferences on Higher Education. In the
first conference, in 1998, delegates defined higher education as a public
good, whose mission extends beyond that of providing quality and rel-
evance in teaching, research and cultural diffusion; it includes the broader
goal of promoting sustainable development and focusing on ‘eliminating
poverty, intolerance, violence, illiteracy, hunger, environmental degrada-
tion and disease’ (UNESCO, 1998), among other roles. Furthermore, the
declaration asserts the importance of strengthening research focused on
analysing and anticipating social needs (IESALC, 2011).

At the second World Conference, held in 2009, the Latin
American delegation successfully advocated for higher education to be
defined as a social public good, access to which should be guaranteed and
free of discrimination. At the suggestion of the region, the final commu-
niqué lists social responsibility as the first of five general components of
the mission of higher education (IESALC, 2011). The declaration states that
‘higher education must not only develop skills for the present and future
world, but also contribute to the education of ethical citizens committed to
a culture of peace, the defense of human rights, and the values of democ-
racy’ (IESALC, 2011).

Such a focus on the humanistic and societal missions of higher education
is clearly absent from the ranking criteria. But it is in just those areas that
Latin American universities tend to excel. This is the case of state-building
universities such as the Universidad Nacional Autónoma de
México (UNAM), the Universidade de São Paulo, the Universidad de Buenos
Aires, the Universidad Nacional de Córdoba or the Universidad Central de
Venezuela, to name a few. All are dominant teaching and research-oriented
universities in their own right. But their reach extends far beyond their
scientific mission (Ordorika and Pusser, 2007).

UNAM, the region’s largest institution of higher education with nearly
200,000 post-secondary students and another 120,000 enrolled in its
system of public high schools (UNAM, 2011a), is a prime example of a state-
building university.

At various points in its long history, UNAM has played a major role in the
creation of such essential state institutions as public health ministries and the
Mexican judicial system. The national university has also played a key role in

220 Rankings and Accountability in Higher Education: Uses and Misuses


the design of innumerable government bodies and offices and in educating
and credentialing the civil servants who dominate those offices. UNAM has
served since its founding as the training ground for Mexico’s political and
economic elites as well as for a significant portion of the nation’s professionals.
Perhaps most important, at many key moments in Mexican history, UNAM
has served as a focal point for the contest over the creation and recreation
of a national culture that placed such post-secondary functions as critical
inquiry, knowledge production, social mobility and political consciousness at
its centre (Ordorika and Pusser, 2007: 190).

UNAM is among the handful of Latin American universities that figure in
the top 200 in the most influential international rankings, just behind the
Universidade de São Paulo. That standing is a reflection of both univer-
sities’ impressive research production. UNAM, for example, accounts for
roughly a third of all scientific articles produced by Mexican researchers
and indexed by the ISI Web of Knowledge, while São Paulo represents more
than a quarter of its country’s article production (DGEI, 2012). However, the
rankings do not take into account the huge social and cultural impact of
nation-building universities in Latin America and elsewhere (Ordorika and
Pusser, 2007). In the case of UNAM, the university operates the National
Seismological System and the National Astronomical Observatory, sails
two research vessels along the Mexican coasts, and operates more than
60,000 extension programmes. It is also home to one of the country’s most
respected symphonic orchestras, as well as the country’s national library
and national periodicals repository (UNAM, 2011a; UNAM, 2011b).

The ranking methodologies also tend to give greater weight to production
in natural sciences, medicine and engineering, with a lesser focus on the
social sciences and the humanities – areas in which Latin America has a
long and respected tradition. In addition, in their measurement of research
production, the rankings have a clear bias towards the English language. The
vast majority of the scientific journals listed in the main bibliographic databases
consulted by the rankings – the ISI Web of Knowledge and SciVerse Scopus –
are published in English, while only a small number are published in Spanish
or Portuguese.

The ranking organizations are aware of the problem; however, they tend to
downplay its significance. In 2007, Quacquarelli Symonds, which at the time
was producing the rankings for the Times Higher Education Supplement, cited
the more extensive coverage of non-English journals within the Scopus
database as justification for switching to the latter; at the time, 21 per cent
of the journals in Scopus were in languages other than English or in both
languages.24 However, that still meant that 79 per cent of the publications
tallied by QS were published in English. Even at universities of the size
and weight of UNAM and the Universidade de São Paulo (USP), articles
published in English still represent a minority of the research production
of the universities, but they comprise a majority of the articles registered
in ISI and Scopus. In 2009, 88 per cent of the 3,571 articles that UNAM reg-
istered in ISI were published in English; and in the case of USP, 90 per cent
of the 8,699 articles in ISI were in English (DGEI, 2012).

A better measure of Latin American scientific production could be found in
regional databases such as Latindex,25 SciELO,26 CLASE27 and PERIODICA.28
Of the latter two, 71  per  cent of the scientific journals included in their
indexes are in Spanish and 18 per cent in Portuguese, compared with just
11 per cent in English (CLASE, 2011; PERIODICA, 2011). While consulting those
databases might not alter the order of the institutions, it would reflect a
more complete picture of their scientific production in the native language
of their researchers.

24 For more details on the reasoning behind QS’ decision to switch databases, see Why Scopus? at: www.
topuniversities.com/world-university-rankings/why-scopus.
25 Latindex is a cooperative bibliographic information system, co-founded in 1995 by Brazil, Cuba,
Venezuela and Mexico. Housed at UNAM, it acts as a kind of regional
clearinghouse for scientific publications. It maintains a database of more than 20,000 publications
from throughout Latin America, the Caribbean, Spain and Portugal, with articles written in Spanish,
Portuguese, French and English.
26 Based in Brazil, SciELO (Scientific Electronic Library Online) is a bibliographic database and open-
access online scientific archive, which contains more than 815 scientific journals. It operates as a
cooperative venture among developing countries, with support from the Brazilian federal government,
the government of São Paulo state, and the Latin American and Caribbean Center on Health Sciences
Information.
27 CLASE (Citas Latinoamericanas en Ciencias Sociales y Humanidades) is a bibliographic database that
specializes in Social Sciences and the Humanities. Created in 1975 and housed at UNAM's Department
of Latin American Bibliography, it contains nearly 270,000 bibliographic references to articles, essays,
book reviews and other documents published in nearly 1,500 peer-reviewed journals in Latin America
and the Caribbean, according to the database’s website: http://biblat.unam.mx/
28 PERIÓDICA (Índice de Revistas Latinoamericanas en Ciencias) was created in 1978 and specializes in
science and technology. It contains approximately 265,000 bibliographic references to articles, technical
reports, case studies, statistics and other documents published in some 1,500 peer-reviewed scientific
journals in Latin America and the Caribbean.

Latin American universities in the rankings

Given such methodological biases, as well as financial and other constraints,
it is not surprising that Latin American universities have not figured promi-
nently in international rankings. In spite of this, universities like UNAM,
Buenos Aires and a group of Brazilian universities led by São Paulo have
managed to keep within reach of top-level institutions from the wealthiest
countries, where expenditures in higher education as well as in research and
development are many times higher.

However, as with other regions, the respective positions of the Latin
American universities vary significantly over time and among rankings. As
part of a broader study of university classification systems, the Directorate
General for Institutional Evaluation at UNAM maintains an interactive data-
base29 that tracks the presence of the Iberoamerican universities (in Latin
America, Spain and Portugal) from 2003 to the present in the following
rankings: ARWU, QS, THE, SCImago, HEEACT,30 and Webometrics. According
to the database, the Universidade de São Paulo has the highest average posi-
tion of any university in the region in the main rankings: 112. However, its
position varies from twentieth place in this year’s Webometrics ranking to
264th in the 2006 edition of Times Higher Education Supplement (THE). UNAM,
which at times has ranked higher than São Paulo, particularly in the Times
Higher Education ranking, has an average position of 135, although it has been
ranked anywhere from 38th to 354th place.

Given its relative longevity, the Shanghai ranking provides a good example
of the degree to which the universities’ standings can change over time,
even within the same ranking. In the case of the nine Latin American
universities that appear in the ranking’s top 500 list, São Paulo was the best
placed last year. But it has fluctuated between the 166th and 115th position
– a difference of 51 places – while the Universidad de Buenos Aires
has ranged from 309th to 159th position, a difference of 150 places. UNAM,

29 The database Universidades Iberoamericanas en los principales rankings internacionales 2003-2011 is


accessible at: http://dgei.unam.mx/?q=node/27.
30 In 2007, the Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT) began
producing the Performance Ranking of Scientific Papers for World Universities, which classifies
universities on the basis of their scientific production, over time and in the current year. In 2008, the
ranking also began classifying the top 300 universities in accordance with their publications in six
subject areas, based on data from the ISI Web of Knowledge.

Chapter 12. A decade of international university rankings:


a critical perspective from Latin America 223
which in 2004 led Buenos Aires by 139 places, last year trailed the Argentine
university by 11 positions.

Table 1. Iberoamerican universities in ARWU 2003-2011 (ordered according to their position in 2011)

University                                   2003  2004  2005  2006  2007  2008  2009  2010  2011
Universidade de São Paulo                     166   155   139   134   128   121   115   119   129
Universidad de Buenos Aires                   309   295   279   159   167   175   177   173   179
Universidad Nacional Autónoma de México       184   156   160   155   165   169   181   170   190
Universidade Estadual de Campinas             378   319   289   311   303   286   289   265   271
Universidade Federal do Rio de Janeiro        341   369   343   347   338   330   322   304   320
Universidade Estadual de São Paulo            441     –     –     –     –     –   419   334   351
Universidade Federal de Minas Gerais            –     –     –     –   453   381   368   347   359
Pontificia Universidad Católica de Chile        –     –     –     –     –     –   423   410   413
Universidad de Chile                            –   382   395   400   401   425   436   449   416

Source: Adapted from DGEI (2011).
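
As a rough illustration of the volatility discussed above, the hypothetical Python sketch below takes a subset of the ARWU positions transcribed in Table 1 and computes each university's best and worst position and the resulting spread.

# Hypothetical sketch using positions transcribed from Table 1 (a subset only).
arwu_positions = {
    "Universidade de São Paulo":   [166, 155, 139, 134, 128, 121, 115, 119, 129],
    "Universidad de Buenos Aires": [309, 295, 279, 159, 167, 175, 177, 173, 179],
    "UNAM":                        [184, 156, 160, 155, 165, 169, 181, 170, 190],
}

for university, positions in arwu_positions.items():
    best, worst = min(positions), max(positions)
    # 'best' is the highest (numerically lowest) position reached over 2003-2011.
    print(f"{university}: best {best}, worst {worst}, spread {worst - best} places")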

There can even be variations within the same year in rankings produced by
the same company. Such is the case with the QS World University Rankings
and the first QS Latin America University Rankings, in 2011. While UNAM
tied with USP as the top-ranked Latin American university in the world-
wide ranking, it placed fifth in the Latin American rankings. Meanwhile,
the Universidade Estadual de Campinas was far behind UNAM in the global
ranking, but two places ahead in the Latin America ranking (Table 2).

QS officials argue that the discrepancy in the results between the two rank-
ings is due to the differing methodologies employed, and that in the case
of Latin America, ‘the methodology has been adapted to the needs of the
region’ (QS, 2011/2012). According to its producers, the methodology includes
an ‘extensive’ survey of academics and institution leaders in the region, and
takes into account ‘student satisfaction, and the quality, number and depth
of relationships with universities outside the region’ (QS, 2011/2012: 4). It is
unclear, however, how such perceptions are measured. More importantly,
according to its creators, the regional ranking is more exact than the world-
wide version, which calls into question not only the methodology employed
in the larger ranking, but also the methodology of the rankings as a whole.
The differences among the universities’ positions in both rankings serve to
underscore this point.



Table 2. Latin American universities in the World and Latin American editions of the QS rankings

Institution                                        Country     WR 2010   WR 2011   LAR 2011
Universidad Nacional Autónoma de México            Mexico      222       169       5
Universidade de São Paulo                          Brazil      253       169       1
Universidade Estadual de Campinas                  Brazil      292       235       3
Pontificia Universidad Católica de Chile           Chile       331       250       2
Universidad de Chile                               Chile       367       262       4
Universidad de Buenos Aires                        Argentina   326       270       8
Instituto Tecnológico de Estudios Superiores
  de Monterrey                                     Mexico      387       320       7
Universidad Austral                                Argentina   358       353       13
Universidade Federal do Rio de Janeiro             Brazil      381       381       19
Universidad de los Andes                           Colombia    501-550   401-450   6
Universidad Nacional de Colombia                   Colombia    551-600   451-500   9
Universidade Federal de Minas Gerais               Brazil      501-550   501-550   10

Source: QS World University Rankings (2010, 2011), Latin America University Ranking (2011).
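
To illustrate the discrepancy between the two QS exercises discussed above, the hypothetical sketch below orders the institutions with numeric positions in Table 2 by their 2011 world ranking and prints their Latin America ranking alongside (banded positions such as 501-550 are omitted).

# Hypothetical sketch based on the figures transcribed in Table 2.
qs_2011 = [
    # (institution, QS World Ranking 2011, QS Latin America Ranking 2011)
    ("Universidad Nacional Autónoma de México", 169, 5),
    ("Universidade de São Paulo", 169, 1),
    ("Universidade Estadual de Campinas", 235, 3),
    ("Pontificia Universidad Católica de Chile", 250, 2),
    ("Universidad de Chile", 262, 4),
    ("Universidad de Buenos Aires", 270, 8),
]

# Order by world-ranking position and show how the regional ranking disagrees.
for order, (name, world, latam) in enumerate(sorted(qs_2011, key=lambda row: row[1]), start=1):
    print(f"{order}. {name}: world rank {world}, Latin America rank {latam}")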

Conclusion
Given the limitations and problems present in the current rankings, there is
a growing trend towards alternative comparative systems that provide hard
data in lieu of hierarchical lists. One such effort is the Comparative Study of
Mexican Universities,31 produced by the Directorate General for Institutional
Evaluation at UNAM. The study, known by its Spanish acronym ECUM and
accessible through an interactive, online database, provides official indica-
tors in a broad range of academic and research areas. Statistics are available
for each of more than 2,600 individual universities and research centres, as
well as by type of institution (e.g.  technological institutes or multicultural
universities) and by sector (public or private). While the study allows users
to rank institutions on the basis of individual indicators, it does not enable
them to generate an overall hierarchy – a deliberate omission on the part of

31 See: http://www.ecum.unam.mx/node/2

its creators, who intended the study to foster future research and analysis,
rather than provoke competition among institutions (Lloyd, 2010).

However, while such alternatives are growing in popularity, they have yet to
gain sufficient critical mass to impact the predominant ranking paradigm or
to undermine its influence. As a result, there is an urgent need for policy-
makers at the university and government levels to change the way they
perceive the rankings. In the case of Latin America, they should also demand
that producers of rankings and comparisons take into account the most sali-
ent features and strengths, as well as the broad range of contributions, of the
region’s universities to their respective countries and communities, such as
those outlined in this chapter.

The rankings should not be confused with information systems, nor should
they be taken at face value, given their limited scope and the heavily biased
nature of their methodologies. At best, they may serve as guides to which
institutions most closely emulate the model of the elite, US research uni-
versity. At worst, they prompt policy-makers to make wrongheaded decisions –
such as diverting funding from humanities programmes in order to hire Nobel
Prize laureates in the sciences – solely to boost their standing in the rankings.

Rather than attempt to transform all universities along a sole institutional
model, policy-makers should work to provide a diversity of options in higher
education, based on the particular needs of individual communities, coun-
tries or regions, and to evaluate them on the basis of a wide range of criteria.

The producers of the rankings, meanwhile, should take a much broader
view in evaluating the institutions. Or, at least, they should be explicit and
open about the limitations of their methodologies, rather than pretending
to provide a holistic picture of the universities surveyed. While there is much
at stake for the ranking institutions in terms of profits and reputation, there
is even more at stake for universities worldwide, whose autonomy is being
undermined by the homogenizing influence of these systems of classifica-
tion, and their market-oriented message.



References

Ackerman, D., Gross, B.L. and Vigneron, F. 2009. Peer observation reports and student evaluations
of teaching: who are the experts? Alberta Journal of Educational Research, 55(1): 18–39.

Acosta Silva, A. 2000. Estado, políticas y universidades en un periodo de transición. México:
Universidad de Guadalajara/Fondo de Cultura Económica.

Altbach, P.G. 2006. The dilemmas of ranking, International Higher Education, 42: www.bc.edu/
bc_org/avp/soe/cihe/newsletter/Number42/p2_Altbach.htm (Accessed 11 April 2012.)

Berdahl, R.M. 1998. The Future of Flagship Universities, graduation speech at Texas A&M
University, (5 October 1998): http://cio.chance.berkeley.edu/chancellor/sp/flagship.htm
(Accessed 11 April 2012.)

Berry, C. 1999. University league tables: artifacts and inconsistencies in individual rankings, Higher
Education Review, 31(2): 3–11.

Beyer, J.M. and Snipper, R. 1974. Objective versus subjective indicators of quality in graduate
education, Sociology of Education, 47(4): 541–57.

Bolseguí, M. and Fuguet Smith, A. 2006. Cultura de evaluación: una aproximación conceptual,
Investigación y Postgrado, 21(1): 77–98.

Bowden, R. 2000. Fantasy higher education: university and college league tables, Quality in Higher
Education, 6: 41–60.

Bogue, E.G. and Bingham Hall, K. 2003. Quality and Accountability in Higher Education:
Improving Policy, Enhancing Performance. Westport, Conn.: Praeger.

Brennan, J. 2001. Quality management, power and values in European higher education,
J.C. Smart (ed.) Higher Education: Handbook of Theory and Research, vol. XVI, Dordrecht,
Netherlands: Kluwer Academic Publishers, pp. 119–45.

Buelsa, M., Heijs, J. and Kahwash, O. 2009. La calidad de las universidades en España: elaboración
de un índice multidimensional. Madrid: Minerva ediciones.

Carey, K. 2006. College rankings reformed: the case for a new order in higher education,
Education Sector Reports. Washington DC: www.educationsector.org/usr_doc/
CollegeRankingsReformed.pdf (Accessed 11 April 2012.)

Cave, M., Hanney, S., Henkel, M. and Kogan, M. 1997. The Use of Performance Indicators in Higher
Education: The Challenge of the Quality Movement. London: Jessica Kingsley Publishers.

Cheng, Y. and Liu, N.C. 2008. Examining major rankings according to the Berlin Principles, Higher
Education in Europe, 33(2–3): 201–08.

Clarke, M. 2002. Some guidelines for academic quality rankings, Higher Education in Europe, 27(4):
443–59.

Cuenin, S. 1987. The use of performance indicators in universities: an international survey,
International Journal of Institutional Management in Higher Education, 11(2): 117–39.

Cyrenne, P. and Grant, H. 2009. University decision making and prestige: an empirical study,
Economics of Education Review, 28(2): 237–48.

DGEI. 2011. Reporte del ranking ARWU 2011: Presencia de la UNAM y del grupo de universidades
iberoamericanas, 16 August 2011.

DGEI. 2012. La complejidad del logro académico: Estudio comparativo sobre la Universidad Nacional
Autónoma de México y la Universidad de Sao Paulo. Preliminary document. Mexico City:
DGEI-UNAM.

Diamond, N. and Graham, H.D. 2000. How should we rate research universities? Change, 32(4):
20-33.

Díaz Barriga, Á., Barrón Tirado, C. and Díaz Barriga Arceo, F. 2008. Impacto de la evaluación en
la educación superior mexicana. Mexico City: UNAM-IISUE; Plaza y Valdés.

Dill, D. 2006. Convergence and Diversity: The Role and Influence of University Rankings. Keynote
Address presented at the Consortium of Higher Education Researchers (CHER) 19th
Annual Research Conference, 9 September 2006, University of Kassel, Germany.

Dill, D. and Soo, M. 2005. Academic quality, league tables, and public policy: a cross-national analysis
of university ranking systems, Higher Education Review, 49(4): 495–533.

Elliott, J. 2002. La reforma educativa en el Estado evaluador, Perspectivas, XXXII(3): 1–20.

Espeland, W.N. and Sauder, M. 2007. Rankings and reactivity: how public measures recreate social
worlds, American Journal of Sociology, 113(1): 1–40.

Ewell, P.T. 1999. Assessment of higher education quality: promise and politics, S.J. Messick (ed.)
Assessment in Higher Education: Issues of Access, Quality, Student Development, and Public
Policy. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 147–56.

Federkeil, G. 2008a. Graduate Surveys as a Measure in University Rankings. Presentation at OECD,
Outcomes of Higher Education. Quality, Relevance and Impact, Paris: www.oecd.org/
dataoecd/4/16/41217828.pdf (Accessed 11 April 2012.)

Federkeil, G. 2008b. Rankings and quality assurance in higher education, Higher Education in
Europe, 33(2/3): 219–31.

Filip, M. (ed.) 2004. Ranking and League Tables of Universities and Higher Education Institutions.
Methodologies and Approaches. Bucarest: UNESCO-CEPES: www.cepes.ro/publications/
pdf/Ranking.pdf

Florian, R.V. 2007. Irreproducibility of the results of the Shanghai Academic ranking of world
universities, Scientometrics, 72(1): 25–32.

Goldstein, H. and Myers, K. 1996. Freedom of information: towards a code of ethics for
performance indicators, Research Intelligence, 57: 12–16.

Hazelkorn, E. 2007. Impact and influence of league tables and ranking systems on higher
education decision-making, Higher Education Management and Policy, 19(2): 87–110.

Hazelkorn, E. 2008. Learning to live with league tables and ranking: the experience of institutional
leaders, Higher Education Policy, 21(2): 193–215.

IESALC (International Institute for Higher Education in Latin America and the Caribbean). 2011.
The Position of Latin America and the Caribbean on Rankings in Higher Education. Fourth
Meeting of University Networks and Councils of Rectors in Buenos Aires, 5–6 May 2011,
IESALC.



Ishikawa, M. 2009. University rankings, global models, and emerging hegemony: critical analysis
from Japan, Journal of Studies in International Education, 13(2): 159–73.

Jaienski, M. 2009. Garfield’s demon and ‘surprising’ or ‘unexpected’ results in science,


Scientometrics, 78(2): 347–53.

Kogan, M. (ed.) 1989. Evaluating Higher Education. Papers from the International Journal of
Institutional Management in Higher Education. Paris: OECD.

Liu, N.C. and Cheng, Y. 2005. The academic ranking of world universities, Higher Education in
Europe, 30(2): July.

Lloyd, M. 2010. Comparative study makes the case for Mexico's public universities, The Chronicle
of Higher Education, 11 November 2010.

Marginson, S. 2007. Global university rankings: implications in general and for Australia, Journal of
Higher Education Policy and Management, 29(2): 131–42.

Marginson, S. 2009. University rankings, government and social order: managing the field of
higher education according to the logic of the performative present-as-future, M. Simons,
M. Olssen and M. Peters (eds) Re-reading Education Policies: Studying the Policy Agenda of
the 21st Century. Rotterdam: Sense Publishers, pp. 2–16.

Marginson, S. and Van der Wende, M. 2006. To Rank or to be Ranked: The Impact of Global Rankings
in Higher Education. University of Twente-Centre for Higher Education Policy Studies:
www.studiekeuzeenranking.leidenuniv.nl/content_docs/paper_marginson_van_der_
wende.pdf (Accessed 11 April 2012.)

Marginson, S. and Ordorika, I. 2010. Hegemonía en la era del conocimiento: competencia global en la
educación superior y la investigación científica. Mexico City: SES/UNAM.

McCormick, A.C. 2008. The complex interplay between classification and ranking of colleges and
universities: should the Berlin Principles apply equally to classification? Higher Education in
Europe, 33(2/3): 209–18.

Mendoza Rojas, J. 2002. Transición en la educación superior contemporánea en México: de la
planeación al Estado evaluador. Mexico City: UNAM-CESU/Miguel Ángel Porrúa.

Merisotis, J. and Sadlak, J. 2005. Higher education rankings: evolution, acceptance, and dialogue,
Higher Education in Europe, 30(2): 97–101.

Ordorika, I. and Pusser, B. 2007. La máxima casa de estudios: Universidad Nacional Autónoma
de México as a state-building university, P.G. Altbach and J. Balán (eds) World Class
Worldwide: Transforming research universities in Asia and America. Baltimore: Johns
Hopkins University Press, pp. 189–215.

Ordorika, I. and Rodríguez, R. 2008. Comentarios al Academic Ranking of World Universities 2008,
Cuadernos de Trabajo de la Dirección General de Evaluación Institucional, year 1, no. 1.

Ordorika, I., Lozano Espinosa, F.J. and Rodríguez Gómez, R. 2009. Las revistas de investigación
de la UNAM: un panorama general, Cuadernos de Trabajo de la Dirección General de
Evaluación Institucional, year 1, no. 4.

Ordorika, I. and Rodríguez, R. 2010. El ranking Times en el mercado del prestigio universitario,
Perfiles Educativos, XXXII(129).

Palomba, C.A. and Banta, T.W. 1999. Assessment Essentials: Planning, Implementing and Improving
Assessment in Higher Education. San Francisco: Jossey-Bass.

Power, M. 1997. The Audit Society. Oxford: Oxford University Press.

Provan, D. and Abercromby, K. 2000. University League Tables and Rankings: A Critical Analysis.
Commonwealth Higher Education Management Service (CHEMS), document no.  30:
www.acu.ac.uk/chems/onlinepublications/976798333.pdf

Puiggrós, A. and Krotsch, P. (eds) 1994. Universidad y evaluación. Estado del debate. Buenos Aires:
Aique Grupo Editor/Rei Argentina/Instituto de Estudios y Acción Social.

QS World University Rankings. 2010: www.topuniversities.com/university-rankings/world-
university-rankings/2010; 2011: www.topuniversities.com/university-rankings/world-
university-rankings; Latin America University Ranking 2011: www.topuniversities.com/
university-rankings/latin-american-university-rankings/2011 (Accessed 11 April 2012.)

QS. 2011/2012. QS University Rankings: Latin America 2011/2012. QS Intelligence Unit: http://
content.qs.com/supplement2011/Latin_American_supplement.pdf (Accessed 11 April
2012.)

Roberts, D. and Thomson, L. 2007. Reputation management for universities: university league
tables and the impact on student recruitment, Reputation Management for Universities,
Working Paper Series, no. 2. Leeds: The Knowledge Partnership.

Rodríguez Gómez, R. 2004. Acreditación, ¿ave fénix de la educación superior?, I. Ordorika (ed.) La
academia en jaque. Perspectivas políticas sobre la evaluación de la educación superior en
México. Mexico City: UNAM/Miguel Ángel Porrúa, pp. 175–223.

Rowley, D.J., Lujan, H.D. and Dolence, M.G. 1997. Strategic Change in Colleges and Universities:
Planning to Survive and Prosper. San Francisco: Jossey-Bass.

Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses, Higher
Education Management and Policy, 19(2): 24–62.

Sanoff, A.P. 1998. Rankings are here to stay: colleges can improve them, Chronicle of Higher
Education, 45(2): http://chronicle.com/

Siganos, A. 2008. Rankings, governance, and attractiveness of higher education: the new French
context, Higher Education in Europe, 33(2-3): 311–16.

Strathern, M. 2000. The tyranny of transparency, British Educational Research Journal, 26(3):
309–21.

Thakur, M. 2008. The impact of ranking systems on higher education and its stakeholders, Journal
of Institutional Research, 13(1): 83–96.

Turner, D.R. 2005. Benchmarking in universities: league tables revisited, Oxford Review of
Education, 31(3): 353–71.

UNAM (Universidad Nacional Autónoma de México). 2011a. Agenda Estadística. Mexico City:
UNAM: www.planeacion.unam.mx/Agenda/2011/ (Accessed 11 April 2012.)

UNAM. 2011b. Un Siglo de la Universidad Nacional de México 1910-2010: Sus huellas en el espacio a
través del tiempo. Mexico City: Instituto de Geografía-UNAM.

UNESCO. 1998. World Declaration on Higher Education for the Twenty-First Century: Vision and
Action. World Conference on Higher Education, UNESCO, Paris, 9 October 1998.



Usher, A. and Savino, M. 2006. A world of difference: a global survey of university league
tables, Canadian Education Report Series: www.educationalpolicy.org/pdf/World-of-
Difference-200602162.pdf (Accessed 11 April 2012.)

Van der Wende, M. 2009. Rankings and classifications in higher education: a European perspective,
J.C. Smart (ed.) Higher Education: Handbook of Theory and Research, vol. 23. New York:
Springer, pp. 49–72.

Van Raan, A.F.J. 2005. Fatal attraction: conceptual and methodological problems in the ranking of
universities by bibliometric methods, Scientometrics, 62(1), 133–43.

Vaughn, J. 2002. Accreditation, commercial rankings, and new approaches to assessing the
quality of university research and education programmes in the United States, Higher
Education in Europe, 27(4): 433–41.

Villaseñor García, G. 2003. La evaluación de la educación superior: su función social,
Reencuentro, 36: 20–29.

Webster, D.S. 1986. Academic Quality Rankings of American Colleges and Universities. Springfield,
Mass.: Charles C. Thomas.

Ying, Y. and Jingao, Z. 2009. An empirical study on credibility of China’s university rankings: a case
study of three rankings, Chinese Education and Society, 42(1): 70–80.

Part 4

Alternative Approaches
Chapter 13

If ranking is the disease, is benchmarking the cure?

Jamil Salmi
Introduction
Preoccupations about university rankings reflect the general recognition
that economic growth and global competitiveness are increasingly driven
by knowledge and that universities play a key role in that context. Indeed,
tertiary education institutions have a critical role in supporting knowledge-
driven economic growth strategies and the construction of democratic,
socially cohesive societies. Through the preparation of a skilled, productive
and flexible labour force and the creation, application and dissemination
of ideas and technologies, tertiary education helps countries become more
globally competitive.

However, attempts to measure and analyse what works at the tertiary educa-
tion level have emphasized the performance of individual institutions, for
example, in terms of the competitiveness of admissions, research output and
employability of graduates, among other factors. International rankings have
focused on the relative standings of countries, using the position of their top
universities as a proxy for the performance of the entire tertiary education
system. But while rankings may provide information about individual insti-
tutions in comparison to others, they do not provide an adequate measure
of the overall strength of a country’s tertiary education system. This chapter
explores, therefore, the appropriateness of rankings as a measure of perfor-
mance of tertiary education systems. After looking at the uses and abuses
of rankings, it explains the difference between rankings and benchmarking
methodologies. Finally, it presents the World Bank’s benchmarking tool,
which is currently under construction.

Uses and abuses of rankings


There has been a proliferation of rankings in recent years, including, for
example, the Academic Ranking of World Universities (ARWU), Times Higher
Education’s Ranking, the Web of World Universities Ranking, CHE, U.S. News
& World Report, and many rankings of business schools. These rankings have
been produced by various organizations ranging from national governments
and independent agencies to the media. Figure 1 below shows the distribu-
tion of ranking production as of 2010.



Figure 1. Who prepares the rankings?

[Bar chart showing, for each of six regions – Eastern Europe & Central Asia, Asia & Pacific, Latin America, Africa & Middle East, Western Europe and North America – the number of rankings prepared by governments, independent agencies and the media.]

Source: the author.

Accompanying the proliferation of rankings have been intense reactions,
ranging from disagreements about the very principle of rankings, criticism
about the methodology of rankings, boycotts, political pressure, and even
court actions to stop the publication of rankings.

The expansion of league tables and ranking exercises has not gone
unnoticed by the various stakeholders and the reaction they elicit is
rarely benign. Such rankings are often dismissed by their many critics
as irrelevant exercises fraught with data and methodological flaws,
they are boycotted by some universities angry at the results, and they
are used by political opponents as a convenient way to criticize gov-
ernments (Salmi and Saroyan, 2007: 80).

This type of intense reaction is not unwarranted. The results of a ranking
and/or the desire to move up in a ranking can add perverse incentives to
institutional decision-making. For example, a university keen on moving up
a ranking may consider altering admission policy to give increased priority
to top students, while compromising principles of equity or diversity in the
student body in order to boost entering average scores and, thus, the per-
ceived quality of the institution. Or at the extreme, institutions may encour-
age students to lie in order to boost results of student satisfaction surveys,
which are often weighted into institutional scores, as happened, for example,
at Kingston University in the United Kingdom in the early 2000s.



As noted by Malcolm Gladwell (2011: 70), rankings such as that of the widely
popular U.S. News & World Report are flawed because they compare heterogeneous
institutions. They do not compare only public institutions of roughly the same
size, but also private institutions that tend to be smaller, more specialized
and better funded per student:

The U.S. News and World Report doesn’t just compare U.C. Irvine,
the University of Washington, the University of Texas-Austin, the
University of Wisconsin-Madison, Penn State, and the University of
Illinois, Urbana-Champaign – all public institutions of roughly the
same size. It aims to compare Penn State – a very large, public, land-
grant university with a low tuition and an economically diverse stu-
dent body, set in a rural valley in central Pennsylvania… with Yeshiva
University, a small, expensive, private Jewish university whose under-
graduate programme is set on two campuses in Manhattan (one in
mid-town for the women, and one far uptown for the men).

Given that there is so much at stake in a positive ranking of a country's insti-
tutions, it is not surprising to see that some governments have responded
by encouraging the preparation of alternative rankings when they are not
satisfied with the standing of their national universities. For example, a new
global ranking in Russia, produced by RatER, the Russian ranking agency,
has placed Moscow State University ahead of universities such as Harvard,
Stanford and Cambridge, which come out on top of the Shanghai and Times
Higher Education rankings (Smolentseva, 2010). During the French presidency
of the European Union in 2008, one of the achievements of the Minister of
Higher Education was to convince the European Commission in Brussels to
launch a new European ranking that would be ‘more objective and more
favourable to European universities’.1

Despite the controversy surrounding rankings, there are good reasons why
rankings persist. These include the benefits of information provided to stu-
dents who are looking to make a choice between various institutions, either
domestically or for studies abroad. For such students, rankings and information
about student engagement and labour market outcomes in the country
of interest are valuable. Further, rankings promote a culture of transparency,
providing institutions with incentives to collect and publish more reliable
data. Finally, rankings promote the setting of stretch goals by the institution.
In so doing, institutions may find themselves analysing key factors explaining

1 From Minister Valérie Pécresse’s declaration at the Conference on International Comparisons in


Education held in Paris in December 2008.



their ranking, seeking to improve teaching, learning and research, proposing con-
crete targets to guide (but not replace) strategic planning, and entering into
mutually advantageous partnerships.

From ranking to benchmarking


The rankings lens allows students, parents and employers to look at the
results of individual institutions; however, rankings say little, if anything,
about the overall performance of tertiary education systems. For example,
rankings do not measure the results of systems in terms of access and equity,
quality and relevance, institutional differentiation, and contribution to local
economic and social development through the training of skilled human
capital and the production of patents. According to global rankings, the best
institutions in the world are overwhelmingly located in the United States, as
shown by Figure 2, which provides the breakdown of the top 50 universities
as identified by Times Higher Education and the Academic Ranking of World
Universities (ARWU). In the first case, 40  per  cent come from the United
States; in the second case, the ranking classifies 70 per cent of the top 50 best
institutions as being from the United States.

Figure 2. Distribution of the Top 50 universities, 2010

[Two pie charts – THES 2010 and ARWU 2010 – showing the national distribution of the top 50 universities: the United States accounts for 20 of the THES top 50 and 35 of the ARWU top 50, with the remainder spread across the United Kingdom, Western Europe, Canada, Japan, Australia and other Asian countries.]

Sources: Shanghai Jiao Tong (2010); Times Higher Education (2010).



Surprisingly, Japan is the only Asian country represented in ARWU, while
six Asian universities from outside Japan are represented in the Times Higher
Education rankings. And yet, the so-called Asian tigers (Hong Kong,
Singapore, South Korea, Taiwan) are usually considered among the most
dynamic knowledge economies in the world. In addition, if one calcu-
lates the number of ‘world-class universities’ relative to the population
of countries, it appears that there are a number of dynamic knowledge
economies which seem to be doing very well without universities among the
top 500 in the world, as indicated in Table 1 below. In addition, the table
shows that some countries are more efficient than others at placing institutions
in the top 500. Finland, New Zealand and Sweden, for example, have the
highest number of ranked institutions per capita.

Table 1. ARWU ranking of countries taking their population into consideration, 2009

Country          No. top 500s    Population         Thousands of people required to
                                 (in thousands)     produce each top 500 institution
Sweden                11             9 394.13                   854.01
New Zealand            5             4 370.7                    874.14
Finland                6             5 362.61                   893.77
Israel                 7             7 577                    1 082.43
Switzerland            7             7 790.01                 1 112.86
Austria                7             8 381.78                 1 197.40
Norway                 4             4 882.93                 1 220.73
Australia             17            22 327.2                  1 313.36
Denmark                4             5 565.02                 1 391.26
Ireland                3             4 451.31                 1 483.77

Sources: Shanghai Jiao Tong (2010); World Bank (2009).
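
The last column of Table 1 is a straightforward ratio, and readers who want to extend the comparison to other countries can reproduce it easily. The minimal sketch below, in Python, recomputes the figure for the first three rows: the inputs are taken directly from Table 1, and the ratio is simply population (in thousands) divided by the number of top 500 institutions.

```python
# Recomputing the last column of Table 1:
# thousands of people "required" per top-500 institution
# = population (in thousands) / number of top-500 institutions.
countries = {
    # country: (no. of top-500 institutions, population in thousands), from Table 1
    "Sweden": (11, 9394.13),
    "New Zealand": (5, 4370.7),
    "Finland": (6, 5362.61),
}

for name, (top500, population_thousands) in countries.items():
    per_institution = population_thousands / top500
    print(f"{name}: {per_institution:.2f} thousand people per top-500 institution")
```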

Similarly, rankings give little indication of the effectiveness of a country’s
tertiary education system in serving as a ladder out of poverty. As shown in
Figure 3 below, countries such as the United States and the United Kingdom,
which have the highest number of top ranked universities, are not doing a
good job when it comes to social mobility.



Figure 3. Relationship between income inequality and social mobility

Source: Wilkinson and Pickett (2010).2

Thus, an objective framework is required to measure the effectiveness and
efficiency of tertiary education systems in increasing equity, access, quality
and relevance, and in promoting local economic and social development.
While rankings are somewhat useful for comparing individual institutions,
they miss the point when it comes to evaluating the success of a system in
achieving the outcomes it was purportedly created to deliver.

Benchmarking tertiary education systems


There is no consensus on what countries should do to improve their performance,
and there are wide variations in system performance among countries with
similar funding levels and characteristics. Benchmarking allows the comparison
of systems from countries at similar stages of development, in the same region
of the world or with a similar political context. Benchmarking is the process of
comparing the performance of one’s tertiary education system to that of other
systems. It enables a user to identify competitors and learn from best practice.
Unlike rankings, which lead to a ‘race to the top’, benchmarking provides a more
tempered form of learning. The purpose of benchmarking is to improve
performance diagnosis (identification of areas for improvement) and the
definition of specific corrective interventions to enable countries and systems to
reach their performance potential. In order to achieve this objective, users need
to understand the determinants of performance. The matrix below summarizes
the major differences between ranking and benchmarking for assessing
performance in tertiary education (Table 2).

2 In their book, The Spirit Level, Wilkinson and Pickett develop their own index of income inequality and
social mobility (the probability that an individual will have a better socio-economic position than his/her
parents), expressed on a 0–100 scale.



Table 2. Comparing ranking and benchmarking

Unit of analysis
  Ranking: university/programme
  Benchmarking: tertiary education institution or tertiary education system
Purpose of exercise
  Ranking: hierarchical ranking/reputational competition
  Benchmarking: comparison to identify strengths and weaknesses for improvement purposes
Degree of comprehensiveness
  Ranking: research/internationalization focus
  Benchmarking: considers all missions of TEIs (education, research, technology transfer, regional engagement)
Ease of use
  Ranking: one number summarizes the results
  Benchmarking: need to consider multiple indicators
Diagnosis of factors
  Ranking: limited to the criteria imposed by the ranker
  Benchmarking: systematic quantitative and qualitative analysis of data, indicators and reports
Choice of comparators
  Ranking: imposed by the ranker
  Benchmarking: selected by the benchmarking team
Weight of indicators
  Ranking: relative importance of indicators determined by the ranker
  Benchmarking: relative importance of indicators determined by the benchmarking team
Transparency
  Ranking: reliance on published and verifiable data as well as reputational surveys
  Benchmarking: reliance on published and verifiable data
Objectivity
  Ranking: at risk with reputational surveys and arbitrary weights
  Benchmarking: linked to choice of indicators
Users
  Ranking: general public
  Benchmarking: analysis tailored to needs of individual institution or government
Participation of subject
  Ranking: possibility of opting out
  Benchmarking: decision to opt in

Source: Developed by author.

To show an example of benchmarking in action, it is possible to compare
the performance of Chile and Brazil. If the goal is to have the highest enrolment
rates possible, then Chile can be shown to be more efficient than Brazil
relative to the level of public resources used. As illustrated in Figure 4, Chile
spends about 0.3 per cent of GDP on tertiary education and has an enrolment
rate of about 38 per cent, while Brazil spends almost 0.9 per cent and has
only a 24 per cent enrolment rate. This poses the following questions: why is
Chile more efficient, and what can Brazil learn from Chile’s example?

Figure 4. Comparing enrolment rates and public investment on tertiary education in Chile and Brazil
[Scatter plot: enrolment rate (vertical axis) against public spending on tertiary education as a percentage of GDP (horizontal axis, 0 to 0.9); Chile appears at an enrolment rate of 38 per cent and Brazil at 24 per cent.]

Sources: OECD (2009) (ref year 2007); UIS (2010).
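
A crude way to make the efficiency point explicit is to divide each country’s enrolment rate by its public spending share, as in the short sketch below. The ratio is built from the approximate values cited in the text and shown in Figure 4; it is an illustration only, not an indicator used by the World Bank’s benchmarking tool.

```python
# A crude efficiency ratio: enrolment rate (%) obtained per percentage point of
# GDP spent publicly on tertiary education. Values are approximate figures as
# cited in the text; the ratio itself is illustrative only.
systems = {
    "Chile": {"enrolment_rate": 38.0, "public_spending_pct_gdp": 0.3},
    "Brazil": {"enrolment_rate": 24.0, "public_spending_pct_gdp": 0.9},
}

for country, data in systems.items():
    ratio = data["enrolment_rate"] / data["public_spending_pct_gdp"]
    print(f"{country}: about {ratio:.0f} enrolment percentage points per 1% of GDP")
```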



Thus, there is a distinction between the inputs a country invests in its
higher education system and the outcomes that system produces. In elaborating
the theoretical framework for the benchmarking programme, this distinction
has been conceptualized as the performance and the health of a system. What
are the predictors of system performance, and does a country’s higher
education system operate under conditions known to lead to high performance?

A key feature of the World Bank’s proposed benchmarking tool is that it is
built around a fundamental distinction between the results of tertiary education
systems (‘system performance’) and the drivers of performance that
account for these results (‘system health’), with the purpose of addressing
the following two questions:

1. How well does the tertiary education system actually produce expected
outcomes at the current time (system performance)?

2. How well do the tertiary education system’s key inputs, processes and
enabling factors reflect conditions that are known to bring about
favourable outcomes?

Furthermore, the benchmarking tool allows users to evaluate ‘system
evolution’, or the speed of change of performance and health indicators.

System performance
System performance can be measured by looking at the key outcomes of a
tertiary education system. Reflecting the various missions of tertiary educa-
tion, the benchmarking tool includes the following outcomes:

• Attainment refers to the stock of qualifications in a given population,


measured by calculating the proportion of adults in the working age
population who have completed a tertiary degree.
• Learning achievement refers to the quality and relevance of the education
and training experience of tertiary level graduates. This is one of the most
difficult areas to measure in the absence of widely accepted metrics such
as PISA or TIMSS.
• Equity refers to disparities in the results (attainment and academic
trajectories) of disadvantaged groups (such as low-income groups,
females, minorities and people with disabilities).



• Research outcomes refer to publications and advanced training, measured
by the number of scientific journal citations relative to a country’s
population and the capacity of the system to prepare PhD graduates.
• Knowledge and technology transfer represent the contribution of tertiary
education institutions to the development of the regions that they serve.
Some ways to measure this include the number of patents registered
by universities or the proportion of doctoral graduates working outside
universities.
• Values, behaviour and attitudes refer to the effectiveness of tertiary
education in equipping graduates with positive values and citizenship
skills. This is also a very difficult area to measure, but the methodological
challenges do not justify neglecting this important dimension of the role
of education.

Examples of performance indicators would be:

• Attainment: proportion of the working-age population (25–64) with a tertiary degree
• Achievement gap: proportion from the highest quintile over the proportion from the lowest quintile
• Quality: number of ranked universities per 100,000 inhabitants
• Research output: number of citations per 100,000 inhabitants
• Technology transfer: number of patents per 100,000 inhabitants
• Values: proportion of voting-age people who actually vote
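
As an illustration of how the per-capita indicators above could be computed from raw counts, the following minimal sketch normalizes citation and patent counts by population. All numbers in it are invented placeholders, not data from this chapter.

```python
# Normalizing raw counts by population, per 100,000 inhabitants.
# The input numbers are invented placeholders, not data from this chapter.
def per_100k(count: float, population: int) -> float:
    """Express a raw count (citations, patents, ranked universities) per 100,000 inhabitants."""
    return count / population * 100_000

population = 9_400_000   # hypothetical country of 9.4 million inhabitants
citations = 120_000      # hypothetical annual scientific citation count
patents = 450            # hypothetical patents registered by universities

print(f"Research output: {per_100k(citations, population):.1f} citations per 100,000 inhabitants")
print(f"Technology transfer: {per_100k(patents, population):.2f} patents per 100,000 inhabitants")
```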



System health
System health refers to the enabling conditions required for the tertiary
system to produce these outcomes, and to improve and sustain its perfor-
mance over time. As Figure 5 below illustrates, these institutions operate in
an environment that includes the following elements:

• Macro environment: the overall political and economic situation of


a country, together with the rule of law and the enforcement of basic
freedoms, which influences the governance of tertiary education
institutions (the appointment of university leaders), their level of funding,
their academic freedom and safety in the physical environment.
• Leadership at the national level: the existence of a vision and a strategic plan
to shape the future of tertiary education and the capacity to implement
reforms.
• Governance and regulatory framework: the governance structure and
processes at the national and institutional levels that determine the
degree of autonomy that tertiary education institutions enjoy and how
and to what extent they are held accountable. This is especially important
for the human resources policies and management practices that allow
tertiary education institutions to attract and keep qualified academics.
• Quality assurance framework: the institutional setup and the instruments
for assessing and enhancing the quality of research, teaching and learning.
• Financial resources and incentives: the absolute volume of resources
available to finance tertiary education (mobilization of both public and
private resources) and the way in which these resources are allocated to
various institutions.
• Articulation and information mechanisms: the linkages and bridges
between high schools and tertiary education and among the various
types of tertiary education institutions, all of which affect the academic
characteristics of incoming students and their academic results within the
tertiary education system.
• Location: the infrastructure and the economic, social and cultural
characteristics of the geographical location of the institution, which
determine its ability to attract outstanding scholars and talented students.
• Digital and telecommunications infrastructure: the availability of broadband
connectivity and end user devices to enable tertiary education institutions
to deliver educational, research and administrative services in an efficient,
reliable and affordable way.



Figure 5. Tertiary education ecosystem
[Diagram: the performance of tertiary education institutions sits at the centre of an ecosystem comprising political and economic stability, the rule of law and basic freedoms; vision, leadership and reform capacity; the governance and regulatory framework; quality assurance and enhancement; resources and incentives; articulation and information mechanisms; location; and the telecommunications and digital infrastructure.]

Source: the author.

This analytical framework translates into specific input and process indicators
that measure ‘system health’ in the following way:

• Inputs. To what extent do the resources invested in a tertiary system


(such as its funding, the number and qualifications of its academics, the
academic preparation of its incoming students, its curriculum and its
learning infrastructure) lead to positive outcomes?
• Processes. How effective are a system’s processes or policies (such as
its governance arrangements, resource allocation mechanisms and
accountability instruments) in producing positive outcomes?



Examples of system health indicators are identified below.

• Financing: proportion of public and private funding over GDP; proportion of the tertiary education budget over the national budget; per-student public spending; proportion of self-generated income in public institutions
• Access and equity: proportion of students enrolled in private institutions; proportion of students enrolled in open universities; institutional diversification
• Quality and relevance: proportion of tertiary institutions that have received satisfactory assessments from the QA system; qualifications of faculty; proportion of foreign students (undergraduate)

The benchmarking of tertiary education systems relies on three types of
indicators: quantitative indicators, objective qualitative indicators and
subjective qualitative indicators.

• Quantitative: objective measure
• Qualitative-observed: objective description
• Qualitative-interpreted: value judgement

Quantitative indicators provide the user with a tangible measure to compare
performance across various dimensions of country systems and institutions.
Data for these indicators are easier to collect than qualitative data, so there
are fewer gaps in the dataset for this group of indicators. Examples of such
indicators include the tertiary attainment rate or research output (number of
citations per 100,000 inhabitants).

Objective qualitative indicators describe key dimensions of system health


in a non-numeric way. For example, in the area of governance and quality
assurance, qualitative indicators can capture the main characteristics of ter-
tiary education systems and institutions in an objective manner (e.g. exist-
ence of an independent board, mode of selection of university leaders,
existence of an accreditation system, etc.).



Subjective qualitative indicators are constructed on the basis of expert
judgments on key dimensions of system health. For example, one of the
important drivers of system health is the degree of management autonomy
that tertiary education institutions enjoy, which is difficult to measure
objectively.

The proposed approach is informed by the following recent works, which


explore various dimensions of the performance of tertiary education
systems and institutions and try to identify key determinants of this
performance:

• Constructing Knowledge Societies: New Challenges for Tertiary Education,


a World Bank (2002) report outlining key trends in tertiary education,
analysing their implications in terms of shaping and operating tertiary
education systems and institutions, and presenting policy reform
options.

• Tertiary Education for the Knowledge Society, a three-volume OECD (2008)


report presenting the lessons learned after fourteen reviews of tertiary
education in member countries, with a focus on access and equity,
quality, the academic profession, labour market linkages, governance,
financing, internationalization, and the role of higher education in
research and innovation.

• Creating an assessment tool and index to guide countries in enhancing their


competitiveness through improved education systems, a McKinsey (2007)
study prepared exclusively for the World Bank, proposing a methodology
for the development of a benchmarking tool to measure the results of
education systems.

• The Governance and Performance of Research Universities: Evidence from


Europe and the U.S. (Aghion et  al., 2009) a comparative analysis of
European and US universities showing that, beside the level of public
funding and degree of management autonomy, the weak development
of competitive funding mechanisms is one of the major differences
explaining the lower performance of European research universities in
international rankings.

• The Challenge of Establishing World-Class Universities (Salmi, 2009), which


analyses the characteristics of elite research universities and explores
approaches for establishing successful institutions that are recognized
globally.



Comparing Brazil and Chile’s expansion
paths
In order to illustrate how the benchmarking tool can be used, this section
looks at the determinants of enrolment growth by comparing Brazil and
Chile. As analysed in Constructing Knowledge Societies: New Challenges for
Tertiary Education (World Bank, 2002), the main factors that account for a
country’s tertiary level enrolment and the tertiary education attainment of
the adult population are: (i) graduation levels at the end of secondary educa-
tion, (ii)  the level of investment in tertiary education (public and private
funding), (iii) the degree of institutional diversification (types of institution
and development of the private sector), and (iv)  the proportion of public
funding allocated to student aid. The relationship between these variables
can be represented in the form of an equation:

Tertiary enrolment_i = f [SG_i, TF_i, Σ_j E_ij, PS_i, SA_i]

where, for country i:
SG_i = high school graduation rate
TF_i = total funding for tertiary education
E_ij = distribution of enrolment among the various types j of tertiary education institutions
PS_i = proportion of private sector enrolment
SA_i = student aid

The following indicators are relied upon to measure these various dimensions (an illustrative data sketch follows the list):

• The secondary school completion rate measures the proportion of the


population 15 years and over that has successfully completed high school.
This indicator provides a strong signal of the potential demand for tertiary
education.

• Total funding for tertiary education reflects the level of public commitment
to tertiary education, as well as the success of resource mobilization
efforts (cost-sharing, donations, research, consultancy and training
contracts, etc.).

• The share of private enrolment complements the previous indicator.


Reflecting the proportion of students enrolled in institutions not operated
by a public provider, it indicates the share of enrolment expansion that is
taking place without bearing on the public purse in terms of investment
and operation costs.



• The indicator measuring the proportion of students enrolled in non-
university institutions (short duration vocational institutions, open
universities, polytechnics, etc.) reflects the diversity of institutions in a
country’s tertiary education system and the capacity to expand enrolment
in programmes and institutions whose cost is lower than that of traditional
research universities. Enrolment levels in ISCED 5B are taken as a proxy
for enrolment in non-university institutions.

• The last indicator measures the proportion of public spending allocated to


student aid (loans and grants). In countries with high levels of cost sharing
in public tertiary education and/or a well-developed private sector, the
availability of financial aid is important from an equity viewpoint. It limits
or facilitates the access and success of low-income students to tertiary
education.
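
To make the structure of these determinants concrete, the following minimal sketch defines a single benchmarking record holding the five variables of the equation above. The field names and the example values are placeholders chosen for illustration; they are not part of the World Bank tool.

```python
# One benchmarking record holding the five determinants of the equation above.
# Field names and example values are placeholders chosen for illustration.
from dataclasses import dataclass

@dataclass
class ExpansionProfile:
    secondary_completion_rate: float        # SG_i: % of population 15+ having completed high school
    total_tertiary_funding_pct_gdp: float   # TF_i: public + private spending as % of GDP
    non_university_share: float             # proxy for E_ij: % enrolled in ISCED 5B institutions
    private_enrolment_share: float          # PS_i: % of tertiary students in private institutions
    student_aid_share: float                # SA_i: % of public tertiary spending on grants and loans

# A purely hypothetical country profile, showing only the structure of a record
example = ExpansionProfile(
    secondary_completion_rate=50.0,
    total_tertiary_funding_pct_gdp=1.2,
    non_university_share=25.0,
    private_enrolment_share=40.0,
    student_aid_share=10.0,
)
print(example)
```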

To provide a starting point for the comparison of Chile and Brazil, it is impor-
tant to first assess their relative performance on attainment rates. As shown
in Figures 6 and 7 depicting the growth in attainment rates at the primary,
secondary and tertiary levels, Chile has made greater gains in secondary and
tertiary enrolment from 1980 onwards. In 2010, Chile had tertiary attain-
ment rate of 11.6 per cent for the population aged 25–65, while Brazil had
just 5.6 per cent for the same age group.

Figure 6. Attainment rates in Brazil (%)

Year      Tertiary      Secondary      Primary
2010         5.2           32.4          73.8
2000         5.3           21.2          61.6
1980         3.7            7.9          16.8
1960         1.1            4.8          22.9



Figure 7. Attainment rates in Chile (%)

Year      Tertiary      Secondary      Primary
2010        11.6           55.2          82.1
2000         9.5           46.7          72.5
1980         3.3           22.1          60.1
1960         1.8           13.1          46.7

Note: tertiary = share of adult population that has completed tertiary education;
secondary = share that has completed secondary education; primary = share that
has completed primary education (Figures 6 and 7).

Source: Barro and Lee (2010).

Figure 8 below can help explain why Chile has a higher tertiary enrolment
rate. As seen here, the stock of candidates eligible to enter post-secondary
studies is much greater. Chile has a 55 per cent completion rate of secondary
school in 2010, while Brazil has about a 32 per cent completion rate.

Figure 8. Secondary school completion rates in Brazil and Chile, 2010
[Scatter plot: tertiary enrolment rate (vertical axis) against secondary school completion rate (horizontal axis); Chile at a completion rate of 59.0 per cent, Brazil at 33.7 per cent and the LAC average at 45.8 per cent.]

Source: Barro and Lee (2010).



Figure 9 shows the greater investment in tertiary education made by Chile as
compared to Brazil. This is due to the fact that Chile has been able to mobi-
lize a much higher share of private investment through both cost-sharing
in public universities and rapid expansion of private tertiary education,
even though Brazil spends much more public money on tertiary education.
Overall, public and private investments in tertiary education amount to
about 1.8 per cent of GDP in Chile, while those of Brazil are less than half
that at about 0.7 per cent of GDP.

Figure 9. Total investment in tertiary education in Brazil and Chile, 2007
[Scatter plot: tertiary enrolment rate (vertical axis) against public and private spending as a percentage of GDP (horizontal axis); Chile at 1.7 per cent, Brazil at 0.8 per cent and the LAC average at 1.2 per cent.]

Source: OECD (2009).

Figure 10 shows that the private enrolment shares of Brazil and Chile are
comparable. The LAC average private enrolment share is significantly lower,
at around 30 per cent. Both Brazil and Chile have used enrolment in private
institutions as a key element of their expansion strategy.

Figure 10. Private enrolment in Brazil and Chile, 2007
[Scatter plot: tertiary enrolment rate (vertical axis) against private enrolment share in tertiary education (horizontal axis); Chile at 77 per cent, Brazil at 73 per cent and the LAC average at 33 per cent.]

Source: UIS (2007).



Differentiation is a key indicator of system health. This indicator measures
the proportion of students studying at non-university institutions such as
community colleges, open education institutions or distance learning pro-
grammes. Figure 11 shows that 40 per cent of the students enrolled in tertiary
education in Chile are studying in non-university institutions, while less than
10 per cent of those in Brazil are attending these types of institutions. One
can infer, then, that part of the reason for Chile’s higher enrolment rate is
the broader range of learning opportunities it provides to high school
graduates compared with those available in Brazil.

Figure 11. Enrolment in non-university institutions in Brazil and Chile, 2007
[Scatter plot: tertiary enrolment rate (vertical axis) against the proportion of students studying at non-university institutions (horizontal axis); Chile at 40 per cent, Brazil at 8 per cent and the LAC average at 27 per cent.]

Source: UIS (2007).

Finally, Figure 12 highlights the marked difference in access to financial aid
that students seeking to enrol in tertiary education face in each country.
The Chilean government allocates about 23 per cent of tertiary education
related public spending to student aid (grants plus loans), while the Brazilian
government devotes just 2 per cent of its spending to this end.

Figure 12. Financial aid in Brazil and Chile, 2007
[Scatter plot: tertiary enrolment rate (vertical axis) against public spending on student aid (grants and loans) as a percentage of the total tertiary education budget (horizontal axis); Chile at 22 per cent and Brazil at 1.7 per cent.]

Source: OECD (2009).



As shown through this case study of Brazil and Chile, benchmarking provides
a baseline from which the impact and effectiveness of various policies can be
evaluated. Comparing indicators across countries offers a time-sensitive meas-
ure of performance improvement or degradation and can be used to ascertain
policy options, inform decision-making and guide resource allocation.

Conclusion
The world is interested in rankings in every walk of life. Countries are ranked
for their performance in all possible domains, from the Olympics to the
quality of life. It is not surprising then, that in the present tertiary education
world characterized by increased global competition for talented academics
and students, the number of league tables of universities has grown rapidly
in recent years.

The stakes are high. Governments and the public at large are ever more pre-
occupied with the relative performance of tertiary education institutions and
getting the best-perceived value as consumers of education. Some countries
are striving to establish ‘world-class universities’ that will spearhead the
development of a knowledge-based economy. Others, faced with a shrink-
ing student population, struggle to attract increasing numbers of fee-paying
foreign students. Just as scarcity, prestige, and having access to ‘the best’
increasingly mark the purchase of goods such as cars, handbags and blue
jeans, the consumers of tertiary education are also looking for indicators
that enhance their capacity to identify and access the best universities.

At the same time, these rankings are insufficient to measure the actual
performance of entire tertiary education systems. Beyond the results of
individual universities, it is important to be able to assess how a country
is faring along key dimensions of performance at the tertiary level such as
access and equity, quality and relevance, research productivity and technol-
ogy transfer. The benchmarking tool that the World Bank is in the process of
developing seeks to fulfill this role, by offering a web-based instrument that
policy-makers can use to make comparisons across countries on the vari-
ables and indicators of their choice. While the tool is not meant to give policy
prescriptions, it should provide a platform for facilitating diagnosis exercises
and the exploration of alternative scenarios for reforming and developing
tertiary education.



Note

An earlier version of this article was previously published in Evaluation in


Higher Education, 5(1): June 2011.

References

Aghion, P., Dewatripont, M., Hoxby, C.M., Mas-Colell, A. and Sapir, A. 2009. The Governance and
Performance of Research Universities: Evidence from Europe and the U.S. National Bureau of
Economic Research (NBER) Working Paper 14851.

Barro, R. and Lee, J. 2010. Educational Attainment Dataset: www.barrolee.com (Accessed 1 April 2012.)

Gladwell, M. 2011. The order of things: what college rankings really tell us. The New Yorker
(14 February 2011).

OECD (Organisation for Economic Co-operation and Development). 2008. Tertiary Education for
the Knowledge Society. Paris: OECD.

OECD. 2009. Education at a Glance. Paris: OECD: www.oecd.org/edu/eag2009


(Accessed 1 April 2012.)

Salmi, J. 2009. The Challenge of Establishing World-Class Universities. Washington DC: International


Bank for Reconstruction and Development/World Bank.

Salmi, J. 2011. The road to academic excellence: lessons of experience. P. Altbach and J. Salmi
(eds) The Road to Academic Excellence: the Making of World-Class Research Universities.
Washington DC: World Bank.

Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses. Higher
Education Management and Policy, 19(2). Paris: OECD.

Shanghai Jiao Tong University. 2010. ARWU (Academic Ranking of World Universities). Center
for World-Class Universities and Institute of Higher Education of Shanghai Jiao Tong
University, China: www.arwu.org/index.jsp (Accessed 1 April 2012.)

Smolentseva, A. 2010. In search for world-class universities: the case of Russia. International Higher
Education, 58, Winter: 20–22.

Times Higher Education. 2010. World University Rankings, 2010-2011. www.timeshighereducation.


co.uk/world-university-rankings/ (Accessed 1 April 2012.)

UIS (UNESCO Institute for Statistics). 2010. Statistics: Data Centre. Montreal: UIS: http://stats.uis.
unesco.org (Accessed 1 April 2012.)

Wilkinson, R.G. and Pickett, K. 2010. The Spirit Level: Why More Equal Societies Almost Always Do
Better. London: Penguin.

World Bank. 2002. Constructing Knowledge Societies: New Challenges for Tertiary Education.
Washington DC: World Bank.

World Bank. 2009. World Development Indicators. Washington DC: World Bank: http://data.
worldbank.org/data-catalog/world-development-indicators (Accessed 1 April 2012.)



Chapter 14

U-Multirank: a user-driven
and multi-dimensional
ranking tool in global
higher education and
research
Frans van Vught and Frank Ziegele
Introduction
The U-Multirank project1 encompassed the design and testing of a new
transparency tool for higher education and research. More specifically, the
focus was on a transparency tool to enhance understanding of the multiple
performances of different higher education and research institutions across
the diverse range of activities they are involved in.

Transparency is of major importance for higher education and research


worldwide, which is increasingly expected to make a crucial contribution to
the innovation and growth strategies of nations around the globe. Obtaining
valid information on higher education within and across national borders
is critical in this regard, yet higher education and research systems are
becoming more complex and – at first sight – less intelligible for many
stakeholders. The more complex higher education systems become, the
more sophisticated transparency tools need to be. Sophisticated tools can
be designed in such a way that they are user-friendly and can cater to the
different needs of a wide variety of stakeholders.

Various types of transparency tools with different purposes exist, in particu-


lar at the national level, but also at the international or even global level.
These include classifications, rankings/league tables, various benchmarking
instruments, and the outcomes of quality assurance and accreditation pro-
cesses. The U-Multirank project included a comprehensive analysis of these
different transparency instruments, the contribution they can potentially
make to our understanding of the diversity of higher education institutions
and their programmes, their possible positive and negative effects, and the
value of the information they provide. The conclusions, in particular, regard-
ing worldwide rankings, are:

• Existing rankings largely focus on only one or very few dimensions of


the broad spectrum of functions of higher education and research
institutions – primarily the research function.

1 The project ‘U-Multirank’ has been funded with the support of the European Commission. This chapter
reflects the views of the authors and the Commission cannot be held responsible for any use that may
be made of the information therein. For more information on the project, see: www.u-multirank.eu.
This chapter uses results and conclusions from the final report of the project (http://ec.europa.eu/
education/higher-education/doc/multirank_en.pdf) and from a recently published book: F. van Vught
and F. Ziegele (eds), Multi-dimensional Ranking, the Design and Development of U-Multirank, Dordrecht:
Springer, 2012.



• Existing rankings appear to have a negative effect on the diversity of higher
education systems; because of their preoccupation with research they tend
to stimulate imitative behaviour on the part of institutions that is directed
towards one single profile: the large, comprehensive, internationally
orientated research university. This ‘world-class university’ thus becomes
synonymous with ‘top research university’ and in the end with ‘top
university’ in general, at the expense of other important higher education
activities, dimensions of performance and successful organizational models.

• In their selection of indicators existing rankings appear to focus on


what is easily measurable, rather than on what is relevant for reflecting
performance across the diverse functions of higher education.

• Existing global rankings do not respond adequately to the differing


information needs of different stakeholders.

• Existing rankings suffer from several methodological flaws:

–– The use of composite indicators can blur differences in performance


across particular dimensions and indicators.
–– The league table approach tends to exaggerate differences between
universities (‘number 57 is better than number 61’). Small differences
in the numerical scores of the indicators can lead to relatively large
yet unavoidable differences in league table position.
–– Where rankings focus only on the level of the institution as a whole,
they ignore differences in performance across different disciplinary
fields within the institution. Averages across fields are of little use to
many users and can be highly misleading.
–– Their bibliometric analyses of publications and citations are not
sufficiently sensitive to varying publication and citation cultures
across different disciplinary fields.
–– They do not take into account major contextual differences between
higher education systems (languages, cultures and varying regulatory
frameworks).
–– They often suffer from non-transparent, unspecified and volatile
procedures in terms of indicator construction, calculation and
aggregation.

• Existing rankings appear to have triggered a ‘reputation race’ in higher


education and research worldwide, stimulating politicians, policy-makers
and university leaders to make a range of policy choices and major
investments specifically designed to achieve a higher ranked position for

Chapter 14. U-Multirank: a user-driven and multi-dimensional ranking tool


in global higher education and research 259
their institutions in the league tables with prejudicial effects on other
important areas of potential improved performance.

• The relative position of institutions on existing rankings appears to


contribute to increasing levels of resource inequality between institutions,
as ‘successful’ institutions are able to generate additional resources on the
basis of their position in the rankings and thus achieve further success. This
pattern further expands academic performance gaps between institutions
and adds reputational and resource fuel to academic stratification processes.

• The current rankings have been shown to trigger strategic behaviour


by institutions by providing incentives for them to ‘game the results’ by
boosting their scores on particular indicators.

Despite the serious critique of existing rankings – and particularly the major
global rankings – outlined above, our comprehensive review of the current
situation found a number of important examples of good practice that we
have carried forward into the design of a new transparency instrument.
These include:

• A group of experts and organizations engaged in producing or researching


rankings developed a set of basic principles for good practice, the Berlin
Principles on Ranking of Higher Education Institutions (International Ranking
Expert Group, 2006). The principles refer to four aspects of rankings: the
purposes and goals of rankings, the design and weight of indicators, the
collection and processing of data and the presentation of ranking results.
We have incorporated these into our design of a new instrument.

• In the area of transparency tools meant to provide relevant information


to (prospective) students, alternatives to the league table approach have
been developed. The rankings published by the CHE and the Dutch
‘Studychoice  123’ are leading European examples. The main principles
underlying this type of ranking include the following:

–– Definition of students as the primary stakeholder target group and


an explicit focus on aiding prospective students to find the study
programmes best matching their aims and needs;
–– Ranking single disciplines or subject areas rather than calculating
averages for entire higher education institutions;
–– Multi-dimensional rankings that are interactively presented so
that end-users may decide which indicators are most important to



them, supported by web-based technologies allowing interactive,
personalized ranking; and
–– A robust division of indicator scores into top – middle – bottom
groupings for each indicator, rather than a presentation in league
tables with the spurious precision of ranking from position 1 to n.

• The Centre for Science and Technology Studies (CWTS) of Leiden University
publishes a ranking aiming at comparison of research performance with
impact measures that take the differences in institutions and disciplines into
account. On the basis of the same publication and citation data, different
types of impact-indicators can be constructed, for instance, one in which
the size of the institution is taken into account. (Rankings are strongly
influenced by the size-threshold used to define the set of universities
for which the ranking is calculated. Smaller universities that are not
present in the top 100 in size may take high positions in impact ranking
if the size threshold is lowered.) A major advantage compared to other
global rankings is the use of field-normalized citation rates that control
for different citation cultures in different fields. CWTS has also started to
develop new bibliometric methods allowing a link between publications
and the dimensions of regional engagement, internationalization and
knowledge transfer by analysing regional, international and university-
industry co-publications. All of these developments have been included
in U-Multirank, allowing it to progress beyond existing rankings in the
methods of bibliometric research performance measurement (the basic idea
of field normalization is sketched after this list).
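
The following sketch illustrates the general idea of field-normalized citation impact referred to above: each publication’s citation count is divided by an expected (field-average) value before averaging. It is a simplified illustration with invented data, not the CWTS Leiden Ranking methodology itself.

```python
# Field-normalized citation impact, in simplified form: divide each publication's
# citations by the average citation rate of its field (ideally also matching year
# and document type), then average. Toy data; not the actual CWTS methodology.
publications = [
    # (citations received, field) -- invented examples
    (12, "engineering"),
    (3,  "engineering"),
    (45, "biomedicine"),
    (20, "biomedicine"),
]

# Hypothetical world-average citation rates per field, used as the expected values
field_baseline = {"engineering": 5.0, "biomedicine": 25.0}

normalized = [cites / field_baseline[field] for cites, field in publications]
mean_normalized_score = sum(normalized) / len(normalized)
print(f"Mean field-normalized citation score: {mean_normalized_score:.2f}")
```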

Our analysis suggests that an enhanced understanding of diversity in the profiles


and performances of higher education and research institutions at a national,
European and global level requires a new ranking tool that addresses most of
the major shortcomings of existing ranking instruments but incorporates good
practices – such as those outlined above – developed in recent years. The next
section describes how this instrument – U-Multirank – was designed.

Design principles
Based on our analyses of existing transparency instruments and on clear
epistemological and methodological principles we formulated a set of design
principles for U-Multirank:

• Our most fundamental epistemological argument is that as all observations


of reality are theory-driven (formed by conceptual systems), an ‘objective

ranking’ cannot be developed. Every ranking will reflect the normative
design and selection criteria of its constructors.

• Given this epistemological argument, our position is that rankings should


be based on the interests and priorities of their users: rankings should
be user-driven. This principle ‘democratizes’ the world of rankings by
empowering potential users (or categories of users) to be the dominant
actors in the design and application of rankings rather than rankings being
restricted to the normative positions of a small group of constructors.
Different users and stakeholders should be able to construct different
sorts of rankings. (This is one of the Berlin Principles.)

• Our second principle is multi-dimensionality. As indicated earlier in this


overview, higher education and research institutions are predominantly
multi-purpose, multiple-mission organizations undertaking different
mixes of activities. (Teaching and learning, research, knowledge
exchange, regional engagement and internationalization are five
major categories that we have identified.) Rankings should reflect this
multiplicity of functions and not focus on one function (research) to the
virtual exclusion of all else.

• The next design principle is comparability of institutions. In rankings,


institutions and programmes should only be compared when their
purposes and activity profiles are sufficiently similar. Comparing institutions
and programmes that have very different purposes is worthless. It makes
no sense to compare the research performance of a major metropolitan
research university with that of a remotely located University of Applied
Science, or the internationalization achievements of a national humanities
college whose major purpose is to develop and preserve its unique
national language with an internationally orientated European university
with branch campuses in Asia. This principle also derives from the need to
make the diversity of higher education institutions’ ‘performance profiles’
transparent. In our view the principle implies a two-step-process: first,
institutions with similar profiles have to be identified by ‘mapping’ their
activities. A ranking of these institutions can only be applied afterwards.
This is a completely new approach to international and national rankings.
It connects the description of horizontal diversity of activity profiles to the
assessment of vertical diversity of performance profiles.

• The fourth principle is that higher education rankings should reflect the
multi-level nature of higher education. With very few exceptions, higher
education institutions are combinations of stronger and less-strong



faculties, departments and programmes. Producing only aggregated
institutional rankings disguises this reality and does not produce the
information most valued by major groups of stakeholders: students,
potential students, their families, academic staff and professional
organizations. This does not mean that institutional level focused rankings
are not valuable to other stakeholders and for particular purposes.
The new instrument should allow for the comparisons of comparable
institutions at the level of the organization as a whole and also at the
level of the broad disciplinary fields in which they are active.

• Finally we include the principle of methodological soundness. The new


instrument should refrain from methodological mistakes such as the use
of composite indicators, the production of league tables and the denial of
contextuality. In addition it should minimize the incentives for strategic
behaviour on the part of institutions to ‘game the results’.

Conceptualization
These design principles have underpinned the conceptualization of a new
ranking instrument that is user-driven, multi-dimensional and methodo-
logically robust. This new instrument must enable its users to identify insti-
tutions and programmes that are sufficiently comparable (through U-Map)
and to undertake both institutional and field level performance analyses.

In operational terms U-Multirank consists of:

• Five performance dimensions (teaching and learning, research, knowledge


transfer, international orientation, regional engagement).
• A range of indicators that are used to compare institutional performance
on these five dimensions at the institutional and/or field level.

The selection of these dimensions and indicators has been based on two
processes:

• Stakeholder consultation process: a strong stakeholder orientation has


been a cornerstone of our approach given the centrality of the principle
of rankings being user-driven. Our intensive process of stakeholder
consultation focused primarily on the relevance of potential dimensions
and indicators as the starting point for rankings (see also the Berlin
Principles).

• Methodological analysis of the validity of the indicators, the reliability of the
information to be gathered, and the expected feasibility of the use of the
dimensions and indicators (availability of data; the extent of extra data
collection required from institutions).

During the design process all potential dimensions and indicators were clearly
described and discussed in stakeholder workshops. After a first validity and
reliability check, we suggested comprehensive lists of possible indicators
derived from the literature and from existing practice (including from areas
beyond rankings). In addition, we designed a number of new, sophisticated
indicators, particularly bibliometric indicators for the research dimension.

We asked stakeholders in an iterative process to assess the relevance of these


indicators. The outcomes of this process were then integrated with the results
of our methodological analysis to produce the set of indicators to be included
in the empirical pilot study. On the basis of the pilot study some indicators
were discarded and others earmarked for further development. The full list of
dimensions and indicators can be found in Appendix 1 of this paper.

On the basis of data gathered on these indicators across the five performance
dimensions, U-Multirank could provide its users with the online functionality
to create two general types of rankings:

• Focused institutional rankings: rankings on the indicators of the five


performance dimensions at the level of institutions as a whole; and
• Field-based rankings: rankings on the indicators of the five performance
dimensions in a specific field in which institutions are active.

A multidimensional ranking is inevitably more complex than publishing a


simple league table. This raises the issue of user-friendliness: presentation
modes should allow users to digest the information provided in a
multidimensional ranking. Not only the information needs of expert users
such as political or institutional decision-makers should be taken into
account; the information should also be easily accessible to ‘lay’ users like
parents or students.

A number of presentation modes were discussed and developed for U-Multirank.


For example, it allows users to create institutional and field performance pro-
files by including (not aggregating) the indicators within the five dimensions
(or a selection of them) into a multi-dimensional performance chart. At the
institutional level these take the form of ‘sunburst charts’ (see Figure 1) while
at the field level these are structured as ‘field-tables’ (see Table 1).



Figure 1. Sunburst representation of an institutional performance profile
[Sunburst chart with one ray per indicator, grouped and coloured by the five performance dimensions.]

Source: the authors.

In the sunburst charts, the performance on all indicators at the institutional


level is represented by the size of the rays of the ‘sun’: a larger ray means a
higher performance on that indicator. The colour of a ray reflects the dimen-
sion to which it belongs. The sunburst chart gives an impression ‘at a glance’
of the performance of an institution, without unwarranted aggregation of
information into composite indicators.

Table 1. Performance at the field level
[Table comparing nine institutions on field-level indicators grouped under the five dimensions (teaching and learning, research, knowledge transfer, international orientation, regional engagement); the indicators include graduation rate, qualification of academic staff, student–staff ratio, student internships, research publication output, external research income, citation index, % income from third-party funding, CPD courses offered, start-up firms, international academic staff, % international students, joint international publications, regional co-publications and graduates working in the region. Each cell contains a coloured circle indicating the institution’s relative performance on that indicator.]

Source: the authors.

In the field-based table, relative performance is indicated by a coloured circle.
A green circle indicates that the score of the institution on that indicator is in
the top group, a red circle indicates that the performance is in the bottom
group, and a yellow circle means that performance is somewhere in the
middle. The score on the student–staff ratio of the field at institution 4 is
in the top group, whereas the field at institution 2 has a relatively poor
score on this indicator. The user may sort the institutions on any of the
indicators presented. In addition, users are given the opportunity to
choose the indicators on which they want to rank the selected institutions.
This personalized, interactive ranking table reflects the user-driven nature
of U-Multirank.
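
The following sketch illustrates the group-based, user-driven logic of such a table: institutions are assigned to top, middle and bottom groups per indicator, and the user chooses which indicators to view. The simple tercile rule and the toy data are assumptions made for illustration; they do not reproduce U-Multirank’s actual grouping procedure.

```python
# Group-based, user-driven presentation: assign each institution to a top, middle
# or bottom group per indicator; the user picks which indicators to display.
# The tercile rule and toy data are assumptions for illustration only.
scores = {
    # indicator -> {institution: score}; invented data, higher = better
    "graduation_rate": {"Inst 1": 81, "Inst 2": 55, "Inst 3": 74,
                        "Inst 4": 90, "Inst 5": 62, "Inst 6": 68},
    "citation_index":  {"Inst 1": 1.4, "Inst 2": 0.8, "Inst 3": 1.1,
                        "Inst 4": 1.9, "Inst 5": 0.6, "Inst 6": 1.0},
}

def tercile_groups(values):
    """Split institutions into 'top', 'middle' and 'bottom' groups on one indicator."""
    ordered = sorted(values, key=values.get, reverse=True)
    third = len(ordered) // 3
    groups = {}
    for i, institution in enumerate(ordered):
        if i < third:
            groups[institution] = "top"
        elif i >= len(ordered) - third:
            groups[institution] = "bottom"
        else:
            groups[institution] = "middle"
    return groups

user_selected = ["graduation_rate", "citation_index"]   # indicators chosen by the user
for indicator in user_selected:
    print(indicator, tercile_groups(scores[indicator]))
```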

In order to be able to apply the principle of comparability we have integrated


an existing transparency tool – the U-Map classification – into U-Multirank.
U-Map has been designed, tested and is now being implemented through
a series of projects supported by the European Commission. It is a user-
driven higher education mapping tool that allows users to select compa-
rable institutions on the basis of ‘activity profiles’ generated by the U-Map
tool. These activity profiles reflect the diverse activities of different higher
education and research organizations using a set of dimensions similar
to those developed in U-Multirank. The underlying indicators differ as
U-Map is concerned with understanding the mix of activities an institu-
tion is engaged in (what it does), while U-Multirank is concerned with an
institution’s performance in these activities (how well it does what it does).
Integrating U-Map into U-Multirank enables the creation of user-selected
groups of sufficiently comparable institutions that can then be compared
in focused institutional or field-based rankings.
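
The two-step logic described above can be sketched very simply: first filter institutions whose activity profiles match the user’s selection, then rank or group only within that set. The profile descriptors and the exact-match rule below are invented for illustration and are far cruder than U-Map’s actual activity profiles.

```python
# Two-step logic: select comparable institutions by activity profile first,
# then rank or group only within that selection. Profile descriptors and the
# exact-match rule are invented for illustration; U-Map profiles are richer.
institutions = {
    "Inst A": {"doctorate_intensive": True,  "international_orientation": "high"},
    "Inst B": {"doctorate_intensive": True,  "international_orientation": "high"},
    "Inst C": {"doctorate_intensive": False, "international_orientation": "low"},
}

user_profile = {"doctorate_intensive": True, "international_orientation": "high"}

comparable = [name for name, profile in institutions.items() if profile == user_profile]
print("Institutions selected for comparison:", comparable)
```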

The user-driven approach has an important implication for the U-Multirank


concept: in particular, it should be noted that U-Multirank is a database
accessible via an internet tool producing user-driven rankings; it is not the
publication of one specific ranking list.

The pilot study


With the initial design phase completed, the next step was to test the
empirical feasibility of U-Multirank with a global sample of higher edu-
cation and research institutions. This pilot test included three clusters of
activities.



1. Establishing the pilot sample of institutions

The intention of the project was to test the feasibility of the multi-dimen-
sional ranking on an initial group of 150 institutions drawn from Europe
and beyond. In most cases institutions needed to be active in one or
both of the fields of (mechanical and electrical) engineering and business
that were identified as the pilot disciplines for the field-based rankings.
Institutions also needed to be chosen to ensure that the diversity of insti-
tutions in participating countries was represented to the extent possible in
the initial pilot group.

We used a number of mechanisms to establish the pilot group: 316 institu-


tions were invited to participate and 166 drawn from 57 countries agreed
to do so after interaction with the project team (8 subsequently withdrew).
In some countries (including China and the United States), special efforts
were made to encourage institutions to participate (see Table 2).

Table 2. Invited and participating institutions by region and country

Region/country      No. invited   No. accepted   % accepted         No. completed       % completed
                                                 (of invitations)   institutional       questionnaires
                                                                    questionnaires      (of acceptances)
Europe – EU             165            94             57%                 75                 80%
Europe – non-EU          27            15             56%                 12                 80%
United States            28             4             18%                  1                 20%
Canada                    7             3             43%                  1                 33%
Japan                     9             2             22%                  2                100%
China                    18             2             12%                  1                 50%
Other Asia                6             6            100%                  3                 57%
Australia                11             7             64%                  6                 86%
India                    12             4             33%                  2                 50%
Africa                    8             6             75%                  1                 17%
Latin America            10             4             40%                  4                100%
Middle East              15            12             80%                  7                 58%
Total                   316           159             50%                115                 72%

Source: the authors.
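
The percentage columns in Table 2 appear to be simple ratios of the preceding counts (accepted over invited, and completed over accepted). The short sketch below recomputes a few rows under that reading; any small deviations from the printed figures would reflect rounding or data revisions.

```python
# Recomputing the percentage columns of Table 2 as accepted/invited and
# completed/accepted, for a few rows taken from the table.
rows = {
    # region: (invited, accepted, completed institutional questionnaires)
    "Europe - EU": (165, 94, 75),
    "Australia": (11, 7, 6),
    "Total": (316, 159, 115),
}

for region, (invited, accepted, completed) in rows.items():
    print(f"{region}: {100 * accepted / invited:.0f}% accepted, "
          f"{100 * completed / accepted:.0f}% completed questionnaires")
```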

2. Developing the data-gathering instruments

Our analysis of potential indicators in the design phase showed that most
of the required data would need to be gathered at the institutional level as
national and international databases included very little of the information
we needed (bibliometric and patent databases were the two exceptions),
or did not provide comparable data. Four online survey instruments were
therefore designed to gather information:

• an online questionnaire (already designed and tested) to provide the


information needed to develop an institutional profile for each institution
within the U-Map classification;
• a second online questionnaire to provide information on the indicators
selected to measure the five performance dimensions at the institutional
level;
• a third online questionnaire for those institutions/faculties active in the
pilot fields of engineering and/or business to gather the information on
the indicators selected to measure the five performance dimensions for
the field-based rankings; and
• a fourth online survey for a sample of students studying in the selected
fields to collect the information needed for a range of ‘student satisfaction’
indicators used in the field-based ranking.

The last three questionnaires were pre-tested on a small sample of participat-


ing institutions before being rolled out to the full group of pilot institutions.

3. Organization of the pilot test

There were complex communication and logistical challenges involved in a


pilot test involving more than 150 institutions, three fields, 50 countries and
thousands of student questionnaires. The systems and processes that were
developed included the online surveys themselves; the U-Multirank website;
secure databases; access code systems; data glossaries; FAQ and help desk
services; general communication flows with participating institutions; and
data cleaning and checking protocols and procedures. All these systems and
processes have been tested and are available for the further implementation
of U-Multirank.

The pilot study data collection phase started in November 2010 and closed
at the end of March 2011. One hundred and nine higher education insti-
tutions completed both the U-Map and institutional questionnaires; 83 of



these institutions also completed field level questionnaires (165 in total: 57
in Business, 50 in Mechanical Engineering and 58 in Electrical Engineering).
The analysis included 5,901 valid completed student questionnaires (from a
gross response of around 6,700).

The outcomes
The major objectives of the project were to develop an alternative to existing
global rankings, to avoid their problems and to test if such an instrument is
feasible. Our general conclusions from the two-years’ project work are:

• The concept of a multidimensional, multilevel and user-driven ranking is


indeed an alternative that avoids the shortcomings of existing rankings.
• The new multidimensional ranking instrument is feasible.
• The required operative tools such as indicator definitions, questionnaires,
data collection processes, databases, data quality check procedures and
presentation modes have been developed in a ‘1.0 version’ of U-Multirank.
• There are some clear issues where further steps have to be taken in order
to further enhance the feasibility of the new instrument and to move to the
phase of implementing U-Multirank.

The empirical feasibility of U-Multirank as a new transparency instrument in


higher education and research was assessed along three different lines:

• First, the feasibility of the dimensions and indicators themselves in


terms of data availability, conceptual clarity and data consistency. This
was assessed from an analysis of the data submitted via the different
questionnaires, from comments made within the questionnaires by
respondents, and from a brief survey of participating institutions.
• Second, the feasibility in terms of generating a sufficient critical mass of
institutional interest at European and global levels to make U-Multirank
a viable instrument.
• Third, the feasibility of scaling-up a pilot project of 150 institutions to
one including ten or twenty times that number; and extending its field
coverage from three to around fifteen major disciplinary fields.

Our analysis of the feasibility of the dimensions and indicators themselves


in terms of data availability, conceptual clarity and data consistency was
very positive, as is evident in Table  3. Table  3 shows the total number of

indicators tested (including field-based and institutional indicators) and presents for each dimension the percentage of tested indicators that proved unproblematic (category A), the percentage that should be kept but needs further work and refinement (B), and the percentage that was discarded from the final set (C). The feasibility test showed that all indicators in the teaching and learning dimension can be maintained, with 39 per cent needing further work and none excluded. The outcome is similar for the research and international orientation dimensions, where only one research indicator was excluded.

In two dimensions (knowledge transfer and regional engagement) and for some concepts (e.g. graduate employability and non-traditional research output), feasible indicators are more difficult to develop. Knowledge transfer is the dimension with the most discarded indicators: less than one-third of its indicators can be used in their current form, and four field-level indicators were excluded (patents awarded, co-patenting, annual income from licensing and number of licensing agreements). In the regional engagement dimension, the majority of indicators need modification, and two indicators could not be taken into account owing to a lack of available data: regional participation in continuing education, and summer schools. It is not surprising that the problematic dimensions and concepts cover areas of higher education and research performance hardly explored by existing rankings. Even so, U-Multirank goes beyond the scope of the indicators used in existing worldwide rankings in all of its dimensions. The revised set of U-Multirank indicators can be found in Appendix 1.

Table 3. Overview of results of indicator feasibility analyses
(A = % of indicators needing no/minor modification; B = % of indicators needing further work; C = % of indicators discarded)

Teaching and learning:       23 indicators tested – A: 61%, B: 39%, C: 0%
Research:                    16 indicators tested – A: 56%, B: 38%, C: 6%
Knowledge transfer:          15 indicators tested – A: 27%, B: 47%, C: 27%
International orientation:   16 indicators tested – A: 68%, B: 31%, C: 0%
Regional engagement:         11 indicators tested – A: 18%, B: 64%, C: 18%
Total:                       81 indicators tested – A: 49%, B: 42%, C: 9%
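The percentages reported in Table 3 are simple shares of indicators per assessment category within each dimension. The sketch below reproduces the knowledge transfer row as a worked example: the four discarded field-level indicators are those named in the text above, while the remaining A/B assignments are assumptions made only for the purpose of the illustration.

```python
# Sketch: turning per-indicator feasibility ratings into the percentages of Table 3.
# The individual A/B ratings below are assumed for illustration; only the four
# C-rated (discarded) field-level indicators are taken from the chapter text.
from collections import Counter

ratings = {  # indicator -> category A (keep), B (refine) or C (discard)
    "incentives for knowledge transfer": "A",
    "income from third-party funding": "A",
    "university-industry co-publications": "A",
    "patents awarded (institutional)": "A",
    "TTO staff per FTE academic staff": "B",
    "CPD courses per FTE academic staff": "B",
    "co-patenting (institutional)": "B",
    "start-ups per FTE academic staff": "B",
    "staff with work experience outside HE": "B",
    "joint research contracts with private sector": "B",
    "university-industry co-publications (field)": "B",
    "patents awarded (field)": "C",
    "co-patenting (field)": "C",
    "annual income from licensing": "C",
    "number of licensing agreements": "C",
}

counts = Counter(ratings.values())
total = len(ratings)
for category in ("A", "B", "C"):
    share = 100 * counts[category] / total
    print(f"{category}: {counts[category]:2d} of {total} indicators ({share:.0f}%)")
# -> A: 4 of 15 (27%), B: 7 of 15 (47%), C: 4 of 15 (27%), matching Table 3.
```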

The decisions about whether to retain or discard indicators where difficulties
were experienced in the pilot study were made in consultation with stake-
holders. These decisions can be illustrated by a few examples:

• Although there were problems with the availability of employability-


related indicators in the dimension ‘Teaching and learning’, it was decided
to retain these as they cover a highly relevant aspect. Retaining them
underlines their importance and encourages institutions and national and
international data agencies to pay greater attention to these indicators
and to invest in data collection efforts.
• ‘International prizes won’ was discarded as an indicator as there was little
agreement on the list of prizes to be included.
• The feasibility problems with the regional engagement indicators are partly related to a lack of consistent and comparable definitions underlying the data and partly to a lack of available information. Nevertheless, it was decided to retain the indicators for further development as they potentially add clear value to U-Multirank.

The pilot test demonstrated that multidimensional and multi-level ranking


is certainly possible in terms of the development of feasible and relevant
indicators. It also showed priority areas for further refinement of indicators.
Furthermore, the pilot test proved the virtues of multidimensionality: no
university in the sample performed in the same group in all dimensions and
indicators. On the contrary, institutions showed specific strengths and weak-
nesses in a differentiated performance picture. Without multidimensionality
this would be hidden behind a composite indicator. In traditional rankings
focusing on research performance, only the ‘basic research-oriented, world-
class university’ is able to succeed. U-Multirank is able to identify universi-
ties with excellence and a specific strategic profile in one or more of the
other dimensions as well.
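The contrast between a composite league table and a group-based, multidimensional presentation can be made concrete with a small sketch. The institutions and scores below are invented and the grouping thresholds are arbitrary; the point is only that two institutions with opposite profiles can obtain the same composite score, while per-dimension grouping keeps their strengths and weaknesses visible.

```python
# Sketch: a composite indicator hides profile differences that per-dimension
# grouping makes visible. All institutions and scores are invented.
import statistics

scores = {  # normalized 0-100 scores on three illustrative dimensions
    "Institution A": {"research": 90, "teaching": 40, "regional engagement": 35},
    "Institution B": {"research": 40, "teaching": 88, "regional engagement": 37},
    "Institution C": {"research": 65, "teaching": 62, "regional engagement": 60},
}

# A composite indicator (here an unweighted mean) produces one league table:
# Institutions A and B receive the same composite score despite opposite profiles.
composite = {name: statistics.mean(d.values()) for name, d in scores.items()}
league_table = sorted(composite.items(), key=lambda kv: kv[1], reverse=True)
print("League table:", league_table)

def group(value: float) -> str:
    """Assign a performance group per dimension (thresholds are illustrative)."""
    if value >= 75:
        return "top group"
    if value >= 50:
        return "middle group"
    return "bottom group"

# Grouping per dimension keeps the distinct institutional profiles visible.
for name, dims in scores.items():
    profile = {dim: group(v) for dim, v in dims.items()}
    print(name, profile)
```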

A major issue in international rankings is the quality of the data generated.


In the pilot study, measures were developed to ensure data quality and
to minimize ‘gaming the results’: data-cleaning procedures, plausibility
checks and feedback loops with the institutions. The option of ‘pre-filling’
the questionnaires with data from national sources, which should be
explored in the next phase of the development of the instrument, would
introduce more options for checking the data. In the student survey, we
analysed whether the comparability of responses was distorted by sys-
tematic differences in students’ expectation levels between countries; no
distortions were found.
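A minimal sketch of the kind of plausibility check mentioned here is given below. The indicator ranges and the deviation threshold are illustrative assumptions rather than the actual U-Multirank data-quality rules; flagged values would be returned to the institution through the feedback loop described above.

```python
# Sketch of a simple plausibility check on submitted indicator values.
# Ranges and thresholds are illustrative assumptions, not the actual
# U-Multirank data-quality rules.
import statistics
from typing import Dict, List, Tuple

EXPECTED_RANGES = {          # hard bounds per indicator (hypothetical)
    "graduation_rate": (0.0, 1.0),
    "student_staff_ratio": (1.0, 100.0),
}

def plausibility_flags(indicator: str,
                       submissions: Dict[str, float]) -> List[Tuple[str, str]]:
    """Return (institution, reason) pairs for values that need a feedback loop."""
    flags = []
    low, high = EXPECTED_RANGES[indicator]
    median = statistics.median(submissions.values())
    for institution, value in submissions.items():
        if not (low <= value <= high):
            flags.append((institution, f"outside expected range {low}-{high}"))
        elif median and abs(value - median) > 3 * median:
            flags.append((institution, "implausibly far from sample median"))
    return flags

# Example: a graduation rate reported as 8.7 (probably meant as 0.87) is flagged.
print(plausibility_flags("graduation_rate",
                         {"inst-1": 0.82, "inst-2": 0.64, "inst-3": 8.7}))
```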

In terms of the feasibility of U-Multirank regarding the potential level
of institutional interest in participating in the new transparency tool, the
results of the pilot study are positive. In broad terms, half of the institu-
tions invited to participate in the pilot study agreed to do so. Given that
a significant number of these institutions (32 per cent) were from outside
Europe, despite the fact that U-Multirank is clearly a Europe-based project,
this represents a strong expression of interest.

It is also important to recognize that a pilot study is not a real ranking.


The institutions participating in the pilot project will have access to the
institutional profiles and the dimension and indicator outcomes. While
this provides an opportunity to compare and benchmark with over 100
other institutions worldwide, the outcomes of the pilot rankings will not
be made public. The objective of the pilot study was to test the feasibility
of the instrument, not to publish a ranking. We expect that interest in a real ranking will be greater than in a pilot project whose outcomes are not published.

Our single caveat concerns the global aspect of the feasibility study. The prospects for European coverage are encouraging. In addition, institutions from a number of countries not always visible in existing global rankings (including Australia, Japan and a number of Latin American and Middle East countries) were enthusiastic about the project. However, the large amount of data to be collected suggests that U-Multirank cannot easily achieve extensive coverage across the globe in the short term and in one step. Thus, in the short term, comprehensive coverage of European institutions plus a limited extension beyond Europe is realistic. The pilot test also showed that it was particularly difficult to recruit institutions in China and the United States: higher education and research institutions in the United States showed only limited interest in the study, while in China formal conditions appeared to hamper the participation of institutions. Special attention will be needed to launch U-Multirank in these systems. Nevertheless, worldwide participation from higher education institutions was achieved in the pilot project. We believe that there will be continuing interest from institutions outside Europe wishing to benchmark themselves against European institutions, and that there are opportunities for targeted recruitment of groups of institutions from outside Europe.

A final aspect of feasibility in terms of institutional participation is the


question of institutional dropout and non-completion rates. A brief survey of the institutions that agreed to participate but ultimately did not submit data suggests that data (non-)availability was a common theme. One particular group of institutions took a policy decision to withdraw from the project. Beyond these two factors, a diverse range of institution-specific issues came into play, including competing claims on the time of the staff concerned and changes in key staff. Nevertheless, for a pilot study a completion rate of 109 out of 159 institutions (69 per cent) is more than respectable.

The third aspect of feasibility explored in the pilot study was the ques-
tion of the operational feasibility of up-scaling. Our experience with the pilot
study suggests that while a major ‘up-scaling’ will bring significant logisti-
cal, organizational and financial challenges, there are no inherent features
of U-Multirank that rule out the possibility of such future growth to a
worldwide level. The field-based ranking showed a substantial overlap of relevant indicators between the pilot study fields of business studies and engineering, and a number of field-specific indicators were also identified. This overlap suggests that an extension to further fields is feasible. For the development of field-specific indicators, stakeholder consultation with field organizations and experts proved successful. It is therefore realistic to expect that U-Multirank can be extended to more fields relatively easily.

Summary and conclusion


In summary, the pilot test demonstrated that in terms of the feasibility
of the dimensions and indicators, the potential institutional interest
in participating, and the operational feasibility of up-scaling, we have
succeeded in developing a U-Multirank ‘Version 1.0’ that is ready to
be implemented in European higher education and research and for
institutions outside Europe that are interested in participating. As has been
outlined above, further development work is needed on some dimensions
and indicators – hence Version 1.0. Furthermore, two achievements of the
U-Multirank development have to be stressed:

• U-Multirank includes some unique methodological aspects that have not


been implemented before in any kind of national or international ranking,
in particular: the link between mapping and ranking, and the innovative
bibliometric indicators analysing the co-publication behaviour in the
context of international, regional and knowledge transfer collaboration.

• A benefit that should not be underestimated is that with U-Multirank the chance to be ranked on an international scale is no longer limited to a small group of research-oriented institutions with 'global brands'. It also allows regionally focused institutions, bachelor degree-awarding colleges, polytechnics, art schools, music academies, specialized research centres and many other types of higher education and research organizations to appear in international rankings and to benchmark themselves at a supra-national level against peer institutions with related orientations.

Aside from these major aspects of the feasibility and virtues of U-Multirank, several other lessons for successful rankings can be derived from the pilot project:

• Stakeholder consultation is not just a practical issue; it has become a crucial


element of the ranking approach. A participative approach to ranking
with intensive stakeholder discussions emphasizes the principle of user
sovereignty and stimulates users’ reflections on the relative importance
of indicators and performances – of course without denying the
responsibility of ranking producers for the indicators and methodology.
Consultations should be a continuous element of ranking processes, not
only in the conceptualization phase.

• The important role of institutional data collection remains a challenge for elaborate rankings. Institutional data collection is inevitable if we want to move beyond traditional ranking approaches focused on bibliometric or reputation surveys. Multidimensional rankings that aim to take the variety of institutional missions and profiles into account cannot be realized without institutional and student surveys. These rankings therefore have to succeed in convincing higher education and research institutions to invest time and energy in data collection and reporting. This makes multidimensional rankings vulnerable: if institutions cannot see clear benefits from the ranking outcomes, they may not be inclined to get involved in data provision. Ranking producers always have to keep in mind the cost-benefit ratio for the ranked institutions, without losing methodological rigour. During the pilot study, we provided each institution with a comprehensive compilation of 'their' data and offered the possibility of looking for benchmarking partners within the sample. Such additional services enhance the institutional benefits of the ranking and proved to increase institutions' willingness to get involved in data provision.

• In order to stimulate acceptance of U-Multirank, its data gathering has
to be coordinated with other data-collection processes. In the pilot study
three different problems of coordination of data collection activities
were identified: in some cases there were overlaps with national data
collections taking place at the same time, in some countries similar
national data-collection activities already exist, and on the European
level other international data collection projects were taking place in
parallel. To avoid conflicts and overlaps and to create optimal conditions
for acceptability, these initiatives have to be coordinated. Efficient
planning of coordinated data-collection is needed. EU-level data-
gathering initiatives have to be combined and, as far as possible, data
collected in national rankings (and also from national statistics) should
be fed into the U-Multirank database.

• A combination of a user-driven flexible web tool and authoritative rankings is an attractive way to present U-Multirank results. If a ranking is based on the user's selection of institutions and indicators, the result is not the single 'one and only' performance list presented in existing worldwide rankings. In a user-driven approach, each user can produce his or her own 'personalized' ranking with a flexible web tool. In the context of U-Multirank, the release of a new ranking outcome will therefore not lead to the publication of a specific list, but to the integration of a data update into the ranking database, allowing a variety of users to produce a large number of their own personalized rankings. On the one hand, this is crucial for the 'democratization' of rankings; on the other hand, it may reduce public awareness of ranking results and may lead to a situation in which the simplistic picture of a (misleading) league table identifying the 'number one in the world' still prevails in public debate. To avoid this, U-Multirank should also offer so-called 'authoritative rankings', in which a specific selection of dimensions and indicators is pre-defined on the basis of the 'authority' of a certain organization, institution, association or network. Authoritative rankings can be produced and published by higher education membership organizations, specific associations of higher education institutions, national or international public authorities, representative organizations, independent foundations, media partners and so on. This will enhance the benefits and attractiveness of U-Multirank. Authoritative rankings still follow our basic argument that there is no objective ranking, and that this subjective character should be an explicit principle of each ranking.

• User-friendliness will be of major importance, especially for the web tool. A league table is easy to understand (or at least it pretends to be, because the complexities are concealed behind the composite indicator). A multidimensional ranking has to give an illustrative overview of a variety of indicators. Looking at Table 1 above, one could ask whether users are able to deal with this complexity. If the single indicators are shown in tables, the question of how to interpret them arises: for instance, not everyone would be able to interpret adequately what a 'field-normalized citation rate' says about the performance of a university (one common formulation is sketched below). User-friendliness of the presentation of information therefore becomes a major prerequisite for the adequate use of rankings. Both experienced and 'lay' users should be enabled to benefit from performance rankings. The presentation modes should include attractive graphical presentations and make use of symbols and colours to create clear and coherent impressions at first glance. A web application should provide clear guidance and explanation, and in particular address the needs of specific user groups. A differentiated information provision format should be an integrated part of the web tool.
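For readers unfamiliar with the term, one common formulation of a field-normalized citation rate – not necessarily the exact indicator definition used in U-Multirank, which is documented at www.u-multirank.eu – averages, over an institution's N publications, each publication's citations divided by the expected (world average) citation count for publications of the same field, year and document type:

```latex
% A common formulation of a field-normalized citation rate (illustrative,
% not necessarily the exact U-Multirank definition):
\mathrm{FNCR} \;=\; \frac{1}{N} \sum_{i=1}^{N} \frac{c_i}{\bar{c}_{f(i)}}
```

Here c_i is the number of citations received by publication i and \bar{c}_{f(i)} is the world average number of citations for publications of the same field, year and document type. A value of 1 then corresponds to citation impact at the world average for the institution's field mix, which is one way a web tool could explain the indicator to 'lay' users.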

Finally, a crucial condition for a successful international implementation of


U-Multirank will be its institutionalization. The ‘authority’ of the actor, the
organization of the ranking processes and the ‘ownership’ of the data are
sensitive issues in the world of ranking and should be approached carefully.
In our view, U-Multirank should be independently institutionalized, with
extensive advisory and communication structures for experts and stake-
holders. There should be no direct decision-making authority for politics,
governments and interest groups. Yet, a highly transparent governance
structure should be established which carefully guards the independent
character of the ranking outcomes. Funding could come from independent
foundations and from sponsoring public and private organizations, as well as
from the sales of standardized products and services (like data visualization,
benchmarking support processes, SWOT analyses). Interested parties could
be invited to create and publish their specific ‘authoritative rankings’.

This project has demonstrated the complexity of developing transparency


instruments in higher education and it is unrealistic to expect a perfect
new tool to be designed at the first attempt. But the results achieved with
U-Multirank are encouraging and make it possible to continue with its
development. Furthermore, in the long run U-Multirank needs to remain
a dynamic instrument that responds to new developments in higher edu-
cation, the changing interests of users and new possibilities offered by
improved data collection methods.


Appendix 1. U-Multirank dimensions and indicators

The definitions of the indicators can be found online at: www.u-multirank.eu

Teaching and learning

Focused institutional ranking:
• Expenditure on teaching
• Graduation rate
• Interdisciplinarity of programmes
• Relative rate of graduate (un)employment
• Time to degree

Field-based ranking:
• Student–staff ratio
• Graduation rate
• Percentage graduating within norm period
• Qualification of academic staff
• Relative rate of graduate (un)employment
• Interdisciplinarity of programmes
• Gender balance
• Inclusion of work experience

Field-based ranking: student satisfaction indicators:
• Student satisfaction: overall judgment
• Student satisfaction: evaluation of teaching
• Student satisfaction: inclusion of work experience
• Student satisfaction: organization of programme
• Student satisfaction: libraries
• Student satisfaction: laboratories
• Student satisfaction: quality of courses
• Student satisfaction: social climate
• Student satisfaction: support by teachers
• Student satisfaction: computer facilities

Research

Focused institutional ranking:
• Percentage of expenditure on research
• Percentage of research income from competitive sources
• Total publication output
• Post-docs per FTE academic staff
• Interdisciplinary research activities
• Field-normalized citation rate
• Highly cited research publications
• Art-related outputs per FTE academic staff

Field-based ranking:
• Highly cited research publications
• Field-normalized citation rate
• External research income
• Total publication output
• Doctorate productivity
• Student satisfaction: research orientation of programme
• Post-docs per PhD completed

Knowledge transfer

Focused institutional ranking:
• Incentives for knowledge transfer
• Percentage of income from third-party funding
• University-industry joint research publications
• Patents awarded
• Technology transfer office staff per FTE academic staff
• CPD courses offered per FTE academic staff
• Co-patenting
• Start-ups per FTE academic staff

Field-based ranking:
• Academic staff with work experience outside higher education
• Joint research contracts with private sector
• University-industry joint research publications

International orientation

Focused institutional ranking:
• Percentage of programmes in foreign language
• Percentage of international academic staff
• International doctorate graduation rate
• International joint research publications
• Percentage of students in international joint degrees
• Percentage foreign degree-seeking students
• Percentage students coming in on exchanges
• Percentage students sent out on exchanges

Field-based ranking:
• Incoming and outgoing students
• International orientation of programmes
• International academic staff
• International research grants
• International joint research publications
• Percentage of international students
• International doctorate graduation rate
• Student satisfaction: opportunities to study abroad

Regional engagement

Focused institutional ranking:
• Percentage of graduates working in the region
• Percentage of income from regional sources
• Regional joint research publications
• Research contracts with regional partners
• Percentage of students in internships in local enterprises

Field-based ranking:
• Degree theses in cooperation with regional enterprises
• Graduates working in the region
• Student internships in local enterprises
• Regional joint research publications

Chapter 15

Towards an international assessment of higher education learning outcomes: the OECD-led AHELO initiative

Richard Yelland and Rodrigo Castañeda Valle
Introduction
The lasting value of education for individuals and societies has been amply
demonstrated. Education gives people the social capital and knowledge
to invest in their own futures and to improve their well-being. Education
offers governments a way to invest in human capital and participate in
the knowledge economy. And it has proven positive social effects. For
these reasons we have seen fifty years of growth and change in higher
education as governments have set out to develop their higher education
systems and provide opportunities to more of their citizens. As demand
has grown, the ways of meeting it have diversified. Different types of
institutions work in different ways, meeting the needs of an increasingly
diverse student body. Technology has brought new opportunities for
online and blended learning. Students now have the opportunity to study
in institutions outside their country of residence and to combine mod-
ules from more than one institution. Growth and diversification have
brought with them a growing focus on the outcomes of higher education,
on the returns that both societies and individuals can expect to see from
the increasing investments they make in higher education. Students seek
better and more transparent ways of assessing the quality of providers
in a complex higher education market. Governments want to maximize
the impact of spending in higher education as they seek to balance the
demands of competing sectors. Institutions want to understand how to
improve their teaching and learning to improve their services and their
reputations.

Yet, there is no effective international comparison of higher education


institutions (HEIs) that takes account of higher education’s mission and
circumstances. The international rankings that exist are primarily based
on research output and/or academic reputation. The assessment of
higher education learning outcomes offers the prospect of reliable inter-
national comparisons of what students know and can do at the end of a
bachelor degree, and of greatly improved understanding of what works
in higher education. The OECD is developing an assessment that aims to
address this problem. The feasibility study for the Assessment of Higher
Education Learning Outcomes (AHELO) aims to assess students across
several languages and cultural contexts.

This chapter briefly sets out the key trends in higher education that have
led OECD countries, and others, to work together to develop AHELO, and
describes progress to date with the work.

Context: key trends in HE worldwide

Social value and key trends in HE

Education is a key factor for economic development and social well-being.


Likewise, it is a powerful means to enhance individuals’ life chances. For
example, a person holding a university degree can expect to earn at least 50 per cent more than someone who has no qualification (OECD, 2011: 138, indicator A8). These two effects – individual and social – are interdependent. Social externalities such as labour market earnings, economic growth, wider consumer markets or cultural capital (as in more widely literate societies) are related to employability, income and individual literacy. According to recent studies on the social outcomes of learning (OECD, 2007), education helps to consolidate the formation of people's identities as citizens and members of a community, and enhances their participation in civic activities, supporting more democratic and tolerant systems. In this sense, education improves social cohesion and civil society (OECD, 2007: 17).

For these reasons, in recent years policy-makers and governments around


the world have encouraged investment in the development of education
systems that offer a majority of students access to a tertiary qualification.
There has been an increase in the number of students and providers for
primary and secondary education, and in the OECD countries it is expected
that an average of 82 per cent of today’s young adults will complete second-
ary education over their lifetimes (OECD, 2011: 44 indicator A2). Further, most
students leaving secondary education have ambitions and expectations to
participate in high-quality higher education.

Massification

The rise in investment in national education systems has generally increased the number of students enrolling, or expected to enrol, in higher education (HE) (see Appendix, Chart 1). According to UNESCO,
about 125  million students worldwide will have to be accommodated in
higher education by 2025 (UNESCO, 2011). Indeed, the majority of students
that graduate from secondary education in OECD countries do so in pro-
grammes aimed at providing access to tertiary education (OECD, 2011: 47,
indicator A2). In addition, between 1995 and 2009, entry rates for tertiary

education in OECD countries increased on average by 25 percentage points,
and today it is expected that about 78 per cent of young adults will join a
tertiary programme (OECD, 2011:  308, indicator  C2). Most students enter
tertiary education immediately after graduating from secondary educa-
tion. As a result, in some countries, 80 per cent of students entering HE
are aged 25 years or younger (OECD, 2011: 310 indicator C2). Moreover, it is
expected that more than one-third of today’s young adults in OECD coun-
tries will finish some type of tertiary education over their lifetime as shown
in chart 2 (OECD, 2011: indicator A3). Thirty-eight per cent of young adults
graduated from some type of HEI in 2009 (OECD, 2011: 62, indicator A3). In
addition, as shown in Chart 3, between 1995 and 2009 graduation rates for HE increased in some countries by up to 8 per cent annually (OECD, 2011: indicator A3). Furthermore, about 13 per cent of adults in OECD countries proceed to obtain a second HE degree, such as a Master's degree (OECD, 2011: 63, indicator A3).

Internationalization

In 2009, about 3.7 million students were enrolled in HEIs outside their coun-
try of origin (OECD, 2011: 318, indicator C3). The countries that attract most
students in the world are the G20 countries, and among them, HEIs within
OECD countries are seen as the most appealing, attracting about 77 per cent
of all foreign students enrolled in tertiary programmes outside their coun-
tries of citizenship (OECD, 2011: 318, indicator C3). For these reasons, inter-
national students represent an increasingly important fraction of the entry
rates in HE programmes worldwide, as shown in Chart 4. The number of students crossing borders to study has been steadily increasing entry rates in many HE systems; in Australia alone, international students raise entry rates by approximately 29 per cent (OECD, 2011: 312, indicator C2).

These tendencies towards massification and globalization favour coun-


tries whose language of instruction is widely spoken. In particular, there is
a significant increase in English as the language of instruction in tertiary
education programmes. However, the international market is unequally
distributed among countries (see chart 5). Language of instruction is not the
only factor: international students are also influenced by the openness of the
education system, and the prospect of obtaining employment after obtain-
ing a degree in a foreign country. According to OECD’s Education at a Glance
(EAG), approximately 25 per cent of international students who do not renew

their student permits or visas change their legal status for reasons related to
work opportunities (OECD, 2011: 319, indicator C3).

A need for information

Motivated by their interest in investing in their individual human capital,


students tend to select HEIs in which to conduct their studies based, at
least in part, on the perceived quality of the programme or institution. For
this reason, among others, rankings and league tables have been gaining
ground as a means to support such important decisions. As Education at a
Glance puts it:

International students increasingly select their study destination based


on the quality of education offered, as perceived from a wide array of
information on and rankings of higher education programmes now
available, both in print and on line. For instance, the high proportion
of top-ranked higher education institutions in the principal destina-
tion countries and the emergence in rankings of institutions based in
fast-growing student destinations draws attention to the increasing
importance of the perception of quality even if a correlation between
patterns of student mobility and quality judgements on individual
institutions is hard to establish. In this context, institutions of higher
education are more willing to raise their standards in the quality of
teaching, adapt to more diverse student populations, and are more
sensitive to external perceptions. (OECD, 2011: 323)

Thus, the social and individual value of education has led to growth in higher education. This growth has become politically and strategically important for governments.

These were among the reasons that led the OECD Education Ministers
meeting in Athens in June 2006 to devote their agenda to higher education
(OECD, 2006). At this meeting, governments highlighted their satisfaction
with the growth of higher education, and its increasing diversity, but
expressed concern about quality and accountability. Precisely because of growing awareness of the capacity of HE to generate powerful individual and social capital, ministers were keen to discuss better policies for improving individuals' access to HE and its proper and feasible funding. The need to provide students, families and employers, as well as the institutions themselves, with better information on the quality of HE was clear (OECD, 2006: 84).

What do we know about the assessment
of quality in higher education?
What kinds of proxies are available?

Despite widespread public and government interest in developing more sophisticated mechanisms for quality assurance in HE, there is a gap between quality assessment and public opinion. There seems to be a feeling that HE quality assurance has not been transparent enough. The lack of authoritative,
publicly available assessments of the quality of teaching and learning in higher
education has left a space for the development of an industry that has been
gaining rapid ground: national and international university rankings. Rankings
are very frequently used nowadays as proxies for – or evidence of – the quality
of teaching and learning in HEIs across the globe.

Among the whole range of assessments, perhaps the most prominent are the
following (see Rauhvargers, 2011):

1. Shanghai Jiao Tong Academic Ranking of World Universities (ARWU):


This ranking has been developed and carried out since 2003 by the
Shanghai Jiao Tong University, one of the most prestigious universi-
ties in China. Their assessment is academic in scope and may be con-
sidered less biased by commercial interests. It purports to measure,
among other inputs, the quality of the faculty as well as the quality of
the research in any given HEI.

2. Times Higher Education World University Rankings: Developed by the


United Kingdom’s leading publication on higher education, this is a
league table of the world’s best 200 universities published since 2004
(initially in partnership with QS and since 2010, in partnership with
Thomson Reuters of the Reuters group). It purports to assess teach-
ing, research and reputation of universities all over the world.

3. Quacquarelli Symonds (QS) World University Rankings: This ranking has been developed by QS since 2010. It purports to assess a series of variables ranging from teaching and research quality to other inputs such as library provision, capacity to innovate and community involvement.

4. U-Multirank: This ranking has been developed by the European Commission in order to highlight the diversity of European higher education and to enhance transparency about the performance of HEIs across Europe. It recognizes the particular importance of teaching and learning in HE, but still lacks proxies for their assessment. For U-Multirank, universities are research centres, but they are also teaching centres whose quality must be assured. Its approach is multidimensional and its first results are expected to be available in 2013.

Despite these different approaches, given the complex and varied contexts in
which higher education is provided, with widely different cultures, languages,
population sizes, economic development and education systems, what reli-
ance can be placed on international rankings as a source of information on
the quality of teaching? The answer is very little, as these rankings are based
largely on research output. That being the case, how can we assess the qual-
ity of teaching in higher education taking account of different approaches to
teaching and learning in different contexts?

Given the important investment made by governments and stakeholders


in HE, and by the individuals interested in investing their social capital,
the development of tools that evaluate and assure the quality of HE is
essential for the education market to develop and support HEIs’ effective-
ness and standards worldwide. Moreover, accreditation and quality assur-
ance of higher education must demonstrate the capacity to deal with the
phenomena of massification, globalization and internationalization. This is
difficult to achieve in a context in which the perceptions of the quality and
value of higher education outcomes are heavily influenced by international
rankings, which are, at best, unreliable devices by which to measure the
quality of teaching.

The relevant question is whether it is possible to develop an assessment


methodology that will meet the needs of the investing governments and
partners, and the demands of the general public.

Is it possible to develop instruments to measure learning outcomes


across borders?

The OECD PISA assessment and others have demonstrated that it is pos-
sible to perform reliable and internationally comparable evaluations of what
school students have learned and can do. Perhaps more importantly, PISA
has demonstrated the practical possibility and usefulness of international
comparisons between education systems to inform policy development
and teaching practice. PISA assesses 15-year-olds. The question of whether

something comparable can be done for students in higher education remains
open. The challenge is both epistemological and practical. We know what we
are aiming to do, but we lack the data to assess it. Those data have to be
generated. To do that requires the development of an agreed assessment
instrument, and that in turn requires agreement on which outcomes are to be assessed and how.

Around the world, there are many different institutions, different disciplines,
different languages and different approaches to teaching and learning. All
these contextual and process factors complicate the task of international
comparison. Therefore, we must aim to evaluate students' capacity to apply their knowledge in real-life situations (e.g. what sorts of skills have been acquired and how those skills are used). To add value, and to avoid the danger of stifling diversity, such assessments need to go beyond testing knowledge. They must test students' capacity to reason in complex and applied ways, and to use skills and competencies effectively in different settings. The assessments need to be sophisticated and to align with the forms of thinking and professional work in which most graduates will engage. They need to employ a wide range of methods, provide a more balanced view of higher education quality, and tap into capabilities that both educators and professionals recognize as important for educational success (Coates and Richardson, 2011: 5).

The aim is to develop an assessment that provides evidence of learning


outcomes in higher education across borders through a series of tests that
are applicable to students in different countries and cultures. This assess-
ment should be developed on the premise that objective information on
learning outcomes can contribute to HEIs’ understanding of their teaching
performance, and thereby provide a tool to develop and improve learning
and teaching methods.

The AHELO Feasibility Study

AHELO

The OECD’s Assessment of Higher Education Learning Outcomes (AHELO)


has been developed in response to the issues identified above to provide
good evidence of learning outcomes in higher education worldwide. AHELO
aims to provide HE leaders with in-depth information and tools to promote positive change and better learning. Currently, AHELO
is in the feasibility study stage: work is progressing on exploring how an
assessment of teaching and learning standards can be internationally valid.
This entails evaluating the scientific feasibility of carrying out an interna-
tional assessment of higher education learning outcomes (in generic and
subject-specific skills) at the end of a Bachelor’s degree programme, as well
as estimating the feasibility of its practical implementation.

In addition, this study is designed to determine the validity of the AHELO


assessment tools cross-culturally and in many different languages.
Accordingly, instruments have been developed to assess three specific
strands at the end of first-cycle (Bachelor) degrees:

• generic skills (critical thinking, analytic reasoning, problem-solving and written communication);
• economics;
• engineering.

The aim is not to assess the student’s knowledge of the specific curricular
content of these strands. Rather, AHELO aims to test what is above content,
meaning the student’s ability to use specific concepts of the discipline and
their own analytical thinking to solve real-life problems. Therefore, instead
of weighting a student’s curricular knowledge gain, AHELO measures the
student’s capacity to master ‘the language’ of the discipline. AHELO aims to
demonstrate that it is possible to devise a set of test instruments applicable
across a range of different institutions, cultures and language, and that the
practical implementation of these tests is feasible.

The content to be measured within each strand has been defined by


multinational groups of experts in each field, practitioners and technical
specialists. The assessment instruments for economics and engineering are
based on points of common understanding among independent academic
institutions and specialist definitions of conceptual frameworks of expected/
desired learning outcomes within these specific disciplines. In other words,
they are based on general consensus among relevant parties within a specific
discipline, of the desired performance of students upon graduation.

AHELO does not seek homogeneity or uniformity in degree programmes, but


rather basic agreement on expected learning outcomes or competences within
academic fields. Furthermore, recognizing that higher education is context-
dependent, AHELO contains a set of contextual assessment tools to weigh the
particularities of each specific cultural context and institutional setting. These

tools are intended to provide analytical depth for the student assessments.
Ten-minute questionnaires will be administered to students, faculty members
and institutional leaders to assess the characteristics of each learning context.

• There are now sixteen countries participating in the AHELO feasibility


study including twelve OECD member countries and four non-members.
They are involved in twenty-three strand replications. In addition,
another thirteen OECD member countries have been indirectly involved
either through financial support to the feasibility study or through their
participation in the development of the AHELO concept.

• The feasibility study is being conducted in: Australia, Belgium (Flemish


Community), Canada (Ontario), Colombia, Egypt, Finland, Italy, Japan,
Korea, Kuwait, Mexico, the Netherlands, Norway, the Russian Federation,
the Slovak Republic and the United States (Connecticut, Missouri and
Pennsylvania).

To date, the assessment framework and instruments have been developed.


The early findings are promising regarding both the economics and engi-
neering strands.

The work on AHELO unfolds in two phases (Figure 1):


Figure 1. The two phases of the AHELO feasibility study.

Phase 1 – Initial proof of concept: assessment frameworks for generic skills, economics and engineering, followed by instrument development and small-scale validations (Generic Skills, Economics and Engineering instruments).

Phase 2 – Scientific feasibility and proof of practicality: contextual dimension surveys and implementation (project management, survey operations and analyses of results).

• The first phase from August 2010 to June 2011 focused on providing
an initial proof of the concept. In this phase, the goal was to develop
provisional assessment frameworks and testing instruments suitable
for an international context for each of the disciplinary strands of work:
economics and engineering; to adapt an existing instrument for the
generic skills strand; and to validate these tools through small-scale testing
(cognitive labs and think aloud interviews) in participating countries. The
goal was to get a sense of cross-linguistic and cross-cultural validity. The
focus has been on the feasibility of devising assessment frameworks and
instruments that have sufficient validity in various national, linguistic,
cultural and institutional contexts.

• In a second phase from March 2011 to December 2012, the goal is to


evaluate the scientific and practical feasibility of an AHELO by focusing
on the practical aspects of assessing student-learning outcomes. During
this phase, the implementation of assessment instruments and contextual
surveys in small groups of diverse HEIs will explore the best ways to
engage, involve and motivate leaders, faculty and students to take part
in the testing. It will also examine the relationships between context and
learning outcomes, and the factors leading to enhanced outcomes. This
second phase will address issues of practical feasibility, further investigate
validity issues and assess data reliability.

• AHELO has made progress and demonstrated its applicability in several


significant areas:

–– The Engineering Assessment Framework achieved broad consensus within the development team, which comprised experts from Australia, Japan and several European countries. There has also been positive
feedback from stakeholders consulted throughout the development
including engineering societies and associations of professional
engineers. Indeed, initial validation of the AHELO Engineering
Assessment shows that there is strong potential for the Engineering
Assessment Framework to be implemented well and provide valid,
reliable and efficient measurement of target constructs.

–– The Economics Assessment Framework, which defines the domain to


be tested and specifies the expected learning outcomes for students in
the target population, has undergone international validation. So far,
the endorsement of the assessment by domain experts and national
managers in a number of countries suggests that it is possible to
develop assessments meeting international standards in this domain.

Furthermore, the AHELO Economics Assessment has shown up to
now, like the Engineering Assessment, that there is strong potential
for the Economics Assessment Framework to be implemented well.

–– The development of a Contextual Dimension Framework was


undertaken through research and consultation, and by seeking the
expert opinion of groups and individuals from around the world. Its
validation reflects an international consensus on the important contexts
shaping higher education learning outcomes. Widespread consultation
based on the AHELO Contextual Dimension instrumentation suggests that the Student, Faculty and Institution Context Instruments can be implemented well and can provide valid, reliable and efficient measurement of the target constructs.

The potential of an AHELO

Institutional improvement and fairer education systems

AHELO is a programme intended to assess the feasibility of providing sys-


tematic information, which will assist all concerned to make better-informed
judgments about the quality of higher education and how it can be improved.
We have argued here that no international ranking evaluates teaching and
learning in HE effectively. The narrow range of criteria used in university
rankings creates a distorted vision of educational success and fails to capture
the essential elements of an education: teaching and learning. AHELO will
broaden the scope of evaluation to include teaching and learning – aspects
that are fundamental to every higher education institution.

AHELO has the potential to become a powerful ally for quality assurance of
higher education through the assessment of learning outcomes throughout
the world. It is an innovative endeavour intending to assist the improvement
of teaching and learning in HE and its internationalization. Further, it intends to make the performance of HEIs, and their added value to individual and social capital, more transparent.

Better connect higher education and society

• If the feasibility study is successful, a full-fledged AHELO could provide


valuable analysis at many levels. It will complement quality assurance
in helping institutions to understand their strengths, address their
weaknesses and plan their future agendas. It will help faculty understand
what works in their methods and how to improve them. It will assist policy-
makers to understand the effectiveness of systems and to make decisions
about structure and resources accordingly. Employers may have more
reliable information on the capacities and capabilities of job candidates
and will no longer have to rely on institutional selectivity as a guide.

• Importantly, it will help fill the information void and enable students to
make more informed choices about their futures. Far too many students
drop out of higher education programmes before completing them: this
can be not only a personal setback, but also a waste of valuable resources.
If AHELO can help students better match programmes to their expectations
and aspirations it may offer higher education stakeholders the possibility
of determining general standards for HEIs across national borders.

Conclusion
Higher education brings significant individual and social benefits.
Massification, globalization and internationalization are challenging higher
education systems. New approaches to the assessment of quality and perfor-
mance are demanded both by governments and individuals. The OECD has
taken the initiative in developing an international assessment of learning
outcomes in higher education. Experience to date is encouraging but there
is still some way to go before definitive results will be known.

We are hopeful that the successful development of a reliable cross-cultural


analysis will be a building block in the improvement of quality, equity and
effectiveness in higher education.

References

Coates, H. and Richardson, S. 2011. An international assessment of bachelor degree graduates' learning outcomes. Higher Education Management and Policy, 23(3). Paris: OECD.

Rauhvargers, A. 2011. Global University Rankings and their Impact: EUA Report on Rankings 2011. Brussels: European University Association.

OECD (Organisation for Economic Co-operation and Development). 2006. Education Policy Analysis: Focus on Higher Education. Paris: OECD.

OECD. 2007. Understanding the Social Outcomes of Learning. Paris: OECD.

OECD. 2011. Education at a Glance. Paris: OECD.

UNESCO. 2011. UNESCO Global Forum on Rankings and Accountability in Higher Education: Uses and Misuses, 16–17 May 2011, Paris, UNESCO.

Appendix
Source: OECD, Education at a glance, 2011, Indicator A2, Chart A2.1, Page 44

Chart 1
Upper secondary graduation rates ( 2009)
Chapter 15. Towards an international assessment of higher education learning outcomes:

of which < 25 of which ≥ 25 Total


100
90
80
70
60
50
40
30
20
10
0
the OECD-led AHELO initiative

ea
ia

l
d

ay

nd

er d

nd

ic

ic

es

ge

ile

ico
l

ly

ey
ae
ga

nd

in
ur
pa

ai

e
an

an

an

ag
ar
ar
en

bl

bl
do

da
I ta

at
rw

Ch
ra

d
Ko

rk
la

la
Is r

ex
Sp

Ch
tu

bo
nm
ng

pu

pu
la
nl

al

e
Ja

rm

er
ov

St
na
ng

I re

Ice

Po

ve

Tu
No

Sw

M
r

ze
Fi

av

m
Hu

Re

Re
Po

Sl

d
Ge

Ca
De

0a
Ki

itz

i te

xe
w

CD
h

ak
d

G2
Ne

Sw

Lu
ec

Un
i te

ov
OE
Cz
Un

Sl
1. Year of reference 2008.
Countries are ranked in descending order of the upper secondary graduation rated in 2009.
Source: OECD. China: UNESCO Institute for Statistics (World Education Indicators Programme)Table A2.1. See annex 3 for notes (www.oecd.org/edu/eag2011).
295
296

Tertiary-type A [Bachelor’s degree] graduation rates in 2009, by gender (first-time graduates)

Chart 2
Source: OECD, Education at a glance, 2011, Indicator A3, Chart A3.1, Page 60
Rankings and Accountability in Higher Education: Uses and Misuses

1. Year of reference 2008.


Countries are ranked in descending order of women’s graduation rates for tertiary-type A education in 2009.
Source: OECD. Table A3.1. See annex 3 for notes (www.oecd.org/edu/eag2011).
First-time graduation rates for tertiary-type A [Bachelor’s] and B [Vocational-Technical] programmes (1995 and 2009)
Source: OECD, Education at a glance, 2011, Indicator A3, Chart A3.2, Page 63

Chart 3
Chapter 15. Towards an international assessment of higher education learning outcomes:
the OECD-led AHELO initiative

1. Year of reference 2000 instead of 1995


2. Year of reference 2008 instead of 2009
3. Break in the series between 2008 and 2009 due to a partial reallocation of vocational programmes into ISCED 2 and ISCED 5B.
Countries are ranked in descending order of the first-time graduation rates for tertiary-type A education in 2009.
Source: OECD. Table A3.2. See annex 3 for notes (www.oecd.org/edu/eag2011).
297
298

Entry rates into tertiary-type A [Bachelor’s] education: Impact of international students (2009)
Source: OECD, Education at a glance, 2011, Indicator C2, Chart C2.3, Page 312

Chart 4
Rankings and Accountability in Higher Education: Uses and Misuses

Adjusted (excluding international students) International students

1. The entry rates at tertiary-type A level include entry rates at tertiary-type B level
2. Year of reference 2008
Countries are ranked in descending order of adjusted entry rates for tertiary-type A education in 2009
Trends in international education market shares (2000, 2009)
Source: OECD, Education at a glance, 2011, Indicator C3, Chart C3.3, Page 322

Chart 5
Chapter 15. Towards an international assessment of higher education learning outcomes:

Percentage of all foreign tertiary students enrolled, by destination


the OECD-led AHELO initiative

1. Data relate to international students defined on the basis of thir country residence
2. Year of reference 2008
Countries are ranked in descending order of 2009 market shares.
Source: OECD and UNESCO Institute for Statistics for most data on non-OECD countries. Table C3.6, available online. See annex 3 for notes (www.oecd.org/edu/eag2011)
Notes on contributors
Phil Baty is Editor of the Times Higher Education Rankings, and Editor-at-
Large of Times Higher Education magazine. Baty has been with the magazine
since 1996, as reporter, news editor and deputy editor. He was named among
the top 15 ‘most influential in education’ in 2012 by The Australian newspaper
and received the Ted Wragg Award for Sustained Contribution to Education
Journalism in 2011, part of the Education Journalist of the Year Awards, run
by the Chartered Institute of Public Relations. In 2011, Times Higher Education
was named Weekly Magazine of the Year and Media Brand of the Year
(business category) by the Professional Publishers’ Association.

Rodrigo Castañeda Valle is an anthropologist and a linguist working on the
evaluation of education policies, with specific emphasis on post-secondary
education. He is a research associate at the National Autonomous University
of Mexico and was previously a consultant in the Directorate for Education at the
OECD in Paris. He is currently a PhD Candidate in Linguistics at Lancaster
University in the United Kingdom.

John Daniel spent seventeen years as a university president in Canada
(Laurentian University) and the United Kingdom (Open University) before
joining UNESCO as Assistant Director-General for Education in 2001 and
serving as President of the Commonwealth of Learning from 2004 to 2012.
He has been closely involved in the development of open and distance
learning for forty years. Best known among his 300-plus publications are his
books Mega-Universities and Knowledge Media: Technology Strategies for Higher
Education and Mega-Schools, Technology and Teachers: Achieving Education for
All. Knighted by Queen Elizabeth in 1994, he has received thirty-one honorary
doctorates from universities in seventeen countries.

Kevin Downing is Director of Knowledge, Enterprise and Analysis at City
University of Hong Kong. He is a Chartered Psychologist and Chartered
Scientist with a current Licence to Practice, and Associate Fellow of the
British Psychological Society with wide international experience including
senior academic and administrative posts in Europe and Asia. His published
work centres on education management, rankings and metacognitive
development. He was awarded the City University of Hong Kong Teaching
Excellence Award in 2004/2005 for his contribution to the development of
blended learning with the innovative use of technology, and is the recipient
of the prestigious International Award for Innovative use of Technology in
Teaching and Learning.

Judith S. Eaton is President of the Council for Higher Education Accreditation
(CHEA), the largest institutional higher education membership organization
in the United States. A national advocate and institutional voice for self-
regulation of academic quality through accreditation, CHEA is an association
of 3,000 degree-granting colleges and universities. Prior to her work at CHEA,
she served as chancellor of the Minnesota State Colleges and Universities,
where she was responsible for leadership and coordination of thirty-two
institutions serving more than 162,000 students statewide. Previously, she
was president of the Council for Aid to Education, Community College of
Philadelphia and the Community College of Southern Nevada, and served as
vice president of the American Council on Education. She also has held full
and part-time teaching positions at Columbia University, the University of
Michigan and Wayne State University.

Sharifah Hapsah is Vice-Chancellor of the Universiti Kebangsaan Malaysia
(UKM) (National University of Malaysia). She has previously served as Chair
and CEO of the National Accreditation Board, during which time she for-
mulated the legal provisions for the Malaysian Qualifications Agency (MQA)
Act, responsible for the quality framework of higher education in Malaysia.
She is also credited with developing the code of university good governance.
Prior to that, she served as Director of the Quality Assurance Division within
the Ministry of Higher Education, and was Professor and Director of the
Centre for Academic Advancement at UKM and Professor and Head of the
Department of Medical Education at the same university.

Ellen Hazelkorn is Vice President of Research and Enterprise, and Dean
of the Graduate Research School, Dublin Institute of Technology, Ireland;
she also leads the Higher Education Policy Research Unit. She is a member
of the Higher Education Authority (Ireland), and is/has been a member of
international review teams for Australia, Finland, Germany, the Netherlands,
Poland and Spain, and a consultant to the OECD. She works closely with the
International Association of Universities (IAU) and is a Visiting Professor at
the University of Liverpool. She is a member of numerous editorial boards,
and has authored/co-authored over sixty peer-reviewed articles, policy
briefs, books and book chapters. She writes a regular blog for the Chronicle of
Higher Education. Her book, Rankings and the Reshaping of Higher Education:
The Battle for World-Class Excellence, was published by Palgrave Macmillan
(2011).

Nian Cai Liu completed his undergraduate studies in chemistry at Lanzhou University
in China. He obtained his doctoral degree in polymer science and engineer-
ing from Queen’s University at Kingston, Canada. He is currently the Director
of the Center for World-Class Universities and the Dean of Graduate School
of Education at Shanghai Jiao Tong University. His research interests include
world-class universities, university evaluation and science policy. The
Academic Ranking of World Universities, an online publication of his group,
has attracted attention from all over the world. His latest book is Paths to a
World-Class University: Lessons from Practices and Experiences.

Marion Lloyd is chief project coordinator at the General Directorate for
Institutional Evaluation (DGEI) at the Universidad Nacional Autónoma de
México (UNAM). She holds a BA in English and Romance Languages from
Harvard University (1994) and is completing a Master’s degree in Latin
American Studies at UNAM. Before joining the DGEI in 2011, she spent fifteen
years as a foreign correspondent in South Asia and Latin America for The
Boston Globe, Houston Chronicle and The Chronicle of Higher Education. She
currently writes a monthly column on international higher education for the
Mexican newspaper Milenio.

Mmantsetsa Marope is the former Director of the Division for Basic to Higher
Education and Learning at UNESCO Headquarters in Paris. She has over
thirty years of experience in education, including ten years at the World
Bank and eleven years as a university professor, as well as extensive advisory
and consultancy services to governments, regional consortia of Ministries
of Education, donors, and bilateral and multilateral organizations, together
with experience managing international research capacity development networks
and working with civil services on school management and school teaching.
She holds a Ph.D. in Education
from the University of Chicago, an M.Ed. from Pennsylvania State University,
and a BA from the University of Botswana and Swaziland.

Peter A. Okebukola is a Professor of Science Education and President of the
Global University Network for Innovation (GUNi)-Africa. He was the winner of the 1993 UNESCO
Kalinga Prize for the Popularization of Science. He is the immediate past
Executive Secretary of the National Universities Commission. Since 2001, he
has been actively involved in ranking universities and promoting the culture
of quality in higher education systems. He has co-directed the series of Africa
regional conferences on quality assurance in higher education. He is the
Chairman of Council of Crawford University, Nigeria and the recipient of the
national honour of the Officer of the Order of the Federal Republic.

Imanol Ordorika is Professor of Social Sciences and Education at the
Universidad Nacional Autónoma de México (UNAM). Currently, he is General
Director for Institutional Evaluation at UNAM and creator of the online
Comparative Study of Mexican Universities (ECUM). He is the author of
several books, including Power and Politics in Higher Education (Routledge,
2003), and co-editor of the ASHE reader Comparative Education (Pearson,
2010) and Universities and the Public Sphere (Routledge, 2011). Other recent
publications include ‘The Chameleon’s Agenda: Entrepreneuralization of
Private Higher Education in Mexico’ in Universities and the Public Sphere
(Routledge, 2011). He holds a Ph.D. from Stanford.

Jamil Salmi is a global tertiary education expert providing policy advice
and consulting services to governments, universities, multilateral banks
and bilateral agencies. Until January 2012, he was the World Bank’s ter-
tiary education coordinator. He wrote the first World Bank policy paper on
higher education in 1994 and was the principal author of the Bank’s 2002
Tertiary Education Strategy entitled Constructing Knowledge Societies: New
Challenges for Tertiary Education. Over the past twenty years, he has provided
policy advice on tertiary education reform and strategic planning to govern-
ments and university leaders in more than eighty countries. His 2009 book
addressed The Challenge of Establishing World-Class Universities. His latest
book, co-edited with Professor Phil Altbach, entitled The Road to Academic
Excellence: the Making of World-Class Research Universities, was published in
September 2011.

Peter Scott is Professor of Higher Education Studies at the Institute of
Education, University of London, Chair of the Council of the University of
Gloucestershire and a member of the board of the Magna Charta Observatory
in Bologna. Sir Peter was Vice-Chancellor of Kingston University London from
1998 to 2010 and President of the Academic Cooperation Association (ACA) in
Brussels from 2000 to 2008. He has written about the internationalization
of higher education, the development of mass higher education systems and
the governance of universities.

Ben Sowter leads the QS Intelligence Unit, which is fully responsible for the
operational management of all major QS research projects including the QS
Top MBA Applicant and Recruiter Research, the QS ‘World University Rankings’
and the QS Asian University Rankings. He graduated from the University of
Nottingham with a BSc. in Computer Science. Upon graduation he spent two
years working for the UK national office of the international student charity
AIESEC, for which he was ultimately elected National President.

Qian Tang joined UNESCO as Senior Programme Specialist, Section for
Technical and Vocational Education, in 1993 and became Chief of the Section
in 1996. In this position, he organized the Second International Congress
on Technical and Vocational Education (Seoul, 1999). From 2001 to 2005, he
was Director of the Executive Office for the Education Sector. In 2005, he
became Deputy Assistant Director-General for Education and was appointed
Assistant Director-General for Education in April 2010. Since then, he has
led the efforts to revitalize the Education Sector in order to raise the vis-
ibility of education on the international development agenda and provide
concrete assistance to UNESCO’s Member States. He has a BA in Education
from Shanxi University, China and a Ph.D. in Biology from the University of
Windsor, Canada.

Frans van Vught is a high-level expert and advisor at the European
Commission (EC). Furthermore, he is President of the European Center for
Strategic Management of Universities (Esmu), President of the Netherlands
House for Education and Research (Nether), and member of the board of
the European Institute of Technology Foundation (EITF), all in Brussels.
He was President and Rector of the University of Twente, the Netherlands
(1997–2005), and a member of the national Innovation Platform, the
Socio-Economic Council and the Education Council of the same country.
His many international functions include memberships of the University
Grants Committee of Hong Kong, and the board of the European University
Association (EUA) (2005–2009).

Peter J. Wells is a former Programme Specialist with UNESCO’s Division for
Teacher Development and Higher Education. His main areas of concentra-
tion include the mobility of learning and learners, the quality assurance of
cross-border provision and the internationalization of higher education.
He is also coordinator of UNESCO’s five regional recognition Conventions
and is Secretary to the Europe and North America ‘Lisbon Recognition
Convention’. Prior to moving to UNESCO Headquarters, Mr Wells was a
Programme Specialist and Director a.i. of the UNESCO European Centre for
Higher Education (CEPES) in Bucharest.

Richard Yelland is Head of the Policy Advice and Implementation Division
(PAI) in the Directorate for Education at the OECD (Organisation for Economic
Co-operation and Development). OECD’s motto is ‘Better Policies for Better
Lives’ and the Directorate’s mission is to assist in achieving high-quality

Notes on contributors 305


lifelong learning for all that contributes to personal development, sustainable
economic growth and social cohesion. Drawing on the data, evidence and
analysis developed by the Directorate for Education’s teams of statisticians
and analysts, PAI coordinates the provision of advice on education policy
to OECD members and other countries, both collectively and individually,
across all sectors of education.

Akiyoshi Yonezawa, Ph.D., is an associate professor at the Graduate School
of International Development (GSID), Nagoya University. With a sociology
background, his main research interests are comparative higher education
policies, especially focusing on world-class universities, quality assurance
of higher education, and the public-private relationship in higher education.
Before moving to Nagoya University in October 2010, he previously worked
at Tohoku University, the National Institution for Academic Degrees and
University Evaluation (NIAD-UE), Hiroshima University, the OECD and the
University of Tokyo.

Frank Ziegele is Director of the CHE Centre for Higher Education, Gütersloh
(Germany), and Professor for Higher Education and Research Management
at the University of Applied Sciences Osnabrück. He trained as an economist
and his research and publications focus on higher education finance,
governance, strategic management, contract management, ranking and
controlling. In these areas he also acts as a consultant and trainer. He has
contributed about 100 publications to the field of higher education
policy and management and carried out more than eighty projects in the same
field, for instance as co-leader of the U-Multirank feasibility study.



EDUCATION ON THE MOVE
Bringing the latest thinking in education
to education specialists worldwide
Created by UNESCO, the series – Education on the Move –
focuses on key trends in education today and challenges
for tomorrow. The series seeks to bring research knowledge
produced by various academic disciplines and within various
organizations to those who can shape educational policies
and drive reforms. As such, it also intends to contribute to
on-going reflections on the international education agenda.

Rankings and Accountability in Higher Education
Uses and Misuses
P. T. M. Marope, P. J. Wells and E. Hazelkorn (Editors)

The growing impact of university rankings on
public policy – and on students’ choices – has
stirred controversy worldwide. This unique volume
brings together the architects of university
rankings and their critics to debate the uses and
misuses of existing rankings. With voices from five
continents, it provides a comprehensive overview
of current thinking on the subject and sets out
alternative approaches and complementary tools
for a new era of transparent and informed use of
higher education ranking tables.

Education Sector
United Nations Educational, Scientific and Cultural Organization

www.unesco.org/publishing
