Rankings and Accountability in Higher Education
United Nations Educational, Scientific and Cultural Organization
UNESCO Publishing
Rankings
and Accountability
in Higher Education
Uses and Misuses
© UNESCO 2013
ISBN 978-92-3-001156-7
The designations employed and the presentation of material throughout this publication do not imply the expression
of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area or of
its authorities, or concerning the delimitation of its frontiers or boundaries.
The ideas and opinions expressed in this publication are those of the authors; they are not necessarily those of UNESCO
and do not commit the Organization.
Foreword ..................................................................................................................................................... 5
University Rankings: The Many Sides of the Debate
Mmantsetsa Marope and Peter Wells ................................................................................... 7
Chapter 11 What’s the use of rankings?
Kevin Downing ........................................................................................................................... 197
Chapter 12 A decade of international university rankings:
a critical perspective from Latin America
Imanol Ordorika and Marion Lloyd ................................................................................. 209
This book is the first of a series of studies that will consider trends in education
today and challenges for tomorrow. This new series has been created to
respond to demand from policy-makers, educators and stakeholders alike
for state-of-the-art analyses of topical issues. As the world’s leading agency
on educational matters, UNESCO is at the forefront of the intellectual debate
on the future of learning, with an unparalleled capacity to bring together
trailblazers from across the globe.
University rankings are one such issue. Since the turn of the millennium
– and in particular following the release in 2003 of the Academic Ranking of
World Universities by Shanghai Jiao Tong University in China and the Times
Higher Education World University Rankings in 2004 – there has been a huge
increase in the attention paid to this topic in the mainstream media. In May
2012, Universitas 21 were the latest to join the throng with the launch of the
U21 Ranking of National Higher Education Systems in forty-eight countries.
With this in mind, UNESCO together with the Organisation for Economic
Co-operation and Development (OECD) and the World Bank organized
the ‘Global Forum on Rankings and Accountability in Higher Education:
Uses and Misuses’ at its Paris Headquarters on 16 and 17 May 2011. The
forum re-confirmed the fact that the advent of mass higher education and
proliferation of new institutional models in the sector over recent decades
has resulted in a welcome and unprecedented expansion of access and
choices in supply, while at the same time raising questions over the validity
and quality of provision. This has led in turn to many higher education
stakeholders – including students, researchers, teachers, policy-makers
and funding agencies – floundering or becoming confused over the quality
of what is on offer.
Inspired by the Forum, the current volume brings together both proponents
and opponents of rankings, to reflect the wide range of views that exist
in the higher education community on this highly controversial topic. If
learners, institutions and policy-makers are to be responsible users of
ranking data and league table lists, it is vital that those compiling them
make perfectly clear what criteria they are using to devise them, how they
have weighted these criteria, and why they made these choices. It is hoped
that this information will enable stakeholders to make informed decisions
on higher education institutions.
University Rankings:
The Many Sides of the Debate
Mmantsetsa Marope and Peter Wells
The tide of attention paid to university rankings, however, well and truly
swept over the sector a decade later in 2003 with the release of the Academic
Ranking of World Universities (ARWU) by Shanghai Jiao Tong University in
China and the Times Higher Education World University Rankings a year
later. The topic has rarely been out of higher education headlines and the
mainstream media ever since. At the same time, the topic has progressively
attracted more and more anti-ranking debates, initiatives and even bodies.
As is evident in this volume, the debate on whether or not universities
should be ranked has not abated.
Comparing and even ranking our ‘worlds, countries and institutions’ impels
the construction and use of common ‘yardsticks’ along whose gradations
these entities can be placed. Yet, unlike length, height and width, these ‘yard-
sticks’ are used to measure very complex, often multi-faceted, fast-changing,
contextually varied and even conceptually contentious phenomena. Among the questions this raises are the following:
• What are the limitations and therefore potential pitfalls in the use of
university rankings?
• How best can diverse stakeholders benefit from university rankings and
other complementary instruments?
The book is divided into four parts. Part I comprises three chapters that elab-
orate the methodological approaches used by three of the most prominent
‘ranking houses’. It adopts a critical introspective approach and presents not
only the methodologies, but also their evolution as well as their strengths
and shortcomings. Consistent with UNESCO’s intention to develop discern-
ing stakeholders, Part I informs the reader and/or user what they can and
cannot expect to get when they use the university rankings from these three
‘ranking houses’. The three explicitly highlight the limited coverage of their
‘world university rankings’ as they focus on about 200 (or 1 per cent) of the
nearly 17,000 world universities. Although varied in many respects, the 200
ranked universities have much in common:
The ranking houses also recognize that the scope of the current main rankings is thus limited:
different global rankings have different purposes and they only measure parts of universities' activities. Bibliometric rankings focus on research output, and ARWU emphasizes the research dimension of universities also (Liu, 2012).
The fact that rankings only embrace 1 per cent of the world’s universities,
and that they focus on research and even then mostly scientific research,
has attracted immense criticism from diverse stakeholders (see Parts II, III
and IV). For their part, the ranking houses have been neither deaf nor
blind to this criticism. Part I presents their earnest efforts to not only admit
and explain the scope of their methodologies, but also to demonstrate a
willingness to progressively improve on them to better cover what is com-
monly known as the scope of university functions – research, teaching and
social responsibility.
Refreshingly, the ranking houses also recognize that no matter how much
they expand the base of indicators considered in their methodologies, they
can never exhaustively cover the full range of the universities’ functions and
activities. By their very nature indicators are selective and not exhaustive. As
such, they responsibly caution that,
It can safely be said that the explosion of interest in rankings has been
outmatched by the volume of criticism from virtually all spheres, including
academics, universities, policy-makers, development agencies, education
service providers and students (Parts II and III). Diverse as these constitu-
ents may be, they mostly start with an acknowledgement that ‘love them
or hate them, rankings are here to stay’. On the positive side, rankings
address the growing demand for accessible, manageably packaged and
Critics argue that rankings can draw universities’ attention away from teach-
ing and social responsibility towards research or even scientific research.
However, ‘ranking houses’ acknowledge that they focus their attention on
research-focused universities, and thus are expanding their indicators to
take into account teaching. What perhaps is at issue here is whether there
should be rankings that emphasize other functions. Such developments
could facilitate the building of reliable indicators and databases on the
quality of teaching and social responsibility.
There have also been concerns that by applying a limited set of crite-
ria to world universities and given the strong desire to feature in the
top 200 universities, rankings could actually ‘McDonaldize’ higher educa-
tion institutions and render them irrelevant to their immediate contexts.
However, evidence equally shows that higher education institutions are
mature, sophisticated and complex enough to balance responsiveness
to the imperatives of globalization with responsiveness to the demands
of their immediate contexts (Downing, 2012). Invariably, universities are
found to use rankings as a supplementary rather than as a sole assess-
ment of their quality. This is in line with the complementarity of tools
advocated in this volume.
Rankings are also said to reinforce the advantage enjoyed by the 200 best-
ranked institutions. These tend to be older (200+ years) established
institutions with 25,000 students or more, 2,500 faculty or more, and
with endowments of over US$ 1 billion and annual budgets of more than
US$2 billion. However, this is a distraction from the focus of rankings,
which emphasizes quality at the pinnacle and not so much the process of
getting there. Both are legitimate questions; however, rankings
should be critiqued on what they set out to do rather than on what the
critics want them to do. In any case, if characteristics of the top-ranked
universities do not come together to make for a ‘world-class’ higher
education institution, then the obvious question is ‘which characteristics
could possibly do’?
Moving from the institutional to the systemic level, the World Bank proposes
a benchmarking approach to run a ‘health check’ on tertiary education
systems around the world. As with all benchmarking exercises, the purpose
of the approach is said to be not to create a list of winners and losers, but to
offer a way for national higher education systems to compare themselves
to others of similar design, disposition and context, and from this starting
point to develop strategies for improvements. By looking at the system as a
whole rather than its constituent institutions the suggestion is that policy-
makers can elaborate a long-term vision for their tertiary strategy. Such
a holistic-therapy approach to the health of a system does, however, run
the risk of bypassing fundamental shortcomings at the institutional patient
level – treatable conditions that still need to be addressed in concert with
other complementary quality assessment tools if the body system is to
function properly. If indeed benchmarking is a ‘cure’, the reader should
take it with the full knowledge of its potential side effects; as indeed most
cures tend to come with some!
Invoking the mantra that ‘no one size fits all’, one must be conscious of the
fact that, admirable as these additional quality measuring techniques may
be, they, like ranking initiatives, should not be taken in isolation or consid-
ered definitive. Clearly their level of sophistication compared to the crude
rankings of the last century is at once impressive, even beguiling. Yet, they
still cannot lay claim to capturing every individual characteristic and nuance
of every individual institution they seek to compare.
Conclusion
As a neutral broker of knowledge, UNESCO’s role is not to endorse any of the
above ranking methodologies, the diverse perspectives on rankings or the
complementary approaches to them. What UNESCO does seek to do here is
to identify and explore the critical issues inherent to the ranking phenom-
enon and to give the microphone of debate to the various stakeholders so
that they may share their views on improving the generation and applica-
tion of university rankings.
As noted above, comparisons and rankings are in the DNA of 21st century
life. The world is overflowing with ranked lists, from the ‘top 10 must-see
cities’ to the ‘top 5 grossing movies’, some of which are based on indisputable
facts while others are more nebulous and subjective in their opinions. It is
consequently vital to retain some perspective when interpreting such lists.
The bestselling movies do not necessarily win critics’ approval or industry
accolades. Similarly, billions of people will never visit the world’s ‘must-see’
tourist attractions. This does not belittle the information; it simply renders
it reflective rather than definitive. The same reflection is therefore called for
with modern university rankings.
The 15,000+ institutions around the world that have not, do not and will
not appear on any ‘top’ list of universities continue their noble pursuits of
educating and nurturing learners hungry for knowledge and skills; of con-
tributing to the development of human and social capital; and of undertak-
ing important research for sustainable futures. Obsessing about joining and
climbing a league table or becoming ‘world-class’ ignores the greater role,
purpose and mission of higher learning institutions. This once again points
to the central tenet of this volume in the plea for a responsible and informed
use of university rankings. In her opening address to the ‘Global Forum on
Rankings’, the Director-General of UNESCO offered a timely reminder:
University rankings are a hotly debated issue. They are viewed in very
different ways by rankers, students, employers, pre-university level
schools and the higher education community. It is good to see that
international rankings are diversifying and moving towards more
broadly balanced criteria and becoming multidimensional, as are
national rankings… While competition and international comparisons
can be positive trends, a key challenge for us in UNESCO is to continue
Methodological
Considerations
Chapter 1
The Academic Ranking of
World Universities and
its future direction
Nian Cai Liu
History of Academic Ranking of World
Universities
The Chinese dream of world-class universities
I asked myself many questions during this process. What is the definition
of a ‘world-class university’? How many world-class universities should
there be globally? What are the positions of top Chinese universities
in world university rankings? How can top Chinese universities reduce
the gap between themselves and world-class universities? In order to
answer these questions we began to benchmark top Chinese universities
with world-class universities. This eventually resulted in a ranking of
world universities.
From 1999 to 2001, Dr Ying Cheng, two other colleagues and I worked on
the project to benchmark top Chinese universities with four groups of US
universities, from the very top to the less-known research universities,
according to a wide spectrum of indicators of academic or research performance. According to our estimates, the positions of top Chinese universities fell within the 200–300 bracket globally. The results of these comparisons and analyses were used in the strategic planning process of Shanghai Jiao Tong University.
Ever since its publication, ARWU has attracted worldwide attention. Numerous
requests have been received asking us to provide a ranking of world universi-
ties by broad subject fields/schools/colleges and by subject fields/programmes/
departments. We have tried to respond to these requests. The Academic
Ranking of World Universities by Broad Subject Fields (ARWU-FIELD) and the
Academic Ranking of World Universities by Subject Fields (ARWU-SUBJECT)
were published in February 2007 and October 2009 respectively.
Unexpected impact
Although the initial purpose of ARWU was to ascertain the global stand-
ing of top Chinese universities in the world higher education system, it has
attracted a lot of attention from universities, governments and public media
worldwide. Mainstream media in almost all major countries has reported on
ARWU. Hundreds of universities have cited the ranking results in their cam-
pus news, annual reports and promotional brochures. A survey on higher
education published by The Economist referred to ARWU as ‘the most widely
used annual ranking of the world’s research universities’ (The Economist,
2005). Burton Bollag (2006), a reporter at the Chronicle of Higher Education, wrote
that ARWU ‘is considered the most influential international ranking’.
ARWU has been widely cited and employed as a starting point for identify-
ing national strengths and weaknesses as well as for facilitating reform and
setting new initiatives (e.g. Destler, 2008). Martin Enserink (2007) referred to
ARWU and argued in his paper published in Science that ‘France’s poor show-
ing in the Shanghai ranking… helped trigger a national debate about higher
education that resulted in a new law… giving universities more freedom’.
Methodologies of ARWU
In total, more than 2,000 institutions have been scanned and about 1,200
institutions have actually been ranked. Universities are ranked by several
indicators of academic or research performance, including alumni and staff
winning Nobel Prizes and Fields Medals, highly cited researchers, papers
published in Nature and Science, papers indexed in major citation indices,
and the per capita academic performance of an institution. Table 1 shows the
indicators and weights for ARWU.
Indicators and weights for ARWU-FIELD include: HiCi, highly cited researchers in the categories assigned to each broad field (the categories listed include Immunology, Neuroscience, Agricultural Sciences, Plant and Animal Science, and Ecology/Environment); PUB (25 per cent), papers indexed in the Science Citation Index-Expanded in journals of the SCI, ENG, LIFE and MED fields, and in the Social Science Citation Index in journals of the SOC field, compared with that in all journals of the corresponding field; and Fund (25 per cent), total engineering-related research expenditures, applicable to ENG only. Note: SCI stands for Natural Sciences and Mathematics, ENG for Engineering/Technology and Computer Sciences, LIFE for Life and Agriculture Sciences, MED for Clinical Medicine and Pharmacy, and SOC for Social Sciences.
Indicators and weights for ARWU-SUBJECT cover Mathematics, Physics, Chemistry, Computer Science and Economics/Business. The relevant prizes are Fields Medals in Mathematics, Nobel Prizes in Physics, Chemistry and Economics, and Turing Awards in Computer Science, all counted since 1961. Award (15 per cent) counts staff of an institution winning these prizes, and HiCi (25 per cent) counts highly cited researchers in the corresponding subject.
Alumni are the total number of alumni of an institution who have won
Nobel Prizes and Fields Medals. Alumni are defined as those who
have obtained bachelor, Master’s or doctoral degrees from the institu-
tion. Different weights are set according to the periods of obtaining
degrees: 100 per cent for degrees obtained in 2001–2010, 90 per cent
for degrees obtained in 1991–2000, 80 per cent for degrees obtained in 1981–1990, and so on, and finally 10 per cent for degrees obtained in 1911–1920. If a person obtained more than one degree from an institution, the institution is considered once only.
Award refers to the total number of staff of an institution who have won
Nobel Prizes in Physics, Chemistry, Medicine and Economics and Fields
Medals in Mathematics. The staff are defined as those who work at an
institution at the time of winning the prize. Different weights are set
according to the periods of winning the prizes: 100 per cent for win-
ners in 2001–2010, 90 per cent for winners in 1991–2000, 80 per cent
for winners in 1981–1990, 70 per cent for winners in 1971–1980, and
so on, and finally 10 per cent for winners in 1911–1920. If a winner is
affiliated with more than one institution, each institution is assigned
the reciprocal weighting. For Nobel Prizes, if more than one person
shares a prize, weights are set for winners according to their propor-
tion of the prize.
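A minimal sketch of how the decade weights, prize shares and multi-affiliation splits described above might be combined for the Award indicator; the laureate records and resulting total are purely hypothetical, and ARWU does not publish its calculation in this form.

```python
# Hypothetical illustration of the Award weighting described above.
# Each record: (decade weight, winner's share of the prize, number of
# institutions the winner was affiliated with at the time of the award).
laureates = [
    (1.0, 1.0, 1),   # sole winner in 2001-2010, single affiliation
    (0.9, 0.5, 2),   # half-share of a prize in 1991-2000, two affiliations
    (0.8, 1.0, 1),   # sole winner in 1981-1990, single affiliation
]

# The institution receives the decade weight, scaled by the winner's share of
# the prize and split reciprocally across the winner's affiliations.
award_count = sum(decade_w * share / n_affiliations
                  for decade_w, share, n_affiliations in laureates)

print(award_count)  # 1.0 + 0.225 + 0.8 = 2.025
```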
The calculation of the Award and Alumni indicators has been changed for ARWU-FIELD and ARWU-SUBJECT, with only alumni or laureates post-1961 considered. The weight is 100 per cent for 2001–2010, 80 per cent for 1991–2000, 60 per cent for 1981–1990, 40 per cent for 1971–1980 and 20 per cent for 1961–1970. The Turing Award is used for the subject ranking of computer science.
N&S is the total number of papers published in Nature and Science in the last five years. To distinguish the order of author affiliation, a weight of 100 per cent is assigned for corresponding author affiliation, 50 per cent for first author affiliation (second author affiliation if the first author affiliation is the same as the corresponding author affiliation), and so on for subsequent author affiliations.
PUB is the total number of papers indexed in the Science Citation Index-
Expanded and the Social Science Citation Index in the last year. Only
published articles and Proceedings papers are considered. When cal-
culating the total number of papers of an institution, a special weight
of two was introduced for papers indexed in the Social Science Citation
Index.
PCP is the weighted scores of the above five indicators divided by the num-
ber of full-time equivalent academic staff. If the number of academic
staff for institutions of a country cannot be obtained, the weighted
scores of the above five indicators are used.
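As a rough illustration of the N&S, PUB and PCP definitions above, the sketch below uses invented counts and scores purely to show the arithmetic; it is not the published ARWU implementation.

```python
# Illustrative only: all counts and scores below are invented.

# N&S: papers in Nature and Science over five years, each weighted by the
# institution's author position (100% corresponding author, 50% first author,
# and so on).
ns_position_weights = [1.0, 1.0, 0.5, 0.25]
n_and_s = sum(ns_position_weights)

# PUB: articles and proceedings papers indexed in SCIE and SSCI in the last
# year, with a special weight of two for SSCI-indexed papers.
scie_count, ssci_count = 1200, 150
pub = scie_count + 2 * ssci_count

# PCP: weighted scores of the five indicators divided by the number of
# full-time equivalent academic staff.
weighted_scores = [32.0, 28.5, 41.0, 22.0, 55.0]   # Alumni, Award, HiCi, N&S, PUB
fte_academic_staff = 2500
pcp = sum(weighted_scores) / fte_academic_staff

print(n_and_s, pub, round(pcp, 4))   # 2.75 1500 0.0714
```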
The list of the top 500 institutions for ARWU is published on the website.
Taking into consideration the significance of differences in the total score,
ARWU is published in groups of fifty institutions in the range of 100 to
200 and groups of 100 institutions in the range of 200 to 500. In the same
group, institutions are listed in alphabetical order. Table 4 shows the aver-
age performance of institutions in different ranking groups by indicator.
Almost a year and a half after the first publication of ARWU, the Times
Higher Education Supplement published its 'World University Rankings' in
November 2004. From 2005, the ranking was co-published by Times Higher
Education and Quacquarelli Symonds Company every year as THE-QS World
University Rankings. The THE-QS ranking indicators include an international
opinion survey of academics and employers (40 per cent weight for academics
and 10 per cent weight for employers), student faculty ratio (20 per cent),
citations per faculty member (20 per cent) and proportions of foreign faculty
and students (5 per cent weight for each) (THE-QS, 2009). In 2010, Times
Higher Education terminated its collaboration with Quacquarelli Symonds
and both began to publish their own global ranking lists. While the new QS
ranking fully retained the methodology of previous THE-QS rankings, the
Times Higher Education ranking increased its number of indicators to thirteen
and Thomson Reuters became its data provider (Times Higher Education, 2010).
There have been other global university rankings. The ‘Ranking Web of World
Universities’ by the Cybermetrics Lab of CSIC (2004) uses a series of web
indicators to rank 16,000 universities worldwide. A French higher educa-
tion institution, École des Mines de Paris (2007), published the ‘Professional
Ranking of World Universities’ by calculating the number of alumni among
the Chief Executive Officers of the 500 leading worldwide companies. In
December 2011, the University Ranking by Academic Performance Center
of Middle East Technical University (2011) announced the world’s top 2,000
universities based on six indicators of research output. Up to now, more than
a dozen global university rankings have been published.
Different global rankings have different purposes, and they only measure
parts of universities’ activities. Bibliometric rankings focus on research output,
while ARWU also emphasizes the research dimension of universities. These
systems do not assess well the fundamental role of universities – teaching
– and their contributions to society. Although the THE-QS ranking tries to
measure multi-faceted universities by combining indicators of different
activities, including some proxies of teaching quality, its practice largely failed
to convince others and the ranking was taken as a measure of reputation
and ‘not about teaching and only marginally about research’ (Marginson,
2007). Therefore, none of the current global ranking systems can provide
a complete view of universities. Taking any single ranking as a standard to
judge a university’s overall performance is improper.
For the moment none of the ranking indicators is perfect; while some seem
practically acceptable, others have serious flaws. The so-called ‘Academic
Peer Review’ used by THE-QS ranking might be the indicator most often
criticized. First, it is an expert opinion survey rather than a typical peer
review in the academic community; the respondents, even though they are
experts, can hardly make professional judgments on such large entities
in their entirety (Van Raan, 2007). Second, psychological effects such as the
‘halo effect’ (Woodhouse, 2008) and the ‘leniency effect’ (Van Dyke, 2008)
affect the results of the opinion survey, so that there is a bias towards well-
known universities and respondents’ universities.
Some general criticisms of ranking practices hold true for global rankings. A
common phenomenon in global rankings is the arbitrary decision of weights
of indicators. Another criticism is that the difference between scores for uni-
versities with different global ranks may be statistically insignificant.
ARWU provides a list of 500 universities. This covers less than 5 per cent of
all 15,000 higher education institutions in the world (the number of higher
Since January 2011, we have cooperated with the Global Research University
Profile (GRUP) project, which aims to develop a database compiling facts and
figures of around 1,200 global research universities ranked by ARWU annu-
ally. An online survey tool has been designed to collect the basic information
of universities such as number of academic staff, number of students, total
income, research income and so on. We sent survey invitations to 1,200 uni-
versities and promised to provide participating institutions with an analysis
report based on data collected from all respondent institutions. In the invita-
tion letter we also explain that their data may be used to develop customized
rankings. The number of universities participating in the survey has been
very encouraging so far. In addition to the survey, we have managed to obtain
data from national education statistics agencies in major countries, including
the National Center for Education Statistics in the United States; the Higher
Education Statistics Agency in the United Kingdom; and the Department of
Education, Employment and Workplace Relations in Australia.
Although the comparability and quality of the survey data may not be as good
as that of data obtained from third parties, more useful indicators can be
developed to meet increasing demand to compare global universities from
various perspectives. We plan to employ the survey data and third-party
data to design a web-based platform in which users will be able to select
from a large variety of indicators and weights to compare the universities
concerned. In addition, we will undertake in-depth analysis of the survey
data in order to describe the characteristics of world-class universities and
research universities in different countries and worldwide. We hope that the
results will enhance our understanding of world-class universities and will
be helpful when initiating or adjusting relevant policies.
References
Blair, A. 2007. Asia threatens to knock British universities off the top table. The Times
(21 May 2007): www.timesonline.co.uk/tol/life_and_style/education/article1816777.ece
(Accessed 15 April 2011.)
Bollag, B. 2006. Group endorses principles for ranking universities. Chronicle of Higher
Education: http://chronicle.com/article/Group-Endorses-Principles-for/25703
(Accessed 15 April 2011.)
Cheng, Y. and Liu, N.C. 2006. A first approach to the classification of the top 500 world universities
by their disciplinary characteristics using scientometrics. Scientometrics, 68(1): 135–50.
École des Mines de Paris. 2007. Professional Ranking of World Universities: www.mines-paristech.fr/
Actualites/PR/Archives/2007/EMP-ranking.pdf (Accessed 15 April 2011.)
Enserink, M. 2007. Who ranks the university rankers? Science, 317(5841): 1026–28.
European Commission Research Headlines. 2003. Chinese study ranks world’s top 500 universities.
(31 December 2003): http://ec.europa.eu/research/headlines/news/article_03_12_31_
en.html (Accessed 15 April 2011.)
Hazelkorn, E. 2011. Rankings and the Reshaping of Higher Education: The Battle for World-Class
Excellence. London: Palgrave Macmillan.
Huang, M.H. 2007. 2007 Performance Ranking of Scientific Papers for World Universities:
http://ranking.heeact.edu.tw/ (Accessed 15 April 2011.)
Patten, C. 2004. Chris Patten’s speech. The Guardian. (5 February 2004): www.guardian.co.uk/
education/2004/feb/05/highereducation.tuitionfees1 (Accessed 15 April 2011.)
Research Center for Chinese Science Evaluation of Wuhan University. 2006. World Top Universities:
http://apps.lib.whu.edu.cn/newbook/sjdakypj2006.htm (Accessed 15 April 2011.)
University Ranking by Academic Performance Center of Middle East Technical University. 2011.
World Ranking: www.urapcenter.org/2010/index.php (Accessed 15 April 2011.)
Van Dyke, N. 2008. Self- and peer-assessment disparities in university ranking schemes. Higher
Education in Europe, 33(2/3): 285–93.
Van Raan, A.F.J. 2007. Challenges in the ranking of universities. J. Sadlak and N.C. Liu (eds) The
World-Class University and Ranking: Aiming beyond Status. Bucharest: UNESCO-CEPES,
pp. 87–121.
Chapter 2
An evolving methodology:
the Times Higher Education
World University Rankings
Phil Baty
Historical overview
The first ever global university ranking, produced by Shanghai Jiao Tong
University, appeared in 2003. A year later, in 2004, the Times Higher Education
(THE) magazine published its first global university ranking, and has continued
to do so ever since. Although second to appear, THE was the first global ranking of universities to sample the views of academics across the world, as well
as to include the latest measures of research excellence and teaching capacity.
The publication emphasized that leading United Kingdom (UK) universities were
increasingly defining their success against global competitors, and noted that:
unlike domestic university league tables, this ranking does not set out
to steer students towards the best undergraduate education: it looks at
institutions in the round. Despite the importance of overseas students
to universities, international comparisons inevitably centre mainly on
research… the positions will… be used as ammunition by politicians and
vice chancellors in funding negotiations (Times Higher Education, 2004: 14).
The initial methodology used by THE was simple. There were five performance
indicators: a staff-student ratio (weighted at 20 per cent) designed to give a sense
of the ‘teaching capacity’ at each institution; an academic reputation survey
(weighted at 50 per cent); an indicator of research quality based on citations
(20 per cent); and two measures of internationalization (worth 10 per cent),
one looking at the proportion of international staff on campus, and the other
looking at the proportion of international students.
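For illustration, a minimal sketch of how five indicator scores could be combined with those weights into a composite score; the per-indicator values (assumed here to be already normalized to a 0–100 scale) are hypothetical, and this is not presented as the publication's exact procedure.

```python
# Hypothetical sketch: combining the five original indicators with the quoted
# weights. Indicator scores are invented and assumed pre-normalized to 0-100.
weights = {
    "academic_reputation":    0.50,
    "staff_student_ratio":    0.20,
    "citations_per_faculty":  0.20,
    "international_staff":    0.05,
    "international_students": 0.05,
}

scores = {
    "academic_reputation":    78.0,
    "staff_student_ratio":    65.0,
    "citations_per_faculty":  82.0,
    "international_staff":    90.0,
    "international_students": 70.0,
}

composite = sum(weights[k] * scores[k] for k in weights)
print(round(composite, 1))  # 76.4
```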
Key methodological concerns
In order to help develop the new ranking system, in early 2010, Times Higher
Education held a meeting of its expert editorial advisory board, to discuss
concerns about rankings. Three strong concerns were raised about the origi-
nal THE-QS methodology of world university rankings published between
2004 and 2009.
First was the heavy weight (20 per cent) that had been assigned to the staff-student ratio as the only proxy for teaching quality in the old ranking system. It was not seen as a particularly helpful or valid indicator of teaching quality, and it was believed that the data were easily manipulated.
Second was the quality and value of reputational surveys of academics and
employers, and concerns about the size and quality of the samples. There were
further concerns that excessive weight was given to results of such subjec-
tive reputational surveys, which made up 50 per cent of the overall rankings
indicators in the 2004–2009 ranking system.
Andrejs Rauhvargers later reiterated this concern in a June 2011 report from
the European University Association, entitled Global University Rankings and
their Impact. He noted that the reputation scores in the 2004–2009 ranking
system were based on ‘a rather small number of responses: 9,386 in 2009 and
6,534 in 2008; in actual fact, the 3,000 or so answers from 2009 were simply
added to those of 2008. The number of answers is pitifully small compared
to the 18,000 email addresses used’ (Rauhvargers, 2011: 28). Rauhvargers also
raised concerns that the lists of universities that survey respondents were
asked to select from were incomplete: ‘What are the criteria for leaving out a
great number of universities or whole countries?’ (Rauhvargers, 2011: 29)
The third concern was actually raised by the Times Higher Education editorial
board and related to the use of citations data to indicate research excellence.
Given the wide variety of publication habits, and therefore given the wide vari-
ety of citation volumes between different disciplines, Times Higher Education was
advised to normalize the citations data by subject. No normalization was car-
ried out under the 2004–2009 ranking system, meaning that institutions with
strengths in areas with typically lower citation volumes, such as engineering
and the social sciences, were at a serious disadvantage compared to those with
strengths in the life sciences, where citation levels tend to be much higher.
Times Higher Education took the survey as a clear indication that, despite
the inherent problems with reducing all the complex and often intangible
activities of a university into a single ranked table, the limitations of global
ranking systems are outweighed by their perceived general usefulness.
although never totally eliminated. The real risks associated with the
methodologies, scope and use of rankings place a lot of responsibility and
accountability on ranking houses, and almost compel the establishment of a
code of conduct for these hallowed houses.
The Times Higher Education holds that as long as rankers are responsible and
transparent, as long as they invest properly in serious research and sound
data, as long as they are frank about the limitations of the proxies they
employ, and as long as they help to educate their users and engage in open
debates, rankings can be a positive force in higher education.
We are entering a world of mass higher education and the traditional world
order is shifting. Massification of higher education is made possible by the
diversification not only of providers, but also of programmes and, possibly, their
quality. If based on defensible and clearly explained methodologies, rankings
can help to fill a crucial information gap. Students, as consumers in a competitive global market, often need comparative information on the institutions they may seek to study at. Faculty, who are increasingly mobile across national borders, also need information to identify potential new research partners
and career opportunities. University leaders need benchmarking tools to
help forge institutional strategies. National governments need comparative
information to help determine higher education policies. Industry needs
information to establish where to invest in university research and innovation.
Carefully selected and appropriately suited indicators can provide rankings
that address information needs of such different clientele.
Times Higher Education has data on many hundreds of institutions; however, its
official rankings list comprises only the first 200 placed universities. This is done
specifically to undermine the notion that everyone should aspire to the same
model. Times Higher Education recognizes that one of the strengths of the higher
education system is its diversity. It is keen to emphasize that it does not deem
it appropriate to judge every university on the same scale against the model set
by universities such as Harvard, Stanford, Oxford and Cambridge.
Not every institution can be a Harvard and not every institution would want
to be. Each institution of course will have its own mission and its own priori-
ties for development, serving in some cases a largely teaching-led role, and in
others focusing on local or national skills needs. Times Higher Education’s World
University Rankings examine only a globally competitive, research-led elite.
The Times Higher Education World University Rankings were finalized only
after ten months of open consultation, and the methodology was devised
with expert input from more than fifty leading figures from fifteen countries,
representing every continent.
The new Times Higher Education World University Rankings, first published on
16 September 2010, and again on 6 October 2011, recognize a wider range of
what global universities do. While the Academic Ranking of World Universities,
compiled by Shanghai Jiao Tong University, really focuses only on research
performance, the Times Higher Education World University Rankings seek to
capture the full range of a global university’s activities – research, teaching,
knowledge transfer and internationalization.
Consistent with its aim to take a more holistic view of the mission of universi-
ties, the new THE rankings use thirteen separate indicators – more than any
other global system (Figure 1).
For the 2011–12 world university rankings, THE examined more than 6 million
research publications, producing more than 50 million citations accumulated
over a six-year period (2005–2009). In response to strong criticism of the
2004–2009 methodology, data were fully normalized to reflect variations in
citation volume between different subject areas. As such, universities with
strong research in fields with lower global citation rates were not penalized.
In addition, citations per paper produced by each university were measured
against world average citation levels in each field.
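The sketch below shows one common way such field normalization can be carried out: each paper's citations are compared with the world average for its field, and the ratios are averaged. The baselines and paper data are invented, and this is not necessarily the data provider's exact method.

```python
# A sketch of field normalization: compare each paper's citations with the
# world average for its field, then average the ratios. Baselines and paper
# data are invented; this is not necessarily the provider's exact method.
world_avg_citations = {
    "engineering": 4.0,
    "social_sciences": 3.0,
    "life_sciences": 12.0,
}

papers = [            # (field, citations received) for one institution
    ("engineering", 6),
    ("engineering", 2),
    ("social_sciences", 9),
    ("life_sciences", 12),
]

normalized_impact = sum(cites / world_avg_citations[field]
                        for field, cites in papers) / len(papers)
print(round(normalized_impact, 2))  # (1.5 + 0.5 + 3.0 + 1.0) / 4 = 1.5
```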
Times Higher Education judges knowledge transfer in terms of just one indica-
tor – research income earned from industry – but, in future years, this cate-
gory will be enhanced with other indicators. One proposal being considered,
at the time of going to press, is to take into account the number of research
papers a university publishes in partnership with an industrial partner.
Internationalization is recognized through data on the proportion of
international staff and students attracted to each institution – a sign of
how global an institution is in its outlook and, perhaps, reputation. The
ability of a university to attract the very best staff from across the world is
key to global success. The market for academic and administrative jobs is
international in scope, and this indicator suggests global competitiveness.
Similarly, the ability to attract students in a competitive global market-
place is a sign of an institution’s global competitiveness and its commit-
ment to globalization.
For the first time for the 2011–12 rankings, THE also added an indicator that
rewards a high proportion of internationally co-authored research papers.
Perhaps the most dramatic innovation for the world university rankings
for 2010 and beyond is the set of five indicators designed to give proper
credit to the role of teaching in universities, with a collective weighting
of 30 per cent. However, it should be clarified that the indicators do not
measure teaching ‘quality’. There are currently no recognized, globally
comparative data on teaching outputs, so fair global assessments of teach-
ing outputs cannot be made. What the Times Higher Education rankings do
is to look at the teaching ‘environment’ to give a sense of the kind of learn-
ing milieu in which students are likely to find themselves. Times Higher
Education takes a subjective view, based on expert advice and consulta-
tion, that the indicators of the teaching environment they have chosen are
indicative of a high quality environment.
The key indicator for this category draws on the results of an annual aca-
demic reputational survey carried out for the world university rankings
by Thomson Reuters. To meet criticisms of the reputation survey carried
out for the rankings between 2004 and 2009, Thomson Reuters brought
in a third-party professional polling company to conduct the survey. The
Academic Reputation Survey is distributed worldwide each spring. It is a
worldwide, invitation-only poll of experienced scholars, statistically repre-
sentative of global subject mix and geography. It examines the perceived
prestige of institutions in both research and teaching.
Some 19 per cent of the 2011 respondents were from the social sciences,
with 20 per cent from engineering and technology, and the same propor-
tion from the physical sciences. Seventeen per cent came from the ‘clinical, pre-clinical and health’ subjects, while 16 per cent came from the life sciences. The
smallest number of responses came from the arts and humanities – just
7 per cent – and while this is a little disappointing, it still provides a statisti-
cally sound basis for comparisons.
There was also an excellent spread of responses from around the world,
facilitated by the fact that the survey was distributed in nine languages:
Arabic, Brazilian, Chinese, English, French, German, Japanese, Portuguese
and Spanish. The largest share of respondents, some 36 per cent, came
from North America, while 17 per cent came from Western Europe,
10 per cent from Eastern Asia, 8 per cent from Eastern Europe and
7 per cent from Oceania.
Times Higher Education also look at the ratio of PhD to bachelor’s degrees
awarded, to give a sense of how knowledge-intensive the environment is,
as well as considering the number of doctorates awarded, scaled for size, to
indicate how committed institutions are to nurturing the next generation
of academics and providing strong supervision.
Sector response to these new tables has been excellent. There was notable criticism, coming mainly from heads of institutions that took the biggest hits from the new methodology, but many other comments have been positive.
The new methodology also attracted praise. David Willetts, the UK gov-
ernment minister for universities and science, congratulated Times Higher
Education for revising its rankings methodology. Steve Smith, vice-chancellor
of the University of Exeter and the former president of Universities UK, which
represents all UK vice-chancellors, said that the new methodology – and
particularly its reduced dependence on subjective opinion and increased
reliance on more objective measures – ‘bolstered confidence in the evalua-
tion method’ (Smith, 2010: 43).
Future directions
References
Beck, S. and Morrow, A. 2010. Canadian universities ranked among world’s best. The Globe
and Mail, GlobeCampus supplement (16 September 2010). www.globecampus.
ca/in-the-news/article/canadian-universities-ranked-among-worlds-best/
(Accessed 30 September 2011.)
Mroz, A. 2009. Only the best for the best. Times Higher Education (5 November 2009): www.
timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=408968
(Accessed 30 September 2011.)
Rauhvargers, A. 2011. Global University Rankings and their Impact: EUA Report on Rankings 2011.
Brussels: European University Association.
Smith, S. 2010. Pride before the fall? Times Higher Education, Times Higher Education World
University Rankings supplement (16 September 2010).
Chapter 3
Issues of transparency
and applicability in global
university rankings
Ben Sowter
In 1911, one hundred years ago, the world was a dramatically different
place, and higher education was no exception. Fewer than 10 per cent of the
universities in existence today had been established. The population of the
planet was less than a quarter of what it is today (United Nations, 2004) and
a substantially smaller proportion of people were going through university.
It had been only eight years since Orville and Wilbur Wright made the first
controlled, sustained and heavier-than-air human flight, and only ten since
the first Nobel Prizes were awarded (Nobel Foundation, n.d.).
Today, there are more than 20,000 higher education institutions (HEIs) in
the world and more than 3.3 million students are studying outside their
home country (OECD, 2010). In 2008 there were over 29 million flights
(OAG Aviation, n.d.) and 813 individuals have now been awarded a Nobel
Prize (Nobel Foundation, n.d.). With universities under increasing pressure
to accept more students and increase research productivity on increasingly
constricted budgets, student attraction at increasing fee levels is becoming
an ever more important priority for universities worldwide.
Such information has not always been broadly available and remains
unavailable in certain contexts, but there has been a drive for transpar-
ency among institutions over the last thirty years. University league
tables have been one of the most influential factors in driving the trans-
parency of information on higher education. League tables, regardless of
how sophisticated their underlying measures, are compelling because of
their allure of simplicity and facility to place institutions in a hierarchy
where one is presented as superior to the next. Simple and conclusive
statements can be inferred and used to attract headlines and contribute
to marketing messages.
The tendency and even preference for the simplification of otherwise com-
plex realities is not limited to higher education. The human mind seems
International league tables, emerging for the first time in 2003, have
increased in number (see Table 1). Their influence has risen in step with this increase but, if anything, at an accelerated rate, giving them an unanticipated public profile. Additionally, in certain contexts, international rankings have served
as an effective wake-up call to institutions and governments in countries
that may have previously had an inflated view of their own performance
and global impact. Rankings have also led to a revolution in the availability
of data on HEIs and intelligence to guide institutional and government
strategies for higher education.
Institutional diversity
Universities differ greatly from one another. While the universities evalu-
ated in the QS World University Rankings® are at the top end of the world’s
20,000+ and as a result are pursuing high performance in teaching and
research, their characteristics can vary greatly. The University of Buenos
Aires has over 300,000 students, while ENS Paris has around 2,000. In
this aspect alone it is clear that the two institutions are dramatically differ-
ent in terms of funding and facilities, before their differences are studied in
any more detail. In a global ranking context these differences are entirely
overlooked.
The QS response to this issue has been to devise a devastatingly simple clas-
sification system based on three key metrics: size, as defined by full-time
equivalent student enrolments; focus, as defined by the number of broad
faculty areas in which they are active; and research intensity, as defined by
the total volume of papers published factored against the size and focus of
the institution.
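A rough sketch of how such a three-way classification might be implemented; the thresholds and category labels below are invented for illustration and are not the published QS scheme.

```python
# Illustrative classification by size, focus and research intensity.
# Thresholds and labels are invented; QS does not publish its scheme this way.
def classify(fte_students: int, broad_faculty_areas: int, papers_5yr: int) -> dict:
    size = ("XL" if fte_students >= 30000 else
            "L" if fte_students >= 12000 else
            "M" if fte_students >= 5000 else "S")
    focus = ("Comprehensive" if broad_faculty_areas >= 4 else
             "Focused" if broad_faculty_areas >= 2 else "Specialist")
    # Research intensity: publication volume judged relative to size.
    papers_per_1000_students = 1000 * papers_5yr / max(fte_students, 1)
    research = ("Very high" if papers_per_1000_students >= 100 else
                "High" if papers_per_1000_students >= 30 else
                "Medium" if papers_per_1000_students >= 5 else "Low")
    return {"size": size, "focus": focus, "research": research}

print(classify(fte_students=300000, broad_faculty_areas=5, papers_5yr=9000))
print(classify(fte_students=2000, broad_faculty_areas=3, papers_5yr=1500))
```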
This concept will be extended in future and may consider aspects such as:
• institutional age,
• principal study mode,
• location/campus type (i.e. urban/suburban/rural),
• study levels offered/enrolment profile,
• institution status (i.e. public/private).
From personal experience and focus groups it is clear that a large proportion
of prospective students address the question of institution choice already
equipped with a strong idea of the discipline in which they want to study.
Prior to 2011, global ranking compilers were not generating results at this
level of granularity – reducing the potential utility of the data being com-
piled. It is clear that there is a need for better data at the discipline level.
In response to this, QS is in the process of releasing tables at a narrower, subject-specific level.
[Figure: chart of a statistic (on a scale of 0 to 100) for individual disciplines – English, Geography, Sociology, History, Politics, Linguistics, Languages, Law, Education, Philosophy, Economics, Communication, Computer Science, Electrical Engineering, Mechanical Engineering, Management, Accounting and Finance, Biological Sciences, Medicine, Physics, Psychology, Mathematics, Chemistry, Earth Sciences and Environmental Sciences – grouped into five numbered broad subject areas.]
Source: Sowter.
Range of measures
Global rankings and league tables, while extremely popular and accessible, have fundamental limitations that are not specific to the particular methodologies
used, but rather pervade all such exercises (HEFCE, 2008). In the main, these
are imposed by the lack of globally available and comparable data for the key
aspects of university performance that might most importantly be measured.
2. sacrifice the inclusion of intended measures due to lack of data for cer-
tain subject institutions.
• does not depend on every institution gathering and submitting data, but
instead only on those that wish to be evaluated;
• does not evaluate performance relative to the moving goalposts of others’
parallel progress, but against pre-set and well-understood standards;
• does not automatically favour large, comprehensive institutions, but
identifies the best in each niche category; and
• presents a range of grouped or banded outcomes rather than an ordinal list.
In response to this need, QS and its Academic Advisory Board have devised
QS Stars. This is a rating system akin to a ‘Michelin Guide’ for universities.
The first audits, using a range of well over twenty indicators, are complete
and awards have been made.
User-driven results
Such is the case with university rankings. The expert panel assigning the
criteria and weightings may even effectively navigate the average viewpoint
Since 2003, when the Shanghai rankings first emerged, the level of quantita-
tive information available on an average university’s own website has seen
a remarkable improvement. The emergence of central and government-
sponsored data collection exercises has accelerated. There has also been
greater acceptance among university leadership that measurement and
evaluation are early steps on a route to performance improvement against
their own individual missions and goals.
It seems a natural step that rankings and league tables themselves be subject
to similar scrutiny and be expected to provide open access to what is ‘under
the hood’. Transparency is not only about access to the data; it involves
detailed data definitions, complete access to the methodology and any sta-
tistical techniques, the data itself, the ability to search, filter and manipulate
the results, and the necessary health warnings highlighting appropriate use
and misuse along with potential confidence analysis. Arguably there is no
provider of global league tables currently providing the complete collec-
tion of information and tools required to represent ‘complete’ transparency.
In the case of the QS, this is not down to a philosophical or commercial
Times Higher Education World University Rankings – Times Higher Education and Thomson Reuters – Commercial/Media
Conclusion
In her closing remarks at the UNESCO Global Forum on Rankings and
Accountability in Higher Education, Stamenka Uvalic-Trumbic, the then Chief
of the UNESCO Section for Higher Education, laid out projections estimating
that global higher education will need to find space for almost 100 million
additional students by 2025. This is equivalent to opening a large compre-
hensive university every two weeks. In reality, the majority of that demand
will be met by increased capacity at existing universities and advancements
in online and distance learning.
Either way, there are some dramatically clear implications. For instance,
government funding can only go so far in absorbing these additional students.
More and more universities will escalate their fees. Some countries where fees
have not previously been in evidence are beginning to introduce them. This
is a harsh economic reality. Introducing substantial financial liabilities to the
decision-making process will inevitably influence the behaviour of prospective
students. They will begin to look more like customers, demanding a certain
level of service, expecting a solid return on investment and reacting to good
deals. There will be increasing pressure for them to make the best possible
decision. They will require information that is easy to access and understand in
order to at least sift the options to a manageable level. Global league tables are
pioneering this on an international scale. A range of new tools in development
by QS and other providers will serve to further augment the picture currently
being painted by their results.
While many criticisms of league tables may be valid, there can be little
question that they have been a key driver for transparency and account-
ability among institutions, and have paved the way for a rich and growing
References
HEFCE (Higher Education Funding Council for England). 2008. Counting What is Measured
or Measuring What Counts? League Tables and their Impact on Higher Education
Institutions in England. London: HEFCE.
IREG (International Observatory on Academic Ranking and Excellence). 2011. IREG Ranking Audit
(15 December 2011): www.ireg-observatory.org/index.php?option=com_content&task=vi
ew&id=187&Itemid=162 (Accessed 10 May 2012.)
OECD (Organisation for Economic Co-operation and Development). 2010. Education at a Glance.
Paris: OECD.
Sowter, B. 2011b. Financial Measures Tricky for International University Evaluations. (2 June 2011):
www.iu.qs.com/2011/06/02/financial-measures-tricky-for-international-university-
rankings/ (Accessed 10 May 2012.)
United Nations. 2004. World Population to 2300. New York: United Nations.
Implications and
Applications
Chapter 4
World-class universities
or world-class systems?
Rankings and higher
education policy choices
Ellen Hazelkorn
In today’s world, it has become all too familiar for policy-makers and higher
education leaders to identify and define their ambitions and strategies in
terms of a favourable global ranking for their university or universities. But is
it always a good thing for a university to rise up the rankings and break into
the top 100? How much do we really know and understand about rank-
ings and what they measure? Do rankings raise standards by encouraging
competition or do they undermine the broader mission of universities to
provide education? Can rankings measure the quality of higher education?
Should students use rankings to help them choose where to study? Should
rankings be used to help decide education policies and the allocation of
scarce resources? Are rankings an appropriate guide for employers to use
when recruiting new employees? Should higher education policies aim to
develop world-class universities or to make the system world-class?
This chapter discusses the rising attention accorded to global rankings and
their implications for higher education. It is divided into five sections: the
first explores the growing importance accorded to rankings; the second
discusses what rankings measure; the third asks whether rankings measure
what counts; and the fourth reflects on how the use and abuse of rank-
ings is influencing policy choices. Finally, the fifth section addresses a key
policy question: should governments focus on building the capacity of a few
world-class universities or on the capacity of the higher education system
as a whole, in other words, building a world-class higher education system?
• First is the rapid creation of new knowledge and its application,
which has become a foundation for individual and social prosperity, be
it cultural or economic. People who complete a high-school education
tend to enjoy better health and quality of life than those who finish at
the minimum leaving age. Those completing a university degree can
look forward to a significantly greater gross earnings premium over their
lifetime compared with someone who only completes secondary school.
Graduates are also more likely to be engaged with their community
and participate in civil society. Successful societies are those with the capacity to ensure their citizens have the knowledge and skills to contribute.
• Fourth, students (and their parents) have become very savvy consumers,
especially as evidence continues to show that graduate outcomes and
lifestyle are strongly correlated with education qualifications and career
opportunities. Students are now much more focused on employability as
opposed to employment. They assess their choice of an institution and
education programmes as an opportunity cost – balancing tuition fees and living costs against career and salary opportunities. As the traditional student market declines, competition for high-achieving
students is rising. The balance of consumer power is shifting in favour of
discerning talented students.
National rankings have existed in many countries, most notably the United
States, for decades. Since 2003, with the publication of the Shanghai Jiao
Tong Academic Ranking of World Universities (ARWU), global rankings have
become very popular. Knowledge about and use of rankings has continued
apace in the aftermath of the 2008 Global Financial Crisis (GFC), reflecting
the realization that in a global knowledge economy, national pre-eminence is
no longer enough. Today, rankings exist in every part of the world. There are
eleven global rankings – albeit some are more popular than others (see Box 1).
Over sixty countries have introduced national rankings, especially in emerging economies (Hazelkorn, 2012b), and there are a number of regional, specialist and professional rankings. While domestic undergraduate students and their parents were the initial target audience for rankings, today they are used by a myriad of stakeholders (e.g. governments and policy-makers; employers and industrial partners; sponsors, philanthropists and private investors; academic partners and academic organizations; the media and public opinion). Postgraduate students, especially those seeking to pursue a qualification in another country, are now the most common target audience and users.
Many national rankings, such as the US News and World Report Best Colleges rankings (USN&WR), measure student entry levels on the basis that high
entry scores are a proxy for academic quality. This is based on the view
that student grades can be used to predict future achievement, and hence,
more high-achieving students equate with higher quality. But as Hawkins
(2008) says, ‘many colleges recruit great students and then graduate
great students [but is] that because of the institution, or the students?’
International evidence repeatedly shows that student learning outcomes are shaped by far more than entry grades.
The faculty/student ratio also has very different meanings for public and private
institutions and systems, and may say more about the funding or efficiency
level. Class size in and of itself can be a hollow indicator especially when
used to measure the learning environment for high-achieving students.
Ultimately, the simplicity of the indicator does not tell us very much about what effect the faculty/student ratio has on actual teaching quality or the
student experience (Brittingham, 2011).
The level of expenditure or resources is often used as a proxy for the quality
of the learning environment. This is captured, inter alia, by the total amount
of the HEI budget or by the size of the library collection. USN&WR says that
‘generous per-student spending indicates that a college can offer a wide
variety of programmes and services’ (US News Staff, 2010); this is sometimes
interpreted as expenditure per student. For example, Aghion et al. (2007)
argue that there is a strong positive correlation between the university
budget per student and its research performance as demonstrated in the
ARWU ranking. However, many HEIs are competing on the basis of substan-
tial resources spent on dormitories, sports and leisure facilities, and so on; it
is not clear what impact these developments – worthy as they are – have on
the actual quality of the educational or learning experience. This approach
can also penalize ‘institutions that attempt to hold down their expenditures’
(Ehrenberg, 2005: 33) and it provides ‘little or no information about how
often and how beneficially students use these resources’ (Webster, 1986: 152).
For example, the costs associated with building a new library can be very significant for a developing country or a new HEI, and many institutions have switched to electronic access or to sharing resources with neighbouring institutions. There is a danger that looking simply at the budget ignores the
question of value vs. cost vs. efficiency (Badescu, 2010), and that the indica-
tor is essentially a measure of wealth (Carey, 2006b). Indeed, while many
policy observers look to the US, ‘if value for money is the most important
consideration, especially in an age of austerity, the American model might
well be the last one... [to] be emulating’ (Hotson, 2011).
Measuring research
There are other more consequential problems that arise from this method.
By focusing only on peer-reviewed articles in particular journals, this approach assumes that journal quality is equivalent to article quality. Articles may be cited because of their errors, not necessarily because they report a breakthrough. This has led to
the controversial practice of ranking academic journals (Hazelkorn, 2011b).
Measuring reputation
The real question is: can university presidents or any other stakeholders know enough about a wide range of other institutions around the world to score them fairly? In other words, rankings are a self-replicating mechanism that reinforces the position of already well-known universities rather than those that are excellent.
Policy choices
Since the arrival of global rankings, it is not uncommon for governments
to gauge national global competitiveness and positioning within the world-
order in terms of the rank of their universities, or to attribute national ambi-
tions to a position in the rankings. The ongoing global economic crisis has intensified this tendency.
The price tag to get one Nigerian university into the global top 200 is
put at NGN 5.7 billion [€31 m] annually for at least ten years
(National Universities Commission, Nigeria).
The world-class university has become the panacea for ensuring success
in the global economy, based on the characteristics of the top 20, 50 or
100 globally ranked universities. China, Finland, France, Germany, India,
Japan, Latvia, Malaysia, Russia, Singapore, South Korea, Spain, Taiwan
and Viet Nam – among many other countries – have launched initia-
tives to create world-class universities. Individual US states (e.g. Texas
and Kentucky) have similarly sought to build or boost flagship universi-
ties, elevating them to what is known as Tier One status, a reference
to USN&WR College Rankings. In contrast, countries such as Australia,
Ireland and Norway are emphasizing the importance of the system being
‘world class’.
Figure: five institutions (Institution 1–5) mapped against ten fields (Field 1–10), with institutional missions ranging from ‘PhDs and research intensive’ through ‘Masters and some research’ and ‘Baccalaureates and scholarship’ to ‘Diplomas and extension services’.
Source: Gavin Moodie, pers. comm. 7 June 2009.
Given this effect, many governments use rankings, inter alia, to classify and
accredit HEIs, allocate resources, drive change, assess student learning and
learning outcomes and/or evaluate faculty performance and productivity,
at the national and institutional level. They are used as an accountability or
transparency tool, especially in societies and institutions where such cultures and practices are weak or immature.
Many myths are promulgated about the value of rankings for policy-making
or strategic decision-making. But rankings should be used cautiously – and only as part of an overall quality assurance, assessment or benchmarking system, not as a stand-alone evaluation tool. Four examples will suffice:
2. Rankings provide good measures for research. Despite criticism about the
disproportionate focus on research, the choice of indicators is usually
considered meaningful or ‘plausible’. However, as discussed above, the
data primarily reflect basic research in the bio- and medical sciences.
As a consequence, some disciplines are valued as more important
than others, and research’s contribution to society and the economy
is seen primarily as something which occurs only within the academy.
In this way, rankings misrepresent the breadth and dynamism of the
research-innovation process and higher education’s role as part of the
innovation eco-system – what the European Union calls the ‘knowledge
triangle’ of education/learning, research/discovery and innovation/
engagement. This narrow conceptualization of research is helping to
drive a wedge between teaching and research at a time when policy-
makers and educators advocate the need for more research-informed
teaching (Hazelkorn, 2009).
Rankings are only one form of comparison; they are popular today because of
their simplicity. However, their indicators of success are misleading. Rather
than using definitions of excellence designed by others for other purposes,
what matters most is whether HEIs fulfil the purpose and functions that
governments and society want them to fulfil.
Adams, J. and Gurney, K. 2010. Funding Selectivity, Concentration and Excellence – How Good is the
UK’s Research? London: Higher Education Policy Institute: www.hepi.ac.uk/455-1793/
Funding-selectivity-concentration-and-excellence – how-good-is-the-UK’s-research.html
(Accessed 7 June 2010.)
Aghion, P., Dewatripont, M., Hoxby, C., Mas-Colell, A. and Sapir, A. 2007. Why reform Europe’s
universities? Bruegel Policy Brief, September (4): www.bruegel.org/uploads/tx_
btbbreugel/pbf_040907_universities.pdf (Accessed 30 April 2010.)
Altbach, P.G. 2006. The dilemmas of ranking. International Higher Education, 42: www.bc.edu/
bc_org/avp/soe/cihe/newsletter/Number42/p2_Altbach.htm (Accessed 4 April 2010.)
Altbach, P.G. 2012. The globalization of college and university rankings. Change, January/
February: 26–31.
Becher, T. and Trowler, P.R. 2001. Academic Tribes and Territories (2nd edn). Buckingham, UK:
SRHE/Open University Press.
Bremner, J., Haub, C., Lee, M., Mather, M. and Zuehlke, E. 2009. World Population Prospects: Key
Findings from PRB’s 2009 World Population Data Sheet. Washington DC: Population
Reference Bureau, United Nations: www.prb.org/pdf09/64.3highlights.pdf
(Accessed 26 April 2010.)
Carey, K. 2006b. College Rankings Reformed: The Case for a New Order in Higher Education,
Education Sector Reports, September.
Carr, K. 2009. Speech by Federal Minister for Innovation, Industry, Science and Research to
Universities Australia Higher Education Conference, Australia: http://minister.innovation.
gov.au/Carr/Pages/UNIVERSITIESAUSTRALIAHIGHEREDUCATIONCONFERENCE2009.
aspx (Accessed 30 March 2010.)
Dill, D.D. and Soo, M. 2005. Academic quality, league tables and public policy: a cross-national
analysis of university ranking systems. Higher Education, 49(4): 495–537.
Ehrenberg, R.G. 2005. Method or madness? Inside the U.S. News & World Report college rankings.
Journal of College Admission. Fall.
Hawkins, D. 2008. Commentary: Don’t Use SATs to Rank College Quality, CNN:
http://edition.cnn.com/2008/US/10/17/hawkins.tests/index.html
(Accessed 27 March 2010.)
Hazelkorn, E. 2011a. Rankings and the Reshaping of Higher Education: The Battle for World-Class
Excellence. Houndmills, Basingstoke: Palgrave Macmillan.
Hazelkorn, E. 2011b. The futility of ranking academic journals. The Chronicle of Higher Education.
(20 April): http://chronicle.com/blogs/worldwise/the-futility-of-ranking-academic-
journals/28553 (Accessed 20 April 2010.)
Hazelkorn, E. 2012b. Striving for excellence: rankings and emerging societies. D. Araya and
P. Marbert (eds) Emerging Societies. Routledge.
Hotson, H. 2011. Don’t look to the Ivy League. London Review of Books. (19 May).
Jones, D.A. 2009. Are graduation and retention rates the right measures? Chronicle of Higher
Education. (21 August): http://chronicle.com/blogPost/Are-GraduationRetention/7774/
(Accessed 30 March 2010.)
Kuh, G.D. and Pascarella, E.T. 2004. What does institution selectivity tell us about educational
quality? Change, September/October: 52–58.
Lawrence, J.K. and Green, K.C. 1980. A Question of Quality: The Higher Education Ratings Game.
Report No. 5. Washington DC: American Association of Higher Education.
Martin, M. and Sauvageot, C. 2011. Constructing an Indicator System or Scorecard for Higher
Education. A Practical Guide. Paris: UNESCO International Institute for Educational
Planning (IIEP).
Moed, H.F. 2006. Bibliometric Rankings of World Universities. Leiden, Netherlands: CWTS, University
of Leiden.
OECD (Organisation for Economic Co-operation and Development). 2010. Education at a Glance.
Paris: OECD.
Rauhvargers, A. 2011. Global University Rankings and Their Impact. Brussels: European University
Association.
Sadlak, J. and Liu, N.C. (eds). 2007a. The World-Class University and Ranking: Aiming Beyond Status.
Bucharest: Cluj University Press.
Saisana, M. and D’Hombres, B. 2008. Higher Education Rankings: Robustness Issues and Critical
Assessment. How Much Confidence Can We Have in Higher Education Rankings? Centre
for Research on Lifelong Learning (CRELL), European Communities, Luxembourg:
http://crell.jrc.ec.europa.eu/Publications/CRELL%20Research%20Papers/EUR23487.pdf
(Accessed 20 November 2010.)
Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses. Higher
Education Management and Policy, 19(2): 31–68.
Sheil, T. 2009. Moving Beyond University Rankings: Developing World Class University Systems.
Presentation to 3rd International Symposium on University Rankings, University of
Leiden.
Usher, A. 2006. Can our schools become world-class? Globe and Mail (30 October 2006):
www.theglobeandmail.com/servlet/story/RTGAM.20061030.URCworldclassp28/
BNStory/univreport06/home (Accessed 11 October 2009.)
Usher, A. 2012. The Times Higher Education Research Rankings (19 March 2012):
http://higheredstrategy.com/the-times-higher-education-research-rankings/
(Accessed 1 April 2010.)
Usher, A. and Medow, J. 2009. A global survey of university rankings and league tables. B.M. Kehm
and B. Stensaker (eds) University Rankings, Diversity, and the New Landscape of Higher
Education. Rotterdam: Sense Publishers, pp. 3–18.
Usher, A. and Savino, M. 2006. A World of Difference: A Global Survey of University League Tables.
Toronto: Educational Policy Institute: www.educationalpolicy.org/pdf/world-of-
difference-200602162.pdf (Accessed 19 January 2009.)
Usher, A. and Savino, M. 2007. A global survey of rankings and league tables. College and University
Ranking Systems – Global Perspectives American Challenges. Washington DC: Institute of
Higher Education Policy.
US News Staff. 2010. How U.S. news calculates the college rankings. US News and World Report
(17 August 2010): http://www.usnews.com/education/articles/2010/08/17/how-us-news-
calculates-the-college-rankings?PageNr=4 (Accessed 12 July 2010.)
van Raan, A.F.J. 2007. Challenges in the ranking of universities. J. Sadlak and N.C. Liu (eds) The
World-Class University and Ranking: Aiming Beyond Status. Bucharest: Cluj University
Press, pp. 87–12.
Webster, D.S.A. 1986. Academic Quality Rankings of American Colleges and Universities. Springfield,
Mass.: Charles C. Thomas.
• First, eighteen years ago the management guru Peter Drucker predicted
that in thirty years the big university campuses would be relics – yet twelve
years before his deadline many seem as vibrant as ever and few appear
to be on their last legs.
• Second – which may partly explain the continuing ebullience of the sector
– enrolment growth has been consistently underestimated, particularly where women are concerned. The desire for access to higher education among most of
the world’s population is stronger than ever.
• Third, when higher education was declared to be a tradable commodity
under the General Agreement on Trade in Services (GATS) a decade ago
there was panic in academia about imminent commercialization – yet most
of the world’s universities are still public institutions with an educational
ethos.
• Fourth, some observers claimed that today’s young students are a new
breed of digital natives who would create a generational divide in study
habits – yet recent research on thousands of students of all ages finds no
such divide.
• Fifth, the hype around the dotcom frenzy in 1999–2000 claimed that all
education would soon go online – yet to date it seems that universities
have absorbed the virtual world rather than allowing it to absorb them.
• Sixth, despite the efforts of some governments to restrict the research function to a small number of institutions, most universities continue to conduct
research and aspire to expand it.
This list appears to support the notion that higher education develops by
evolution rather than revolution. However, it is too soon to dismiss all these predictions.
In 1971 Richard Nixon asked the Chinese leader Zhou Enlai what he thought
had been the impact of the French revolution. Zhou replied that it was too
early to tell. Most commentators assumed that the leaders were referring
to the storming of the Bastille in 1789 and seized on the story as a tell-
ing illustration of China’s talent for long-term thinking. Nixon’s interpreter,
however, insists that Zhou was actually referring to the much more recent
1968 student uprising in France (les événements de mai 1968), which makes
more sense (McGregor, 2011).
When the University of Paris erupted in 1968 the protests inspired some
of the most memorable slogans and graffiti of the mid-twentieth century,
although the students were much less succinct about the reforms they actu-
ally sought. In 1971 it was indeed much too early for Zhou to provide Nixon
with an impact analysis. Four decades later, however, higher education has
changed considerably, although not in the ways that the students cam-
paigned for, nor as a result of their actions. That is because the two drivers of
change in contemporary higher education on which we shall focus, rankings
and online learning, were absent in 1968.
Online learning
The first driver of change is the internet and the online world that it has
created. A year after the Paris riots, when he launched the UK Open University
in 1969, the founding Chancellor, Lord Geoffrey Crowther, said that:
every new form of human communication will be examined to see how it can be used to raise and broaden the level of human understanding. (Northcott, 1976)
In his report 2011 Outlook for Online Learning and Distance Education, Bates
(2011) identified three key trends in US higher education. It is fair to assume
that other countries will follow similar paths as connectivity improves.
The first trend is the rapid growth of online learning. Enrolment in fully
online (distance) courses in the United States expanded by 21 per cent
between 2009 and 2010 compared to a 2 per cent expansion in campus-
based enrolments.
Bates’ second finding is that, despite this growth, institutional goals for
online learning in public sector higher education are short on ambition.
He argues that the intelligent use of technology could help higher educa-
tion to accommodate more students, improve learning outcomes, provide
more flexible access and do all this at less cost. Instead, he found that costs
are rising because investment in technology and staff is increasing with-
out replacing other activities. He found no evidence of improved learning outcomes, and some institutions are failing to meet best quality standards for online learning. In general, the traditional US public higher educa-
tion sector seems to have little heart for online learning. Many institutions
charge higher fees to online students, even though the costs of serving
them are presumably lower, suggesting that they would like to discourage
this development.
A third finding should stimulate the public sector to take the rapidly grow-
ing demand for online learning more seriously. The US for-profit sector
has a much higher proportion of the total online market (32 per cent)
compared to its share of the overall higher education market (7 per cent).
Seven of the ten US institutions with the highest online enrolments are
for-profits. For-profits are better placed to expand online because they
do not have to worry about resistance from academic staff, nor about recouping their earlier investments in campus facilities.
Rankings: compounding the problem
At a time when public universities should be getting organized to expand
quality online teaching in response to student demand, they are yielding to the temptation to expend energy and resources on gaining higher places in university rankings, which is probably a more congenial goal for most
university presidents and faculty.
The papers presented at UNESCO’s May 2011 Forum on Rankings, which are
the substance of this book, suggest that rankings have reached the stage,
both nationally and internationally, of encouraging a thousand flowers to
bloom. This may blunt the disruptive effect of rankings on higher education
systems because the emergence of rankings based on a wide range of criteria
helps different types of institutions within diverse higher education systems
compare themselves usefully with their peers.
Wildavsky cites Jamil Salmi’s book, The Challenge of Establishing World Class
Universities, which analyses what makes for a top university (Salmi, 2009).
Here again, the designation refers to only a tiny fraction of the world’s uni-
versities, but some countries are lavishing funds on favoured institutions
in a probably futile attempt to get them into the list of the top 100 – or
top 300 – research universities. Having identified the trend, Salmi is now sounding a warning note. He writes of Nine Common Errors when Building a New World-Class University and cautions those focusing on boosting one or two
institutions not to neglect ‘full alignment with the national tertiary educa-
tion strategy and to avoid distortions in resource allocation patterns within
the sector’ (Salmi, 2010).
This is happening at a time when the demand for higher learning is bur-
geoning in much of the world. Thirty per cent of the global population is
Sadly, however, universities are much less eager to be ranked on the quality
of their teaching than on the quality of their research. Between 1995 and
2004 the UK’s Quality Assurance Agency conducted assessments of teaching
quality, discipline by discipline, in all universities. A number of disciplines
were assessed nationwide each year on six dimensions, giving a maximum
score of 24 per discipline. The press was not slow to construct rankings based
on each university’s aggregate score and these evolved annually as more
disciplines were assessed. Table 1 shows the nine most highly ranked institu-
tions in 2004 when the teaching assessment process was terminated, alleg-
edly because the major research universities, unhappy with their standing
in this type of ranking, lobbied at the highest political level for its abolition.
The placing of the Open University just above Oxford is a testimony to the
potential of open, distance and technology-mediated learning to offer qual-
ity teaching and a sign of changing times.
Table 1. Britain’s top nine universities
1 Cambridge 96%
2 Loughborough 95%
3 London School of Economics 88%
4 York 88%
5 The Open University 87%
6 Oxford 86%
7 Imperial College 82%
8 University College London 77%
9 Essex 77%
Source: The Sunday Times University Guide 2004.
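One plausible way to read the percentages in Table 1 – an assumption here, since the guide’s exact method of compilation is not reproduced in this chapter – is as each university’s mean teaching-assessment score expressed as a share of the 24-point maximum:

\[
\text{score (\%)} = \frac{\bar{s}}{24}\times 100,
\qquad \bar{s} = \frac{1}{n}\sum_{i=1}^{n} s_i, \quad 0 \le s_i \le 24,
\]

where \(s_i\) is the score awarded to discipline \(i\) and \(n\) is the number of disciplines assessed at that institution. On this reading, Cambridge’s 96 per cent would correspond to an average of roughly \(0.96 \times 24 \approx 23\) points per assessed discipline.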
The United States is an extreme case, but since 1986 college fees there have
risen by 467 per cent compared to inflation of 107 per cent in the economy
overall. The impact of the post-2008 recession on US household incomes,
combined with public concern about the heavy debt burdens on students
and graduates, is finally putting downward pressure on fee levels.
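A rough back-of-the-envelope check – assuming the 467 per cent and 107 per cent figures are cumulative increases since 1986 – suggests that fees have risen by roughly 174 per cent relative to general prices:

\[
\frac{1 + 4.67}{1 + 1.07} - 1 = \frac{5.67}{2.07} - 1 \approx 1.74,
\]

that is, the real price of college has nearly tripled over the period.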
These American economists write only about the US experience, but the
principles and arguments they evoke have broad relevance. They situate
the higher education enterprise in the context of the wider economy and
make some careful comparisons with the evolution of prices in a range
of other industries over more than fifty years. In real terms the prices of
manufactures have gone down; those of many services, such as hairdress-
ing, have stayed roughly constant; whereas the prices of personal services
by professionals with high training requirements have risen in real terms.
They cite academics, dentists, horn players and stockbrokers as examples
in this last category.
It is not surprising that the price of dentistry rises by more than inflation
because, despite the use of increasingly sophisticated equipment, it remains
a personal service with little scope for automation. Horn players (as examples
of orchestral musicians) are a more debatable case. They are unquestionably
a rare and specialized breed, but their productivity has increased dramati-
cally in recent decades simply because most people now listen to horn play-
ers, with equal or greater enjoyment and at much lower cost, on iPods and
CDs instead of going to concert halls. The most interesting comparison is
with stockbrokers. Their prices went up more rapidly than those of higher
education until the 1980s and then fell steadily to a relatively much lower
level. This was because brokerage services went online, giving the individual
client much more control.
This evidence that, at any age, a good attitude to technology correlates with
good study habits is also important in giving the lie to the view that online
learning tends to trivialize learning. Instead, as we argued earlier, the intel-
ligent use of technology can improve the quality of learning.
Again, the United States is the best place to see trends emerging. Although
US tuition fees have risen faster than inflation for decades, there are signs
that the situation has reached a tipping point. The fees bubble will not
suddenly burst, but lower-cost alternatives to the current model of high-
fee programmes will steadily take market share. Already some US states
(e.g. Texas) are pressuring institutions to cut costs and fees, some major
public institutions (e.g. the University of California) are finally taking online
learning seriously, and models such as Best’s Academic Partnerships will
gain ground. The success of the Western Governors University, which was
viewed as a rather peculiar initiative when it was created in the late 1990s,
is an indicator of how things can change. This institution, which charges
fees of US$5,000 per annum and makes no demands on public funds, is
attracting increasing numbers of students. The for-profit sector has ample
room to cut fees and still make good profits. Currently this sector makes
high profits because it operates a lower-cost model of provision, but can
set fees comparable to those of the public sector with its higher cost base.
As the public sector starts to cut fees the for-profit sector will be able to
lead a downward trend.
What are the implications of an expanded role for the for-profit sector? In
countries where tax codes and charitable status are clearly defined the dis-
tinction between private for-profit and private not-for-profit provision is
easy to make. In most of the world, however, the distinction is not so clear
and we shall use the term ‘private’ to designate both types of institutions. All
private providers try to make a surplus and appear much the same on the
ground, especially in developing countries. The private sector can be either
homegrown or international. Note here that all providers, public or private,
become private for-profit providers once they spread their wings outside
their country of origin and offer programmes across borders. A public uni-
versity is a private, for-profit provider when operating in another country,
even though it may not initially repatriate its profits. Developing countries
sometimes claim that the ethical standards of public institutions operating
outside their home jurisdictions can be lower than those of avowedly com-
mercial providers.
But even without the impact of online learning, the growth of private
provision is essential to the expansion of postsecondary education in
many countries. No government can fund all the post-secondary educa-
tion its citizens want, so the choice is between a public-sector monopoly that provides inadequately and meeting the demand through
a diversity of public and private institutions. This is changing patterns of
provision.
The key question is whether the private sector can be regulated without
strangling it. Is it possible to develop some common principles of
accountability and transparency for all providers of higher education?
Quality assurance (QA) is a relatively recent concern in higher education in
some countries. The issue is whether public and private institutions should
be treated the same for QA purposes. Is the distinction between good and
bad simply the dichotomy of corporate or public ownership? Ownership is
important for the tax authorities but is not, in principle, relevant to quality.
There are good and bad actors in both the public and private sectors and there should be the
same quality thresholds for all.
Legitimate for-profit institutions welcome strong quality assurance frame-
works, but ask that they be applied fairly across the whole higher education
sector. Legitimate areas for regulation are the avoidance of excessive student
loans, ground rules for acquiring accredited institutions, and processes for
eliminating bad actors. The main plea is for a level playing field.
Some institutions already have policies that encourage the use of OER
so that each teacher does not have to re-invent the wheel in each of
their courses. Once academics in the Education Faculty at the Asia eUni-
versity in Malaysia have agreed on course curriculum outlines they do not
need to develop original learning materials – good-quality OER for all the topics they require are already on the Web, and they simply adapt them to
their precise needs. Likewise, Canada’s Athabasca University will not approve
development of a course until the proposing department has shown that it
has done a thorough search for relevant openly licensed material that can
be used as a starting point. But some would go much further. Paul Stacey
(2011), of Canada’s BC Campus, has outlined the concept of the ‘University of Open’. He points out that the combination of open source software, open
access publishing, open educational resources and the general trend to open
government creates the potential for a new paradigm in higher education. In
February 2011 the Open Education Resource Foundation convened a meeting
in New Zealand to operationalize the Open Educational Resource University,
a concept developed from this thinking.
The idea is that students find their own content as OER; get tutoring from a global network of volunteers; are assessed, for a fee, by a participating institution; and can thereby earn a credible credential.
As regards the first step in this ladder, open educational resources are
unquestionably being used. Literally millions of informal learners and stu-
dents are using the open educational resources put out by MIT, the UK Open
University and others to find better and clearer teaching than they are get-
ting in the universities where they are registered. Thirty-two small states of
the Commonwealth are working together within a network called the Virtual
University for Small States of the Commonwealth to develop open educational
resources that they can all adapt and use (Daniel and West, 2009).
The interest is considerable. The UKOU’s OpenLearn site has 11 million
users and hundreds of courses can be downloaded as interactive eBooks.
Furthermore, with 300,000 downloads per week, the UKOU alone accounts
for 10 per cent of all downloads from iTunesU. And we must not forget
the worldwide viewing audience of hundreds of millions for OU/BBC TV
programmes.
Martin Bean (2010), the UKOU vice-chancellor, argues that the task of
universities today is to provide paths or steps from this informal cloud
of learning towards formal study for those who wish to take them. Good
paths will provide continuity of technology because millions of people
around the world first encounter higher education institutions such as
the UKOU through iTunesU, YouTube, TV broadcasts or the resources on
various university websites. The thousands who then elect to enrol as stu-
dents in these institutions will find themselves studying in similar digital
environments.
What are the implications of this concept? The institutions best equipped
to make a success of the Open Education Resource University are probably
institutions in the public sector that already operate successfully in parts of
this space and award reputable credentials. Such institutions must also have
the right mindset. It would be difficult for a university that has put scarcity
at the centre of its business model suddenly to embrace openness. In the
coming years some universities will have to ask themselves whether they
can sustain a model based on high fees and restricted access as other parts
of the sector cut fees and widen access.
To examine how the OERU would work we can juxtapose Martin Bean’s
remark about leading learners step by step from the informal cloud of
learning to formal study with Jim Taylor’s representation of the steps in the
Open Educational Resource University. The first step, namely access to open
educational resource learning materials, is increasingly solid. The pool of OER
is growing fast and it is progressively easier to find and retrieve them. The
solidity of the top step, credible credentials, depends on the involvement of
existing, reputable, accredited institutions that resonate with this approach.
What about the three intermediate steps? For the first, student support,
distance-teaching institutions already have the skills necessary. They manage
extensive networks of tutors or mentors. SUNY’s Empire State College has
unique skills for this task given that students will often not be working with
material created by the institution, but with OER they have discovered for
themselves. Its unusual mentoring model is well suited to this.
James Taylor (2011), one of the leading planners of the OERU, envisages the
emergence of a body rather like Médecins sans Frontières or Engineers without
Borders, which he calls Academic Volunteers International. That may work
in some places, but having students buy support on a pay-as-you-go basis
would also work and might make for a more sustainable model. Furthermore,
social software is greatly enriching the possibilities for student support and
interaction. For example, the UKOU’s OpenLearn website is not just a reposi-
tory of OER, but also a hive of activity involving many groups of learners.
Digital technology is breathing new life into the notion of a community of
scholars, and social software gives students the opportunity to create aca-
demic communities that take us well beyond the rather behaviourist forms
of online learning that have given it a bad name. Some of this
social learning activity involves various forms of informal assessment that
can be most helpful in preparing students for the formal kind.
Conclusion
We have argued that higher education systems will experience major dis-
ruptions in the coming years. The major cause will be the growing demand
from students to learn online. Public-sector institutions are not responding
adequately to this trend, partly because the long tradition of faculty indi-
vidualism in teaching is not well suited to this mode of learning and partly
because many universities place greater priority on improving their research
rankings. This is creating a major opportunity for the private for-profit sector
to expand its role in teaching. The cost structure of online learning creates
a new business model that will put substantial downward pressure on fees,
causing further challenges to a public sector that has acquired the habit of
hiking fees faster than inflation. Some public institutions are fighting back
with a very low-cost model, the Open Educational Resource University,
although it is too early to judge its appeal to either students or institutions.
References
Archibald, R.B. and Feldman, D.H. 2010. Why Does College Cost So Much? Oxford: Oxford University
Press.
Bates, A.W. 2011. 2011 Outlook for Online Learning and Distance Education, Sudbury: Contact
North: http://search.contactnorth.ca/en/data/files/download/Jan2011/2011%20Outlook.
pdf (Accessed April 2012.)
Baumol, W.J. and Bowen, W.G. 1965. On the performing arts: the anatomy of their economic
problems, The American Economic Review, 55(1/2): 495–502.
Bean, M. 2010. Informal Learning: Friend or Foe? www.Cloudworks.ac.uk/cloud/view/2902
(Accessed April 2012.)
Bowen, W.G. 2011. Foreword, T. Walsh, Unlocking the Gates: How and Why Leading Universities are
Opening Up Access to Their Courses. Princeton, NJ: Princeton University Press, pp. vii–xvi.
Boyer, E. 1990. Scholarship Reconsidered: Priorities of the Professoriate. New York: Carnegie
Foundation for the Advancement of Teaching.
Carnegie Commission on Higher Education. 1968. Quality and Equality: New Levels of Federal
Responsibility for Higher Education. New York: McGraw-Hill.
Daniel, J.S. 2010. Mega-Schools, Technology and Teachers: Achieving Education for All. New York/
London: Routledge.
Daniel, J.S. and West, P. 2009. The virtual university for small states of the Commonwealth, Open
Learning: The Journal of Open and Distance Learning on OERs, 24(2): 67–76.
HEFCE (Higher Education Funding Council for England). 2011. Collaborate to Compete: Seizing
the Opportunity for Online Learning for UK Higher Education, Report of the Task Force on
Online Learning, London: www.hefce.ac.uk/pubs/hefce/2011/11_01/
(Accessed April 2012.)
Jones, C. and Hosein, A. 2010. Profiling university students’ use of technology: where is the
net generation divide? The International Journal of Technology Knowledge and Society,
6(3): 43–58.
McGregor, R. 2011. Zhou’s Cryptic Caution Lost in Translation, Financial Times (10 June 2011):
www.ft.com/intl/cms/s/0/74916db6-938d-11e0-922e-00144feab49a.html#axzz1jI1U7N91
(Accessed April 2012.)
Northcott, P. 1976. The Institute of Educational Technology, the Open University: structure and
operations, 1969-1975. Innovations in Education and Training International, 13(4): 11–24.
Prahalad, C.K. 2004. The Fortune at the Bottom of the Pyramid. Wharton School Publishing.
Salmi, J. 2009. The Challenge of Establishing World-Class Universities. Washington DC: World Bank.
Salmi, J. 2010. Nine Common Errors when Building a New World-Class University, Inside Higher
Education, CIHE, August.
Stacey, P. 2011. Musings on the EdTech frontier: the University of Open: http://edtechfrontier.
com/2011/01/04/the-university-of-open/ (Accessed April 2012.)
Taylor, J. 2011. Towards an OER University: Free Learning for All Students Worldwide:
http://wikieducator.org/Towards_an_OER_university:_Free_learning_for_all_students_
worldwide
Walsh, T. 2011. Unlocking the Gates: How and Why Leading Universities are Opening Up Access to Their Courses. Princeton, NJ: Princeton University Press.
Wildavsky, B. 2010. The Great Brain Race: How Global Universities are Reshaping the World. Princeton, NJ: Princeton University Press.
Twenty-five years ago there were almost no – formal – rankings. Of course, there
were national statistics about, and well-established hierarchies within, most
national higher education systems – and, less categorically, informal ratings of
leading universities in a global (or, at any rate, North Atlantic) context. These
hierarchies have now been made explicit by what is best described as a ‘rankings
industry’ in which newspapers and other publishers, national policy-makers and
institutional leaders are all complicit – and which opponents, whether tradition-
alists who believe in the ‘privacy’ of universities or radicals who believe rankings
privilege the already privileged, have been largely powerless to resist (Brown,
2006). In the process traditional hierarchies have been reinforced, as these radicals fear. But, as traditionalists fear, they have also been subverted by the exponential growth of rankings that emphasize different aspects
of performance and rely on different methodologies. Rankings are certainly ‘fun’
– unlike funding perhaps, the other big topic that agitates higher education
policy-makers (Altbach, 2010). But it is only recently that they have become the
object of serious attention and attempts have been made to identify their origins
and drivers, their impact on national policy-making and institutional manage-
ment, and their evolution and typology (Dill and Soo, 2005; Hazelkorn, 2011;
Sadlak and Liu, 2007; Salmi and Saroyan, 2007).
• A second set of, closely related, reasons concerns more profoundly the phenomenon of mass higher education of which wider participation is
only one aspect (Scott, 1995). In mass systems the balance shifts from the
‘private world’ of science, scholarship and elite undergraduate education
to the ‘public world’ of more pervasive social engagement, not simply in
terms of who is considered eligible to benefit from higher education (mass
participation), but also of more accessible forms of knowledge production
(impact, advocacy and multiple forms of translation and application)
(Gibbons et al., 1994). This makes higher education everyone’s business.
• A third set of reasons reflects the erosion of those qualities of trust and
hierarchy that characterized those elite university systems. Paradoxically
perhaps, it is the erosion of the formerly near-unchallengeable consensus
about which were the ‘best’ (and, by extension, the ‘less good’) universities
that has fuelled the appetite for formal rankings. Read in this way, rankings
are related to unease about standards, to volatility of missions, and to the
potential at any rate of establishing new criteria of quality and excellence. It may not be a coincidence that their rise in popularity has coincided with the development of the idea of the ‘risk society’.
• A fourth set of reasons concerns the global trend, although much stronger in
some countries than others, to promote the ‘market’ in higher education at
the expense of older notions of public service, social purpose or academic
solidarity. One effect of the market has been to encourage greater competition
among, between and within universities; another has been to place greater
emphasis on marketing techniques, including ‘playing’ the league tables.
• A final set of, admittedly, more speculative reasons may even hint at
suggestive links between university rankings and ‘celebrity culture’. The
development of rankings – in health outcomes, environmental impacts,
customer satisfactions and (almost) everything else in addition to higher
education – ostensibly designed to drive up performance disconcertingly
mirrors the obsession with ‘winners’ and ‘losers’, through merit or chance,
in the mass media.
National policy-making
1. The first is that, the greater the international reach of a university, the
more its ‘products’ – whether highly skilled graduates or research out-
puts – become globally available, so undermining any purely national
advantage. Global or ‘world-leading’ universities belong by definition
to everyone; the scientific and social capital they generate cannot easily
be monopolized. The advantages conferred on a nation by having more
than its ‘share’ of such universities probably contribute more to national
socio-political prestige than to economic effectiveness. In other words it
is a reputational game.
The first real league tables were created in the late 1970s and 1980s by journal-
ists and without sanction from the leaders of higher education systems or
institutions. Often the higher education establishment was actively opposed to
these embryonic ‘league tables’, which were regarded as a vulgar intrusion into
their, ideally private, affairs. The few academics who helped with the construc-
tion of these early rankings were treated as, at best, mavericks and, at worst, as
collaborators. During this first phase rankings took two main forms:
• The first were in effect a version of the public opinion polls that, not
coincidentally, were becoming more generally popular at the same time.
The methodology was to survey the, inevitably subjective, opinions of
heads of department and other subject specialists, secondary school
teachers and the like. There was little attempt to collect more objective
or empirical data. In essence the compilers of league tables were merely
attempting to capture the tacit knowledge about ‘reputation’ and silent
hierarchies about the performance of universities that already existed.
• The second form taken by rankings was, in effect, a response to the increasing
transparency of policy instruments: funding allocations initially and
predominantly, but also the first tentative measurements of institutional
position and performance on which more formulaic funding allocations
depended, and, of course, quality assessment indicators (although assessing
threshold rather than comparative quality was of limited use to the
compilers of rankings). Newspapers reported these new policy instruments,
collating their results into simple ‘lists’. It is possible to regard these ‘lists’ as
one aspect of the emergence of systems of higher education that implied
some ordering of the institutions these systems comprised.
In the second phase the first of these trends, the capture of tacit knowledge
and the codification of silent hierarchies that already existed, became less
important. In that sense rankings ceased to be – however remotely and
perversely – collegial in their sources. ‘Opinion polls’ among deans and
heads of department, or among secondary school teachers, disappeared or
became only minor elements in the more elaborate rankings that began to
appear in the 1990s. The main reason for this change was the availability of
other data that apparently were more ‘objective’. These data still included
the results of peer-review processes. But these were now mediated through
formal systems of assessment of both teaching and, especially, research.
These trends provided the raw material for the construction of new kinds of
ranking. Firstly, explicit policy frameworks were established covering most or
all types of higher education institutions, not simply universities. Because of
their scale and breadth, these frameworks could no longer rely on informal
knowledge networks but required ever more elaborate and transparent systems.
Secondly, there was an inexorable shift from transparency to accountability.
Even without the elaborate assessment regimes that accompanied the
growth of the so-called ‘audit society’, transparent policy instruments made
it possible to make comparisons and create lists. Finally, the size, complexity
and heterogeneity of higher education institutions required the development of increasingly sophisticated management information systems.
In the third and contemporary phase, all these trends – transparent policy
instruments, accountability regimes and management information systems –
have intensified. As a result the source material for constructing rankings
has become even more wide-ranging. But these have been supplemented by
two phenomena that are closely linked:
• The first is the shift towards the market. Tuition fees have been introduced
and increased, usually justified by the alleged need to share costs
between taxpayers and users of higher education. Competition between
institutions has been positively encouraged, as has greater differentiation
between missions. Higher fees have led to pressure on institutions to
publish more detailed ‘public information’ (to assist student choice), while
greater competition has encouraged them to devote more attention and
resources to marketing (both the advertising of academic ‘products’ and
more active ‘brand’ management). However, it is important to recognize
that in most countries this shift towards the market has been politically
mandated, so there has been no decline in assessment and accountability
systems that grew up in the context of the coordination of public systems
of higher education.
Critiques of rankings
The rise of rankings has not gone unchallenged. Earlier criticisms that rank-
ings were ‘intrusive’ have become less persuasive. The scale and cost of mass
systems, and their engagement in processes of social change and economic
development, have made it impossible any longer to regard higher educa-
tion as a ‘private’ domain ruled by scientists, scholars and teachers. At the
same time the pressure on all kinds of organization, private as well as public,
to demonstrate their accountability to those who buy their products or use
their services has increased, despite doubts about the long-term impact of
this ‘audit culture’ on the independence of civil society institutions from both
the state and the market.
• Firstly, most of the data on which rankings rely are not collected for the
purposes of compiling rankings. Often they are linked to the distribution of
resources. In other words their essence is not the crude ‘scores’ – the only
element used in rankings – but the complex algorithms used to allocate
resources. The same thing happens at the institutional level. For example, in
the case of research assessment in the United Kingdom, some universities
seek to maximize reputation and others income.
• Secondly, input data are easier to obtain and are probably more reliable
than outcome measures, some of which are under at least the partial
control of the institutions that are being ranked. Inputs and outcomes are
clearly linked. More generously funded universities are not only able to
offer their students a high-quality experience, they also tend to be the most
privileged in terms of their historical prestige and student profiles. As a
result employment rates among their graduates are likely to be superior.
• Thirdly, the most relevant data are rarely the most auditable and
verifiable. Journalists, in particular, are used to ‘going with what we’ve
got’. The shift from active journalism – for example, by organizing opinion
polls among academic leaders – to passive reporting of published data has reinforced this tendency.
Principles
The first question is generally dismissed on the grounds that higher educa-
tion cannot escape some measure of accountability, and for such account-
ability to be effective there must be appropriate tools. However, a number
of issues need to be addressed before this first question can be dismissed out of hand. One is the precise degree of accountability. Open societies
depend on vigorous and independent intermediary institutions, such as
universities. Democratic states, therefore, must set limits to the extent of
accountability, even when it is carried out to ensure that the ‘will of the
people’ is respected. In other words, surveillance can be excessive – espe-
cially so in the case of institutions such as universities that embody values
of critical enquiry and open-ended research. A second issue is the nature
of accountability. While it may be necessary to measure performance, it
may nevertheless be undesirable to rank it according to an absolute scale.
The third question is the most difficult of all. To produce valid comparisons,
which excite newspaper readers or inform higher education applicants,
institutions must be ranked according to common criteria. Rankings cannot be produced from a mass of incommensurable data. But to be accurate (and fair) rankings must take into account the different missions of institutions, especially within mass systems enrolling up to, and in excess of, half
the relevant age population. The criteria based on the Berlin Principles
include the need to take account of these different missions and goals,
acknowledgement of ‘linguistic, cultural, economic and historical contexts’
and recognition that quality is multi-dimensional and multi-perspective.
The U-Map approach is not without its difficulties. On the one hand, it
adopts some of the same success criteria as the Shanghai and THE rank-
ings: research performance and international orientation. Within the cur-
rently dominant paradigm of the knowledge society and the free-market
economy there is perhaps no alternative, although the internal contradic-
tions of that paradigm (not simply in terms of equity but also of efficiency)
and the challenges to it posed by new social movements may soon create
a space in which alternatives can be conceived. On the other hand, U-Map
does not produce ‘lists’ that can be translated into newspaper headlines,
‘best buy’ guides or political initiatives to boost the number of ‘world
leading’ universities. Nevertheless, U-Map offers a more sophisticated
and nuanced approach to assessing success in modern higher education
systems composed of institutions with multiple missions and of different
types of institution.
Brown, R. 2006. League tables – do we have to live with them? Perspectives, 10(2): 33–38.
Dill, D. and Soo, M. 2005. Academic quality, league tables and public policy: a cross-national analysis of university ranking systems. Higher Education, 49(4): 495–533.
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. and Trow, M. 1994. The New
Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies.
London: Sage.
Hazelkorn, E. 2011. Rankings and the Reshaping of Higher Education: The Battle for World-Class
Excellence. London: Palgrave Macmillan.
IREG (International Rankings Expert Group – Observatory on Academic Ranking and Excellence). 2010. IREG-Audit: Purpose, Criteria and Procedure. Brussels: IREG: www.ireg-observatory.org (Accessed 1 November 2012.)
King, R. and Locke, W. 2008. Counting What is Measured or Measuring What is Counted? League
Tables and their Impact on Higher Education Institutions in England (Issues Paper):
Bristol: Higher Education Funding Council for England.
Lundby, K. (ed.). 2009. Mediatization: Concept, Change and Consequences. New York: Peter Lang.
Rauhvargers, A. 2011. Global University Rankings and their Impact. Brussels: European University
Association.
Sadlak, J. and Liu, N.C. 2007. Introducing the topic: expectations and realities of world-class
university status and rankings practices, J. Sadlak and N.C. Liu (eds) The World-Class
University and Rankings: Aiming Beyond Status. Bucharest: Cluj University Press.
Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses, Higher
Education Management and Policy, 19.2: 31–68.
Scott, P. 1995. The Meanings of Mass Higher Education. Buckingham: Open University Press.
van Vught, F. 2009. Mapping the Higher Education Landscape: Towards a European Classification of
Higher Education. Dordrecht, Netherlands: Springer.
van Vught, F., Kaiser, F., File, J., Gaethgens, C., Peter, R. and Westerheijden, D. 2010. U-Map: the
European Classification of Higher Education Institutions. Enschede: Centre for Higher
Education Policy Studies (University of Twente): www.u-map.eu
(Accessed 1 November 2012.)
Rankings, new
accountability tools and
quality assurance
Judith Eaton
Quality assurance and accreditation in higher education have experienced a
major expansion over the past twenty years.1 This growth has accompanied the
expansion or ‘massification’ of higher education as more and more countries
around the world focus on access to colleges and universities as vital to the
future success of societies.
Until recently, quality assurance efforts around the world shared a small number
of key characteristics. They focused either on entire colleges or universities or on
specific programmes such as engineering or law or medicine. Quality assurance
is peer-based, with academics reviewing academics, relying on self-evaluation. It
is standards-based, with academics setting expectations of institutional or pro-
gramme performance. It is, in general, a formative evaluation, concentrating on
how to strengthen academic performance rather than making an up-or-down
judgment. Standards are primarily qualitative, calling for professional judgment.
It is also evidence-based, with institutions or programmes expected to provide
information about the extent to which standards are met.
During the past ten years, however, the demands on higher education to provide
evidence of quality have gone beyond these familiar characteristics. Increasingly,
the focus has been on direct accountability to the public, with diminishing
investment in relying exclusively on an academically driven, peer-based system.
Societies are increasingly reluctant to rely only on the professionals in the field
to make judgments about quality. This is accompanied by diminished interest in
quality improvement.
1 There are many definitions of these terms. For the purposes here, they refer to external quality review
of higher education that typically involves self- and peer review culminating in judgments about both
threshold quality and needed improvement.
Rankings are one of these new accountability tools – the most prominent
and the most controversial. ‘Rankings’ refer to a hierarchical ordering and
comparing of the performance, effectiveness and characteristics of higher
education institutions based on specifically chosen indicators. The indica-
tors may differ significantly and can include, for example, research, funding,
endowment and student characteristics. They are the means by which, for
instance, US News and World Report or the Shanghai Rankings determine
the rank at which institutions find themselves. More than fifty countries now
use rankings, alongside some ten international and several regional rank-
ings. The number of ranking systems is likely to increase, expanding to larger
numbers of countries and regions and, according to some experts, becoming
a standard worldwide accountability tool. Ranking systems may be devel-
oped by governments, private bodies or the media.
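To illustrate how ‘specifically chosen indicators’ become a hierarchical ordering, the short sketch below computes a weighted composite score and sorts institutions by it. The institutions, indicator names and weights are invented for illustration; this is a generic sketch, not the method of US News and World Report, the Shanghai Rankings or any other actual ranking system.

# Generic sketch: producing a hierarchical ordering from chosen indicators.
# Institutions, indicators and weights below are hypothetical.
indicators = {
    'Univ A': {'research': 82.0, 'funding': 65.0, 'student_staff': 70.0},
    'Univ B': {'research': 74.0, 'funding': 90.0, 'student_staff': 60.0},
    'Univ C': {'research': 68.0, 'funding': 72.0, 'student_staff': 88.0},
}
weights = {'research': 0.5, 'funding': 0.3, 'student_staff': 0.2}

def composite(scores):
    """Weighted sum of (already normalized) indicator scores."""
    return sum(weights[name] * value for name, value in scores.items())

# Rank institutions from highest to lowest composite score.
ranked = sorted(indicators.items(), key=lambda item: composite(item[1]), reverse=True)
for position, (name, scores) in enumerate(ranked, start=1):
    print(f'{position}. {name}: {composite(scores):.1f}')

In practice, ranking bodies differ precisely in which indicators they select, how they normalize them and what weights they assign, which is one reason the same institution can occupy very different positions in different league tables.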
International standards for student learning are the newest of the exter-
nal accountability tools. The Organisation for Economic Co-operation and
Development has been developing international indicators of student
achievement, the Assessment of Higher Education Learning Outcomes (AHELO),
which may serve as a prototype for standardizing international judgment
of what students learn and can do.
While these tools have not been developed within accreditation and higher
education, there is, nonetheless, considerable discussion and debate sur-
rounding them. The Council for Higher Education Accreditation (CHEA), a
non-governmental, national institutional membership organization of 3,000
degree-granting colleges and universities, has worked over the past several
years to bring US colleagues together with academics from a range of other
countries to explore the new accountability tools and the work of quality
assurance in other countries. Accrediting organizations themselves have
been part of many international discussions with regard to, for example,
rankings and qualifications frameworks.
That these tools are emerging from the business or governmental sector is
not, however, discouraging to the public, which is using them with increasing frequency.
Others have opted for more of an adaptive role for accreditation and qual-
ity assurance. They acknowledge that the calls for public accountability –
evidence of student learning and greater transparency both for institutions
and quality assurance – require a robust response in today’s world. A key
question here is whether to accommodate the new tools or to address public
accountability through traditional processes augmented by, for example,
greater transparency and additional attention to student learning. At the
same time, a strong commitment to the desirable elements of the traditional approach is retained.
Yet others working in higher education have engaged with the tools, seeing
them as useful additions to traditional practice. For example, in the United
States, the Lumina Foundation’s Degree Qualifications Profile, mentioned
above, is a framework to develop and judge expectations of what students
are learning at the baccalaureate, masters and doctoral levels. The Profile
has attracted some accrediting organizations, associations and institutions
that are piloting its application to address student learning and account-
ability. Europe, as indicated above, has developed both regional and national
frameworks, as have a number of other countries. These frameworks are
addressed either as part of ongoing quality assurance practice or in tandem
with traditional efforts. Governments often prescribe them.
Summary
The likelihood of these external accountability tools, especially rankings, dis-
appearing is low. There is much that the public now wants to routinely know
about higher education compared with the past. Society now believes that it is entitled to this information.
Whatever the response to the new accountability tools and their implications
for traditional quality assurance and accreditation, future conversations will
centre on the role of traditional practices and these tools:
• What are various ways in which traditional quality assurance and new
accountability tools may be aligned to enhance service to students and
society?
The new accountability tools have already had a profound impact on higher
education. Their impact will continue to be felt as these conversations take
place and lead to additional change within colleges and universities.
References
Altbach, P.G., Reisberg, L. and Rumbley, L.E. 2009. Trends in Global Higher Education: Tracking an
Academic Revolution. A Report Prepared for the UNESCO 2009 World Conference on
Higher Education. Chestnut Hill, MA: Boston College Center for International Higher
Education.
ENQA (European Association for Quality Assurance in Higher Education). 2011, March 4.
ENQA position paper on transparency tools.
Hazelkorn, E. 2011, March 13. Questions abound as the college-rankings race goes global.
The Chronicle of Higher Education: http://chronicle.com/article/Questions-Abound-as-
the/12669/ (Accessed 1 November 2012.)
Lewis, R. 2009. Qualifications Frameworks. Presentation to the Council for Higher Education
Accreditation 2009 International Seminar: www.chea.org/Research/index.asp
(Accessed 1 November 2012.)
OECD (Organisation for Economic Co-operation and Development). n.d. Testing Student and
University Performance Globally: OECD’s AHELO: www.oecd.org
United States Department of Education. National Center for Education Statistics. College
navigator: http://nces.ed.gov.collegenavigator/ (Accessed 1 November 2012.)
International
Perspectives
Chapter 8
An African perspective
on rankings in
higher education
Peter A. Okebukola
Introduction
The African higher education system has grown significantly over the past
twenty years in response to demand for admission spaces by secondary
school leavers. From about 700 universities, polytechnics, colleges of educa-
tion and other post-secondary institutions classified within the higher edu-
cation group in the early 1990s, the system now has well over 2,300 such
institutions. The growth of the system with respect to enrolment is judged to
be one of the fastest in the world (UIS, 2010).
Since the 1960s, ranking of universities in Africa has been conjectural rather
than empirical. Two indicators have typically featured. These are the age of
the institution and employers’ perceptions of the quality of graduates. As
reported by Taiwo (1981), in the minds of Kenyans, the University of Nairobi
(established 1956) should be better in quality of training than Kenyatta
University (established in 1965). The same order of ranking emerges when
employers rank these universities on the assumption that graduates of
the University of Nairobi should be better than graduates of other universities
in Kenya. Nairobi graduates may have been tried and tested and adjudged accordingly.
From the early 2000s, conjectural ranking began to yield to the empirical. Global
rankings provided a template for more transparent and more objective data
collection, analysis and reporting. They also provided a menu of indicators that
can be adapted or adopted for local context. The first Times Higher Education
ranking in 2004, which showed that the big names of Africa’s conjectural
ranking were not listed in the Times league tables,
jolted stakeholders. Governments, university managers, students and parents
reacted angrily. The call to improve quality and hence global ranking was thick
in the air. This call has persisted and has been a major driver for improving
the delivery of higher education in the region. The next section of this chapter
presents a national example of ranking of higher education institutions, while
the section that follows this describes a regional rating scheme. The concluding
section positions Africa within a global context and suggests ways by which
African universities can achieve better ranking on global league tables.
4. Foreign content (staff): proportion of the academic staff of the university who
are non-Nigerians: This indicator is designed to measure how well the
university is able to attract expatriate staff. The indicator is important
in a globalizing world and within the context of a university being a
universal institution where academic staff from all over the world are free
to work.
5. Foreign content (students): proportion of the students of the university who are
non-Nigerians: This indicator measures how well the university is able to
attract foreign students. As stated for the staff component, the indicator
is important in a globalizing world and within the context of a university
being a universal institution where students from all over the world are
free to enrol. It is derived by dividing the number of non-Nigerian
students in the university by the total number of students and expressing
the result as a percentage.
10. Ph.D. graduate output for the year: This indicator combines the postgradu-
ate standing of a university with the internal efficiency of postgraduate
education. It is computed by dividing the number of PhDs graduated in
2004 by the total number of postgraduate students in that year and mul-
tiplying by 100 (these computations are restated as formulas below).
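Restated compactly, the computations described above take the following form. The staff-side formula is inferred by analogy with the student indicator, since the wording of item 4 is truncated here, and should be read as an assumption:

\[
\text{Foreign content (staff)} = \frac{\text{number of non-Nigerian academic staff}}{\text{total academic staff}} \times 100
\]
\[
\text{Foreign content (students)} = \frac{\text{number of non-Nigerian students}}{\text{total students}} \times 100
\]
\[
\text{Ph.D. graduate output} = \frac{\text{number of PhDs graduated in the year}}{\text{total postgraduate enrolment in that year}} \times 100
\]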
By 2009, the NUC indicated its intention to revise the indicators for ranking
as shown in Table 1 (Okebukola, 2010).
Unique
1 Percentage of academic programmes of the university with full accreditation status
2 Proportion of academic staff of the university at full professorial level
Standard                                               No. of rating items
1  Institutional governance and management             9
2  Infrastructure                                      8
3  Finance                                             7
4  Teaching and learning                               8
5  Research, publications and innovations              8
6  Community/societal engagement                       8
7  Programme planning and management                   8
8  Curriculum development                              7
9  Teaching and learning (in relation to curriculum)   7
10 Assessment                                          6
11 Programme results                                   4
The AQRM was piloted in 2010 in institutions that fall within Regional
Economic Communities (RECs). An AQRM survey questionnaire, an eighty-
item instrument with fifteen parts, was used for collecting data. Items
in parts 1 to 13 cover demographic features of institutions and detailed
data on students, staff, facilities and processes. Part 14 is in two parts, the
first of which requires self-rating of faculty/college characteristics such
as management, infrastructure, recruitment, admission and selection,
research output, learning materials, curriculum and assessment. The
second requires that the programmes of the institution be ranked from
first to fifth. Part 15 is the institutional self-rating. The entire institution
is to be rated on a three-point scale covering excellent performance,
satisfactory performance and unsatisfactory performance on the eleven
clusters of standards.
Congruence between AQRM and three global rankings can also be seen in the
indicators. Four of the clusters of indicators that are common to the three
global rankings feature directly or indirectly in AQRM. These are research and
publications, teaching and learning, infrastructure and community/social
engagement. The other two – governance and management, and finances –
are not directly measured by the global rankings. The programme-level
criteria of AQRM do not directly match the indicator clusters of ARWU,
THE and Webometrics, except of course for teaching and learning, which
is indirectly related. It is safe to assume, therefore, that the measures on
AQRM are proximal to the measures on the three global ranking schemes.
It can be predicted that a well-performing institution on AQRM will have a
respectable rank on global league tables if a rigorous verification process is
applied to the AQRM methodology.
Some details of the Nigerian survey showed that over 68 per cent of students
seeking admission into Nigerian universities between 2003 and 2006 and
The effect of ranking has been largely positive. A striking effect is the
improvement in funding to universities to improve facilities for teaching,
learning and research. In Nigeria, this increase is over 30 per cent over a ten-
year period. In Africa, national quality assurance agencies have increased in
number from ten in 2003 to twenty-two in 2012 in response among other
things to the desire to improve quality and hence bolster global ranking or
regional rating. The third effect is the slow but steady increase in the quality
of delivery of higher education in the region. While this improvement cannot
be attributed solely to ranking, it is obvious that the competition induced by
ranking is spurring efforts at improving quality.
One of the major influencing factors for education policies is the national
vision. In 2009, Nigeria signed on to Vision 20-2020 with the aspiration
to be one of the twenty leading world economies by 2020. Botswana,
Ghana, Kenya, Lesotho, Namibia and South Africa are other examples
where national visions are set to guide development. The common thread
through these national vision statements is ranking: the ambition to
emerge among the top-ranked economies by a set target date. Consequently,
education policies deriving from these visions look to another form of
ranking – of universities.
As stated earlier, this chapter looks at three of the major global ranking
schemes: the Academic Ranking of World Universities (ARWU), Times Higher
Education ranking and Webometrics ranking. The Academic Ranking of
World Universities focuses on academic or research performance (Liu, 2011).
Ranking indicators include alumni and staff winning Nobel Prizes and Fields
Medals, highly cited researchers in twenty-one broad subject categories,
articles published in Nature and Science, articles indexed in Science Citation
Index-Expanded and Social Science Citation Index, and academic perfor-
mance with respect to the size of an institution.
1. Size: refers to the number of pages recovered from four engines (Google,
Yahoo!, Live Search and Exalead).
4. Scholar: Google Scholar provides the number of papers and citations for
each academic domain; these results from the Scholar database repre-
sent papers, reports and other academic items.
One of the stiffest indicators for African universities on the ARWU scheme
is alumni and staff winning Nobel Prizes and Fields Medals. The environ-
ment for conducting groundbreaking research is largely lacking, hence
steps should be taken to elevate the facilities, especially research labora-
tories, to a level that will permit contributions with the potential to win a
Nobel Prize. While facilities represent one side of the coin, the other side is
the capacity of African researchers to undertake top-quality research and
sustain this over time, as is often characteristic of Nobel-winning studies.
Significant efforts should be invested in capacity building of researchers
and fostering partnerships with renowned researchers outside Africa.
Tutelage under Nobel Prize winners is another pathway. Training graduates
from African higher education institutions under the wings of Nobel Prize
winners will foster cultivation of the research methodologies, attitudes and
values needed to be a prize winner. The Association of African Universities
(AAU) and national quality assurance agencies need to undertake a study
of the institutional location of Nobel prizewinners and seek partnership
with such institutions and centres where the laureates are serving. Bright
graduates, preferably first-class degree holders, can be carefully selected
to undertake postgraduate education in such centres. We should begin
to phase out the vogue of partnerships with little-known universities and
focus instead on one or two outstanding universities and programmes where
Nobel Prize winners serve.
African higher education institutions can improve their scores on the diver-
sity indicator by improving salaries and work environments with the aim
of attracting international staff. In a market-driven
economy, international staff gravitate to where the maximum
benefit can be derived in terms of salary and other conditions of service.
Salaries of university staff should be made internationally competitive.
Work environments including facilities for quality teaching and research
should be significantly improved. Special accommodation facilities should
be provided with due attention paid to security and regular supply of water
and electricity.
• Contents – create: A large web presence is made possible only with the
effort of a large group of authors. The best way to ensure this is to
allow a large proportion of staff, researchers or graduate students to
be potential authors. A distributed system of authoring can operate at
several levels:
Language, especially English: The Web audience is truly global, so you should
not think locally. Language versions, especially in English, are mandatory
not only for the main pages, but also for selected sections and especially
for scientific documents.
Rich and media files: Although html is the standard format of webpages,
sometimes it is better to use rich file formats like Adobe Acrobat pdf or MS
Word doc, as they allow a better distribution of documents. PostScript is
a popular format in certain areas (physics, engineering, mathematics), but
it can be difficult to open, so it is recommended to provide an alternative
version in pdf format.
Conclusion
This chapter reviewed developments in higher education ranking in Africa
with a special focus on Nigeria and the African Quality Rating Mechanism
(AQRM). It examined the potential impact of ranking on improving the
quality of delivery of university education. It highlighted the efforts made
for a regional rating of higher education institutions. Some suggestions are
provided on how African higher education institutions can take steps to
improve their ranking on global league tables.
In the early 1960s, opportunities existed for students and teachers to cross
national boundaries within Africa to participate in teaching, learning and
research. The University of Ibadan in Nigeria and the University of Ghana,
both in West Africa, had active collaboration in teaching and research with
universities in East and Central Africa. Between 1970 and 1984, there was a
sizeable traffic of students from the University of Nairobi, the University of
Tanzania and the University of Cameroun to the University of Ibadan, espe-
cially for postgraduate degrees. Teachers in these institutions collaborated
actively in research. Universities in francophone Africa, especially in Cote
d’Ivoire, Mali and Senegal, have a fairly long history of collaboration in teach-
ing and research. These interactions did not exist within a formal regional framework.
References
Aguillo, L. 2008. Webometric Ranking of World Universities: Introduction, methodology, and future
developments. Madrid: Webometrics Lab.
AU (African Union Commission). 2008. Developing an African Quality Rating Mechanism. Addis
Ababa: AU Press.
Federal Government of Nigeria. 2006. National Policy on Education. Lagos: NERDC Press.
Materu, P. 2007. Higher Education Quality Assurance in Sub-Saharan Africa: Status, Challenges,
Opportunities, and Promising Practices. World Bank Working Paper, No. 124.
Washington DC: World Bank.
Okebukola, P.A.O. 2006. Accreditation as Indicator for Ranking. Paper presented at the World Bank
conference on ranking of higher education institutions, Paris, 23–24 March.
Okebukola, P.A.O. 2010. Trends in Academic Rankings in the Nigerian University System and the
emergence of the African Quality Rating Mechanism. Paper presented at the 5th Meeting
of the International Rankings Expert Group (IREG-5), The Academic Rankings: From
Popularity to Reliability and Relevance, Berlin, 6–8 October.
Okebukola, P.A.O. 2011. Nigerian Universities and World Ranking: Issues, Strategies and Forward
Planning. Paper presented at the 2011 Conference of the Association of Vice-Chancellors
of Nigerian Universities, Covenant University, Ota, 27–30 June.
Okebukola, P.A.O. and Shabani, J. 2007. Quality assurance in higher education: perspectives from
sub-Saharan Africa. GUNI (ed.) State of the World Report on Quality Assurance in Higher
Education, pp. 46–59.
Oyewole, O. 2010. African Quality Rating Mechanism: The Process, Prospects and Risks. Keynote
address presented at the Fourth International Conference on Quality Assurance on
Higher Education in Africa, Bamako, Mali, 5–7 October.
Salmi, J. 2011. If Ranking is the Disease, is Benchmarking the Cure? Keynote address presented at the
UNESCO Global Forum on University Rankings, Paris, 16–17 May.
Salmi J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses. Higher
Education Management and Policy, 19(2): 24–62.
Taiwo, C.O. 1981. The Nigerian Education System: Past, Present and Future. Lagos: Thomas Nelson.
UIS (UNESCO Institute of Statistics). 2010. Trends in Tertiary Education: Sub-Saharan Africa. UIS
Factsheet No. 10. December.
World Bank. 2008. Accelerating Catch-up: Tertiary Education for Growth in Sub-Saharan Africa.
Washington DC: World Bank.
Institutional-level criteria
1. The institution has a clearly stated mission and values with specific
goals and priorities.
Infrastructure
Finances
Community/societal engagement
Programme-level criteria
5. The mode of delivery takes account of the needs and challenges of all
targeted students.
6. Staff teaching on the programme has the appropriate type and level of
qualification.
Curriculum development
Assessment
Programme results
4. Expert peers and/or professional bodies review the relevance and quality
of learning achieved by students.
However, by the end of the 1980s universities and academics had become
more sensitive to their global positioning in research performance. Japanese
universities had become fully involved in global competition in research,
especially in the fields of science, technology, engineering and mathematics
(STEM). Based on the strong economy and success in production technol-
ogy at that time, Japan witnessed a ten-fold increase in the acceptance of
international students, which went from 10,428 in 1983 to 109,508 in 2003.
The change in position of the University of Tokyo (from 67th in 1988 to
43rd in 1998) in the Gourman Report, a US-based university ranking that
started in 1967, was referred to in the Diet.1 At the end of the 1990s, a Hong
Kong-based magazine, Asiaweek, published university rankings in Asia. The
attitudes towards this ranking varied among Japanese universities as well
as among Asian universities. It was said that Chinese and Taiwanese uni-
versities once resisted collaborating with Asiaweek. The University of Tokyo
also refused to be ranked, having recognized that university rankings did
not express the exact value of university activities (Yonezawa, Nakatsui and
Kobayashi, 2002). The majority of universities, especially those with a strong
1 The Committee of Education and Science, the House of Councillors, the National Diet, 25 April 2002:
http://kokkai.ndl.go.jp/SENTAKU/sangiin/154/0061/15404250061008a.html
Asahi Shimbun has also tried to provide visual and detailed information on
specific universities and academic fields in their book series entitled AERA
Mook. For example, the AERA Mook on the Tokyo Woman’s Christian University
is a 130-page volume with various visual images containing interviews and
portrayals of its faculty, students and alumni, from advanced academic
contents to the contents of a student’s lunchbox. In 2011, Asahi Shimbun
implemented a survey to obtain detailed information from universities in
collaboration with Kawaijuku, which also provides data on entrance exami-
nations for Gakken, another publisher. Such collaboration among ranking
bodies, media and information providers has been frequently found in the
history of university rankings in Japan.
Recruit Ltd. has been another important player in the history of university
rankings in Japan. The main task of this company has been to provide
At the same time, especially for university managers and staff with expert
knowledge in the fields of science and technology, the methodology
utilized for current world rankings is evidently rough, incomplete and
biased. Japanese university leaders and their staff have been ambivalent
about the reliability and validity of world university rankings, and have con-
tinued to make an effort to request further improvement of the method-
ology. For example, when the indicator of reputation by employers was
introduced in the QS rankings from 2005, Yoshihisa Murasawa, a specially
appointed professor of the University of Tokyo, identified that only two
employers responded to the QS and that the questionnaires were not dis-
tributed among Japanese companies (Kobayashi, 2007).
2 The RU11 member universities are nine national universities (Hokkaido University, Tohoku University,
Tsukuba University, the University of Tokyo, Tokyo Institute of Technology, Nagoya University, Kyoto
University, Osaka University, Kyushu University) and two private universities (Keio University and
Waseda University).
Two business journals, Toyo Keizai and Diamond, have also published
university rankings as special issues. Toyo Keizai (see Appendix) issued
their ranking based on an employers’ survey in 1996, and began publish-
ing periodic special issues entitled ‘Truly strong universities’ from 2001.
Adding to the employers’ review, Toyo Keizai listed their ranking based
on the financial performance of major universities along with details of
their financial and management profiles. The Toyo Keizai ranking has
developed into a comprehensive ranking that now includes performance
with regard to finance, management innovation, research, education and
employability. On the other hand, the Diamond ranking has focused more
on employability based on the opinions of directors of human resource
management divisions. Diamond also published a comprehensive ranking
in 2003 based on student selectivity, education and research performance,
and the careers of graduates. However, Diamond has not issued any peri-
odic rankings since 2006.
At the same time, the change of ranking position has been utilized as a
political tool for both budgetary requirements and budgetary cuts. The
Democratic Party of Japan (DPJ) became a ruling party in 2009 and intro-
duced an expenditure review to examine the effectiveness of prospective
budgetary items as a process open to the general public. At this point, MEXT
References
Arai, K. and Hashimoto, A. (eds). 2005. Koko to Daigaku no Setsuzoku [High School to College
Articulation]. Tokyo: Tamagawa University Press [in Japanese].
Dore, R. 1997. The Diploma Disease: Education, Qualification and Development (2nd edn). London:
Institute of Education.
Kobayashi, M., Cao, Y. and Shi, P. 2007. Comparison of Global University Rankings. Tokyo: Center
for Research and Development of Higher Education, University of Tokyo.
Kobayashi, M. (ed.) 2011. Daigaku Benchmarking ni yoru Daigaku Hyoka no Jissho teki Kenkyu
[Positive Research on University Evaluation through Benchmarking]. Tokyo: Center for
Research and Development of Higher Education, University of Tokyo [in Japanese].
Kobayashi, T. 2007. Nippon no daigaku [Japan’s Universities]. Tokyo: Kodansha [in Japanese].
Marginson, S., Kaua, S. and Sawir, E. (eds). 2011. Higher Education in the Asia-Pacific. Dordrecht,
Netherlands: Springer.
Monbusho. 1992. Gakusei Hyakunen Shi. [One Hundred Year History of Japan’s School System].
Tokyo: Monbusho [in Japanese].
Provisional Council of Educational Reform. 1985. Dai Ichiji Toshin [First Report]. Tokyo: Cabinet
Office [in Japanese].
Taki, N. 2000. 2000 Nen no Nyushi Jokyo no Sokatsu. [Summary of University Admission in 2000].
IDE 421: 61–66 [in Japanese].
Yonezawa, A., Nakatsui, I. and Kobayashi, T. 2002. University rankings in Japan. Higher Education
in Europe, 27(4): 373–82.
Yonezawa, A., Akiba, H. and Hirouchi, D. 2009. Japanese university leaders’ perceptions of
internationalization: the role of government in review and support. Journal of Studies in
International Education, 13: 125–42.
Information disclosure
Yield rate in admission
Dropout rate
Review by university presidents
Review by high schools
Administrative staff
World university rankings
Tuition fees
Learning environments
Newly established universities
Local public universities
Women’s universities
Religious universities
Small-sized universities
University libraries
University repository
International volunteer
Contests
Study abroad programmes
Rate of proceeding to a graduate programme
Number of doctoral degrees issued
Share of female students
Share and number of international students
Universities from which the university presidents graduated
Newly recruited faculty members
Average age of faculty members
Share of faculty members who are alumni
Share of faculty members with doctoral degrees
Share and number of female faculty members
Share and number of international faculty members
Share of adult students
Participation in parents meetings
Student life
Appearance in fashion magazines
Job placement support staff
Internship
Governmental responses
The initial strong governmental reaction was not unexpected. At that time the
country was implementing the Ninth Development Plan in which research
universities were to be created to contribute to the transformation of the
country into a knowledge-based economy. The government recognized that
universities, through education, research and development, play a key role
in driving economic growth and global competitiveness by creating, applying and disseminating knowledge.
Until recently, many such initiatives had ‘ranking’ overtones. This was visible
in the references to ranking in the National Higher Education Action Plan
(2007–2010) accompanying the National Higher Education Strategic Plan:
Beyond 2020 that was launched in 2007. While the Strategic Plan, through
seven strategic thrusts, aimed at substantially transforming and making
higher education institutions in the country comparable to the best in the
world, the Action Plan details critical implementation mechanisms that
include five critical agenda projects to catalyse systemic change. One of these
projects is the creation of an APEX university as the means towards
achieving world-class status.
As the issues surrounding ranking have become clearer, the government has
taken a more holistic view of ranking. The Minister of Higher Education
has expressly stated that universities should not be ‘obsessed with ranking’
(Khaled Nordin, 2011). The government realizes too that the fiscal require-
ments may go far beyond national budgets. For example, the top 20 univer-
sities in the QS World University Rankings, on average, have about 2,500
academic faculty members, are able to attract and retain top personnel
(high selectivity), and have approximately US$1 billion in endowments and a
US$2 billion annual budget.
Many of the initiatives that have been put in place are aimed at restructuring
higher education for national competitiveness in the global economy and
increasing their share of scientific advancement. In the long term this will
strengthen the performance of Malaysian universities in ranking indicators,
and while resources are not being allocated purely for improving ranking per
se, indirectly ranking has become a tool for driving excellence.
Institutional response
The National University of Malaysia (UKM) has always maintained that
rankings are here to stay. They are used not only in education, but also to
compare anything from business competitiveness to innovation, corruption,
web attractiveness and even the world’s richest. Today there are over fifty
national and ten global rankings of educational institutions, including the
European Union’s planned U-Multirank.
However, lessons from comparative data are only useful if they are utilized
to devise institutional changes that will ensure a genuine and sustainable
improvement in the quality of universities over the medium to long term.
Malaysia is very much aware that the citation ratio, which indicates the quality
and strength of the research culture at its universities, is far from satisfactory.
According to the UK Royal Society (Economist, 2011) the United Kingdom and
the United States accounted for 38.5 per cent of global citations in 2004–2008.
Using the metaphor of a tern flying high in a balanced and focused way
towards its transformative goal, we devised a UKM Knowledge Ecosystem to
incorporate all strategies comprehensively. The backbone of the bird repre-
sents the core processes of education, research and service, each containing
both international and national dimensions. Also in the backbone are the
elements of an efficient delivery system.
The wings of our bird represent cross-cutting driver projects that push us
forward faster. The right wing represents specific projects that support our
mission as a national university – making the Malay language attractive
globally and contributing to the development of a united nation. The left
represents projects that advance UKM on the global stage.
At the tail end of the bird is the transformation machinery, containing pro-
jects, structures and processes that will help us manage change in a stable
manner. Key performance indicators are identified in six pillars of excellence.
Targets are set and monitored for good governance, leadership and succes-
sion planning, talent management, teaching and learning, research and
innovation, as well as community engagement.
The university has subjected itself to an institutional audit and has been
granted self-accreditation status and a reaffirmation of its research univer-
sity status. It recently submitted to a good governance audit as an account-
ability measure for greater autonomy. With autonomy it is expected that the
road to excellence will be expedited.
Validity issues also still dominate every discussion about the measures used
to assess research and teaching quality. How do we account for the dramatic
There is also heavy reliance on the qualitative peer review and recruiter
survey, which together comprise 50 per cent of the scores. Such judgments
are known to be influenced by factors such as legacy and traditions, which
may confer advantages to older institutions with wide subject coverage.
There is also concern regarding the commercial interest in ranking and how
educational budgets might be diverted to playing the ranking game, such
as participating in promotional tours and buying advertisement space on
ranking websites. Conversely, there are factors that bring global recognition
to a university, but which may not be considered in ranking. UKM has part-
nered with QS (Third QS World-Class Seminar, 2008) to showcase how UKM’s
research in the geology and biodiversity of the flora and fauna of Langkawi
and its collaboration with the local authority has culminated in the island
being declared the world’s 52nd Geopark in the UNESCO Global Geopark
Network, and the first in Southeast Asia. Langkawi’s Geoforest park is now a
Conclusion
Rankings are here to stay but this does not mean that governments should
initiate policies targeted at creating ‘world-class’ universities as a panacea for
success in a global economy. Instead they should focus on making their edu-
cation system world class. International rankings do provide useful compara-
tive data that can be a driving force for a university to examine its research
and teaching quality, and hence design appropriate strategies and actions
for continuous quality enhancements in building a research culture and the
foundation for a great university. It is even more useful if ranking methodol-
ogy evaluates the deeper contextual level to enable forward planning for
institutional changes that will ensure a genuine and sustainable improve-
ment in the quality of universities in the medium to long term. A university’s
worth is more than the criteria used in ranking. Although innovations in
stakeholder engagement are currently missing in ranking, the omission
should not deter us from continuing to value these activities. The genuine test
of a university’s mettle is how it continuously anticipates and leads change
through innovations that create new value and give social, environmental
and financial returns for the university, the nation and region. We need to
devise better indicators and methods for assessing the impact on business
innovation, socio-cultural promotion and environmental development of a
region. In the words of Einstein, ‘Not all that counts can be counted’.
References
Economist, The. 2011. Global science research: paper tigers (29 March 2011): www.economist.com/
blogs/dailychart/2011 (Accessed 1 June 2011.)
Ministry of Higher Education. 2007. National Higher Education Strategic Plan: Beyond 2020.
Malaysia.
Ministry of Higher Education. 2007. National Higher Education Action Plan: 2007-2010. Malaysia,
p. 36.
Yusuf, S. and Nabeshima, K. 2007. How Universities Promote Economic Growth. Washington DC:
World Bank.
[Figure: Annual review process. Annual assessment at departmental level is based on quantitative performance indicators for learning and teaching, research, and knowledge transfer; a panel of management and external experts at college/school level considers anomalous data or representations from departments, and a strategy can then be developed to address issues of accountability and improve quality. Source: author.]
[Figure: Performance indicators used in the quality indices, including: % International Students; Average Entry A-Level Score; % Outbound Exchange Students; % Faculty to Total Academic Staff; % Faculty with PhD or Professional Accreditation; % Graduates with FT Employment (within 6 months of completion); % Self-financed Students; Average Entry English Score; % International Faculty; Number of Students per Faculty; % Students with Internship Experience. Source: author.]
For example, one of the indicators for the Input Quality Index is the per-
centage of international students studying full time in that department (this
is data increasingly required by governments for competitive bidding and
funding allocation purposes), and is one indicator of international outlook.
Clearly, some departments will not, by the very nature of their programmes,
be able to attract large numbers of international students, and this is a fac-
tor which can be considered by the annual review panel. The Input Quality
Index also provides a baseline for longitudinal measurement of a selection
of performance indicators via the gap between input data and data from
the Output Quality Index. A typical indicator for output quality might be
the percentage of students with an internship or placement experience
over thirty days in total. This might be an important factor in terms of
the academic direction of those institutions where employers of graduates
have indicated that they increasingly want graduates who leave university
with transferable ‘functioning’ knowledge rather than just ‘declarative’ knowledge.
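As a concrete illustration of the input-output gap described above, the following minimal Python sketch compares baseline values recorded at intake with later values for the same selection of indicators, one simple way such a longitudinal comparison might be operationalized. The function, indicator names and figures are hypothetical and do not reproduce the author’s actual instrument.

# Minimal sketch of the input/output quality-index 'gap' idea.
# Indicator names and values are hypothetical illustrations only.
from typing import Dict

def index_gaps(input_index: Dict[str, float],
               output_index: Dict[str, float]) -> Dict[str, float]:
    """Return output-minus-input differences for indicators present in both indices."""
    return {name: output_index[name] - input_index[name]
            for name in input_index if name in output_index}

# Hypothetical departmental data (percentages).
baseline = {'international_students_pct': 10.0, 'internship_over_30_days_pct': 25.0}
latest = {'international_students_pct': 14.0, 'internship_over_30_days_pct': 42.0}

for indicator, gap in index_gaps(baseline, latest).items():
    print(f'{indicator}: {gap:+.1f} percentage points')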
The Staffing and Resources Index contains data on the staff-to-student ratio,
staff grades, IT provision, and the percentage of international staff attracted
to work with a particular HEI, as well as other indicators. A serious look at
this data from my own institution’s viewpoint has allowed us, among other
initiatives, to make significant cuts in self-financed programmes over the
past three years, freeing up more time for faculty to spend with students
and engage in research-related work. It has also helped us begin to create
the right staffing profile within departments in terms of grades and the
ratio between academic faculty and administrative staff.
Conclusion
This chapter has considered an often-neglected aspect of the new rankings
culture, namely the benefits individual institutions can gain from the rank-
ing concept. A fairly pragmatic view has been taken which acknowledges that
rankings are here to stay, and had in fact been with us long before the advent
of the Shanghai Jiao Tong or QS rankings. Are rankings propelling us towards
the commoditization of HEIs and their offerings, or merely providing at least
some comparative measures of an institution’s global standing and a catalyst
for positive change?
All rankings inevitably invite criticism (Downing, 2012) and it is often easier to
concentrate on what is wrong with them than to try to identify how they might
be used to bring about practical, positive strategic change that will benefit all
stakeholders, not least the ultimate product of our endeavours – the quality of
our graduates and our research output. The author believes that rankings have
encouraged many institutions to take a new approach that concentrates on ana-
lysing and identifying appropriate performance indicators (in consultation with
all stakeholders), which provide evidence of improvements to the core activities
of learning and teaching, research and knowledge transfer. Consequently, rank-
ings have helped create a global environment where it is now easier to make
and provide evidence of real and positive qualitative improvements in these
areas. If the result of these improvements is a significant rise in the institution’s
score on one or more of the ranking criteria then that should be regarded as
a bonus. Rankings do provide reasonable, broadly comparative measures of
an institution’s global standing and can be used to help foster healthy com-
petition among the best universities. They are also useful self-evaluation tools
that enable universities to appropriately benchmark and bring about positive
strategic change, which ultimately benefits all stakeholders, not least students.
References
Biggs, J. 1999. Teaching for Quality Learning at University. London: The Society for Research into
Higher Education/ Open University Press.
Downing, K. 2012. Do rankings drive global aspirations at the expense of regional development?
M. Stiasny and T. Gore (eds) Going Global: The Landscape for Policy Makers and
Practitioners in Tertiary Education. Bingley, UK: Emerald Group, pp. 31–39.
Marginson, S. 2007. Global university rankings: Implications in general and for Australia. Journal of
Higher Education Policy and Management, 29(2): 131–42.
Marginson, S. and van der Wende, M. 2007. To rank or be ranked: The impact of global rankings in
higher education. Journal of Studies in International Education, 11(3/4): 306–29.
A decade of international
university rankings:
a critical perspective from
Latin America
Imanol Ordorika and Marion Lloyd
A decade ago, education researchers at Shanghai Jiao Tong University set
out to determine how far Chinese institutions lagged behind the world’s
top research universities in terms of scientific production (Liu and Cheng,
2005). The result was the Academic Ranking of World Universities (ARWU,
2003),1 the first hierarchical classification of universities on a global scale.
Despite the relatively narrow focus of the ranking methodology, the results
were widely viewed as a reflection of the quality of an individual institution,
or at least, the closest possible approximation. Other international univer-
sity rankings quickly followed, creating a ripple effect with far-reaching
consequences for higher education institutions worldwide.
1 The Academic Ranking of World Universities (ARWU) has been produced annually since 2003 by
the Institute of Higher Education at Jiao Tong University in Shanghai. It compares 1,200 universities
worldwide and classifies 500 on the basis of their scientific production, taking into account the
following criteria: the number of Nobel Prize and Fields Medal winners among the university’s alumni
and staff; the number of highly cited researchers in twenty-one subject categories; articles published in
the journals Science and Nature, and the number of publications listed in Thomson Reuters (ISI) Web of
Knowledge (ISI Wok), one of two competing bibliometric databases of peer-reviewed scientific journals;
and per capita scientific production, based on the previous indicators.
Such is the case in Latin America which, despite a 500-year tradition of higher
education, has fewer than a dozen universities represented among the top
500 in the main rankings. The shortage of funding for higher education and
research, in particular, is partly to blame for the region’s limited presence. But
there is another explanation: the rankings do not take into account the full
range of roles and functions of Latin American universities, which extend far
beyond teaching and research. Public universities, in particular, have played
a vital role in building the state institutions of their respective countries
and in solving their nations’ most pressing problems, to say nothing of the
wide array of community service and cultural programmes that they offer
(Ordorika and Pusser, 2007; Ordorika and Rodríguez, 2010). The largest pub-
lic universities act as what Ordorika and Pusser (2007) have termed ‘state-building
universities’, a concept that has no equivalent in the English-speaking
world. However, the rankings do not take into
account the huge social and cultural impact of these institutions of higher
education in Latin America and elsewhere. Instead, such universities often
feel pressure to change in order to improve their standing in the rankings, in
2 Denmark classifies candidates for work and residency permits according to a point system, which takes
into account the candidate’s level of education, among other factors. In evaluating post-secondary
degrees, it relies on the results of the QS World University Rankings, produced by the British-based
educational services company, Quacquarelli Symonds. Graduates of universities ranked among the top
100 universities receive 15 points (out of a total of 100); graduates of institutions in the top 200 receive
10 points; and those in the top 400, 5 points, according to the following government immigration website:
www.nyidanmark.dk/en-us/coming_to_dk/work/greencard-scheme/greencard-scheme.htm
The rankings reflect the evolving battle on a global level for control over the
flow of knowledge: the system of knowledge prestige, exemplified by the
rankings, tends to reproduce the status quo, in which universities that have
traditionally dominated in the production of scientific knowledge ratify their
position in the global hierarchy, and a minority of emerging institutions
attempt, and occasionally succeed, in establishing a competitive presence
(IESALC, 2011; Marginson and Ordorika, 2010). ‘Rankings reflect prestige and
power; and rankings confirm, entrench and reproduce prestige and power’
(Marginson, 2009: 13). The pressure to follow the leader results in an expen-
sive ‘academic arms race’ for prestige, measured mostly in terms of research
production in the sciences, medicine and engineering (Dill, 2006).
3 The Times Higher Education ranking was originally published by the higher education supplement of the
Times newspaper, one of Britain’s leading dailies. From 2004 to 2009, the THE rankings were compiled
by Quacquarelli Symonds, a private educational services company based in London. The ranking
classifies the universities throughout the world on the basis of a combination of indicators related to
scientific production, as well as the opinions of academic peers and employers.
4 Starting in 2004, Quacquarelli Symonds began producing international rankings of universities for the
Times Higher Education Supplement (THE). However, in 2009, QS ended its agreement with THE and
began producing its own rankings, using the methodology it previously employed for THE. Since 2009, it
has produced annual versions of the Ranking of World Universities, as well as expanding its production
to include rankings by region and by academic area. The most recent are the QS Ranking of Latin
American Universities and the QS World University Rankings by Subject, both of which were introduced
for the first time in 2011. The latter ranking classifies universities on the basis of their performance in
five areas: engineering, biomedicine, natural sciences, social sciences, and arts and humanities.
5 The Webometrics Ranking of World Universities has been produced since 2004 by Cybermetrics Lab
(CCHS), a research group belonging to the High Council for Scientific Research (Consejo Superior
de Investigación Científica) (CSIC) in Spain. Webometrics classifies more than 4,000 universities
throughout the world on the basis of the presence of their webpages on the internet.
6 Since 2009, the SCImago Research Group, a Spanish consortium of research centers and universities
– including the High Council for Scientific Research (CSIC) and various Spanish universities – has
produced several international and regional rankings. They include the SIR World Report, which
classifies more than 3,000 universities and research centres from throughout the world based on their
scientific production, and the Ibero-American Ranking, which classifies more than 1,400 institutions
in the region on the basis of the following indicators: scientific production, based on publications in
peer-reviewed scientific journals; international collaborations; normalized impact and publication
rate, among others. SCImago obtains its data from SCIverse Scopus, one of the two main bibliometric
databases at the international level.
7 The ranking of the scientific production of twenty-two universities in European Union countries was
compiled in 2003 and 2004 as part of the Third European Report on Science & Technology Indicators,
prepared by the Directorate General for Science and Research of the European Commission.
8 The Leiden Ranking, produced by Leiden University’s Centre for Science and Technology Studies
(CWTS) is based exclusively on bibliometric indicators. It began by listing the top 100 European
universities according to the number of articles and other scientific publications included in
international bibliometric databases. The ranking later expanded its reach to include universities
worldwide.
9 The US News and World Report College and University ranking is the leading classification of colleges and
universities in the United States and one of the earliest such systems in the world, with the first edition
published in 1983 (Dill, 2006). It is based on qualitative information and diverse opinions obtained
through surveys applied to university professors and administrators. See: www.usnews.com/rankings
10 The Top American Research Universities, compiled by the Center for Measuring University Performance,
has been published annually since 2000. The university performance report is based on data on
publications, citations, awards and institutional finances. See: http://mup.asu.edu/research.html
11 See Good Universities Guide, at: www.gooduniguide.com.au/
12 See The Complete University Guide, at: www.thecompleteuniversityguide.co.uk/
13 See The Guardian University Guide, at: http://education.guardian.co.uk/universityguide2005
One area in which institutional evaluation practices converge with the rank-
ings is in the use of the results from student exams, as well as information
related to the fulfillment of other parameters and performance indicators.
Examples include the National Student Performance Exam (ENADE),
administered by the National Institute of Educational Research and Studies
(INEP) in Brazil, and the State Higher Education Quality Exams (ECAES),
administered by the Colombian Institute for the Support of Higher Education
(ICFES) (Ordorika and Rodríguez, 2008; 2010).
This situation has sparked intense debates, studies, analyses and criticisms
regarding the limits and risks of the hierarchical classification systems. Among
controversial aspects of comparing institutions of higher education are: the
selection and relative weight of the indicators; the reliability of the informa-
tion; and the construction of numeric grades on which the hierarchies are
based. There has also been criticism surrounding the homogenizing nature of
the rankings, the predominance of the English language, and the reductionist
manner in which a single evaluation of the quality of an institution, which is
in turn based solely on its scientific production, is taken as definitive (Berry,
1999; Bowden, 2000; Federkeil, 2008a; Florian, 2007; Ishikawa, 2009; Jaienski,
2009; Ordorika, Lozano Espinosa and Rodríguez Gómez, 2009; Provan and
Abercromby, 2000; Van Raan, 2005; Ying and Jingao, 2009).
However, while the criticisms of the rankings have had little practical
impact, they have generated a space for constructive discussion of the ben-
efits and limitations of the classification systems. In this regard, there are
numerous proposals that seek to define adequate standards and practices,
in the interest of improving the transparency, reliability and objectivity of
existing university rankings. Such proposals would benefit both the rank-
ings administrators and their users (Carey, 2006; Clarke, 2002; Diamond
and Graham, 2000; Goldstein and Myers, 1996; Salmi and Saroyan, 2007;
Sanoff, 1998; Vaughn, 2002; Van der Wende, 2009). The most well-known of
these initiatives is the one proposed by the International Ranking Experts
Group (IREG).21
21 The IREG was established in 2004 as part of the Follow-up Meeting for the Round Table entitled ‘Tertiary
Education Institutions: Ranking and League Table Methodologies.’ The meeting was jointly sponsored by
the UNESCO European Centre for Higher Education (CEPES) and the Institute for Higher Education
Policy (IHEP).
22 See: www.ireg-observatory.org/
23 The conference, the Fourth Meeting of University Networks and Councils of Rectors of Latin America
and the Caribbean, was sponsored by UNESCO’s International Institute for Higher Education in Latin
America and the Caribbean (IESALC). An English translation of the document, Position of Latin America
and the Caribbean with regard to the Higher Education Rankings, is available on the IESALC website:
www.iesalc.UNESCO.org.ve/dmdocuments/posicion_alc_ante_rankings_en.pdf
At the World Conference on Higher Education held a decade later, in 2009, the Latin
American delegation successfully advocated for higher education to be
defined as a social public good, access to which should be guaranteed and
free of discrimination. At the suggestion of the region, the final commu-
niqué lists social responsibility as the first of five general components of
the mission of higher education (IESALC, 2011). The declaration states that
‘higher education must not only develop skills for the present and future
world, but also contribute to the education of ethical citizens committed to
a culture of peace, the defense of human rights, and the values of democ-
racy’ (IESALC, 2011).
At various points in its long history, UNAM has played a major role in the
creation of such essential state institutions as public health ministries and the
Mexican judicial system. The national university has also played a key role in
The ranking organizations are aware of the problem; however, they tend to
downplay its significance. In 2007, Quacquarelli Symonds, which at the time
was producing the rankings for the Times Higher Education Supplement, cited
the more extensive coverage of non-English journals within Scopus
as justification for switching to that database; at the time, 21 per cent
24 For more details on the reasoning behind QS’ decision to switch databases, see Why Scopus? at: www.
topuniversities.com/world-university-rankings/why-scopus.
25 Based at UNAM, Latindex is a cooperative bibliographic information system, which was co-founded
in 1995 by Brazil, Cuba, Venezuela and Mexico. Housed at UNAM, it acts as a kind of regional
clearinghouse for scientific publications. It maintains a database of more than 20,000 publications
from throughout Latin America, the Caribbean, Spain and Portugal, with articles written in Spanish,
Portuguese, French and English.
26 Based in Brazil, SciELO (Scientific Electronic Library Online) is a bibliographic database and open-
access online scientific archive, which contains more than 815 scientific journals. It operates as a
cooperative venture among developing countries, with support from the Brazilian federal government,
the government of São Paulo state, and the Latin American and Caribbean Center on Health Sciences
Information.
27 CLASE (Citas Latinoamericanas en Ciencias Sociales y Humanidades) is a bibliographic database that
specializes in Social Sciences and the Humanities. Created in 1975 and housed at UNAM's Department
of Latin American Bibliography, it contains nearly 270,000 bibliographic references to articles, essays,
book reviews and other documents published in nearly 1,500 peer-reviewed journals in Latin America
and the Caribbean, according to the database’s website: http://biblat.unam.mx/
28 PERIÓDICA (Índice de Revistas Latinoamericanas en Ciencias) was created in 1978 and specializes in
science and technology. It contains approximately 265,000 bibliographic references to articles, technical
reports, case studies, statistics and other documents published in some 1,500 peer-reviewed scientific
journals in Latin America and the Caribbean.
Given its relative longevity, the Shanghai ranking provides a good example
of the degree to which the universities’ standings can change over time,
even within the same ranking. In the case of the nine Latin American
universities that appear in the ranking’s top 500 list, São Paulo was the
favorite last year. But it has fluctuated between the 166th and 115th posi-
tion – a difference of 49 places – while the Universidad de Buenos Aires
has ranged from 309th to 159th position, a difference of 150 places. UNAM,
Positions of Latin American universities in the ARWU top 500, by year (2003–2011):
• Universidade de São Paulo: 166, 155, 139, 134, 128, 121, 115, 119, 129
• Universidad de Buenos Aires: 309, 295, 279, 159, 167, 175, 177, 173, 179
• Universidad Nacional Autónoma de México: 184, 156, 160, 155, 165, 169, 181, 170, 190
• Universidade Estadual de Campinas: 378, 319, 289, 311, 303, 286, 289, 265, 271
• Universidade Federal do Rio de Janeiro: 341, 369, 343, 347, 338, 330, 322, 304, 320
• Universidade Estadual de São Paulo: 441, 419, 334, 351
• Universidade Federal de Minas Gerais: 453, 381, 368, 347, 359
• Pontificia Universidad Católica de Chile: 423, 410, 413
• Universidad de Chile: 382, 395, 400, 401, 425, 436, 449, 416
Note: universities with fewer than nine entries appear in the top 500 only in some of the years covered; their positions are listed in chronological order.
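The spread figures quoted above can be read directly off the table. As a minimal illustration (values re-typed by hand from the table, so purely illustrative), the following Python sketch computes each university's best position, worst position and the gap between them:

```python
# Positions of three Latin American universities in the ARWU top 500,
# 2003-2011, copied from the table above (illustrative only).
arwu_positions = {
    "Universidade de São Paulo":   [166, 155, 139, 134, 128, 121, 115, 119, 129],
    "Universidad de Buenos Aires": [309, 295, 279, 159, 167, 175, 177, 173, 179],
    "UNAM":                        [184, 156, 160, 155, 165, 169, 181, 170, 190],
}

for university, positions in arwu_positions.items():
    best, worst = min(positions), max(positions)  # a lower number means a better position
    print(f"{university}: best {best}, worst {worst}, spread {worst - best} places")
```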
There can even be variations within the same year in rankings produced by
the same company. Such is the case with the QS World University Rankings
and the first QS Latin America University Rankings, in 2011. While UNAM
tied with USP as the top-ranked Latin American university in the world-
wide ranking, it placed fifth in the Latin American rankings. Meanwhile,
the Universidade Estadual de Campinas was far behind UNAM in the global
ranking, but two places ahead in the Latin America ranking (Table 2).
QS officials argue that the discrepancy in the results between the two rank-
ings is due to the differing methodologies employed, and that in the case
of Latin America, ‘the methodology has been adapted to the needs of the
region’ (QS, 2011/2012). According to its producers, the methodology includes
an ‘extensive’ survey of academics and institution leaders in the region, and
takes into account ‘student satisfaction, and the quality, number and depth
of relationships with universities outside the region’ (QS, 2011/2012: 4). It is
unclear, however, how such perceptions are measured. More importantly,
according to its creators, the regional ranking is more exact than the world-
wide version, which calls into question not only the methodology employed
in the larger ranking, but also the methodology of the rankings as a whole.
The differences among the universities’ positions in both rankings serve to
underscore this point.
Table 2. Positions of Latin American institutions, by country, in the QS World University Rankings (WR 2010, WR 2011) and the QS Latin America University Ranking (LAR 2011).
Source: QS World University Rankings (2010, 2011), Latin America University Ranking (2011).
Conclusion
Given the limitations and problems present in the current rankings, there is
a growing trend towards alternative comparative systems that provide hard
data in lieu of hierarchical lists. One such effort is the Comparative Study of
Mexican Universities,31 produced by the Directorate General for Institutional
Evaluation at UNAM. The study, known by its Spanish acronym ECUM and
accessible through an interactive, online database, provides official indica-
tors in a broad range of academic and research areas. Statistics are available
for each of more than 2,600 individual universities and research centres, as
well as by type of institution (e.g. technological institutes or multicultural
universities) and by sector (public or private). While the study allows users
to rank institutions on the basis of individual indicators, it does not enable
them to generate an overall hierarchy – a deliberate omission on the part of its creators.
31 See: http://www.ecum.unam.mx/node/2
However, while such alternatives are growing in popularity, they have yet to
gain sufficient critical mass to impact the predominant ranking paradigm or
to undermine its influence. As a result, there is an urgent need for policy-
makers at the university and government levels to change the way they
perceive the rankings. In the case of Latin America, they should also demand
that producers of rankings and comparisons take into account the most sali-
ent features and strengths, as well as the broad range of contributions, of the
region’s universities to their respective countries and communities, such as
those outlined in this chapter.
The rankings should not be confused with information systems, nor should
they be taken at face value, given their limited scope and the heavily biased
nature of their methodologies. At best, they may serve as guides to which
institutions most closely emulate the model of the elite, US research uni-
versity. At worst, they prompt policy-makers to make wrongheaded deci-
sions – such as diverting funding from humanities programmes in order
to hire Nobel Prize laureates in the sciences, solely in order to boost their
standing in the rankings.
References
Ackerman, D., Gross, B.L. and Vigneron, F. 2009. Peer observation reports and student evaluations
of teaching: who are the experts? Alberta Journal of Educational Research, 55(1): 18–39.
Altbach, P.G. 2006. The dilemmas of ranking, International Higher Education, 42: www.bc.edu/
bc_org/avp/soe/cihe/newsletter/Number42/p2_Altbach.htm (Accessed 11 April 2012.)
Berdahl, R.M. 1998. The Future of Flagship Universities, graduation speech at Texas A&M
University, (5 October 1998): http://cio.chance.berkeley.edu/chancellor/sp/flagship.htm
(Accessed 11 April 2012.)
Berry, C. 1999. University league tables: artifacts and inconsistencies in individual rankings, Higher
Education Review, 31(2): 3–11.
Beyer, J.M. and Snipper, R. 1974. Objective versus subjective indicators of quality in graduate
education, Sociology of Education, 47(4): 541–57.
Bolseguí, M. and Fuguet Smith, A. 2006. Cultura de evaluación: una aproximación conceptual,
Investigación y Postgrado, 21(1): 77–98.
Bowden, R. 2000. Fantasy higher education: university and college league tables, Quality in Higher
Education, 6: 41–60.
Bogue, E.G. and Bingham Hall, K. 2003. Quality and Accountability in Higher Education:
Improving Policy, Enhancing Performance. Westport, Conn.: Praeger.
Brennan, J. 2001. Quality management, power and values in European higher education,
J.C. Smart (ed.) Higher Education: Handbook of Theory and Research, vol. XVI, Dordrecht,
Netherlands: Kluwer Academic Publishers, pp. 119–45.
Buelsa, M., Heijs, J. and Kahwash, O. 2009. La calidad de las universidades en España: elaboración
de un índice multidimensional. Madrid: Minerva ediciones.
Carey, K. 2006. College rankings reformed: the case for a new order in higher education,
Education Sector Reports. Washington DC: www.educationsector.org/usr_doc/
CollegeRankingsReformed.pdf (Accessed 11 April 2012.)
Cave, M., Hanney, S., Henkel, M. and Kogan, M. 1997. The Use of Performance Indicators in Higher
Education: The Challenge of the Quality Movement. London: Jessica Kingsley Publishers.
Cheng, Y. and Liu, N.C. 2008. Examining major rankings according to the Berlin Principles, Higher
Education in Europe, 33(2–3): 201–08.
Clarke, M. 2002. Some guidelines for academic quality rankings, Higher Education in Europe, 27(4):
443–59.
Cyrenne, P. and Grant, H. 2009. University decision making and prestige: an empirical study,
Economics of Education Review, 28(2): 237–48.
DGEI. 2012. La complejidad del logro académico: Estudio comparativo sobre la Universidad Nacional
Autónoma de México y la Universidad de Sao Paulo. Preliminary document. Mexico City:
DGEI-UNAM.
Diamond, N. and Graham, H.D. 2000. How should we rate research universities? Change, 32(4):
20-33.
Díaz Barriga, Á., Barrón Tirado, C. and Díaz Barriga Arceo, F. 2008. Impacto de la evaluación en
la educación superior mexicana. Mexico City: UNAM-IISUE; Plaza y Valdés.
Dill, D. 2006. Convergence and Diversity: The Role and Influence of University Rankings. Keynote
Address presented at the Consortium of Higher Education Researchers (CHER) 19th
Annual Research Conference, 9 September 2006, University of Kassel, Germany.
Dill, D. and Soo, M. 2005. Academic quality, league tables, and public policy: a cross-national analysis
of university ranking systems, Higher Education Review, 49(4): 495–533.
Espeland, W.N. and Sauder, M. 2007. Rankings and reactivity: how public measures recreate social
worlds, American Journal of Sociology, 113(1): 1–40.
Ewell, P.T. 1999. Assessment of higher education quality: promise and politics, S.J. Messick (ed.)
Assessment in Higher Education: Issues of Access, Quality, Student Development, and Public
Policy. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 147–56.
Federkeil, G. 2008b. Rankings and quality assurance in higher education, Higher Education in
Europe, 33(2/3): 219–31.
Filip, M. (ed.) 2004. Ranking and League Tables of Universities and Higher Education Institutions.
Methodologies and Approaches. Bucharest: UNESCO-CEPES: www.cepes.ro/publications/
pdf/Ranking.pdf
Florian, R.V. 2007. Irreproducibility of the results of the Shanghai Academic ranking of world
universities, Scientometrics, 72(1): 25–32.
Goldstein, H. and Myers, K. 1996. Freedom of information: towards a code of ethics for
performance indicators, Research Intelligence, 57: 12–16.
Hazelkorn, E. 2007. Impact and influence of league tables and ranking systems on higher
education decision-making, Higher Education Management and Policy, 19(2): 87–110.
Hazelkorn, E. 2008. Learning to live with league tables and ranking: the experience of institutional
leaders, Higher Education Policy, 21(2): 193–215.
IESALC (International Institute for Higher Education in Latin America and the Caribbean). 2011.
The Position of Latin America and the Caribbean on Rankings in Higher Education. Fourth
Meeting of University Networks and Councils of Rectors in Buenos Aires, 5–6 May 2011,
IESALC.
Kogan, M. (ed.) 1989. Evaluating Higher Education. Papers from the International Journal of
Institutional Management in Higher Education. Paris: OECD.
Liu, N.C. and Cheng, Y. 2005. The academic ranking of world universities, Higher Education in
Europe, 30(2): July.
Lloyd, M. 2010. Comparative study makes the case for Mexico's public universities, The Chronicle
of Higher Education, 11 November 2010.
Marginson, S. 2007. Global university rankings: implications in general and for Australia, Journal of
Higher Education Policy and Management, 29(2): 131–42.
Marginson, S. 2009. University rankings, government and social order: managing the field of
higher education according to the logic of the performative present-as-future, M. Simons,
M. Olssen and M. Peters (eds) Re-reading Education Policies: Studying the Policy Agenda of
the 21st Century. Rotterdam: Sense Publishers, pp. 2–16.
Marginson, S. and Van der Wende, M. 2006. To Rank or to be Ranked: The Impact of Global Rankings
in Higher Education. University of Twente-Centre for Higher Education Policy Studies:
www.studiekeuzeenranking.leidenuniv.nl/content_docs/paper_marginson_van_der_
wende.pdf (Accessed 11 April 2012.)
Marginson, S. and Ordorika, I. 2010. Hegemonía en la era del conocimiento: competencia global en la
educación superior y la investigación científica. Mexico City: SES/UNAM.
McCormick, A.C. 2008. The complex interplay between classification and ranking of colleges and
universities: should the Berlin Principles apply equally to classification?’ Higher Education in
Europe, 33(2/3): 209–18.
Merisotis, J. and Sadlak, J. 2005. Higher education rankings: evolution, acceptance, and dialogue,
Higher Education in Europe, 30(2): 97–101.
Ordorika, I. and Pusser, B. 2007. La máxima casa de estudios: Universidad Nacional Autónoma
de México as a state-building university, P.G. Altbach and J. Balán (eds) World Class
Worldwide: Transforming research universities in Asia and America. Baltimore: Johns
Hopkins University Press, pp. 189–215.
Ordorika, I. and Rodríguez, R. 2008. Comentarios al Academic Ranking of World Universities 2008,
Cuadernos de Trabajo de la Dirección General de Evaluación Institucional, year 1, no. 1.
Ordorika, I., Lozano Espinosa, F.J. and Rodríguez Gómez, R. 2009. Las revistas de investigación
de la UNAM: un panorama general, Cuadernos de Trabajo de la Dirección General de
Evaluación Institucional, year 1, no. 4.
Ordorika, I. and Rodríguez, R. 2010. El ranking Times en el mercado del prestigio universitario,
Perfiles Educativos, XXXII(129).
Palomba, C.A. and Banta, T.W. 1999. Assessment Essentials: Planning, Implementing and Improving
Assessment in Higher Education. San Francisco: Jossey-Bass.
Provan, D. and Abercromby, K. 2000. University League Tables and Rankings: A Critical Analysis.
Commonwealth Higher Education Management Service (CHEMS), document no. 30:
www.acu.ac.uk/chems/onlinepublications/976798333.pdf
Puiggrós, A. and Krotsch, P. (eds) 1994. Universidad y evaluación. Estado del debate. Buenos Aires:
Aique Grupo Editor/Rei Argentina/Instituto de Estudios y Acción Social.
QS. 2011/2012. QS University Rankings: Latin America 2011/2012. QS Intelligence Unit: http://
content.qs.com/supplement2011/Latin_American_supplement.pdf (Accessed 11 April
2012.)
Roberts, D. and Thomson, L. 2007. Reputation management for universities: university league
tables and the impact on student recruitment, Reputation Management for Universities,
Working Paper Series, no. 2. Leeds: The Knowledge Partnership.
Rodríguez Gómez, R. 2004. Acreditación, ¿ave fénix de la educación superior?, I. Ordorika (ed.) La
academia en jaque. Perspectivas políticas sobre la evaluación de la educación superior en
México. Mexico City: UNAM/Miguel Ángel Porrúa, pp. 175–223.
Rowley, D.J., Lujan, H.D. and Dolence, M.G. 1997. Strategic Change in Colleges and Universities:
Planning to Survive and Prosper. San Francisco: Jossey-Bass.
Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses, Higher
Education Management and Policy, 19(2): 24–62.
Sanoff, A.P. 1998. Rankings are here to stay: colleges can improve them, Chronicle of Higher
Education, 45(2): http://chronicle.com/
Siganos, A. 2008. Rankings, governance, and attractiveness of higher education: the new French
context, Higher Education in Europe, 33(2-3): 311–16.
Strathern, M. 2000. The tyranny of transparency, British Educational Research Journal, 26(3):
309–21.
Thakur, M. 2008. The impact of ranking systems on higher education and its stakeholders, Journal
of Institutional Research, 13(1): 83–96.
Turner, D.R. 2005. Benchmarking in universities: league tables revisited, Oxford Review of
Education, 31(3): 353–71.
UNAM (Universidad Nacional Autónoma de México). 2011a. Agenda Estadística. Mexico City:
UNAM: www.planeacion.unam.mx/Agenda/2011/ (Accessed 11 April 2012.)
UNAM. 2011b. Un Siglo de la Universidad Nacional de México 1910–2010: Sus huellas en el espacio a
través del tiempo. Mexico City: Instituto de Geografía-UNAM.
UNESCO. 1998. World Declaration on Higher Education for the Twenty-First Century: Vision and
Action. World Conference on Higher Education, UNESCO, Paris, 9 October 1998.
Van der Wende, M. 2009. Rankings and classifications in higher education: a European perspective,
J.C. Smart (ed.) Higher Education: Handbook of Theory and Research, vol. 23. New York:
Springer, pp. 49–72.
Van Raan, A.F.J. 2005. Fatal attraction: conceptual and methodological problems in the ranking of
universities by bibliometric methods, Scientometrics, 62(1), 133–43.
Vaughn, J. 2002. Accreditation, commercial rankings, and new approaches to assessing the
quality of university research and education programmes in the United States, Higher
Education in Europe, 27(4): 433–41.
Webster, D.S. 1986. Academic Quality Rankings of American Colleges and Universities. Springfield,
Mass.: Charles C. Thomas.
Ying, Y. and Jingao, Z. 2009. An empirical study on credibility of China’s university rankings: a case
study of three rankings, Chinese Education and Society, 42(1): 70–80.
Alternative
Approaches
Chapter 13
However, attempts to measure and analyse what works at the tertiary educa-
tion level have emphasized the performance of individual institutions, for
example, in terms of the competitiveness of admissions, research output and
employability of graduates, among other factors. International rankings have
focused on the relative standings of countries, using the position of their top
universities as a proxy for the performance of the entire tertiary education
system. But while rankings may provide information about individual insti-
tutions in comparison to others, they do not provide an adequate measure
of the overall strength of a country’s tertiary education system. This chapter
explores, therefore, the appropriateness of rankings as a measure of perfor-
mance of tertiary education systems. After looking at the uses and abuses
of rankings, it explains the difference between rankings and benchmarking
methodologies. Finally, it presents the World Bank’s benchmarking tool,
which is currently under construction.
[Figure: number of rankings produced by governments, independent agencies and the media, by region (Eastern Europe & Central Asia, Asia & Pacific, Latin America, Africa & Middle East, Western Europe, North America).]
The expansion of league tables and ranking exercises has not gone
unnoticed by the various stakeholders and the reaction they elicit is
rarely benign. Such rankings are often dismissed by their many critics
as irrelevant exercises fraught with data and methodological flaws,
they are boycotted by some universities angry at the results, and they
are used by political opponents as a convenient way to criticize gov-
ernments (Salmi and Saroyan, 2007: 80).
The U.S. News and World Report doesn’t just compare U.C. Irvine,
the University of Washington, the University of Texas-Austin, the
University of Wisconsin-Madison, Penn State, and the University of
Illinois, Urbana-Champaign – all public institutions of roughly the
same size. It aims to compare Penn State – a very large, public, land-
grant university with a low tuition and an economically diverse stu-
dent body, set in a rural valley in central Pennsylvania… with Yeshiva
University, a small, expensive, private Jewish university whose under-
graduate programme is set on two campuses in Manhattan (one in
mid-town for the women, and one far uptown for the men) (Gladwell, 2011).
Despite the controversy surrounding rankings, there are good reasons why
rankings persist. These include the benefits of information provided to stu-
dents who are looking to make a choice between various institutions, either
domestically or for studies abroad. As a consequence, rankings and informa-
tion about student engagement and labour market outcomes in the country
of interest are valuable. Further, rankings promote a culture of transparency,
providing institutions with incentives to collect and publish more reliable
data. Finally, rankings promote the setting of stretch goals by the institution.
In so doing, institutions may find themselves analysing the key factors that explain their performance relative to their peers.
Table 1. ARWU ranking of countries taking their population into consideration, 2009

Country        No. of top-500 institutions    Population (thousands)    Thousands of people per top-500 institution
Sweden         11                              9 394.13                    854.01
New Zealand     5                              4 370.70                    874.14
Finland         6                              5 362.61                    893.77
Israel          7                              7 577.00                  1 082.43
Switzerland     7                              7 790.01                  1 112.86
Austria         7                              8 381.78                  1 197.40
Norway          4                              4 882.93                  1 220.73
Australia      17                             22 327.20                  1 313.36
Denmark         4                              5 565.02                  1 391.26
Ireland         3                              4 451.31                  1 483.77
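The final column of Table 1 is obtained by dividing each country's population by its number of top-500 institutions. A minimal sketch of that arithmetic (the values are re-typed from the table and used only for illustration):

```python
# (country, number of ARWU top-500 institutions, population in thousands),
# re-typed from Table 1 above; illustrative only.
countries = [
    ("Sweden", 11, 9394.13),
    ("New Zealand", 5, 4370.70),
    ("Australia", 17, 22327.20),
    ("Ireland", 3, 4451.31),
]

for name, top500, population_thousands in countries:
    per_institution = population_thousands / top500
    print(f"{name}: {per_institution:.2f} thousand people per top-500 institution")
```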
2 In their book, The Spirit Level, Wilkinson and Pickett develop their own index of income inequality and
social mobility (probability that an individual will have a better socio-economic position than his/her
parents) expressed on a 0–100 scale.
[Figure: tertiary enrolment rate (Chile, 38 per cent; Brazil, 24 per cent) plotted against an index measured on a 0–0.9 scale.]
System performance
System performance can be measured by looking at the key outcomes of a
tertiary education system. Reflecting the various missions of tertiary educa-
tion, the benchmarking tool includes the following outcomes (a short
computational sketch follows the list):
• Research output: number of citations per 100,000 inhabitants
• Technology transfer: number of patents per 100,000 inhabitants
• Values: proportion of voting-age people who actually vote
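These outcome measures are simple per-capita rates. The sketch below shows how such indicators might be computed from raw national counts; the figures are invented placeholders, not World Bank data:

```python
def per_100k(count: float, population: int) -> float:
    """Express a raw national count as a rate per 100,000 inhabitants."""
    return count / population * 100_000

# Hypothetical national figures (invented for illustration only).
population = 17_500_000
citations = 210_000
patents = 1_400
votes_cast = 9_100_000
voting_age_population = 12_800_000

print("Citations per 100,000 inhabitants:", round(per_100k(citations, population), 1))
print("Patents per 100,000 inhabitants:", round(per_100k(patents, population), 1))
print("Share of voting-age people who vote (%):", round(votes_cast / voting_age_population * 100, 1))
```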
[Figure: analytical framework relating the performance of tertiary education institutions to governance and regulation, location, articulation and information, quality assurance and enhancement mechanisms, and resources and incentives.]
This analytical framework translates into specific input and process indica-
tors that measure 'system health' in the following way:
• Quantitative: objective measure
• Qualitative-observed: objective description
• Qualitative-interpreted: value judgement
The following indicators are relied upon to measure these various dimensions:
• Total funding for tertiary education reflects the level of public commitment
to tertiary education, as well as the success of resource mobilization
efforts (cost-sharing, donations, research, consultancy and training
contracts, etc.).
To provide a starting point for the comparison of Chile and Brazil, it is impor-
tant to first assess their relative performance on attainment rates. As shown
in Figures 6 and 7 depicting the growth in attainment rates at the primary,
secondary and tertiary levels, Chile has made greater gains in secondary and
tertiary enrolment from 1980 onwards. In 2010, Chile had a tertiary attainment
rate of 11.6 per cent for the population aged 25–65, while Brazil had just
5.6 per cent for the same age group.
Figures 6 and 7. Share of the adult population that has completed primary, secondary and tertiary education (%), Brazil and Chile, 1960–2010

Brazil    Tertiary   Secondary   Primary
2010        5.2        32.4        73.8
2000        5.3        21.2        61.6
1980        3.7         7.9        16.8
1960        1.1         4.8        22.9

Chile     Tertiary   Secondary   Primary
2010       11.6        55.2        82.1
2000        9.5        46.7        72.5
1980        3.3        22.1        60.1
1960        1.8        13.1        46.7
Figure 8 below helps explain why Chile has a higher tertiary enrolment rate:
its stock of candidates eligible to enter post-secondary studies is much
greater. In 2010, Chile had a secondary school completion rate of about
55 per cent, while Brazil's was about 32 per cent.
[Figure 8: enrolment rate comparison; Chile at 59.0 per cent, the LAC average at 45.8 per cent and Brazil at 33.7 per cent.]
[Figure 9: enrolment rate plotted against an indicator on which Chile scores 1.7, the LAC average 1.2 and Brazil 0.8.]
Figure 10 shows that the private enrolment shares of Brazil and Chile are
comparable. The LAC average private enrolment share is significantly lower, at
around 33 per cent. Both Brazil and Chile have used enrolment in private
institutions as a key element of their expansion strategies.
[Figure 10: private enrolment share; Chile at 77 per cent, Brazil at 73 per cent and the LAC average at 33 per cent.]
[Figure 11: indicator on which Chile scores 40.0, the LAC average 27.0 and Brazil 8.0.]
[Figure 12: public spending on student aid (grants and loans) as a percentage of the total tertiary education budget; Chile at 22.0 per cent and Brazil at 1.7 per cent.]
Conclusion
The world is interested in rankings in every walk of life. Countries are ranked
for their performance in all possible domains, from the Olympics to the
quality of life. It is not surprising then, that in the present tertiary education
world characterized by increased global competition for talented academics
and students, the number of league tables of universities has grown rapidly
in recent years.
The stakes are high. Governments and the public at large are ever more pre-
occupied with the relative performance of tertiary education institutions and
getting the best-perceived value as consumers of education. Some countries
are striving to establish ‘world-class universities’ that will spearhead the
development of a knowledge-based economy. Others, faced with a shrink-
ing student population, struggle to attract increasing numbers of fee-paying
foreign students. Just as scarcity, prestige, and having access to ‘the best’
increasingly mark the purchase of goods such as cars, handbags and blue
jeans, the consumers of tertiary education are also looking for indicators
that enhance their capacity to identify and access the best universities.
At the same time, these rankings are insufficient to measure the actual
performance of entire tertiary education systems. Beyond the results of
individual universities, it is important to be able to assess how a country
is faring along key dimensions of performance at the tertiary level such as
access and equity, quality and relevance, research productivity and technol-
ogy transfer. The benchmarking tool that the World Bank is in the process of
developing seeks to fulfill this role, by offering a web-based instrument that
policy-makers can use to make comparisons across countries on the vari-
ables and indicators of their choice. While the tool is not meant to give policy
prescriptions, it should provide a platform for facilitating diagnostic exercises
and the exploration of alternative scenarios for reforming and developing
tertiary education.
References
Aghion, P., Dewatripont, M., Hoxby, C.M., Mas-Colell, A. and Sapir, A. 2009. The Governance and
Performance of Research Universities: Evidence from Europe and the U.S. National Bureau of
Economic Research (NBER) Working Paper 14851.
Gladwell, M. 2011. The order of things: what college rankings really tell us. The New Yorker
(14 February 2011).
OECD (Organisation for Economic Co-operation and Development). 2002. Tertiary Education for
the Knowledge Society. Paris: OECD.
Salmi, J. 2011. The road to academic excellence: lessons of experience. P. Altbach and J. Salmi
(eds) The Road to Academic Excellence: the Making of World-Class Research Universities.
Washington DC: World Bank.
Salmi, J. and Saroyan, A. 2007. League tables as policy instruments: uses and misuses. Higher
Education Management and Policy, 19(2). Paris: OECD.
Shanghai Jiao Tong University. 2010. ARWU (Academic Ranking of World Universities). Center
for World-Class Universities and Institute of Higher Education of Shanghai Jiao Tong
University, China: www.arwu.org/index.jsp (Accessed 1 April 2012.)
Smolentseva, A. 2010. In search for world-class universities: the case of Russia. International Higher
Education, 58, Winter: 20–22.
UIS (UNESCO Institute for Statistics). 2010. Statistics: Data Centre. Montreal: UIS: http://stats.uis.
unesco.org (Accessed 1 April 2012.)
Wilkinson, R.G. and Pickett, K. 2010. The Spirit Level: Why More Equal Societies Almost Always Do
Better. London: Penguin.
World Bank. 2002. Constructing Knowledge Societies: New Challenges for Tertiary Education.
Washington DC: World Bank.
World Bank. 2009. World Development Indicators. Washington DC: World Bank: http://data.
worldbank.org/data-catalog/world-development-indicators (Accessed 1 April 2012.)
U-Multirank: a user-driven
and multi-dimensional
ranking tool in global
higher education and
research
Frans van Vught and Frank Ziegele
Introduction
The U-Multirank project1 encompassed the design and testing of a new
transparency tool for higher education and research. More specifically, the
focus was on a transparency tool to enhance understanding of the multiple
performances of different higher education and research institutions across
the diverse range of activities they are involved in.
1 The project ‘U-Multirank’ has been funded with the support of the European Commission. This chapter
reflects the views of the authors and the Commission cannot be held responsible for any use that may
be made of the information therein. For more information on the project, see: www.u-multirank.eu.
This chapter uses results and conclusions from the final report of the project (http://ec.europa.eu/
education/higher-education/doc/multirank_en.pdf) and from a recently published book: F. van Vught
and F. Ziegele (eds), Multi-dimensional Ranking, the Design and Development of U-Multirank, Dordrecht:
Springer, 2012.
Despite the serious critique of existing rankings – and particularly the major
global rankings – outlined above, our comprehensive review of the current
situation found a number of important examples of good practice that we
have carried forward into the design of a new transparency instrument.
These include:
• The Centre for Science and Technology Studies (CWTS) of Leiden University
publishes a ranking that compares research performance using impact measures
which take differences between institutions and disciplines into account. On
the basis of the same publication and citation data, different types of impact
indicators can be constructed, for instance one in which the size of the
institution is taken into account. (Rankings are strongly
influenced by the size-threshold used to define the set of universities
for which the ranking is calculated. Smaller universities that are not
present in the top 100 in size may take high positions in impact ranking
if the size threshold is lowered.) A major advantage compared to other
global rankings is the use of field-normalized citation rates that control
for different citation cultures in different fields. CWTS has also started to
develop new bibliometric methods allowing a link between publications
and the dimensions of regional engagement, internationalization and
knowledge transfer by analysing regional, international and university-
industry co-publications. All of these developments have been included
in U-Multirank allowing it to progress beyond existing rankings in the
methods of bibliometric research performance measurement.
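To illustrate the general idea behind field normalization (a sketch of the principle only, not CWTS's actual indicator or data), each publication's citation count can be divided by the world average for its field and publication year before averaging across an institution's output:

```python
# Illustrative world averages: citations per paper for a given (field, year).
world_baseline = {
    ("clinical medicine", 2008): 12.4,
    ("mathematics", 2008): 2.1,
}

# One institution's papers: (field, year, citations received) - invented examples.
publications = [
    ("clinical medicine", 2008, 25),
    ("mathematics", 2008, 4),
    ("mathematics", 2008, 1),
]

# Each paper is compared with the world average for its own field and year, so a
# mathematics paper is not penalized for mathematics' lower citation culture.
normalized = [cites / world_baseline[(field, year)] for field, year, cites in publications]
score = sum(normalized) / len(normalized)
print(f"Field-normalized citation score: {score:.2f}")  # 1.0 means cited at the world average
```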
Design principles
Based on our analyses of existing transparency instruments and on clear
epistemological and methodological principles we formulated a set of design
principles for U-Multirank:
• The fourth principle is that higher education rankings should reflect the
multi-level nature of higher education. With very few exceptions, higher
education institutions are combinations of stronger and less-strong faculties, departments and programmes.
Conceptualization
These design principles have underpinned the conceptualization of a new
ranking instrument that is user-driven, multi-dimensional and methodo-
logically robust. This new instrument must enable its users to identify insti-
tutions and programmes that are sufficiently comparable (through U-Map)
and to undertake both institutional and field level performance analyses.
The selection of these dimensions and indicators has been based on two
processes:
During the design process all potential dimensions and indicators were clearly
described and discussed in stakeholder workshops. After a first validity and
reliability check, we suggested comprehensive lists of possible indicators
derived from the literature and from existing practice (including from areas
beyond rankings). In addition, we designed a number of new, sophisticated
indicators, particularly bibliometric indicators for the research dimension.
On the basis of data gathered on these indicators across the five performance
dimensions, U-Multirank could provide its users with the online functionality
to create two general types of rankings:
[Table: illustrative multi-dimensional presentation of nine institutions against indicators such as the percentage of international students, regional co-publications, student internships in the region, CPD courses offered, student–staff ratio, graduation rate, citation index and start-up firms.]
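A minimal sketch of the presentation logic suggested by the sample table above: instead of collapsing indicators into one weighted score, each institution is placed in a performance group per indicator. The data, indicator names and three-group scheme below are invented for illustration and are not U-Multirank's own grouping method:

```python
# Invented indicator values for a handful of institutions (higher = better here).
data = {
    "Institution 1": {"graduation rate": 0.82, "citation index": 1.4},
    "Institution 2": {"graduation rate": 0.64, "citation index": 0.9},
    "Institution 3": {"graduation rate": 0.71, "citation index": 1.1},
    "Institution 4": {"graduation rate": 0.77, "citation index": 0.7},
}

def performance_group(value, all_values):
    """Assign a top / middle / bottom group for one indicator, with no overall league table."""
    ordered = sorted(all_values, reverse=True)
    rank = ordered.index(value)
    third = max(1, len(ordered) // 3)
    if rank < third:
        return "top group"
    if rank >= len(ordered) - third:
        return "bottom group"
    return "middle group"

for indicator in ("graduation rate", "citation index"):
    values = [scores[indicator] for scores in data.values()]
    for name, scores in data.items():
        print(f"{name} | {indicator}: {performance_group(scores[indicator], values)}")
```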
The intention of the project was to test the feasibility of the multi-dimen-
sional ranking on an initial group of 150 institutions drawn from Europe
and beyond. In most cases institutions needed to be active in one or
both of the fields of (mechanical and electrical) engineering and business
that were identified as the pilot disciplines for the field-based rankings.
Institutions also needed to be chosen to ensure that the diversity of insti-
tutions in participating countries was represented to the extent possible in
the initial pilot group.
Our analysis of potential indicators in the design phase showed that most
of the required data would need to be gathered at the institutional level as
national and international databases included very little of the information
we needed (bibliometric and patent databases were the two exceptions),
or did not provide comparable data. Four online survey instruments were
therefore designed to gather information:
The pilot study data collection phase started in November 2010 and closed
at the end of March 2011. One hundred and nine higher education insti-
tutions completed both the U-Map and institutional questionnaires; 83 of
The outcomes
The major objectives of the project were to develop an alternative to existing
global rankings, to avoid their problems and to test if such an instrument is
feasible. Our general conclusions from the two years of project work are:
Our single caveat concerns the global aspect of the feasibility study. The
prospects for European coverage are encouraging. In addition, institu-
tions from a number of countries not always visible in existing global
rankings, were enthusiastic about the project (including Australia, Japan
and a number of Latin American and Middle East countries). However,
the large amount of data to be collected suggests that U-Multirank cannot
easily achieve extensive coverage levels across the globe in the short-term
and in one step. Thus, in the short term a comprehensive coverage of
European institutions plus a limited extension beyond Europe is realistic.
Additionally, the pilot test proved that it was particularly difficult to recruit
institutions in China and the United States. Higher education and research
institutions in the United States showed only limited interest in the study,
while in China formal conditions appeared to hamper the participation of
institutions. Special attention will be needed to launch U-Multirank in these
systems. Nevertheless, worldwide participation from higher education
institutions was achieved in the pilot project. We believe that there will
be continuing interest from outside Europe from institutions wishing to
benchmark themselves against European institutions and that there are
opportunities for a targeted recruitment of groups of institutions from
outside Europe.
The third aspect of feasibility explored in the pilot study was the ques-
tion of the operational feasibility of up-scaling. Our experience with the pilot
study suggests that while a major ‘up-scaling’ will bring significant logisti-
cal, organizational and financial challenges, there are no inherent features
of U-Multirank that rule out the possibility of such future growth to a
worldwide level. The field-based ranking showed a substantial overlap of
relevant indicators in the pilot study fields between ‘business studies’ and
‘engineering’. Furthermore, the identification of a number of field-specific
indicators was also achieved. The overlap suggests that an extension to
further fields is feasible. For the development
of field-specific indicators, a stakeholder consultation with field organiza-
tions and experts proved to be successful. Therefore, it is realistic to expect
that U-Multirank can be extended to more fields rather easily.
Aside from these major aspects of feasibility and virtues of U-Multirank sev-
eral other lessons learned for successful rankings could be derived from the
pilot project:
References
IREG (International Ranking Expert Group). 2006. Berlin Principles on Ranking of Higher Education
Institutions. IREG.
[Table: examples of U-Multirank indicators by dimension, as used in the focused institutional ranking and the field-based ranking:
• Graduation rate; interdisciplinarity of programmes; relative rate of graduate (un)employment; time to degree; student–staff ratio; percentage graduating within norm period
• Research: percentage of expenditure on research; percentage of research income from competitive sources
• International orientation: percentage of programmes in a foreign language; percentage of international academic staff; international doctorate graduation rate
• Regional engagement: percentage of graduates working in the region]
Towards an international
assessment of higher
education learning
outcomes: the OECD-led
AHELO initiative
Richard Yelland and Rodrigo Castañeda Valle
Introduction
The lasting value of education for individuals and societies has been amply
demonstrated. Education gives people the social capital and knowledge
to invest in their own futures and to improve their well-being. Education
offers governments a way to invest in human capital and participate in
the knowledge economy. And it has proven positive social effects. For
these reasons we have seen fifty years of growth and change in higher
education as governments have set out to develop their higher education
systems and provide opportunities to more of their citizens. As demand
has grown, the ways of meeting it have diversified. Different types of
institutions work in different ways, meeting the needs of an increasingly
diverse student body. Technology has brought new opportunities for
online and blended learning. Students now have the opportunity to study
in institutions outside their country of residence and to combine mod-
ules from more than one institution. Growth and diversification have
brought with them a growing focus on the outcomes of higher education,
on the returns that both societies and individuals can expect to see from
the increasing investments they make in higher education. Students seek
better and more transparent ways of assessing the quality of providers
in a complex higher education market. Governments want to maximize
the impact of spending in higher education as they seek to balance the
demands of competing sectors. Institutions want to understand how to
improve their teaching and learning to improve their services and their
reputations.
This chapter briefly sets out the key trends in higher education that have
led OECD countries, and others, to work together to develop AHELO, and
describes progress to date with the work.
Massification
Rising investment in national education systems has tended to increase the
number of students enrolling, or expected to enrol, in higher education (HE)
(see Appendix, chart 1). According to UNESCO,
about 125 million students worldwide will have to be accommodated in
higher education by 2025 (UNESCO, 2011). Indeed, the majority of students
that graduate from secondary education in OECD countries do so in pro-
grammes aimed at providing access to tertiary education (OECD, 2011: 47,
indicator A2). In addition, between 1995 and 2009, entry rates for tertiary
Internationalization
In 2009, about 3.7 million students were enrolled in HEIs outside their coun-
try of origin (OECD, 2011: 318, indicator C3). The countries that attract most
students in the world are the G20 countries, and among them, HEIs within
OECD countries are seen as the most appealing, attracting about 77 per cent
of all foreign students enrolled in tertiary programmes outside their coun-
tries of citizenship (OECD, 2011: 318, indicator C3). For these reasons, inter-
national students represent an increasingly important fraction of the entry
rates in HE programmes worldwide, as shown in chart 4. Students who cross
borders to study have gradually pushed up entry rates in many HE systems. In
Australia alone, international students raise the entry rate by approximately
29 per cent (OECD, 2011: 312, indicator C2).
Thus, the social and individual value of education has led to growth in
higher education. This growth has become politically and strategically
important for governments.
These were among the reasons that led the OECD Education Ministers
meeting in Athens in June 2006 to devote their agenda to higher education
(OECD, 2006). At this meeting, governments highlighted their satisfaction
with the growth of higher education, and its increasing diversity, but
expressed concern about quality and accountability. Precisely as a result of
growing awareness of the value of HE to generate powerful individual and
social capital, ministers were keen to discuss better policies for improving
the access of individuals to HE and its proper and feasible funding. The
need to provide students, families and employers, as well as the institutions
themselves, with better information on the quality of HE was clear (OECD,
2006: 84).
Among the whole range of assessments, perhaps the most prominent are the
following (see Rauhvargers, 2011):
Despite these different approaches, given the complex and varied contexts in
which higher education is provided, with widely different cultures, languages,
population sizes, economic development and education systems, what reli-
ance can be placed on international rankings as a source of information on
the quality of teaching? The answer is very little, as these rankings are based
largely on research output. That being the case, how can we assess the qual-
ity of teaching in higher education taking account of different approaches to
teaching and learning in different contexts?
The OECD PISA assessment and others have demonstrated that it is pos-
sible to perform reliable and internationally comparable evaluations of what
school students have learned and can do. Perhaps more importantly, PISA
has demonstrated the practical possibility and usefulness of international
comparisons between education systems to inform policy development
and teaching practice. PISA assesses 15-year-olds. The question of whether something similar can be achieved at the tertiary level is what AHELO sets out to test.
Around the world, there are many different institutions, different disciplines,
different languages and different approaches to teaching and learning. All
these contextual and process factors complicate the task of international
comparison. Therefore, we must aim to evaluate students’ capacity for apply-
ing their knowledge in real life situations (e.g. what sorts of skills have been
acquired and how those skills are used). To add value, and to avoid stifling
diversity, which can be dangerous, such assessments need to go beyond testing
knowledge. They must test students’ capacity to reason in complex and applied
ways, and to use skills and competencies effectively in different settings. The
assessments need to be sophisticated and align with the forms of thinking and
professional work in which most graduates will engage. They need to employ a
wide range of methods, provide for a more balanced view of higher education
quality, and tap into capabilities that both educators and professionals rec-
ognize as important for educational success (Coates and Richardson, 2011: 5).
AHELO
The aim is not to assess the student’s knowledge of the specific curricular
content of these strands. Rather, AHELO aims to test what is above content,
meaning the student’s ability to use specific concepts of the discipline and
their own analytical thinking to solve real-life problems. Therefore, instead
of weighing a student's gain in curricular knowledge, AHELO measures the
student's capacity to master 'the language' of the discipline. AHELO aims to
demonstrate that it is possible to devise a set of test instruments applicable
across a range of different institutions, cultures and languages, and that the
practical implementation of these tests is feasible.
[Figure: AHELO Phase 1 (initial proof of concept): development and small-scale validation of assessment frameworks and instruments in three strands: generic skills, economics and engineering.]
AHELO has the potential to become a powerful ally for quality assurance of
higher education through the assessment of learning outcomes throughout
the world. It is an innovative endeavour intending to assist the improvement
of teaching and learning in HE and its internationalization. Further, it intends
to make the performance of HEIs, and their added value to individual and
social capital, more transparent.
• Importantly, it will help fill the information void and enable students to
make more informed choices about their futures. Far too many students
drop out of higher education programmes before completing them: this
can be not only a personal setback, but also a waste of valuable resources.
If AHELO can help students better match programmes to their expectations
and aspirations it may offer higher education stakeholders the possibility
of determining general standards for HEIs across national borders.
Conclusion
Higher education brings significant individual and social benefits.
Massification, globalization and internationalization are challenging higher
education systems. New approaches to the assessment of quality and perfor-
mance are demanded both by governments and individuals. The OECD has
taken the initiative in developing an international assessment of learning
outcomes in higher education. Experience to date is encouraging but there
is still some way to go before definitive results will be known.
References
OECD (Organisation for Economic Co-operation and Development). 2006. Education Policy
Analysis: Focus on Higher Education. Paris: OECD.
Rauhvargers, A. 2011. Global University Rankings and their Impact. EUA report on rankings 2011,
European University Association, Brussels.
UNESCO. 2011. UNESCO Global Forum on Rankings and Accountability in Higher Education:
Uses and Misuses, 16–17 May 2011, UNESCO, Paris.
Chart 1. Upper secondary graduation rates (2009)
1. Year of reference 2008.
Countries are ranked in descending order of the upper secondary graduation rate in 2009.
Source: OECD. China: UNESCO Institute for Statistics (World Education Indicators Programme). Table A2.1. See annex 3 for notes (www.oecd.org/edu/eag2011).
Chart 2. Source: OECD, Education at a Glance 2011, Indicator A3, Chart A3.1, p. 60.

Chart 3. Entry rates into tertiary-type A (Bachelor's) education: impact of international students (2009)
1. The entry rates at tertiary-type A level include entry rates at tertiary-type B level.
2. Year of reference 2008.
Countries are ranked in descending order of adjusted entry rates for tertiary-type A education in 2009.
Source: OECD, Education at a Glance 2011, Indicator C2, Chart C2.3, p. 312.

Chart 4. Trends in international education market shares (2000, 2009)
1. Data relate to international students defined on the basis of their country of residence.
2. Year of reference 2008.
Countries are ranked in descending order of 2009 market shares.
Source: OECD, Education at a Glance 2011, Indicator C3, Chart C3.3, p. 322; OECD and UNESCO Institute for Statistics for most data on non-OECD countries. Table C3.6, available online. See annex 3 for notes (www.oecd.org/edu/eag2011).
Notes on
contributors
Phil Baty is Editor of the Times Higher Education Rankings, and Editor-at-
Large of Times Higher Education magazine. Baty has been with the magazine
since 1996, as reporter, news editor and deputy editor. He was named among
the top 15 ‘most influential in education’ 2012 by The Australian newspaper
and received the Ted Wragg Award for Sustained Contribution to Education
Journalism in 2011, part of the Education Journalist of the Year Awards, run
by the Chartered Institute of Public Relations. In 2011, Times Higher Education
was named Weekly Magazine of the Year and Media Brand of the Year
(business category) by the Professional Publishers’ Association.
Nian Cai Liu took his undergraduate study in chemistry at Lanzhou University
of China. He obtained his doctoral degree in polymer science and engineer-
ing from Queen’s University at Kingston, Canada. He is currently the Director
of the Center for World-Class Universities and the Dean of Graduate School
of Education at Shanghai Jiao Tong University. His research interests include
world-class universities, university evaluation and science policy. The
Academic Ranking of World Universities, an online publication of his group,
has attracted attention from all over the world. His latest book is Paths to a
World-Class University: Lessons from Practices and Experiences.
Ben Sowter leads the QS Intelligence Unit, which is fully responsible for the
operational management of all major QS research projects including the QS
Top MBA Applicant and Recruiter Research, the QS ‘World University Rankings’
and the QS Asian University Rankings. He graduated from the University of
Nottingham with a BSc. in Computer Science. Upon graduation he spent two
Frank Ziegele is Director of the CHE Centre for Higher Education, Gütersloh
(Germany), and Professor for Higher Education and Research Management
at the University of Applied Sciences Osnabrück. He trained as an economist
and his research and publications focus on higher education finance,
governance, strategic management, contract management, ranking and
controlling. In these areas he also acts as consultant and trainer. He has
contributed about 100 publications to the field of higher education
policy and management and has carried out more than eighty projects in the same
field, for instance as co-leader of the U-Multirank feasibility study.
www.unesco.org/publishing