Ranking universities within a globalised world of competition states: to what purpose, and with what implications for students?

Susan Wright

Abstract

Universities invented the grading and ranking of students, but now this technology is being applied to rank universities in numerous global, national and disciplinary league tables and ranking lists. This chapter traces this transition and examines how world rankings are used as a form of governance in a world made up of ‘competition states’ striving for competitive advantage in a ‘global knowledge economy’. The role of such states is to create the framework within which institutions and individuals use their own agency to compete on a global scale. Rankings take on two roles in this form of governance: as an instrument of accountability to government, and as a guide for student choices. After assessing rankings’ fitness for either of these two purposes, the chapter concludes by raising some educational implications of this form of governance.

Introduction

Some academics would argue we’ve brought it on ourselves. Turning something as complex as education into a number, and then publishing the numbers from top to bottom in a list signifying performance, was, after all, invented by universities to grade and rank students’ work. This, in Foucault’s terms, is a political technology (Dreyfus and Rabinow 1982: 196), and it is one that has travelled through time and space to come back to universities with a transformed purpose. I will briefly sketch out that history in order to show the current purpose of grading and ranking universities. I will argue that ranking is a technology of the ‘competition state’, where the role of the state is to invest in its institutions’ capacity to compete on a global scale. Seen in that light, a university’s position in global rankings is a form of accountability to the state, an indication that the university has put the state’s public investment to good effect. But the point of performing on a global scale is to compete in markets, not least for international students. In this trade, universities are just one organisation amidst a vast array of companies organising the branding and marketing of universities and the recruitment and international movement of students (Robertson et al. 2012). Seen in this light, rankings are important for reputation management, with the danger that brand and appearance assume greater significance than education and experience. When universities are party to this transformation of the technology, from grading students’ work to grading, ranking and marketing whole institutions, what image of the student and of education are universities portraying?

From ranking students to ranking universities

In 1817, the new principal of the West Point Military Academy, Sylvanus Thayer, instituted an educational system based on arithmetic grading, which he borrowed from the École Polytechnique in France. Thayer established a hierarchical structure at the Academy, down which rules and regulations passed to the students, and up which flowed regular and systematised reports including students’ grades.
The authors of this study explain:

This is a total accountability system, where all aspects of performance, academic and behavioural, are constantly measured, evaluated and recorded in a joint numerical-linguistic language which is also a currency (Hoskin and Macve 1988: 49).

According to Hoskin and Macve, every student’s subject knowledge was tested daily, weekly and half-yearly and marked according to a standardised, 7-point numerical scale. Students’ aptitude, study habits, and whether their conduct was sufficiently ‘military’ were also recorded in weekly, monthly and half-yearly reports, and given a grade on a 7-point descriptive scale from ‘excellent’ to ‘indifferent’. Both sets of reports went up the hierarchy. The marks were used to divide each year into 4 graded classes. Each student knew his place and what he had to do to move up the ranking. This was

an exhaustive hierarchical reflexive system of command and communication, ...which (ideally) made every individual in the institution constantly visible and accountable for his behaviour (Hoskin and Macve 1988: 59).

The West Point students were made into calculative and accountable selves. They learnt the norms against which they were marked and they knew what they had to do to improve their grades. Their final mark determined how prestigious their first appointment would be, and their record accompanied them throughout their military career and beyond.

This system produced the best civil engineers in the country. It also produced some of the best managers of the armouries, the railroads and the newly forming industrial corporations. They imported into these organisations a hierarchy down which passed meticulous regulations and up which passed written reports with number-based, normalising judgements.
These reports graded each employee’s productivity, and were the currency for comparing units, so that every employee ‘felt and often remarked that the eyes of the company were always on them through the books’ (Chandler 1977: 267-8, quoted in Hoskin and Macve 1988: 67). In short, the organisation of corporate America relied heavily on the West Point graduates’ reflexive knowledge about how to create a system of organisation and discipline that turned managers and workers into calculative, accountable selves.

This management system spread across corporate America and perhaps found its apotheosis under Robert McNamara as president of the Ford Motor Company (Martin 2010). In the 1950s, he used the new IBM computers to feed numbers into spreadsheets. The manager of each section was given targets and their performance was measured by a higher bureaucracy, creating a task-driven, fiercely competitive culture in which each section competed with every other and instrumentally played the system to protect its institutional position. This system became counter-productive when concern for internal competition and intrigue far outweighed any overall vision of the quality of the car or the satisfaction of the customer. As Tom Peters (one of the authors of the famous management book In Search of Excellence) complained:

Start with Taylorism, add...a dose of McNamaraism, and by the late 1970’s you had the great American corporation that was being run by bean counters (2001: 88).

As US Secretary of Defence under President Kennedy, McNamara transferred this system to the running of the Vietnam war, where ‘body counts’ proved a disastrous substitute for ‘our profound ignorance of the history, culture and politics’ (McNamara quoted in Martin 2010: 16). Despite this failure both in industry and in the military, numerical measures of the performance of complex organisations and activities were transferred to the public sector in the 1980s as an important feature of ‘New Public Management’. Key Performance Indicators (KPIs) were devised as measures of the quality, value for money and efficiency of most public services. The performance of schools, services for the elderly, hospitals, and most other public services was reduced to a number and ranked in ‘league tables’ (Shore and Wright 2000). The practices of players at the top of the league were distilled and de-contextualised as ‘best practice’ to be spread to the others, notably those ‘named and shamed’ at the bottom of the league.

Universities were not immune. In Britain in 1986, the first Research Assessment Exercise (RAE) involved submissions from 2,598 departments (from 173 universities). Each department’s research output was read by a committee of peers from the relevant discipline and marked and graded on a 7-point scale. Initially only 14% of the government’s budget for university research was allocated according to the results, but this increased to 30% in 1989 and to 100% from 1992 onwards. By 2001, 75% of research resources were concentrated on the top tier of departments (Wright 2009). Shortly after the RAE started, a second system of Teaching Quality Assessment was introduced, partly to re-balance the impact of the RAE, which had tilted the focus of academic time and effort towards research, and partly to reassure government that teaching loads could be increased and funding reduced without affecting the quality of teaching (Shore and Wright 1999).
Finally, a third system of ‘institutional audit’ assessed the overall administrative efficiency and effectiveness of each university.

As systems of grading and ranking migrated historically from universities into private sector management, from there to public sector management, and back to the universities again, important shifts took place. Strathern (1997) identified one shift when discussing the UK’s institutional audit: systems for grading performance, which had started as a currency for individual examination at universities, and had travelled into business accountancy systems, now returned to universities as a way to examine and grade their performance as organisations and hold them publicly to account. A further shift came when these grading and ranking technologies were used, in Hoskin and Macve’s terms, to create a total accountability system that governed not just the individual and the organisation but the sector as a whole. The Organisation for Economic Co-operation and Development (OECD) documented numerous ways governments had found for ‘steering by numbers’ (Frølich 2008) and setting up competition between their universities. Some adapted the method of the UK’s RAE for measuring and ranking the performance and quality of their universities (e.g. Hong Kong in 1993, and New Zealand’s Performance-Based Research Fund (PBRF) from 2003 onwards). Others focused on bibliometric measures, which used commercially produced citation indexes and impact factors, or gave grades for publications in ranked lists of journals. Most famously, this system of using ranked lists of journals was incorporated into the Australian assessment system called Excellence in Research for Australia (ERA) until, at the last minute, the Minister cancelled it because of evidence that institutional research managers were setting academics targets for publication in top-ranked journals – what he called ‘ill-informed and undesirable behaviour in the management of research’ (Carr 2011). In Scandinavia this system is referred to as ‘the Norwegian model’, and it was adapted as one measure for the competitive allocation of funding in Denmark. Contrary to the stance taken in Australia, in Denmark the intention seemingly was that the ‘bibliometric points system’ would be used to steer academics to focus on publishing in ‘top’ journals (Wright 2011).

Of course, any such system of grading and ranking has ‘skewing effects’, as academics also know from the literature on ‘teaching to the test’ and the washback effect of any examination system (Cheng et al. 2004). Accounts of the skewing effects of systems of measuring and grading universities’ research output are legion and are so well established as to have names, such as ‘salami slicing’ (cutting research results into small chunks, each published as a separate journal article), ‘rush to press’ (publishing partial results as soon as they are available rather than making a mature and considered analysis), plagiarism, and ‘gamesmanship’ (including making lists of approved journals that reinforce the dominance of narrow disciplinary interests, as alleged of the Association of Business Schools list, Willmott 2011: 436, or cartels dominating top journals, Macdonald and Kam 2011). The UK’s House of Commons Science and Technology Committee (2004: 21) called the RAE a ‘morass of fiddling, finagling and horse trading’ which was ‘starting to lack credibility’ (Wright 2009).
Similarly, a report by the prestigious British Academy Policy Centre (Foley and Goldstein 2012) warns of the perverse effects of using aggregated measures and rankings punitively, to name and shame, rather than developmentally, to diagnose and remedy problems internally. These skewing effects and, as the Australian minister said, the ‘ill-informed’ target-setting practices of managers might be undesirable when compared with academic values, but this is exactly how governance through grading and ranking performance is meant to work. A single steering technology, such as getting West Point students to compete for grades on subject knowledge and military deportment, permeated all aspects of the organisation. The grades were turned into frequent rankings which pitted the students against each other, in a permanent record on which future job prospects relied. Ranking universities by counting the number of publications in ‘top’ journals, and using that measure as a basis for the competitive allocation of funding, is intended to be a similarly effective steering technology. If so, it would prompt the whole sector to be reorganised in search of competitive advantage, and the management of each organisation to focus on setting targets and incentives, so that the self-management of every individual concentrates on ‘what counts’. Governance through numbers is meant to operate on and across all scales to create a total accountability system.

The shift to a global scale and the competition state

The early 1990s marked a shift in the scale on which institutions were expected to act – and be graded and ranked – from the national to the global. Cameron and Palan (2004) identify three coeval narratives that re-imagined the spatial construction of the world. First, a globalised ‘offshore’ economy in which state regulations were ‘hollowed out’ and strategic use of time and space in industrial production, trade and financial markets opened new opportunities for capital accumulation. Second, and at the other extreme, in localised pockets of deprivation, often called ‘neighbourhoods’ or ‘communities’, the poor were stuck, marginalised and excluded from these opportunities. Third, and mediating the other two, was what they called the competition state (2004: 109, following Cerny 1990). In a world envisaged as consisting of competing units on every scale – countries, regions, cities and individuals – the role of the state was no longer to apply universal welfare services to a homogeneous population (Jessop 2002). Instead, the role of the competition state was to mobilise all possible productive resources and deploy them to competitive advantage (Pedersen 2011).
This entailed the state providing the legal, regulatory and financial framework for opening up the new frontiers for capital in the ‘offshore’; reforming the organisation and steering of educational and other services so that they contributed to economic competitiveness; and enabling every individual to optimise their skills and their position in a global labour market, with the idea that the country as a result would prosper.

What was the competition about? Widespread discussions about new forms of industrial and social organisation focused on ‘knowledge’ as a new resource. In the late 1990s, the OECD consolidated these discussions into a definition of the ‘global knowledge economy’. This it portrayed as an inevitable and fast-approaching future in which competitive advantage lay in the speed at which new knowledge could be generated and converted into innovative products or new ways of organising operations. If its member countries were, as the Danish government (2006) said in its globalisation strategy, to retain their position as the richest countries in the world, they needed a high-skills population and an efficient system of generating knowledge and transferring it to industry. Universities were thrust centre stage, as the agents mobilised by the competition state for the country to succeed in the global knowledge economy.

Universities, as reconfigured by the competition state, were expected to operate in all three imagined spaces of a globalised world. They were to position themselves as corporations in the ‘offshore’ global space, competing for international trade in students and research contracts; they were to service the national economy by providing industry with its new natural resource – knowledge – and with students equipped with the latest knowledge, networks and skills needed to be self-starting, team-working, mobile workers in the new flexible labour market; and, through equal access policies, universities were to help overcome the inherently divisive effects of the ‘high skills’ economy by holding out hope for people seeking to leave the archipelago of ‘exclusion’ so that they too could join the globalised labour market (Reich 1991). In all regards, universities were to promote the image of their competitive state, region or city. In less than 20 years, universities shifted from being institutions ring-fenced from economic and political interests, to institutions ‘driving’ the global knowledge economy by providing the nation with knowledge for innovation and a high-skills population, and then to being part of that global economy themselves.

Denmark had already embarked on a major reform of governance and, in 2003, universities were included in these reforms. The Ministry of Finance had promoted reforms so that the state no longer administered public services through large bureaucracies; instead, the government set the political aims and the budgetary and legal frame for a service, and contracted out policy delivery to parts of the bureaucracy and public sector that were turned into ‘free agencies’ or, in the case of universities, ‘self-owning institutions’ (Wright and Ørberg 2008).
Whereas the Ministry of Finance proposed this ‘aims and frame’ steering system as a way to tighten centralised control over what happened ‘on the ground’ – or at the chalk face – and especially to make contractors respond quickly to changes in policy direction, the Ministry of Research seemed to think it would give universities the capacity to respond quickly and flexibly to national needs and international markets. Although Denmark did not push the process of outsourcing policy implementation as far as the privatisation and competitive tendering for service delivery contracts in the UK, the legal, funding and accounting system in Denmark enables the cost and performance of public services, including universities, to be compared with each other and with private suppliers (Wright 2012).

In this worldview and this form of governance, ranking took on two new purposes. First, the competition state ‘set universities free’, as the minister said in Denmark, as agents responsible for their own actions yet accountable to the state for their performance (Wright and Ørberg 2009). The minister set targets for the universities and gave them greater funding, which was increasingly targeted and based on competition. These, he argued, were the framework conditions universities needed to succeed as actors on a global stage. The minister would measure whether universities had used their increased trust and funding responsibly by their standing in global rankings. For example, the then Prime Minister’s ‘Danish dream’ included: ‘In 2020, Denmark will have at least one university in the top 10 in Europe, as measured by the reputable annual report in Times Higher Education’ (Danish Government 2010). This may have looked possible in 2009, when two Danish universities were in the Times Higher Education’s top 100 and Copenhagen University was placed 15th in Europe, but in 2011 both fell out of the THE’s top 100 (Information 2011). Ministers of Education throughout the world were by this time all saying that they wanted to see one of their universities in the THE’s top 100, in what Hazelkorn calls ‘the academic arms race’ (2008: 209).

The second purpose of university rankings connects to the competition state’s view of its citizens. As Pedersen (2011) explains, each historical formation of the state entails a view of human nature (statens menneskesyn), that is, a vision of the way that citizens should see themselves and conduct themselves. In the competition state, citizens are seen as individuals, responsible for their own life project and for acquiring the bundle of skills (færdigheder) needed to maximise their own life chances. This image of the individual chimes with the ideas in ‘new human capital theory’ (Brown et al. 2007), whose development was coincident with the shift from the welfare state to the competition state. Instead of the state providing education as a public good, this theory argued that individuals should themselves be responsible for acquiring the education and skills they need to gain employment, and for continually reinvesting in lifelong learning to keep their place in a fast-moving labour market. Individuals are meant to invest in themselves as a project, which they can market through their CV, and this involves making many choices. One of the major choices is which degree to take at which university.
University rankings are meant to enable people to make decisions about where to invest their time and energy, or, as the English phrase goes, where to ‘learn to earn’. The next two sections discuss whether and how rankings fulfil these two purposes: as a form of accountability and as a basis for making individual choices.

Rankings as accountability or gamesmanship?

There are upwards of 15 world ranking lists, and many more for individual countries or specific disciplines; their methods vary widely and are much discussed. The same 10 universities tend to come out on top of all global rankings, whatever measures are used. Indeed, Times Higher Education refers to the six ‘global superbrands’ as occupying ‘a special zone beyond ordinary competition’. The grades in the middle ranks are then so similar to each other that the slightest change in the definition or weighting of one measure can move universities up and down the list quite substantially. Usher and Savino (2006) explain that Jiao Tong University in Shanghai gives most weight to research (90%), based on the citation index produced commercially by Thomson Reuters (a corporation which describes itself on its website as ‘the world’s leading source of intelligent information for businesses and professionals’). The Jiao Tong authors consider this citation index to be accurate only for the hard sciences, so they take the social sciences and humanities very little into account in their grading (ibid.). In contrast, the Times Higher Education’s (THE) World University Ranking uses citations as only one among a raft of measures:

THE’s World University Ranking: measures and weightings
  Teaching: 30%
  Research (measured by volume, income and reputation): 30%
  Citations (as an indicator of research influence): 30%
  Industrial income (as an indicator of innovation): 2.5%
  International outlook (of staff, students and research): 7.5%

Both the Jiao Tong and the THE world rankings give most weight to research. The Jiao Tong does not measure teaching, and the THE gives it a 30% weighting whereas research and citations together have a 60% weighting. This explains governments’ use of bibliometric measures to allocate funding as an incentive to publish in the journals that ‘count’ in the world rankings (discussed above). But it is unclear how useful rankings based predominantly on research output are for students trying to judge where to do their degree. The US News and World Report’s ‘Best College Guide’ does focus on teaching, but its variables, according to Gladwell (2011) in a critical essay, are ‘each weighted according to a secret sauce cooked up by the editors’:

US News and World Report’s ‘Best College Guide’: variables and weightings
  Undergraduate academic reputation: 22.5%
  Graduate and freshman retention rates: 20%
  Faculty resources: 20%
  Student selectivity: 15%
  Financial resources: 10%
  Graduate performance: 7.5%
  Alumni giving: 5%

Gladwell points out that each of these is made up of further measures, some of them quite flimsy. For example, ‘Undergraduate academic reputation’ is based on a survey of presidents, provosts and administrative deans who are asked to grade a list of 261 national universities. As Gladwell says, ‘It is far from clear how any one individual could have insight into that many institutions’. Rather, he presumes, ‘when a president is asked to assess the relative merits of dozens of institutions he knows nothing about, he relies on their ranking’, so that reputation and ranking reinforce each other in a self-fulfilling prophecy.
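Before turning to how the individual variables are themselves constructed, it is worth making concrete how such weighting schemes convert a handful of disparate indicators into a single ordered list – and how sensitive that order is to the weights chosen. The following minimal sketch, in Python, shows only the general principle of a weighted composite score; the university names and indicator values are invented for illustration, and real ranking providers normalise and combine their data in ways that are not fully public.

    # Minimal sketch of a weighted composite 'league table' score.
    # The weights follow the THE-style scheme listed above and sum to 1.0;
    # the institutions and their indicator scores (0-100) are hypothetical.
    WEIGHTS = {
        "teaching": 0.30,
        "research": 0.30,
        "citations": 0.30,
        "industrial_income": 0.025,
        "international_outlook": 0.075,
    }

    universities = {
        "University A": {"teaching": 72, "research": 80, "citations": 90,
                         "industrial_income": 55, "international_outlook": 60},
        "University B": {"teaching": 85, "research": 70, "citations": 78,
                         "industrial_income": 90, "international_outlook": 95},
    }

    def composite_score(indicators):
        # Weighted sum of one institution's indicator scores.
        return sum(WEIGHTS[name] * score for name, score in indicators.items())

    # Sort from highest to lowest composite score to produce the ranked list.
    ranked = sorted(universities, key=lambda u: composite_score(universities[u]), reverse=True)
    for position, name in enumerate(ranked, start=1):
        print(position, name, round(composite_score(universities[name]), 1))

With these invented numbers, University B edges ahead of University A (79.3 against 78.5), but shifting the 7.5% ‘international outlook’ weight onto ‘citations’ reverses the order – which is precisely why small methodological adjustments can move mid-ranked institutions up or down the published tables.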
Other measures are constructed from further weighted variables. For example, ‘Faculty resources’ is calculated from a weighted combination of class size, faculty salary, percentage of professors with the highest degree, student-faculty ratio and percentage of full-time faculty. These measures, Gladwell argues, are not effective proxies for how well a college informs, inspires and challenges students.

Overall, Gladwell makes a criticism which is relevant to many other rankings. He points out that a ranking system can be one of two kinds. A heterogeneous ranking system takes a wide range of different items (all kinds of cars, or all kinds of universities) and compares them according to one variable. A comprehensive ranking system takes one kind of item (one category of university, or one discipline) and compares the items according to a range of variables. In what Gladwell calls ‘an act of real audacity’, the US News and World Report tries to be both heterogeneous and comprehensive – and the same could be said of THE and many other ranking systems. Whereas in the original form of marking and grading individual performance, students had all taken the same class and were all asked the same exam questions or set the same assignment, in the shift to marking and grading organisational performance, universities are very heterogeneous and are set a bewildering array of different kinds of tests, whose significance is interpreted in obscure ways.

University leaders are often aware of the methodological weaknesses of rankings. For example, when US News and World Report started making an annual ranking of law schools, Yale’s dean called it ‘an idiot poll’ and Harvard’s dean described it as ‘Mickey Mouse’, ‘just plain wacky’ and ‘totally bonkers’ (quoted in Sauder and Espeland 2009: 68). Yet university leaders accept rankings as a measure of accountability towards funders, especially governments and boards of governors. They also give rankings credibility and currency by using them to set targets for their institution – ‘We’re going to rise in the US News ranking’ – and to incentivise and judge the performance of managers, including their own. In 2007, the President of Arizona State University was reputedly offered a $10,000 bonus to improve that university’s ranking (Simpson 2012: 22). The endorsement by leaders and academics themselves makes these shaky measures pervasive and generative of the organisation itself. The law school deans in Sauder and Espeland’s study (2009: 68) felt that rankings are integral to everything about their school: its reputation and admission policies, and how it budgets and allocates money, evaluates subordinates, colleagues and themselves, motivates and coordinates activities, formulates goals and explains outcomes. As Power (1997) argued, audit systems do not objectively measure organisations as they are; they create organisations in their own image. This can be seen in university leaders’ descriptions of their own work. In the UK, the Director of Marketing and Communication of Exeter University, one of the few universities to make a leap in the rankings (from the mid-30s in 2004 to the top 10 in 2012), explained: ‘We took the trouble to understand how the league tables worked and then implemented a deliberate policy of using the metrics to drive institutional performance’ (Catcheside 2012). Sauder and Espeland’s (2009) survey of deans of US law schools found that, among their job tasks, they gave paramount importance to trying to ensure their school did not slip in the rankings.
They have to know in intimate detail how each of the variables in each of the rankings is constructed. Then they have to check that they are registering all the positive figures about their students, staff, income, publications, exam performance, graduate employment and so on in precisely the right way for them to be picked up in the questionnaires and surveys of the ranking companies. They find academic decisions about the curriculum, the grading curve or faculty publication strategies shaped by their likely effect on the school’s numbers and ranking. Whether in making serious decisions and budget allocations or in keeping meticulous records of inconsequential details, administrators resent the all-consuming attentiveness to the presumed requirements of the for-profit ranking companies. But the punishment for not doing this is harsh. If they get it wrong, their school could slip, with consequences for the students they attract, the fees they can charge, and staff dismissals.

Danish universities are beginning to give similarly detailed attention to data management to improve their position in the rankings. The two rectors of Copenhagen and Aarhus Universities published a feature article in the newspaper Politiken in 2011, in which they discussed their different rankings in the THE, Shanghai, OECD and Leiden listings and argued for taking their positioning seriously (Holm-Nielsen and Hemmingsen 2011). In 2010, Aarhus University advertised for two special advisers (specialkonsulenter) to join the top leadership team, advising the governing board, rector and senior management team on ranking and on making visible the university’s scientific output and bibliometric analysis. The tasks of one of these posts included developing a ranking strategy, participating in international work on developing new methods for ranking, and coordinating the data reporting to international ranking systems (Aarhus University 2010). There is awareness, as in the US law deans’ accounts, that rankings depend heavily on knowing how the system works and how to achieve best advantage within it – in what could be called ‘administrative gamesmanship’.

The skewing effects of ‘academic gamesmanship’ and the ploys of ‘administrative gamesmanship’ are both responses to the imperatives of a system of steering through grading and ranking which makes clear ‘what counts’ and how to adjust work and behaviour to maximise individual, institutional and national advantage. Of course not all academics and administrators respond in this way, but my current fieldwork indicates the speed and extent to which the Danish ‘BFI’ (bibliometric research indicator) has been used as a steering tool by university management and is known and understood as ‘what counts’ among academics. The examples from the US, the UK and Denmark show that government steering and funding algorithms, university management and academic behaviour are all increasingly influenced by ‘what counts’ in the rankings. Is this a sign that steering by numbers has been successfully established as a total accountability system of governance? Or should there be concern that politicians, university leaders and academics are using, responding to, and thereby endorsing, systems of grading and ranking that they know are flawed and that are changing practices in ways sometimes contrary to administrative and academic values? Goodhart’s law famously states that when a measure becomes a target, it ceases to be a good measure, because it distorts behaviour.
Bearing in mind the distortive and sometimes perverse effects of rankings, are they fit for purpose as a form of accountability to government and funders?

Rankings and the international trade in students

If the second purpose of rankings in a global knowledge economy is to enable international students to choose where in the world they wish to study, how fit are the rankings for this purpose? The volume and value of the international trade in students is huge and, in the words of the rectors above, very serious. Many countries in the west, and increasingly ‘hubs’ in Asia, the Gulf and Africa, see the international trade in students as a means of attracting the highest talents, who would hopefully join the country’s workforce (although the same country’s immigration policies sometimes do not permit this!). Foreign students are also a source of earnings that is increasingly important for many countries’ Gross National Product. A report for the World Bank (Bashir 2007: 13) calculated that between 1999 and 2004, the number of students studying abroad rose by nearly 50 per cent, from 1.64 million to 2.45 million. The British Council calculated that global demand for international student places would rise to 5.8 million by 2020 (Bohm et al. 2004: 5). The five English-speaking countries (USA, UK, Australia, New Zealand, Canada) dominate the trade, although between 2000 and 2009 the USA’s market share declined and those of Australia, New Zealand and also Russia increased (OECD 2011: Chart C3.3). In 2005, the value of education exports for the USA was $14.1 billion, while the other four together earned $14.2 billion (Bashir 2007: 19). A report for the Canadian Foreign Affairs and International Trade Ministry indicates the importance of this trade for the national economy: the expenditures of international education students (tuition, accommodation, living costs, travel and discretionary products and services) infused $6.5 billion into the economy, surpassing exports of coniferous lumber ($5.1 billion) and coal ($6.1 billion), and sustained employment for 83,000 Canadians (RKA Inc 2009). This trade is a drain on the countries that send students: the annual bilateral and multilateral Official Development Assistance to these countries for post-secondary education amounts to only one tenth of the annual value of exports of higher education services in the five main exporting countries (Bashir 2007: 11).

Despite much political attention, Denmark has not yet successfully positioned itself in this trade. Because of the high unit cost of tertiary education, Denmark introduced tuition fees for non-EU international students from 2006-7 and offered a few hundred state scholarships for these students. Between 2004 (indexed at 100) and 2009, Denmark’s index of change in the percentage of international students in all tertiary education was 118 (that is, the 2009 share was 118 per cent of the 2004 share), whereas in Norway and Sweden, which had no tuition fees for international students, the index was 141 and 159 respectively (OECD 2011: Table C3.1). Denmark’s figures are too low to feature in the OECD charts showing the percentage of foreign students by country of destination and countries’ market share of international education (OECD 2011: Charts C3.2 and C3.3).

How should students choose their university a continent away? The OECD painted future scenarios of universities responsive to the demand side (students) instead of dominated by the interests of the supply side (academics), and organised through markets rather than by state steering (Wright and Ørberg 2012).
Yet the OECD became alarmed at the prospect of the World Trade Organisation’s General Agreement on Trade in Services (GATS) opening up free trade in higher education, which offered no protection to students/customers against fraudulent and low-quality providers. These risks had long been recognised by the United Nations Educational, Scientific and Cultural Organization (UNESCO). In the light of the growth of the market and the entry of commercial providers, UNESCO established a Global Forum on the International Dimensions of Quality Assurance, Accreditation, and the Recognition of Qualifications, to develop a dialogue between academic and market values in ‘cross-border education’. Some participants wanted a UNESCO Convention (an international regulatory tool) ‘to shape the global market of higher education and embrace alternative tools and frameworks to GATS’ (Mathisen 2007: 271). UNESCO emphasised the importance of governments establishing methods of quality assurance and accreditation, and it planned to establish a database of all recognised higher education institutions operating in each country, whether public/private, not-for-profit/for-profit, or national/foreign – a definition which included many of the new forms of provision through foreign campuses, franchises, etc. (Mathisen 2007: 275). Students could then check the bona fides of institutions in distant countries and be sure they were accredited and quality controlled before making applications. The OECD, which lacked the powers to make international regulatory instruments, sought to collaborate with UNESCO, and in the end a much weaker instrument, a UNESCO Guideline, was agreed (ibid.). UNESCO’s website on authorised providers and its work on protecting cross-border students have had very little publicity or recognition.

This work stands as an alternative to rankings, which are a major source of earnings for the publishers that own them and are part of an upsurge in commercial operations associated with the trade in international students. Universities themselves can be seen as just one, relatively small, player in a mass of new businesses and organisations involved in higher education – changes so extreme as to be called a process of ‘re-sectoralisation’ (Robertson et al. 2012). These companies include the five big publishers that own the ‘top’ academic journals; firms, like Thomson Reuters, that create journal citation data and impact factors; consultancies doing surveys for other aspects of the rankings; and the newspaper companies that own the rankings and boost their sales (in the UK, Murdoch’s THE, The Times, and The Guardian; in Canada, Maclean’s). Universities buy individualised data packages from ranking companies and draw on numerous consultancy companies (e.g. the World 100 Reputational Network) to improve their ranking. A plethora of other companies use and re-package the rankings in organising the student trade – running international education fairs and (like i-graduate) matching students with universities, supporting the admission and visa process, and recruiting to language schools and preparatory classes.

Many of these companies recognise that rankings may play only a small part in students’ choices. John O’Leary, editor of the Times Good University Guide and consultant on the QS Global Rankings, is quoted as saying that there is almost no correlation between the universities in the top ten of the league tables and the top ten for applications (Catcheside 2012).
Will Archer, CEO of i-graduate, published a ‘Global International Student Barometer’ based on 209,422 respondents, which shows that by far the most important factor in deciding where to study is ‘teaching quality’ – a factor that most rankings either ignore or have trouble finding a proxy measure for. Next come reputations for careers, of the institution itself, and of the system. Fifth on the list comes ‘research quality’, followed by ‘department reputation’, ‘specific course title’, ‘cost of study’, ‘personal safety’ and, last of all, ‘cost of living’ (Baty 2012). If databases are to be useful for students weighing their choices against such an array of factors, then perhaps what is needed is a website where students can construct their own ranking system from a database by picking their own criteria and assigning their own weightings. Jeffrey Stake (Indiana University Law School) has designed The Ranking Game on these principles, and maybe this is what the European U-Multirank is trying to achieve. It seems that just a relatively small set of international students, intent on creating ‘jet-set CVs’, use world rankings as a proxy for reputation, on the grounds that the brand of the university is more important than the subject of study or personal grades in opening doors in the global labour market (Baty 2012: 15).

In this case, rankings can be seen as just one of a number of measures that universities engage in for ‘reputational management’ (probably less important than monitoring social media). This would explain why the THE puts such emphasis on reputation that it now publishes it as a separate world ranking (whereas the Jiao Tong does not measure reputation at all). Consultancies with experience of reputation management in the private sector are trying to move into the university sector, arguing that if, for top companies, monitoring reputation is as important as assessing financial risk, then it is even more important for universities, where the product is high-cost, time-consuming and cannot be sampled beforehand (Simpson 2012). Simpson explains that reputation management means getting a product’s brand (the distinctive look of a product) and reputation (the public perception and appreciation of that product) to converge, in which case ‘there is money to be made’ (ibid.). Simpson argues that external prestige (rankings, positive media, endorsements) is four times more influential in gaining ‘supportive attitudes’ (recruiting high quality students, donors, partners of choice) than student satisfaction. By the same token, ‘the reputation of the universities on graduates’ CVs will be more beneficial than alumni’s own experiences. This all feels rather Orwellian: what you think of a university is more important than what I know about it’ (Simpson 2012, emphasis added). But if the universities’ focus on managing their standing in rankings is part of a process of reputation management (where the external view is more important than the internal content), then this echoes the way in which citizens are depicted in the global knowledge economy (above).
They too are expected to create themselves in this way: if they can create a product with a distinctive look (a CV, where the ranking of the universities attended is an important part of the brand) and make it converge with what the consumer/employer wants, then ‘money is to be made’.

Conclusion

Hoskin and Macve’s study of the introduction of grading and ranking at West Point Academy suggests that students were not just intended to learn a subject at university, but also, reflexively, systems of organisation. If so, then what are the systems of performance measurement, self-monitoring and governance that students are meant to be learning at universities now and to be taking into the world of work?

The first lesson concerns the benefits to the elite, and the costs to the mass, of focussing attention and resources on rankings and league tables. Ellen Hazelkorn, who conducted a major study of the influence of league tables and ranking systems, points out that although there are over 17,000 universities worldwide, governments, university managements and the popular press have ‘a near gladiatorial obsession’ with their universities being ‘in the top 100’ in international rankings. To have a ‘world-class university’, as was the Danish minister’s ambition, costs $1-1.5 billion a year, which only Copenhagen and Aarhus Universities could come close to in Denmark; for most universities in the world, the kinds of metrics that propel institutions to the top are far beyond achievement. When funding allocations are tied to these metrics, higher education is re-structured for the benefit of the elite and, as Hazelkorn argues, the Matthew effect becomes increasingly obvious, with resources concentrating on the richest universities and a widening wedge between elite and mass education (2008: 212). Summarising the literature and her own research, Hazelkorn expands on her idea that rankings provoke an academic arms race:

Rankings... lock...HEIs [Higher Education Institutions] into a continual ‘quest for ever increasing resources’ (Ehrenberg 2004: 26), reinforcing the ‘effects of market-based and competitive forces’ (Clarke 2007: 36) ‘intensif[ying] competitive pressures’ and creating a ‘global market’ which places a ‘growing emphasis on institutional stratification and research concentration’ and establishes a worldwide norm for a ‘good university’ (Marginson 2007: 132, 136). Universities which do not meet the criteria or do not have ‘brand recognition’ are effectively devalued or ignored (Machung 1998: 13, Lovett 2005). (Hazelkorn 2008: 209-10)

The second lesson for students is that ranking is a key technology in ‘total accountability systems’, that is, forms of governance adopted by international agencies and ‘competition’ states that order whole countries, institutions and individuals through competition to achieve the measures by which they are graded and ranked. As Foucault (1977) explained, such systems of governance are ‘totalising and individualising’; that is, they simultaneously order a sector and every individual institution and individual person within it. But the art of governing through ranking is that although the categories (like ‘top 100’) are fixed, no one has the security of a fixed position. This insecurity intensifies when governments and ranking companies make continual, even slight, adjustments to the measures, and when grading is always relative to others’ unpredictable achievement.
As Foucault also pointed out, such systems of surveillance are diffuse, continuous and banal – there is no let-up from checking and recording the diverse and otherwise often insignificant measures that count in the rankings. A total accountability system also normalises, as Sauder and Espeland explain, ‘by defining a class of subjects as the same and then using normative criteria to establish individual differences’ (2009: 72). When it is the features of the elite that are taken as normal, this condemns the majority to failure or, worse, wastes their resources by ordering the sector so that all universities strive for the unachievable, instead of encouraging and rewarding them for defining their own strengths and diversity.

The third lesson is that what really counts is reputation management. The competition state visualises institutions and individuals in similar ways. Universities are ‘set free’ to use their own agency to maximise their standing in a globalised sector, presenting their credentials in the rankings to network in the highest possible global league. The competition state’s ‘view of human nature’ is, similarly, to envisage individuals as responsible for creating their own CV, gaining the best credentials and outputs (or appearance of outputs) that count – which importantly include the ‘brand’ of their university – marketing these, and networking to gain access to an elite, globalised labour market. Both universities and students are expected to be accountable selves who accept that there is competition between units (nations, institutions, individuals), which mobilise their resources as free agents, and who know what counts and how to succeed. Will students take from universities and into the world a reflexive knowledge about how to run networked organisations through performance indicators, league tables and world rankings, and about the kinds of person such systems of governance imply? In such a system of total accountability, what is not encouraged is taking a critically reflexive stance to the world, yet surely that is a core purpose of universities. How, then, can universities, through both their education and the hidden, implicit curriculum learnt from being in such an institution, encourage the formation of critically reflexive practitioners, who analyse what is ‘positive and perverse’ (in Hazelkorn’s terms) about this way of governing across global, national, institutional and individual scales, and work out how to act on and change such systems?

References

Aarhus University (2010). Ledelsessekretariatet søger 2 specialkonsulenter. Magisterbladet, 12: 47.
Bashir, S. (2007). Trends in International Trade in Higher Education: Implications and Options for Developing Countries. Washington DC: World Bank.
Baty, P. (2012). De rigueur for the jet-set CV. Times Higher Education World Reputation Rankings 2012: 15-17.
Bohm, A., M. Follari, A. Hewett, S. Jones, N. Kemp, D. Meares, D. Pearce and K. Van Cauter (2004). Vision 2020. Forecasting International Student Mobility. A UK Perspective. London: British Council.
Brown, P., H. Lauder and D. Ashton (2007). “Towards a High Skills Economy: Higher Education and the New Realities of Global Capitalism”. In D. Epstein, R. Boden, R. Deem, F. Rizvi and S. Wright (eds), Geographies of Knowledge, Geometries of Power: Framing the Future of Higher Education. World Yearbook of Education. London: Routledge.
Cameron, A. and R. Palan (2004). The Imagined Economies of Globalization. London: Sage.
Carr, K. (2011). Improvements to Excellence in Research for Australia. Australian Government media release, 30 May.
Catcheside, K. (2012). What do universities actually gain by improving league table performance? The Guardian, 16 March.
Cheng, L., Y. Watanabe and A. Curtis (2004). Washback in Language Testing. Mahwah NJ: Lawrence Erlbaum Associates.
Cussó, R. (2006). Restructuring UNESCO’s statistical services – The ‘sad story’ of UNESCO’s education statistics: 4 years later. International Journal of Educational Development, 26: 532-544.
Danish Government (2006). Progress, Innovation and Cohesion. Strategy for Denmark in the Global Economy – Summary. Copenhagen: Regeringen.
Danish Government (2010). Danmark 2020. Viden > vækst > velstand > velfærd. Copenhagen: Regeringen, February.
Dreyfus, H. and P. Rabinow (1982). Michel Foucault: Beyond Structuralism and Hermeneutics. Brighton: Harvester Press.
Foley, B. and H. Goldstein (2012). Measuring Success. League tables and the public sector. London: British Academy Policy Centre.
Foucault, M. (1977). Discipline and Punish. London: Allen Lane.
Frølich, N. (2008). The Politics of Steering by Numbers. Debating Performance-Based Funding in Europe. Oslo: NIFU-STEP.
Gladwell, M. (2011). The order of things: What college rankings really tell us. The New Yorker, 14 February.
Hazelkorn, E. (2008). Learning to live with league tables and ranking: the experience of institutional leaders. Higher Education Policy, 21: 193-215.
Holm-Nielsen, L. and R. Hemmingsen (2011). Ranglister for universiteter er alvor. Politiken, 18 December.
Hoskin, K. W. and R. H. Macve (1988). The genesis of accountability: the West Point connection. Accounting, Organizations and Society, 13 (1): 37-73.
House of Commons, Science and Technology Committee (2004). Research Assessment Exercise: a re-assessment. Eleventh Report of Session 2003-04. London: House of Commons.
Information (2011). Danske universiteter lander udenfor top 100. Information, 6 October.
Jessop, B. (2002). The Future of the Capitalist State. Cambridge: Polity Press.
Macdonald, S. and J. Kam (2011). The skewed few: people and papers of quality in management studies. Organization, 18 (4): 467-474.
Martin, K. (2010). Robert McNamara and the limits of ‘bean counting’. Anthropology Today, 26 (3): 16-19.
Mathisen, G. (2007). “Shaping the global market of higher education through quality promotion”. In D. Epstein, R. Boden, R. Deem, F. Rizvi and S. Wright (eds), Geographies of Knowledge, Geometries of Power: Framing the Future of Higher Education. London: Routledge: 266-279.
OECD (2011). Education at a Glance 2011: OECD Indicators. Paris: Organisation for Economic Co-operation and Development.
Pedersen, O. K. (2011). Konkurrencestaten. Copenhagen: Hans Reitzels Forlag.
Peters, T. (2001). Tom Peters’ true confessions. Fast Company, 53: 80-92.
Power, M. (1997). The Audit Society. Rituals of Verification. Oxford: Oxford University Press.
Reich, R. (1991). The Work of Nations. New York: Vintage Books.
RKA, Inc. (2009). Economic Impact of International Education in Canada. Final Report Presented to Foreign Affairs and International Trade Canada. Vancouver: Roslyn Kunin & Associates, Inc.
Robertson, S. L., R. Dale, S. Moutsios, G. B. Nielsen, C. Shore and S. Wright (2012). Globalisation and Regionalisation in Higher Education: Toward a New Conceptual Framework. Working Papers on University Reform no. 20 (URGE Project). Copenhagen: DPU, University of Aarhus.
Sauder, M. and W. N. Espeland (2009). The discipline of rankings: tight coupling and organisational change. American Sociological Review, 74 (1): 63-82.
Shore, C. and S. Wright (1999). Audit culture and anthropology: neo-liberalism in British higher education. Journal of the Royal Anthropological Institute (formerly Man), 5 (4): 557-575.
Shore, C. and S. Wright (2000). “Coercive accountability: the rise of audit culture in higher education”. In M. Strathern (ed.), Audit Cultures. Anthropological Studies in Accountability, Ethics and the Academy (EASA Series). London: Routledge: 57-89.
Simpson, L. (2012). Reputation to consider? Check the league tables. Times Higher Education World Reputation Rankings 2012: 22-3.
Strathern, M. (1997). ‘Improving ratings’: audit in the British university system. European Review: Interdisciplinary Journal of the Academia Europaea, 5 (3): 305-21.
Usher, A. and M. Savino (2006). A World of Difference. A Global Survey of University League Tables. Canadian Education Report Series. Toronto: Education Policy Institute.
Willmott, H. (2011). Journal list fetishism and the perversion of scholarship: reactivity and the ABS list. Organization, 18 (4): 429-442.
Wright, S. (2009). What counts? The skewing effects of research assessment systems. Nordisk Pedagogik/Journal of Nordic Educational Research, 29: 18-33.
Wright, S. (2011). “Universitetets performancekrav. Viden der tæller”. In Kirsten Marie Bovbjerg (ed.), Motivation og mismod. Aarhus: Aarhus Universitetsforlag.
Wright, S. (2012). “Danske universiteter – virksomheder i statens koncern?”. In J. Faye and D. Budtz Pedersen (eds), Hvad er videnspolitik? Copenhagen: Samfundslitteratur (in press).
Wright, S. and J. Williams Ørberg (2008). Autonomy and control: Danish university reform in the context of modern governance. Learning and Teaching: International Journal of Higher Education in the Social Sciences (LATISS), 1 (1): 27-57.
Wright, S. and J. Williams Ørberg (2009). “Prometheus (on the) Rebound? Freedom and the Danish Steering System”. In Jeroen Huisman (ed.), International Perspectives on the Governance of Higher Education. London: Routledge: 69-87.
Wright, S. and J. Williams Ørberg (2012). “The double shuffle of university reform – the OECD/Denmark policy interface”. In A. Nyhagen and T. Halvorsen (eds), Academic identities – academic challenges? American and European experience of the transformation of higher education and research. Newcastle upon Tyne: Cambridge Scholars Press.