Rankings and Policy Assemblages

Miguel Antonio Lim and Jakob Williams Ørberg

In University Futures, edited by Rebecca Lund and Susan Wright. Berghahn Books.

This chapter suggests a dynamic understanding of the role global rankings and ranking agencies play in international and national policy coordination. While the literature suggests that rankings are instrumental for ‘neoliberal’ reform of universities and the ‘marketization’ of higher education, we suggest that a closer look at the business of ranking itself makes possible a more differentiated account of the complexities of contemporary international policy coordination. We argue that rankings do not mesh and intertwine easily with national policies or with wider processes of ‘neoliberal’ coordination. Instead, they are mobile and ambiguous references to global standards that attain meaning in national policy development in particular ways. What we suggest is to see rankings as active elements in spaces of policy making (Lim and Williams Ørberg, 2017). Rankings are active as vehicles for making the visions or imaginaries of the global knowledge economy more tangible, but they are also active in the sense that their production and results are contingent on interests and factors that are often unrelated to the policy visions they are meant to underpin and the reform projects within which they are employed.

Rankings have been criticized from various angles. Authors have pointed out faults in the methodologies they are based upon (e.g. Aguillo et al. (2010); Amsler and Bolsmann (2012); Billaut et al. (2010); Bowman and Bastedo (2011); Coates (2007); Dill and Soo (2005); Docampo (2008); Halffman and Leydesdorff (2010); Li et al. (2011); Locke (2011); Marginson and van der Wende (2007); Meredith (2004); Moloney (2011); Monks and Ehrenberg (1999); Morgan (2010); Nolle (2010); O’Connell (2013); Pascarella (2001); Pike (2004); Taylor and Braddock (2007); Saisana et al. (2011); Usher and Savino (2006); and Yonezawa (2007)), debated their wider political implications (e.g. Pusser and Marginson, 2013), and examined the detrimental ‘effects’ of rankings within universities (Kehm, 2014; Marginson and van der Wende, 2007; Naidoo, 2010; Ball, 2012; Olssen and Peters, 2005). However, the actual making of rankings and the particular interactions between ranking lists, ranking agencies and other policy actors have received less attention. What we suggest here is a closer look at the particular ways in which rankings gain influence (Lim, 2017) in higher education policy development. We suggest looking beyond rankings as seemingly passive instruments and assessing the complexities of their production and performance in relation to the policy situations they enter into. What we argue is that rankings, rather than being instruments for or drivers of global uniformity in international policy coordination, may be windows through which the particularities and complexities of such coordination can be studied.

The attractiveness and seductiveness of rankings lie in their ability to relate national-level institutional performance to an idea of global standards. In developing rankings, rankers must continually balance the creation of assessments that relate to, and are seen as relevant for, national contexts against the claim to a disembedded global standard by which the national can be measured.
Rankings in this sense are a tool that connects different scales of policy making – departmental, institutional, national and international – and we suggest that by following rankings and rankers as they relate to actors and processes at these ‘policy levels’, we can point towards a flexible and less hierarchical understanding of policy coordination that expands existing notions of policy travel or ‘translation’.

If policy, as Wright and Shore have argued, is to be seen as a process of contestation, then ranking, because of the ambiguity of its role and organization, is an exemplary object for the study of this contestation, and a site for questioning the multi-contextual dynamics underlying it. What we ask in this chapter is what we study when we study rankings, if we assume that rankings are not by design and nature subject to policy ambitions or wider reform trends. We suggest they are windows into the complex and multidirectional processes often summed up as coordination, or captured under critical descriptors for coordination such as ‘neoliberalism’, commercialization or ‘reform’. In essence, we are not trying to acquit rankings of neoliberalism or of riding on a wave of audit culture practices. What we argue is that studies relegating rankings to either a supporting or a driving role in these modes of governance miss opportunities to open up and understand the complex process of international coordination underlying their development. Sometimes the use of rankings in policy making can result in further alignment of national policy actors, agendas and instruments. On other occasions, rankings can open up new fault lines. We employ the term policy assemblage to underline the multiple subjectivities, policy instruments, and imaginaries that produce the articulation of rankings in national policies. The university ranking itself, we argue, is an active and undetermined element in such assemblages. This means in particular that it changes in relation to the decisions taken by ranking agencies and in response to its audiences’ demands.

Rankings and Policy Coordination

Studies of international higher education coordination have increasingly dwelt on the creation of aligned reforms aimed at increasing the role of market coordination (Marginson, 2007), furthering privatization, marketization, and so-called ‘new public management’ forms of governance (Bleiklie, 1998). Some studies have focused on the development of international benchmarking and other forms of Open Method of Coordination processes (Alexiadou et al., 2010; Ozga, 2008), while the OECD’s development of reports, recommendations and country reviews has been another set of instruments scrutinized in studies of international policy alignment (Wright and Ørberg, 2011). On a global scale, the World Bank’s loan conditions in the area of higher education are well known, while in more recent years policy analysis as a coordinating instrument has attained a forceful role, especially, as is also exemplified here, through the development of the concept of World Class Universities, described as a basis for reform by Salmi (2009) and critically assessed by Deem et al. (2009).

University rankings are most often studied under the assumption that they, as instruments, play a coordinating role for policy initiatives supporting an increased marketization of universities and what has been termed the rise of ‘audit culture’ (Power, 1996) at universities.
We suggest here that there is further potential in studying the actual connection between ranking lists, government policies and higher education policy debates at both national and international scales of coordination. We suggest that the relationship between policy making and rankings is not an easy or necessarily supportive one: the internal developments in the business of ranking, the personal dynamics within ranking agencies and among rankers and their stakeholders, as well as the biases and instabilities in the data grounding the creation of rankings, make rankings active instruments that are not easily employable for set agendas in policy development (Lim and Williams Ørberg, 2017; Lim, 2017). Rather than a mere coordinating support for policy, ranking articulates with various elements to produce unstable higher education futures. The study of ranking as a heterogeneously organized agency can open up the box of policy making and better explain both what goes into higher education policy making and the multi-actor, multi-directional processes involved in its creation.

India: Rankings as Problem and Solution

Rankings play a role in the success and failure of policies and in the formulation of the problems tackled in policies, but they may also create uncertainty about government policies and become ambiguous allies for policy actors. This ambiguity came to the fore in the Indian context when, in 2013, the QS global rankings placed the established, half-century-old, so-called ‘old’ Indian Institutes of Technology (IITs) well outside the global top 100 and produced an upheaval in the English-language Indian media. Rankings were debated on the main Indian news channels in heated exchanges between politicians, ranking company officials, and IIT leadership. In one NDTV show, IIT students were also invited as audience members, and they raised questions and comments suggesting that the rankings did not reflect the actual status of the IITs in the world, casting doubt on both the quality and the honesty of ranking methods. Central government representatives at the time stayed largely out of the debate.

In the Indian case, as in many other higher education systems, coordination of policy across the sector has been a central goal. The focus has been on the ability of institutions and government policy to coordinate the performance of higher education sectors to achieve desired outcomes and, only more recently, on the coordination and alignment of diverse national systems across world regions and globally. The main agents in these later efforts of coordination have been international organizations such as UNESCO, the World Bank and the OECD, as well as regional actors such as the European Commission and ASEAN (citations needed). The Indian interaction with rankers can be seen as a complicated attempt at achieving both these aims of coordination through engagements with rankings. However, the low ranking of Indian universities called into question rankings’ role in this process, as well as the hierarchical understanding of the Indian higher education sector (with the IITs at the top bringing the global level into an Indian context) which ranking was assumed to solidify.

We followed the 2013 ranking debate and its aftermath through fieldwork in and around IIT Delhi and central government offices, as well as through interviews with key persons at the global ranking agencies and in the wider community of ranking practitioners and managers.
This particular episode of tension brought together in the same assemblage the Indian higher education sector, the national government, and the ranking agencies. The cause of the scandal was the non-inclusion of IIT Delhi within the top 200 of the QS World University Rankings. The IITs then found themselves in a media storm over their internationally benchmarked performance. The debates tied into worries over India’s place in the global knowledge economy and attracted intense media attention. As a result of this media interest, the methods of the global ranking agencies were challenged and at times accused of allowing top rankings to be contingent on fees paid to the ranking agencies. In a heated debate on NDTV, IIT Delhi students questioned the value of the global rankings by pointing to the attention their institute was given by global employers (CITE). Likewise, an IIT director mentioned that his institute had not been paying attention to rankings, since they had not seemed important to the purpose of his institute.

Notwithstanding these counterarguments, the faltering ranking of India’s top institutes became a policy problem, emphasized repeatedly by the President’s office and further pushed onto the IIT agenda by the Ministry of Human Resource Development in Delhi, which viewed the IITs as central institutions for the development of World Class Universities, a policy now questioned by their absence from the world’s top ranks. The IITs began assembling and improving the kind of datasets that could enable a better response to ranking parameters. An institutional peer review exercise that had already been underway was now used to develop a better response to rankings, and in the case of IIT Delhi, concrete suggestions for the institute’s interaction with rankings such as QS became part of the recommendations arising from the process (IIT Delhi, 2014: 7; Lim and Williams Ørberg, 2017). Existing ambitions to strengthen the IITs’ role in society were now matched by an aim to lodge them more securely within the imaginary of the global knowledge economy.

The IITs were tasked with giving certainty to the government’s claim that, as a 2011 report (MHRD, 2011) states, they had ‘all the ingredients’ for leading the Indian higher education system towards world-class status. The IITs were already supposed to have achieved the ‘World Class’ standard that the system needed to develop, and which the IITs, through their global standing, were to transmit to the other parts of the higher education sector. During fieldwork at IIT Delhi, Williams Ørberg met both students and faculty from other institutions taking courses or upgrading skills at the IIT. IITs were mentoring other institutions and IITian faculty acted as mentors for other institutions’ faculty, just as their students were leading students of other institutions in NGO activities or even in political activism.

While keeping out of the QS controversy itself, the Indian government actively engaged with ranking agencies. The President’s office, with the President as Visitor and formal head of the collective IIT system, hosted a symposium on ranking, and through the committee overseeing the IIT system, the Ministry of Human Resource Development began nudging the widely autonomous and powerful IITs to devise a consistent and sector-coordinated response to their poor rankings, which, it was argued, were to a large extent an effect of a lack of data about the actual performance of the sector.
While the Indian higher education sector was transforming its policy agendas around the awkward presence of the rankings, the Indian government also sought to engage the ranking agencies. National policy makers and Indian higher education leaders argued for changes in ranking methodologies that would allow Indian institutions to score more highly. The Indian policy makers wanted the ranking agencies to take into account the particular constraints under which their educational institutions were operating, such as the considerable difficulty of hiring foreign academic staff, which is one of the factors contributing to the THE rankings’ internationalization indicator. The THE was willing to listen to these suggestions, even giving the impression that actual changes were forthcoming (Lim and Williams Ørberg, 2017); in the event, however, it denied that any changes would be made on account of Indian requests (Baty, 2013). Instead, both QS and the THE continued to develop ranking tools better matched to Indian and other Asian contexts, creating BRICS rankings and small-university or young-university rankings, in which Indian universities, albeit still well behind their East Asian counterparts, showed up closer to the top. These sub-rankings now make it possible to relate internal Indian rankings, or the new THE India-specific ranking, to some measure of global standing.

Eventually, the central Indian government launched an India-specific ranking of its own, which took into account both aspects missed in the global rankings and social issues seen as inseparable from Indian higher education policy.

The engagement with the THE by the Indian government and higher education institutions, the refusal of the THE to change its methods, and the parallel development of India-relevant rankings show that rankings play important roles in national policy making processes, but in ways that can be unpredictable and are as contingent on ranking business models and the views of rankers as on trends and interests in national policy circuits. Rankings have played a part in reforms of the Indian higher education sector in the sense that Indian government officials allowed public concern over the rankings to grow, giving the government the space to attempt some changes to steer its higher education institutions, notably the IITs.

The government’s concern that its sector was not performing well was matched by performance in the QS, as well as other, rankings. The desire for World Class University institutions was a particularly strong motivator in the Indian effort both to enforce discipline among its top-tier institutions and to involve ranking agencies as external consultants or advisors. The government pushed for the collection of more data regarding the operations of the IITs, whereas the IITs themselves utilized their engagement with rankings to secure their continued status as the certified global element in the Indian higher education environment, a status since confirmed by their place (by definition and design) in domestic ranking and evaluation systems that balance Indian national demands on the higher education system with efforts to place Indian institutions in ‘global’ contexts. What the case pushes us to acknowledge is the continued complication of the policy process spun around the incorporation of rankings and the idea of the global yardstick into Indian policy making.
It further alerts us to the particularity of the interaction between ranking agencies and the national policy situation, which involves both the personal relations and actions of policymakers and ranking officials, and institutional, national and private economic and political interests.

Conclusion: Rankings in the Policy Assemblage

Rajani Naidoo et al. (2001) have drawn attention to what they call ‘pathologies’ developing in universities as a result of engagements with global ranking instruments. Rankings, they argue, skew institutional priorities towards focuses detrimental to wider policy aims. We have aimed to illustrate that rankings, while implicated in processes of policy making in higher education and especially in attempts at global coordination, form part of shifting and complicated policy assemblages of actors and interests, with contingent results, when studied in their actual engagement and presence in policy debates and negotiations.

University rankers engage with different national and even international policy makers. The THE, to give one example, addresses higher education ‘thought leaders’ as its target audience. It organizes several events to bring these leaders together and curates discussions about reform in higher education. In many ways rankers are indirectly involved in the efforts of countries to build up World Class Universities. In some cases rankers can be more directly implicated in reform processes, through which this concept may take on a form quite particular to the national setting, as we have seen in the case of the THE in India. We propose that rankings are important examples of the ‘interface’ (Wright and Ørberg, 2011) between policy actors, instruments, processes and different national policy contexts. However, the ways that rankings coordinate policies and interests, even across national policy borders, cannot be assumed from a seeming trend towards alignment around, for example, the World Class Universities concept promoted by international actors such as the World Bank. While the apparent goal of rankings to provide an environment for competition to produce excellent universities seems consistent with such international reform agendas, rankings may complicate this as they balance global translatability with national relevance and relate to moving national policy environments and specific actors – in the case discussed here, elite Indian universities, Indian policy makers, and even students. The supposed ‘alignment’ work assumed to be done by rankings cannot be taken for granted.

Focusing on the assemblage of policy instruments, agencies, institutions, business models, personalities and social imaginaries that rankings partake in when entering national contexts complicates notions of international coordination as, for example, neoliberal reform or marketization, and points towards a deeper understanding of particular national policy developments and their interfaces with private or international agencies as they relate to global scales. Ranking agencies themselves have their own institutional histories and goals. Their various individual strategies have led to their increasing prominence in the field of higher education policy, helped by favourable trends such as the growth in international student numbers, as well as an increased concern with internationalization (Altbach and Balán, 2007) and competitiveness (Wright and Ørberg, 2011).
Increasingly, some influential rankers may even gain the autonomy to define policy directions themselves, taking on influential expert roles in international and national policy debates (Lim, 2017).

Seeing through the rankings window onto the policy process, we argue, highlights the “tension between diversity and similarity in international coordination” (Lim and Williams Ørberg, 2017). What we have argued in this chapter is that ranking can act as an access point for studying the multi-directional and multi-positional process of higher education reform, bringing to life governments, national institutions, global policy discourses, and private international agencies as elements of processes of global coordination not easily aligned through government visions or the usual narratives of higher education studies.

References

Aguillo, I. F., Bar-Ilan, J., Levene, M., & Ortega, J. L. (2010). Comparing university rankings. Scientometrics, 85(1), 243-256.

Alexiadou, N., Fink-Hafner, D., & Lange, B. (2010). Education policy convergence through the Open Method of Coordination: Theoretical reflections and implementation in ‘old’ and ‘new’ national contexts. European Educational Research Journal, 9(3), 345-358.

Amsler, S. S., & Bolsmann, C. (2012). University ranking as social exclusion. British Journal of Sociology of Education, 33(2), 283-301.

Ball, S. J. (2012). Performativity, commodification and commitment: An I-spy guide to the neoliberal university. British Journal of Educational Studies, 60(1), 17-28.

Baty, P. (2013). No plans to alter world rankings: Times Higher Education. Interview with Smita Polite (12 December 2013). Edu-Leaders. Retrieved from: . Accessed 20 April 2017.

Billaut, J. C., Bouyssou, D., & Vincke, P. (2010). Should you believe in the Shanghai ranking? Scientometrics, 84(1), 237-263.

Bleiklie, I. (1998). Justifying the evaluative state: New public management ideals in higher education. Journal of Public Affairs Education, 87-100.

Bowman, N. A., & Bastedo, M. N. (2011). Anchoring effects in world university rankings: Exploring biases in reputation scores. Higher Education, 61(4), 431-444.

Coates, H. (2007). Excellent measures precede measures of excellence. Journal of Higher Education Policy and Management, 29(1), 87-94.

Dill, D. D., & Soo, M. (2005). Academic quality, league tables, and public policy: A cross-national analysis of university ranking systems. Higher Education, 49(4), 495-533.

Docampo, D. (2008). International rankings and quality of the university systems. Revista de Educación, 149-176.

Halffman, W., & Leydesdorff, L. (2010). Is inequality among universities increasing? Gini coefficients and the elusive rise of elite universities. Minerva, 48(1), 55-72.

Li, M., Shankar, S., & Tang, K. K. (2011). Why does the USA dominate university league tables? Studies in Higher Education, 36(8), 923-937.

Lim, M. A. (2017). The building of weak expertise: The work of global university rankers. Higher Education, 1-16.

Lim, M. A., & Williams Ørberg, J. (2017). Active instruments: On the use of university rankings in developing national systems of higher education. Policy Reviews in Higher Education, 1(1), 91-108.

Locke, W. (2011). The institutionalisation of rankings: Managing status anxiety in an increasingly marketised environment. In J. Shin, R. Toutkoushian, & U. Teichler (Eds.), University Rankings: Theoretical Basis, Methodology and Impacts in Higher Education.

Marginson, S., & Van der Wende, M. (2007). To rank or to be ranked: The impact of global rankings in higher education. Journal of Studies in International Education, 11(3-4), 306-329.

Meredith, M. (2004). Why do universities compete in the ratings game? An empirical analysis of the effects of the US News and World Report college rankings. Research in Higher Education, 45(5), 443-461.

Moloney, J. (2011). International world rankings: Where do you stand? British Council, Going Global 5 conference proceedings. Available at: . Retrieved 13 March 2015.

Monks, J., & Ehrenberg, R. G. (1999). US News & World Report’s college rankings: Why they do matter. Change: The Magazine of Higher Learning, 31(6), 42-51.

Morgan, J. (2010). Higher education becomes a globally traded commodity as demand soars. Times Higher Education, July 22: 6-7.

Naidoo, R. (2010). Global learning in a neoliberal age: Implications for development. In Global Inequalities and Higher Education: Whose Interests Are We Serving?, 66-90.

Nolle, L. (2010). Cluster-based benchmarking of universities as an alternative to league tables. In Research and Development in Intelligent Systems XXVI (pp. 499-504). London: Springer.

O’Connell, C. (2013). Research discourses surrounding global university rankings: Exploring the relationship with policy and practice recommendations. Higher Education, 65(6), 709-723.

Olssen, M., & Peters, M. A. (2005). Neoliberalism, higher education and the knowledge economy: From the free market to knowledge capitalism. Journal of Education Policy, 20(3), 313-345.

Ozga, J. (2008). Governing knowledge: Research steering and research quality. European Educational Research Journal, 7(3), 261-272.

Pascarella, E. T. (2001). Identifying excellence in undergraduate education: Are we even close? Change: The Magazine of Higher Learning, 33(3), 18-23.

Pike, G. R. (2004). Measuring quality: A comparison of US News rankings and NSSE benchmarks. Research in Higher Education, 45(2), 193-208.

Pusser, B., & Marginson, S. (2013). University rankings in critical perspective. The Journal of Higher Education, 84(4), 544-568.

Saisana, M., d’Hombres, B., & Saltelli, A. (2011). Rickety numbers: Volatility of university rankings and policy implications. Research Policy, 40(1), 165-177.

Salmi, J. (2009). The Challenge of Establishing World-Class Universities. World Bank Publications.

Stensaker, B. (2007). Quality as fashion: Exploring the translation of a management idea into higher education. In Quality Assurance in Higher Education (pp. 99-118). Springer Netherlands.

Taylor, P., & Braddock, R. (2007). International university ranking systems and the idea of university excellence. Journal of Higher Education Policy and Management, 29(3), 245-260.

THE (2016). World Reputation Rankings. Available at: . Accessed 20 April 2017.

Wright, S., & Ørberg, J. W. (2011). The double shuffle of university reform: The OECD/Denmark policy interface. In Academic Identities – Academic Challenges? American and European Experience of the Transformation of Higher Education and Research, 269-93.
