Governing Higher Education: the PURE Data System and the Management of the Bibliometric Self

Abstract

This article reflects on the 'quantified self at work' (Moore and Robinson, 2016), neoliberal government (Miller and Rose, 1990; Rose and Miller, 1992; Ball, 2003), and the use of bibliometric technologies that record research output. It charts and reflects upon the development of the 'bibliometric self' by presenting an analysis of the case of PURE – a data management system increasingly used in higher education. PURE is an important case to study because it (1) requires academics to engage with the software and actively update their own profiles and (2) aims to capture all academic activities and not only publication records. Its design – both category-bound but also open to other inputs – allows it to become a 'total' management system. It is becoming central to the work of research managers and heads of departments, who rely on PURE to provide data for internal and external assessments (such as the UK's Research Excellence Framework). The article shows how users engage with the software as well as the context in which PURE was designed and continues to develop. It concludes by reiterating the need for a critical but hands-on engagement with the everyday technologies in use in higher education policy.

Acknowledgements: This work was supported by the European Commission FP7 People programme: Marie Curie Initial Training Network UNIKE (Universities in the Knowledge Economy) under Grant Agreement number 317452.

Keywords: higher education, PURE, governance, policy instruments, performativity, bibliometrics

Author

Dr. Miguel Antonio Lim
Lecturer in Education and International Development
Co-Research Coordinator of the Manchester Institute of Education
The University of Manchester
Oxford Road, Manchester
Email: miguelantonio.lim@manchester.ac.uk

Short bio

Dr. Miguel Antonio Lim is Lecturer in Education and International Development and Research Coordinator at the Manchester Institute of Education at the University of Manchester. His research interests include performance metrics, internationalisation, and reputation management in higher education. He is task force leader on migration and higher education at the EU-Marie Curie Alumni Association. Previously, he was EU-Marie Curie Fellow at Aarhus University, Denmark. He has worked on international partnerships for Sciences Po-Paris and taught at the London School of Economics (LSE). From 2010 to 2012, he was the Executive Director of the Global Public Policy Network Secretariat.
Introduction: the Quantified Academic at Work

There is an ongoing debate about whether 'too much academic research is being published' (Altbach and de Wit, 2018). Hall (2013) observes that writing books has become a 'herd-like' act for academics. The imperative to 'publish or perish' has been present in academia for some time (McGrail et al., 2006), but recent developments in information technology have only increased the anxiety of authors about how much and how well they write. Many universities are pushing their academics to engage with online publication databases and technologies. This has given rise to a growing sense among research academics of their bibliometric profile and, consequently, their bibliometric self.

The role of management, audit technologies, and rankings (Nedeva et al., 2012; Welch, 2016) in the development of performance culture is widely recognised. However, such studies usually attend to wider discourses and policy processes (Magalhães et al., 2013), critiquing the logic (Olssen and Peters, 2005; Davies and Bansel, 2007; Baltodano, 2012; Shore and Wright, 2017, among others) of 'neoliberalism' and other perverse forms of 'government at a distance' (Peters, 1996; Mitchell, 2006). In addition to this continuing work, there needs to be sustained attention to the study of the growing range of technologies – particularly those related to accounting and data management – that are in use in higher education governance today.

Burrows (2012) claims that there are easily over a hundred different indicators that assist in 'managing' the performance of UK academics. Of these, he uses the h-index as an example of the range of bibliometric measurements of research productivity. While there are various kinds of performance numbers, the h-index is among the most prominent. The instruments that collect information and produce these numbers should be understood as mechanisms in which various political and economic interests are 'coordinated' (Miller and O'Leary, 2007). This coordination brings together a wide range of corporate, government, institutional, and even personal interests.

There is increasing work on the use and effects of data- and number-based technologies linked to digital governance in education (Williamson, 2015; 2016), new forms of educational assessment (O'Keeffe, 2016), and computer 'screens' (Decuypere and Simons, 2016). Sellar and Lingard (2013) illustrate how coordination of interests takes place in the case of the OECD's PISA instrument. Grek (2009) has shown how education policy 'tools' have shifted 'from the margins… to the very center of (European) policy making'. In all this, there is a critique of how numbers are insufficient 'thin descriptions' (Ozga et al., 2011) which nevertheless become 'key governing device(s)' (Ozga, 2016).
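Since the h-index recurs throughout this discussion, a minimal sketch of its standard definition may be useful: the index is the largest number h such that h of an author's papers have at least h citations each. The code below is illustrative (the function name and example data are mine); it also shows how 'thin' such a description is, since quite different publication records can collapse to the same number.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with >= h citations each.

    A minimal sketch of the standard definition; the input is simply a
    list of per-paper citation counts.
    """
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # this paper still 'supports' an index of `rank`
        else:
            break
    return h

# Two very different careers collapse to the same number:
print(h_index([100, 90, 3, 2, 1]))   # -> 3
print(h_index([4, 4, 3, 3, 2, 1]))   # -> 3
```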
To add to the understanding of how data becomes ever more important in policy making, this article responds to the need for more granular consideration of how data is collected and under what conditions this collection takes place.

The rise of numbers and the measurement of performance through new digital technologies has led to more thinking about the individualised 'quantified self'. This concept has notably been built up in the critical clinical sciences (Lupton, 2013). Lupton (2013) describes the emergence of a new quantified health subject that arises from the spread and use of technologies that track biometric data. These 'mHealth' (ibid) devices include smart watches, heart-rate monitors, and step counters. This understanding of the quantified self has been further adapted to the context of the workplace and linked to the condition of precarity, and this article adapts it to the academic workplace. Moore and Robinson (2016) and Moore (2017) denote this subject as the 'quantified self at work'. Moore and Robinson (2016) argue that the quantified self at work functions within a capitalist dynamic that leads to a 'line of subordination (that) goes from economic system to the ego-ideal'. This means that workers are pushed to be a more productive version of themselves. Reflecting on earlier sociological work by Rose (1998), Moore and Robinson echo that the 'self-controlling self calculates about itself, and… works upon itself in order to better itself' (Moore and Robinson, 2016: 2775).

The quantified self at work links with wider thinking on how society is controlled and how capital itself is an 'academic subjectivation machine' (Hall, 2013). While this is also relevant in other countries, Hall (2013) shows how the UK higher education system, in response to 'market dynamics', leads academics to manage themselves as a 'labour force'. The accumulative logic of academic 'capitalism' and the transformation of esteem and prestige into forms of capital fuel the writing of papers and books for the academic publishing industry. It is the accompanying measurement of this production that leads to the growing distinction of the bibliometric self.

The increasingly apt description of research academics as 'quantified selves at work' is a strong incentive to reflect on the growing and, in some cases, pervasive use of digital technology to record research outputs. In this article, I ask four questions; the first two are roughly descriptive and the second two more reflective: (1) for the research-writing academic, what technologies are involved in capturing and recording research output, and in what context were they developed? (2) How do these technologies 'work' and how do academics engage with them? (3) What role do these technologies have in constructing the academic? (4) How does the bibliometric self come about? The article then offers some reflections that bind the answers together.

The proposed analysis engages the broader critique of university management practices but extends the literature by illustrating the concrete and material interaction of users with these new tools. It begins by describing the context of the data used in this study, then goes on to outline the concrete policies in which PURE was used in Denmark to calculate and reward bibliometric outputs. It then shows how users interact with PURE and discusses what this interaction potentially brings about in its users' behaviours and perceptions regarding what is expected of them.
Finally, it concludes by reiterating the need for a critical but hands-on engagement with the everyday technologies in use at schools, universities, and other research and teaching spaces.

A Personal Engagement with Bibliometric Technologies

I draw upon personal experience both as an end-user (i.e. someone who submits his data) and as a 'reviewer' (i.e. someone who uses reports generated by PURE) of the data management system. There is an increasing amount of research on metrics and higher education governance (Wilsdon, 2016). An inductive thematic analysis of conversations with experts and observations at the European Scientometrics Summer School (ESS) forms the basis of some of this article's analysis. The ESS is a week-long activity taught by specialists in bibliometric analysis and is meant for university library staff members, university research managers, and other interested academics. The summer school included a 'crash course' on bibliometric analysis tools, covering basic concepts but progressing quickly to hands-on application. Observations are also drawn from the presentations of Times Higher Education data managers and their partners at Elsevier during the 'understanding the data' session at the presentation of the World University Rankings results at the 2015 World Academic Summit at the University of Melbourne. Finally, personal notes and activities carried out in relation to PURE as Research Coordinator of my own university department, where PURE is part of an active preparation for the 2021 REF, were analysed.

At the ESS, bibliometric experts advised a striking degree of caution when using this form of analysis in setting out evaluative strategies at universities – particularly any analysis done at the level of the individual academic. The difficulties and challenges of bibliometric analysis for the individual evaluation of research impact and productivity were, in fact, highlighted as the special theme of the 2015 school (ESS, undated). While university academics are understandably anxious to produce impactful research work, very few seem to understand the details of data collection, the associated calculations, and the shortcomings of the bibliometric systems by which this impact is commonly measured. At my current institution, workshops to orient users on PURE are regularly held in response to confusion and anxiety about how the system works and what should be recorded.

The Development of PURE: the Danish Context

The enrolment of technical instruments into wider public policy processes is a point of departure for the study of 'instruments' in other policy fields (see, for instance, Scott, 1998). In education, there are many instruments that could be placed under the same scrutiny. These include those produced by private actors such as ranking agencies, consultants, auditors, and many others, but which also come to have their own life in the realm of public governance (Lim and Oerberg, 2017). PURE was developed by Danish IT experts together with staff of the Danish public library system. It came to greater importance in Denmark as bibliometric analysis became part of the funding equation by which the Danish government allocated resources to its universities. The Anglo-Saxon tendency towards 'publish or perish' in higher education systems (McGrail et al., 2006) was institutionalised relatively recently in settings such as Denmark.
The bibliometric way of measuring research performance, however, appears to make otherwise incomparable countries, universities, disciplines, research, and people comparable with respect to a designated, objective international standard. The well-known trope is that with performance measures in place, universities can be transformed into 'world-class' institutions, attract the best scholars and students from around the world, and secure national economic competitiveness and growth (Salmi, 2009). High rates of citations to publications are frequently cited as an indicator of academic excellence.

In 2006, the Danish Parliament introduced the Bibliometric Research Indicator (BRI). Bibliometric performance was then explicitly used in the equation for distributing funding among Danish universities (Regeringen, 2006; Regeringen, 2009). Although there are nuances, since 2010 the new system has come to determine a quarter of the funding that a Danish university receives. The difference between the old and the new system is presented in Table 1 below.

Table 1. Funding formula for allocating new basic funds to Danish universities (in percentages)

Indicator                                    Old system   New system
Earnings from educational programmes (STÅ)       50           45
External funding                                 40           20
BRI (as of 2015)                                  –           25
Number of PhD degrees                            10           10

Source: Regeringen (Denmark) 2009

The BRI assigns a certain number of points to different kinds of research outputs. There are two factors – the type of output and its quality. An output's quality is determined by whether it is published in a 'level 1' or 'level 2' outlet, as in Table 2. Journals are rated by a group of university academics to determine which category they fall into. This system has been criticised for being simplistic and inappropriate (Wright, 2009).

Table 2. BRI point system

Type                      Level 1   Level 2
Monographs                   5         8
Chapters in anthologies      0.5       2
Journal articles             1         3
Doctoral theses              5         –
Patents                      1         –

Source: Sivertsen & Schneider 2012

As a result of these changes, some researchers are 'slicing' their findings into the smallest publishable units in order to maximise the number of publications from them (Macdonald and Kam, 2007). Osterloh (2010) argues that the importance of publications over scientific discovery is increasing. There are also those who fear that original research is being hindered by the need to publish ideas that engage with existing discourses in relatively well-established journals (De Rond and Miller, 2005). Yet these concerns have not prevented the increasing orientation towards trying to measure and capture a university's research output. This is because such output has become important insofar as a university's financial resources can only be guaranteed by a comprehensive reporting of all its publications.

At Danish universities, the need to document and manage research output has important financial repercussions. Although universities claim that the 'overall aim' of the indicator project is to increase research quality by 'motivat(ing) researchers in all fields to publish in the best acknowledged and most prestigious channels of publication' (AU, undated), they also point out that the 'amount and type of publications registered… have a direct impact on the university's financial situation' (ibid). The main implication of this reform was that academics were subjected to much greater pressure to produce and record research publication outcomes. New instruments, i.e. PURE, came to serve this purpose.
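To make the arithmetic of Table 2 concrete, the following sketch computes BRI points for a hypothetical set of outputs. Only the point values come from Table 2 (Sivertsen & Schneider 2012); the data structure, function name, and example data are my own illustration, not the Danish ministry's or Elsevier's implementation.

```python
# BRI point values as listed in Table 2. Outputs whose outlet is not on
# the authorised list earn no points, mirroring the 'no value returned'
# case discussed later in the text.
BRI_POINTS = {
    ("monograph", 1): 5.0,        ("monograph", 2): 8.0,
    ("chapter", 1): 0.5,          ("chapter", 2): 2.0,
    ("journal_article", 1): 1.0,  ("journal_article", 2): 3.0,
    ("doctoral_thesis", 1): 5.0,  # no level-2 value listed in Table 2
    ("patent", 1): 1.0,           # no level-2 value listed in Table 2
}

def bri_points(output_type: str, level: int) -> float:
    """Points for one output; 0.0 if the (type, level) pair is unlisted."""
    return BRI_POINTS.get((output_type, level), 0.0)

# A hypothetical researcher's year: one level-2 article, one level-1
# chapter, and one article in a journal not on the authorised list.
outputs = [("journal_article", 2), ("chapter", 1), ("journal_article", 0)]
print(sum(bri_points(t, lvl) for t, lvl in outputs))  # 3.0 + 0.5 + 0.0 = 3.5
```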
What are the technologies recording research output?

PURE is a software platform, also known as a Current Research Information System (CRIS), that collects and manages primarily research publication data. However, while it is primarily used for gathering and managing research output, PURE's design allows it to collect data on many other academic activities: awards, prizes, service activities, and others. This functionality means that PURE's use can be extended to include a wider set of activities should managers choose to measure and emphasise other targets. In the UK, in the lead-up to the upcoming REF evaluation in 2021, PURE plays a key role for some research directors/coordinators who are responsible for strategically managing their submissions for assessment. Beyond research output, PURE data can increasingly be used to craft a wider and more general picture for managers of a department's or faculty's potential 'impact' and influence, which are also important for the REF evaluation.

The PURE system was acquired by Elsevier publishers and is now part of a wider set of bibliometric tools offered by the company. Elsevier describes itself as a 'world leading provider of information solutions that help (you) make better decisions, deliver better care, and sometimes make groundbreaking discoveries in science, health, and technology' (Elsevier, 2016). Elsevier also publishes over 2,500 journals and manages the ScienceDirect and Scopus bibliometric platforms, making its own stable of journals a significant data source for bibliometric analysis. PURE was therefore a complement to the company's range of products and activities in the higher education and research sector. The accumulated submission of data into PURE by individual researchers increases the value of its database and the strength of the bibliometric analysis that can be done with its tools. It is important for PURE, its managers, and the people who use its analyses to have it populated with as much data as possible.

This can be seen in the way that PURE's use has been promoted by university managers. Research outputs are invariably flagged as the central and most important data item in PURE. From a user's perspective, it is the first 'tab' that is seen. Administrators regularly remind researchers that the research outputs section needs to be completed. At my current institution's website for staff members, there is a direct link to PURE; the hyperlinked text reads: 'PURE (publications)' (emphasis added). In some contexts, PURE is increasingly becoming the only way for a researcher to become visible on the university's official webpages. PURE becomes a central instrument for a researcher to manage the visibility of his/her research and activity profile and to demonstrate his/her contribution to knowledge production and, to a certain extent, his/her contribution towards the institutional goals of the university (e.g. with respect to REF targets). In Denmark, where PURE was developed, universities needed to record all output because every publication would have an effect on the university's financial position, as explained above.

How does PURE work? How do academics interact with its data system?

Opening an account in PURE is linked to one's university password and profile. The university also recommends that researchers register for an ORCID ID – a unique number that identifies a researcher and avoids some of the problems associated with ambiguous or shared/common names or names with special characters.
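As a brief technical aside on how such identifiers work: the final character of an ORCID ID is a check digit computed with the published ISO 7064 MOD 11-2 scheme, which is what allows systems to catch most mistyped identifiers. The sketch below follows that documented algorithm; the function name is my own.

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ORCID check digit (ISO 7064 MOD 11-2) from the first
    15 digits of the identifier, with hyphens removed."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

# ORCID's own documentation example: 0000-0002-1825-0097
print(orcid_check_digit("000000021825009"))  # -> '7'
```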
At some universities, ORCID ID numbers for each member of research staff were registered by the institution itself, and academics were informed afterwards of their new ORCID number. Thus the first step was to be 'enrolled' into PURE, to become 'visible' to the system (Dean, 2010). The information registered in PURE becomes the basis of the content of each staff member's webpage. Those who do not register their information on PURE have blank or empty personal webpages as a result. Registering this information is time-consuming and also demands timely scrutiny by others: the information registered is reviewed by an administrator before it is accepted and later made public on the website.

A researcher usually begins by filling in some profile details about himself, such as a picture, his research group affiliation, and his research interests. PURE can be adjusted to different universities' needs and systems. In some universities, researchers can upload a CV which they have made themselves onto their 'profile' section. At other universities, PURE automatically generates the CV on the basis of what has been inputted – the ability to generate the CV is another sign of the comprehensive scope of data which PURE is meant to gather.

When a researcher clicks on 'research outputs', a variety of publication types are presented. Publications take many forms. It is possible to input contributions to scientific journals, conference contributions, books, anthologies, dissertations, reports, working papers, newspaper contributions, net publications, patents, and even 'non-text' contributions. The great variety of possibilities also reflects, to some extent, the different ways in which various academic disciplines produce and publish research outputs. There are options that allow the user to import publications from a database (in some cases Elsevier can link its own journals to PURE for a quick match of outputs). In many cases the publications are manually entered into the system (although the system is being upgraded to automatically capture some bibliometric data). Under the category 'contribution to journal' there are many further sub-options, ranging from a full journal article, conference article, letter, review, literature review, editorial, and comment/debate to a conference abstract in a journal. This shows the variety of submissions that can be entered, but also the desire of the system to comprehensively capture any kind of research output that is produced.

The system also solicits research output at several stages of development. For instance, journal articles that have been accepted but not yet published can be entered into the system. But there are also options to list works at even earlier stages of development. On the drop-down tab for 'publication status' there are options to input research that has just been 'submitted' or is even just at the stage of 'in preparation'. This means that even early-stage work can potentially be placed into PURE. The comprehensive nature of this data collection process creates conditions where scholars market their works-in-progress as well as their finished outputs. The fact that there is an option for listing early-stage work-in-progress or submitted manuscripts does not mean that managers make use of it. Some researchers might want to make use of this option to boost the impression that they are productive, have a research publication pipeline, and are dedicated to delivering sustained output.
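A hypothetical sketch of the kind of record such a system gathers is given below. The field names, categories, and statuses are assembled from the options described above and are purely illustrative – they are not Elsevier's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class PublicationStatus(Enum):
    # Statuses mirror the drop-down options described in the text.
    IN_PREPARATION = "in preparation"
    SUBMITTED = "submitted"
    ACCEPTED = "accepted"      # accepted but not yet published
    PUBLISHED = "published"

@dataclass
class ResearchOutput:
    title: str
    output_type: str           # e.g. "journal article", "working paper", "patent"
    status: PublicationStatus
    outlet: str = ""           # journal, publisher, or venue
    coauthors: list[str] = field(default_factory=list)

# Even early-stage work can be recorded, as the text notes:
draft = ResearchOutput(
    title="The bibliometric self: a working paper",
    output_type="working paper",
    status=PublicationStatus.IN_PREPARATION,
)
print(draft.status.value)  # -> 'in preparation'
```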
How does PURE construct the bibliometric self?

In Denmark, the system can calculate the 'weight' of a research output according to the Danish BRI system (outlined above). This means that the system can give a 'judgment' about a research output based on the journal or outlet in which a publication appears. This gives every researcher an idea of how much each of his works is 'worth'. At an aggregate level, this function is useful to heads of departments and university leaders who want to understand the financial value of all the research outputs of a given department and even of the entire university. With regard to the Danish BRI, PURE easily recognises high-impact journals (by cross-referencing any output against an official journal list). These receive immediate recognition and 'weight'. However, the calculation is not always straightforward. There are some journals which PURE does not yet recognise. Another issue relates to book chapters, which can be submitted as research outputs in PURE but are not as easily recognised as BRI-compatible outputs. When this happens, as when a book has not been listed on the authorised list, no value is returned, i.e. no BRI points are assigned. The likely result is that Denmark-based researchers are pushed towards delivering and registering outputs that do count in the PURE system, insofar as BRI calculations are concerned.

PURE's influence over academics' research outlets can also extend beyond research. The PURE system also shapes the visibility of work not related to research publications: PURE collects information on all kinds of 'other activities' that academics are meant to undertake. PURE can thus be understood as a platform that aspires to have a total overview of academic work, and hence as a tool of a holistic management system. Through PURE, activities such as editorial work, mass media appearances, lectures and oral contributions, participation in conferences, councils, boards, and networks, and many 'other' activities become both visible and 'rewardable'. PURE lists many options under 'other activities': visiting other institutions, winning prizes, scholarships or distinctions, doing external teaching and subject coordination, consulting work and, again, 'other' work. The wide range of activities in the system suggests that if a researcher were to seriously aim to document his entire work output – and this is central to why it should be studied as an instrument of governance – PURE would be able to capture and represent it. The ability of PURE to capture 'every' aspect of academic work is behind the system's capacity to automatically generate curricula vitae (if requested), i.e. PURE can show a picture of an academic's professional work that could be used as a judgment tool on the job market.

Elsevier says that researchers can use the system to perform other functions related to their own visibility. PURE is presented as an opportunity to market oneself and to network with other researchers. Researchers can, for instance:

'Enhance the visibility of their profiles and areas of expertise to the research community, government agencies, industry, media and the public by automatically publishing their activities and achievements online; identify their peers' expertise for potential collaboration – down to the most precise terms – through our exclusive Fingerprint Engine™
(and) provide information for reports to ensure reporting is based on current and complete data' (Elsevier, 2016).

What this suggests is that PURE captures and then presents the image of the academic and the representation of his work. Productivity and, by extension, 'excellence' are made comparable and measurable in accordance with standardised measures (e.g. the BRI calculator results) that PURE is able to capture. This further implies that viewers can easily evaluate the scholar's merits and potential. More importantly, it means that academics who now have to register their work in PURE are strongly steered in their evaluation and prioritisation of which activities are worth engaging in and which written works are worthwhile to publish (and which are not). Researchers need to regularly input their work into PURE, and this leads to learning by iteration about which outputs matter. This regular engagement reinforces the sense that one is building up a bibliometric self.

The use of PURE in other contexts: the UK REF

PURE can be customised to fit the policy needs of its users. It is an instrument that can 'travel' and be adapted to different contexts and policy goals. Although I have described the historical development of PURE in Denmark, PURE has been exported to other countries and systems. It is increasingly being used in the UK, where some of its features are particularly useful for research managers. Some UK universities have adopted PURE as a research management tool for the reasons already outlined above. But PURE is an instrument that can also be deployed in preparation for the sector-wide research assessment known as the Research Excellence Framework (REF). The UK does not have a system with 'automatic' weights for research outputs in certain journals similar to the Danish BRI. Its REF framework, which takes place every six years, is based on peer review and classifies papers on a four-point scale (plus a category for unclassified work).

In at least one UK university, PURE has been internally configured to assist in the preparation for the REF. Academics can 'refer' research outputs recorded on PURE for internal review. Research coordinators then assign these pieces for internal peer review; 'marks' are agreed and then revealed to the respective authors via PURE. PURE then becomes an internally facing platform that constructs 'value' based on the results of this submission and peer review. Some colleagues can become the ideal 20-star self (five papers rated at 4 stars). In a way analogous to the Danish BRI calculation, PURE can record and catalogue the value of bibliometric selves. The current REF framework now requires all research-active staff to be submitted for evaluation, which means that every academic will need to have an output and thus a bibliometric profile and value. PURE will be a tool that makes this bibliometric self visible to academics themselves and to their managers.
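The internal 'star' bookkeeping can be illustrated with a short sketch. The 0-4 scale comes from the REF itself, but the aggregation, the cap of five outputs, and all names here are hypothetical, since each university configures its own internal review process.

```python
# A hypothetical sketch of the internal 'star' arithmetic described in
# the text; not an official REF calculation.
def star_total(marks: list[int], cap: int = 5) -> int:
    """Sum the best `cap` internal peer-review marks (each 0-4 stars)."""
    return sum(sorted(marks, reverse=True)[:cap])

print(star_total([4, 4, 4, 4, 4]))     # 20 -> the 'ideal 20-star self'
print(star_total([4, 3, 3, 2, 2, 1]))  # 14 -> a more typical mixed profile
```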
Apart from gathering data on research outputs, PURE can also be configured to gather data about the research environment and research impact at different departments. Some universities are already actively encouraging academics to fill out all the numerous and comprehensive fields on PURE, not only to capture research output but also to generate the data which will underpin the 'research environment' statement – a holistic picture – that each unit of assessment must write as part of its submission to its relevant REF evaluation panel. It is the totality of the PURE system, or at least its intended application – gathering information not only about 'principal' activities such as research outputs but also about all other activities (e.g. service activities, editorial work, awards received, 'impact' activities, etc.) – that is crucial to this role.

PURE aims to create a total profile of the individual academic and to relate this to various elements of university strategy. The sum of these profiles can also be analysed to produce a picture of an academic department or faculty. The practice of building up these kinds of databases is, to use Ball's (2003) vocabulary, one of the 'fabrications' which have become embedded in education. Once these data sets are produced, they can be easily expanded, reported onwards, and used in further database building. Once database 'categories' are widely used among users and managers, they begin to crowd out other aspects of performance 'which do not "fit" into what is intended to be represented or conveyed' (Ball, 2003: 225). By studying PURE, its historical development in the Danish library system, and its current ownership, users might have a better idea of the purposes for which it was developed. By studying its configuration in the UK, users can have a better idea of how it may further evolve.

The Bibliometric Self and Governance

While many data systems exist in educational settings, the PURE system helps to illustrate self-governance and the construction of the self because (1) individual academics need to actively engage with it – it is not an automatic data capture system – and (2) PURE aims to record all activities that academics are meant to engage in. This makes it particularly apt as a space where programmes of government (Rose and Miller, 1992) are negotiated and enacted. It is also a good example of what Burrows (2012: 357) describes as the "'autonomization' of metric assemblages". Moore and Robinson's (2016) quantified selves at work are often measured by automated technologies; while PURE might eventually produce bibliometric profiles automatically, at present it still needs active participation on the part of academics to produce their bibliometric selves.

The bibliometric self as constructed by PURE can only ever be a partial reflection of what academics do. The quantified self at work can never be the whole academic self. The imperfect fit between education policy instruments such as PURE and the 'reality' of the laboratory, the library, and the classroom is one of the best examples of the incomplete, yet real, power of technical instruments. There are many ways in which the instrument and real life can be poorly matched. This is particularly relevant to systems that aspire to capture the entirety of this life. Nevertheless, policy technologies such as PURE have the 'capacity to reshape in their own image the organizations they monitor' (Shore and Wright, 1999: 570).

The important issue is whether academics will 'deeply' engage with PURE. They might populate the fields for publication outputs but leave other data fields blank. This might indicate a certain ambivalence towards PURE as a tool but may, as a result, draw even more attention to publications as the core, countable aspect of an academic's performance and, therefore, identity. Systems like PURE can be applied 'from above' to steer selves into 'networks of power' (Ball, 2003).
In other education policy spaces, they can come not 'from above' but 'from without', as in the case of university rankings in higher education (Hazelkorn, 2015; Lim, 2017; Pusser and Marginson, 2013) or IT and other consultants (Gunter et al., 2015). In both cases, the potential for the system to be mismatched with the reality of life on the ground is considerable.

Bibliometric experts constantly express their concerns about the limits of their approaches, especially when applied to individual evaluations. Yet by setting what counts as important and what kind of importance is valid, systems like PURE hold an enormous influence over individual behaviour. This leads to the need to reflect on the point that 'the issue of who controls the field of judgement is crucial... Who is it that determines what is to count as a valuable, effective or satisfactory performance and what measures or indicators are considered valid?' (Ball, 2003: 216).

Concluding Remarks: a Space for Reflective Action

This article reflects on the growing use of data instruments in higher education governance. In particular, it presents the case of PURE – a data capture system with two important characteristics: (1) it attempts to capture all activities of those who work in higher education and (2) it entails active engagement on the part of academics, who are supposed to register themselves on the system and keep their profiles regularly updated. PURE is instrumental in the construction of the bibliometric self. It engages the user by requiring him to submit his outputs for recording, review (as in the case of internal preparation for the REF in the UK), and valuation. The debates about the valuation assigned to various bibliometric selves serve as an important point of critique regarding what kinds of activities are valued and how these are incentivised by calculations such as the Danish BRI.

The role of technical and calculative instruments and devices in governing education is now commonplace. Ball reflects that '(i)t is the data-base, the appraisal meeting, the annual review, report writing, the regular publication of results and promotion applications, inspections and peer reviews that are mechanics of performativity' (Ball, 2003: 220). Despite the critique of the dysfunctional effects of some of these performative practices, they are present in ever more routine ways. There will probably be some sympathy among academics towards the potential victims of this new form of 'academic subjectivation' (Hall, 2013). The bibliometric self as a governed subject is a reformulation of Moore and Robinson's (2016) quantified self at work, but one that places greater stress on the analysis of the technologies that give rise to the bibliometric self and the context in which those tools were developed. While previous theorising about the quantified self at work (Moore and Robinson, 2016; Lupton, 2013) reflects on capitalist structures in economic terms, this article acknowledges that there is indeed a capitalist publishing industry that involves academics, but argues that PURE is more oriented towards an economy of esteem and prestige. This article makes its contribution to the critique of these 'stressful' systems of evaluation (Nedeva et al., 2012) by examining the development of one particular technology that quantifies the 'self at work'. This is especially relevant because the gathering and use of calculative data is becoming naturalised.
In practice, the critiques of PURE are moving from reflective dissatisfaction with how it warps academics' priorities to functional complaints about how repetitive and 'clunky' some parts of the user interface are. Part of the anxiety that users feel is the sense that they are not keeping up with these governance technologies. There is a feeling that despite more and more systems being able to capture data about our performance, workers are not offering up enough data about themselves to convince managers that they are performing well and exceeding expectations – expectations that the data systems themselves fuel. Academics need to consider whether engaging with systems like PURE is a capitulation to ever more measurement or whether it allows them to refocus performance areas away from publications alone towards the wider and more varied activities that constitute academic performance. Instruments like PURE could quietly shift debates.

This calls for further study of technical instruments like PURE. A new instrument-oriented study of education instruments would, for instance, include asking 'humble' software coders, auditors, and administrators what they had in mind when they created the forms and asked for them to be filled in. This could be a future step for studies about systems like PURE. Such studies would be useful not only for academics but also for policy makers, who should be concerned about the potential for the system to bog down if it becomes too complex for users. When 'representational artefacts are increasingly constructed with great deliberation and sophistication' (Ball, 2003: 225), they are potentially more difficult to comprehend.

To sum up, there is a need to better understand new technologies of education governance and to have more than a passing knowledge of how they function. Readers are invited to judge what the ambitions of PURE and similar systems are and how they work, e.g. how they give visibility to certain types of activity. Everyone who works at a university deals with a system like PURE. The reason for engaging with such systems is that there is a capacity for action. Neoliberalism is not done to us (Gerrard, 2015: 859) but is a practice that can be shaped. It is not a static, 'external evil' (Morley et al., 2014: 457) but a 'process, with incremental reforms constantly evolving and adapting' (Gerrard, 2015: 859). It is precisely by engaging with the new aspects of technical governance that a new avenue for change comes to light.

References

Aarhus University (undated). Research strategy, internal communication.
Altbach, P. and de Wit, H. (2018). Too much academic research is being published. University World News, Issue No. 519. Available online (accessed 18 October 2018).
Ball, S. J. (2003). The teacher's soul and the terrors of performativity. Journal of Education Policy, 18(2), 215-228.
Baltodano, M. (2012). Neoliberalism and the demise of public education: The corporatization of schools of education. International Journal of Qualitative Studies in Education, 25(4), 487-507.
Burrows, R. (2012). Living with the h-index? Metric assemblages in the contemporary academy. The Sociological Review, 60(2), 355-372.
Davies, B., & Bansel, P. (2007). Neoliberalism and education. International Journal of Qualitative Studies in Education, 20(3), 247-259.
Dean, M. (2010). Governmentality: Power and rule in modern society. Sage.
Decuypere, M., & Simons, M. (2016). What screens do: The role(s) of the screen in academic work.
European Educational Research Journal, 15(1), 132-151.
De Rond, M., & Miller, A. N. (2005). Publish or perish: Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321-329.
Elsevier (2016). 'Elsevier'. Available online (retrieved 15 August 2016).
European Summer School for Scientometrics (undated). Available online (accessed 31 January 2018).
Gerrard, J. (2015). Public education in neoliberal times: Memory and desire. Journal of Education Policy, 30(6), 855-868.
Grek, S. (2009). Governing by numbers: The PISA 'effect' in Europe. Journal of Education Policy, 24(1), 23-37.
Gunter, H. M., Hall, D., & Mills, C. (2015). Consultants, consultancy and consultocracy in education policymaking in England. Journal of Education Policy, 30(4), 518-539.
Hall, G. (2013). #Mysubjectivation. New Formations, 79(79), 83-103.
Hazelkorn, E. (2015). Rankings and the reshaping of higher education: The battle for world-class excellence. Springer.
Lim, M. A. (2017). The building of weak expertise: The work of global university rankers. Higher Education, 1-16.
Lim, M. A., & Williams Ørberg, J. (2017). Active instruments: On the use of university rankings in developing national systems of higher education. Policy Reviews in Higher Education, 1(1), 91-108.
Lupton, D. (2013). Quantifying the body: Monitoring and measuring health in the age of mHealth technologies. Critical Public Health, 23(4), 393-403.
Macdonald, S., & Kam, J. (2007). Ring a ring o' roses: Quality journals and gamesmanship in management studies. Journal of Management Studies, 44(4), 640-655.
Magalhães, A., Veiga, A., Ribeiro, F., & Amaral, A. (2013). Governance and institutional autonomy: Governing and governance in Portuguese higher education. Higher Education Policy, 26(2), 243-262.
McGrail, M. R., Rickard, C. M., & Jones, R. (2006). Publish or perish: A systematic review of interventions to increase academic publication rates. Higher Education Research & Development, 25(1), 19-35.
Miller, P., & O'Leary, T. (2007). Mediating instruments and making markets: Capital budgeting, science and the economy. Accounting, Organizations and Society, 32(7-8), 701-734.
Miller, P., & Rose, N. (1990). Governing economic life. Economy and Society, 19(1), 1-31.
Mitchell, K. (2006). Neoliberal governmentality in the European Union: Education, training, and technologies of citizenship. Environment and Planning D: Society and Space, 24(3), 389-407.
Moore, P. V. (2017). The quantified self in precarity: Work, technology and what counts. London: Routledge.
Moore, P., & Robinson, A. (2016). The quantified self: What counts in the neoliberal workplace. New Media & Society, 18(11), 2774-2792.
Morley, L., Marginson, S., & Blackmore, J. (2014). Education and neoliberal globalization. British Journal of Sociology of Education, 35(3), 457-468.
Nedeva, M., Boden, R., & Nugroho, Y. (2012). Rank and file: Managing individual performance in university research. Higher Education Policy, 25(3), 335-360.
O'Keeffe, C. (2016). Producing data through e-assessment: A trace ethnographic investigation into e-assessment events. European Educational Research Journal, 15(1), 99-116.
Olssen, M., & Peters, M. A. (2005). Neoliberalism, higher education and the knowledge economy: From the free market to knowledge capitalism. Journal of Education Policy, 20(3), 313-345.
Osterloh, M. (2010). Governance by numbers. Does it really work in research? Analyse & Kritik, 02/2010, 267-283.
Ozga, J. (2016). Trust in numbers? Digital education governance and the inspection process.
European Educational Research Journal, 15(1), 69-81.
Ozga, J., Dahler-Larsen, P., Segerholm, C., & Simola, H. (Eds.). (2011). Fabricating quality in education: Data and governance in Europe. Routledge.
Peters, M. (1996). Poststructuralism, politics, and education. Critical Studies in Education and Culture. Westport, CT: Bergin and Garvey.
Pusser, B., & Marginson, S. (2013). University rankings in critical perspective. The Journal of Higher Education, 84(4), 544-568.
Regeringen (2009). Aftale mellem regeringen (Venstre og Det Konservative Folkeparti), Socialdemokraterne, Dansk Folkeparti og Det Radikale Venstre om ny model for fordelingen af basismidler til universiteterne, 30. juni 2009. Available online (retrieved 28 October 2014).
Regeringen (2006). Fremgang, fornyelse og tryghed. Strategi for Danmark i den globale økonomi – de vigtigste initiativer. Available online (retrieved 28 October 2014).
Rose, N. (1998). Inventing our selves: Psychology, power, and personhood. Cambridge University Press. Quoted in Moore, P., & Robinson, A. (2016), p. 2775.
Rose, N., & Miller, P. (1992). Political power beyond the state: Problematics of government. British Journal of Sociology, 43, 173-205.
Salmi, J. (2009). The challenge of establishing world-class universities. World Bank Publications.
Scott, J. C. (1998). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.
Sellar, S., & Lingard, B. (2013). The OECD and global governance in education. Journal of Education Policy, 28(5), 710-725.
Shore, C., & Wright, S. (1999). Audit culture and anthropology: Neo-liberalism in British higher education. Journal of the Royal Anthropological Institute, 557-575.
Sivertsen, G., & Schneider, J. (2012). Evaluering av den bibliometriske forskningsindikator.
Welch, A. (2016). Audit culture and academic production. Higher Education Policy, 29(4), 511-538.
Williamson, B. (2015). Governing software: Networks, databases and algorithmic power in the digital governance of public education. Learning, Media and Technology, 40(1), 83-105.
Williamson, B. (2016). Digital education governance: Data visualization, predictive analytics, and 'real-time' policy instruments. Journal of Education Policy, 31(2), 123-141.
Wilsdon, J. (2016). The metric tide: Independent review of the role of metrics in research assessment and management. Sage.
Wright, S. (2009). What counts? The skewing effects of research assessment systems. Nordisk Pedagogik, 29 (Special), 18-33.
Wright, S., & Shore, C. (Eds.). (2017). Death of the public university? Uncertain futures for higher education in the knowledge economy (Vol. 3). Berghahn Books.