CHAPTER 1 – Chains of Knowledge

It was a sunny day in 2014. I probably should remember this—such days were rarities for late winter in London—but my mind was transfixed by something much more memorable. Even as I recall the details of that day, my brain imposes drizzle upon my memories, though Google insists there was none. I was too captivated by what was happening as we sat in the newly remodeled meeting room for what was otherwise just another departmental meeting.

At the time, I was a lecturer[1] in the Department of Sociology at the London School of Economics and Political Science. As in most other institutions, being a full-time academic involved attending occasional meetings with colleagues where we would discuss matters relevant to our teaching programs and internal administration, the hiring of new colleagues, the intellectual direction of our department, and our views on our institution’s constantly changing initiatives and policies. Bureaucracy, some would say, perhaps community building, maybe just ceremonial practices of academic participation. As we did every now and then, that Wednesday we were going through such a performance, a strange mix between scholarly ritual and managerial intervention, sitting around the room, collectively talking departmental shop. We read and approved the minutes from the previous term’s meeting. We heard from our Head of Department, who informed us of what was taking place at a school-wide level. And we then listened to various reports from faculty committees concerning students, teaching, and research, all on that abnormally sunny day.

As we crossed items off the agenda, we reached the apparent end of this organizational ceremony. And that’s precisely when my mind became transfixed, in that twilight moment between the closing items of our assembly and thoughts of what follows in the day (office hours, a meeting here, an unattended inbox there). “The school has requested us”, I heard our Head of Department mention, “to come up with a list of journals that we consider prestigious in our fields of expertise, that define us as a department. They want to use this for our next evaluation, to have a better sense of our own standards of excellence”. The sun vanished, replaced by an object infinitely more intriguing. A silence followed, heavy with a combination of incredulity and resignation. “This is an opportunity for us to decide how we are evaluated”, remarked our Head of Department. The silence continued, broken by an intrepid colleague who offered the first contribution: “the British Journal of Sociology, I guess”. In addition to being one of the flagship journals of our field in the UK, it was a journal that we edited, so the offer made much sense. “Sociology”, followed another, naturally reflecting our discipline’s professional association’s own scholarly publication. “City & Community”, said someone else, echoing our department’s interests in urban sociologies. “Certainly Theory, Culture and Society”, said a fourth. “Work, Employment and Society”, someone suggested. “Antipode”, I heard from the back of the room. All these journals were sensible suggestions. They were, after all, close to the topics, scholarly genres, and intellectual traditions followed by academics in our department. But they weren’t necessarily as prestigious as they could have been, at least not in the eyes of the administration. They wanted to see top journals, the kind that dominate rankings in citations and that attract prestige for their contributors. “Think big”, we were reminded, “the list has to be credible; it needs to convey we are ambitious and want to publish in the very top”. “Well then, it’s the American Journal of Sociology and American Sociological Review”, said someone else in the room, “even if we rarely publish there”. (At the time, rarely was quite the understatement). This was the sight that eclipsed the sun. There we were, a room full of sociologists creating chimeras, lists that combined tradition with aspiration, practice with expectations, forging the very chains that would bind our knowledge.

This exercise, which was repeated across universities in the United Kingdom and often with far less participation from staff[2], was a direct response to the cultures of assessment and evaluation that came to dominate the British higher education sector in the last four decades[3]. Since 1986, when the British government first introduced standardized assessments of research quality to its publicly funded universities, scholars and managers have faced the vexing problem of evaluating the academic value of articles, books, and other creative products of career academics[4]. When do we know that a paper is excellent? How do we know if a book made a substantive contribution to knowledge? Is the public expenditure on science actually worth it? Is state funding being used efficiently, going to the best possible researchers in the most effective centers of knowledge? Does it, in other words, foster a form of excellence that is visible, understandable and, above all, measurable? The list that we were asked to produce was a cog in this process, serving as an instrument that allowed managers and academic peers from other disciplines within our organization to make sense of our work, to give us a value. It was a ruler, of sorts, a way of being measured. And even though it came without a clear metric or obvious numbers, it operated as a palpable device of quantification.

In this book, I study what happens when scholars and their work are quantified through lists, rankings, assessment exercises, and other such devices—what sociologists Bruno Latour and Michel Callon call valorimeters, that is, collections of people, organizations, practices, and technologies that determine the value of objects which, like knowledge, can’t be placed on a scale or measured with an agreed-upon ruler[5]. We already know from a vast and impressive literature that science and quantification are no strangers[6]. On the contrary, measuring the world is an essential feature of scientific practices and has been so for centuries. This holds particularly true for the social sciences, which count on counting to explain the multiple, conflicting, and at times taken-for-granted patterns of their ever-changing settings. Social scientists have expended tremendous efforts in measuring such intangibles as economic growth, social class, political attitudes, religious conviction, unseen psychological dispositions, interpersonal trust, human suffering, commitment, and taste. These are examples of objects that were quantified to produce knowledge, measuring exercises that allow social scientists to make claims about the social world and its multiple, complicated connections. What I am interested in is something slightly different: occasions when science itself is quantified with a managerial objective, when numbers and rankings are made of knowledge and its producers with a practical purpose[7].

Metrics are used widely throughout the world to manage, organize, reward, and shape science. I still have vivid memories of my parents, both professors of biochemistry working at public institutions in Mexico, assiduously going through the same rituals of verification every year: their funding agency expected them to report the impact factors[8] of the journals in which they published as a way of guaranteeing the quality of their contributions. When the internet made them available, these measures were complemented by individual citation counts for papers, which had to be diligently provided—sitting behind expensive paywalls—to demonstrate value and sustain support. Indeed, the practice of counting science has been around for a while, often connected to an attempt to value knowledge in one way or another. The statistician Alfred Lotka’s very early studies from 1926, for example, sought to identify patterns in the distribution of publications to “determine the part which men (sic) of different caliber contribute to the progress of science”[9]. While motivated by intellectual concerns about the structure of scientific disciplines, the established fields of bibliometrics and scientometrics are often enrolled into efforts to determine the worth of scholars and their contributions[10]. And even if quantification is often decried as “a cheap and ineffective method of assessing the productivity of individual scientists”[11], it remains in broad use, with metrics routinely employed in evaluating scholarly quality by universities, grant agencies, government bodies, and international organizations.

Rulers, scales, and other measuring devices are not supposed to change the nature of the objects they describe. Measuring the productivity and quality of scholars is not the same, however, as estimating the arc of a particle in a cloud chamber or the size of parcels of land with string and trigonometry. Quantifying scholars and their work invites reactivity; it suggests incentives, transformations of their interests and intellectual approaches. What happens when scientists are quantified? Do they produce ‘better’ knowledge? Or do quantification and its implied forms of management and surveillance produce ‘worse’ descriptions of the world?

I’ll be honest from the very start. Quality is ultimately relative, in the eyes of the beholder, so I cannot give you, reader, a definite judgement on the direction of quantification’s influence on knowledge. I wish I could, for that would mean I’ve cracked one of the most fundamental puzzles in the philosophy of science (what precisely separates science from other forms of knowledge?), which I can’t say I have. Throughout this book, however, I do provide extensive evidence that efforts to quantify the value of science have definitive effects on the way knowledge is produced and on how disciplines are organized in the form of fields, careers, and academic units. By studying the mechanics of research evaluations in the United Kingdom—a site that provides an almost perfect natural experiment—I show that social scientific knowledge has become increasingly similar in topics and meanings within and across institutions while academic fields have become ever more disciplinary in their logics and organization. This occurs through a process that I call ‘epistemic matching’: the cultures of evaluation fostered by quantification create incentives for scholars to sort themselves in ways that change their disciplines in the direction of thematic homogeneity, altering what they know about the world in potentially fundamental ways. (Of course, there is an underlying value claim to these findings—for those who find worth in the serendipity of diversity, the effects of quantification that I document in this book are likely pernicious. But that is a claim that I cannot impose on my findings). As social scientists, we have long counted on counting. What this book explores is how being counted changes what we know.

I’ve chosen the United Kingdom for several reasons. In addition to having a longstanding higher education system, Britain has periodically assessed academics employed in public universities through exercises that determine the value of academic units. These evaluations matter financially: so-called “quality-related” research funding is disbursed only to the best-performing institutions in each field. Known under different names—initially Research Selectivity, then Research Assessment Exercise, and since 2014 Research Excellence Framework—these peer evaluations thus differ from the more individualized forms of assessment that exist elsewhere, placing emphasis on how well disciplines are performed within and across institutions. Scholars are certainly quantified in this process: a feature of these evaluations is that selected works by large numbers of academics from all public institutions in Britain are read and scored, from ‘unclassified’ outputs that fall “below the standard of nationally recognized work” to four-star research that is “world-leading in terms of originality, significance and rigor”[12]. The outcome of the assessment is not a statement about each scholar (no one really knows how their work was evaluated, with individual scores shrouded in secrecy and destroyed shortly after the assessments). It does, however, have consequences for individual scholars and the disciplines within which they produce their work. In some institutions, these evaluations matter for hiring; in others they don’t. Some institutions play the game incorrectly, submitting scholars to the wrong disciplines only to be repaid with poor results that lead to academics being punished for their performance. In some, the evaluations are not felt; in others, they are objects of constant anxiety.

To study quantification, I have departed from some of the traditional approaches taken by other scholars in the sociology, anthropology, and philosophy of science. For decades, science and technology scholars have studied knowledge as constituted not through some universal method of discovery but, rather, through piecewise processes of enrolment, delegation, representation, intervention, looping, controversy, falsification, refutation, contestation, and closure. Scientific knowledge is ‘socially constructed’, insofar as it is created within specific communities of experts who, on the basis of ongoing conversations and interventions, revise their claims about how the world works[13]. These approaches clearly foreground epistemic dimensions of science, tackling the question of how practices, communities, and institutions come together to assemble scientific knowledge. Although informed by these approaches, I am less interested in how knowledge is made (how it encodes politics and interests, how it depends on complex alliances between humans and instruments, how it produces or forestalls social action) than in the conditions experienced by those who are in the business of its production. Laboratories are certainly sites for epistemic practices, but they are, too, invariably sites of work, of paid employment, of managerial intervention[14]. This is the strategy of The Quantified Scholar: it eschews epistemic questions in favor of studying knowledge as a distinct product of labor. This is where quantification comes to matter: not only in how knowledge is produced but in how knowledge-makers experience their crafts.

Trajectories of devotion

Given its focus on labor, the objects that I study in this book are not scholars as individual epistemic agents but rather as embedded workers whose intellectual labor is invariably shaped by the affordances, incentives, biases, and barriers that they face on the shop floors of the modern university. Although inspired by a large sociological literature on work and occupations, my emphasis is not so much on the employment relations that constitute the workplace but rather on how experiences of evaluation shape the experience of labor over longer periods of time. The processes that I trace in this book—the forms of epistemic matching and linguistic change that are tied to the implementation of quantified research evaluations—are not punctual but processual, taking place over decades during which we can observe a slow shift in the register of scholarly conversations and the organization of their associated fields. These changes are certainly associated with the nature of the contractual relations between employers and managers, but they also speak to broader scholarly expectations of practice that overflow the individual employment relation. This is a key tension for academics: while they are effectively workers, bound by contract to the universities that employ them, they are also bound to the practices, traditions, and evaluation cultures of their individual professions. This translates into a perspective that focuses not so much on the discrete experiences of epistemic workers in their institutions as the endpoint of analysis but on the scientific careers that connect scholars’ labor to larger organizational forms.

Careers are fascinating processes situated between micro- and macro-phenomena. As sociologist Erving Goffman once noted, the concept of the career is exceedingly useful, allowing us to “move back and forth between the personal and the public, between the self and its significant society”[15]. This is also the case for scholarly work. Thinking of scientists not as individual epistemic workers but as individuals navigating the tensions between the personal and public provides a novel understanding of how knowledge and scientific fields change over time. Indeed, like other careers, scholarly ones depend fundamentally on the work of individuals who, bar exceptional situations of luck or nepotism, would find it virtually impossible to establish what is considered a ‘successful’ scientific career without expending some degree of individual effort—without putting in work and investments that translate into an intellectual contribution, an institutional affiliation, a site for research. At the same time, careers are shaped by factors that are beyond the control of individuals. For example, gender and racial biases in formal evaluations by employers may translate into some scholars being promoted faster than others, some being cited more than their peers even if performing similar work, and some suffering larger penalties for their life circumstances[16]. Many of these well-documented biases—reflected in productivity gaps, promotion gaps, salary gaps, and citation gaps—surely reflect larger structures of discrimination and inequality in the economy (academia is not particularly unique in this sense). Unlike inequalities observed in other sectors of the economy, though, those we find in scientific careers are bound to non-contractual expectations that shape academic disciplines. Who we decide to include in our syllabi or cite in our works is rarely controlled by our employers but is often policed by our disciplinary peers. Indeed, academia is somewhat unique in that, in addition to being a form of employment where the formal structures of our institutions impinge on the careers of scholars, work is also associated with a form of vocation where what we do is ultimately evaluated and shaped by the invisible yet weighty colleges that we are part of.

The strategy of The Quantified Scholar is to move back and forth between the individual scientist and her personal experiences as a managed worker, and the public, disciplinary setting where her work is read, used, and given worth. Research evaluations operate across both domains, establishing expectations that link the work considered relevant and worthy of pursuit by disciplines and employers with the career affordances provided to, and decisions made by, scholars in their intellectual working lives. The argument of the book, then, is that while quantification may impinge on the individual strategies of researchers when deciding what to study and how (as already suggested by a large literature on metrics in science policy), it matters also because of how it shapes the experiences of their careers. Disciplines, after all, are not just spontaneous assemblages of free actors, but collective institutions bound together through organizations, practices, and orientations that define and reproduce knowledge. Tied to the labor of scholars and their conditions of work, scientific careers thus serve as scaffolds for fields of academic practice.

These scaffolds are admittedly peculiar. A notable feature of scientific careers is the degree to which they are framed by the idea of a vocation, a ‘calling’ to produce knowledge in its own right, a devotion to the discipline, its logics, and its practices. This was a point famously raised by Max Weber in his renowned lecture on The Scholar’s Work, where he eloquently captured the personal sense of the vocation of scholarship that captivates the selves of many academics[17]. Scholarship was a form of passionate dedication, Weber reminds us, an intense devotion toward that highly specialized thing which we study. This vocation is not practical—as Steven Shapin notes, the scientist’s orientation does not encompass, in Weber’s conception, “commercial goals and entrepreneurial means”[18]—but is concerned solely with the production of facts, the making of knowledge, the finding of truths that, however fickle and ultimately falsifiable they may be, constitute our shared fields of scholarship, our sense of academic integrity. Although written almost a century ago, when universities were quite different organizations, when performance management and scholarly evaluation were still incipient, Weber’s account still resonates with the working lives of many modern academics—this explains why, decade upon decade, The Scholar’s Work has found its way back to the printing press. The professionalization of scientists and the transformation of scholarship from a calling into a ‘mere’ job was an incomplete transition, Shapin reminds us, with the tension between employment and vocation, paycheck and devotion, shaping the identity of scientists and how they value their contributions. We remain vocational, our work a mode of life rather than simply a skilled task.

The survival of a vocational spirit in the sciences has concrete implications for how scholars evaluate their work and that of others. Yes, Weber was correct in that scientists produce increasingly specialized knowledge claims on ever more particular fractions of our world. But even in today’s hyperspecialized scholarship, where no single person can feasibly know her entire field, the larger structures of disciplines loom large. While we have seen a tremendous rise in ‘interdisciplinary’ research over the past four decades, professional identities and organized evaluations of performance are still too often framed in terms of identifiable disciplines and subfields that anchor the objects of our vocation. Within these, loosely institutionalized forms of prestige are often used to navigate the fast-growing communities of our fields. How do we know if our work meets the standards of integrity that we collectively hold so dear? How do we evaluate our vocational practice? How do we know, and show to others, that we are truly committed to our calling? Sociologist Richard Whitley provided an answer to these questions: repute, he argued, sits at the base of many of our organizational forms, a convenient way of assigning confidence to knowledge claims in an otherwise messy ecology.

Our vocation is not merely guided by a disinterested pursuit of knowledge. A feature of how we think about our fields and assign value to our work and that of others is the numerous hierarchies—of institutions, of scholars, of traditions, of theories, of concepts—that we are habituated into taking for granted as part of our professional formation. Quantification has surely contributed to cementing some of these hierarchies. In Grading the College, for example, Scott Gelber relates very early efforts in 1912 to rank colleges in terms of how well they prepared their graduates, followed by increasingly intensive pushes in the 1920s and beyond to evaluate teaching quality[19]. This finds echoes in the work of Jelena Brankovic and Stephan Wilbers, who identify both the long historical roots of academic rankings in the beginning of the twentieth century and the mechanisms through which they acquired dominance and significance from the 1950s onwards. Moving away from their originally peripheral position to become instruments of management required a shift in the logic of institutions of higher education, which increasingly framed excellence as a form of performance that could (perhaps had to) be constantly evaluated. Being an assiduous scholar wasn’t enough: true devotion was only seen in ongoing, assessable, measurable actions and contributions. Of course, Wendy Espeland and Michael Sauder provide what is now the canonical account of how such public evaluations transformed organizations and, in the process, our collective comfort with quantified hierarchies[20]. In their study of rankings of law schools by US News and World Report, they show how these public instruments became environments that organizations had to react to, changing their strategies and priorities and the way their managers and workers thought about themselves. We partly accept hierarchies because they are readily observable, patently material. They are now immutable mobiles that travel across organizational settings, making the commensuration of otherwise distinct objects possible, permitting common conversations about worth. Our vocation has slowly accepted quantification as part of our craft.

This is, indeed, fascinating. Yet what I find interesting about these rankings and evaluations is not that they exist and have performative effects on the world (that they are, to an extent, self-fulfilling prophecies by design), but that we readily accept them and the forms of worth that they imply. This is, I think, what is missing in our discussions of quantification: a link between the historical circumstances that made counting research possible and the way scientists continue to frame their craft in vocational terms. What we have learned from the literature on self-tracking is that quantification is ultimately seductive, allowing individuals to evaluate their own worth and efforts, to aspire to particular selves that are prefigured by the devices and arrangements that measure them. The quantification of scholars is no different. While it depends on certain historical conditions of possibility, it is ultimately maintained by practices of status, prestige, and repute that hold affinities with our vocational ideals[21]. If we are quantified, it is not only because we are ordered to be but, as importantly, because quantification is part of our rationalized, modern, scholarly vocations.

This explains, perhaps, why devices like the H-index, barely 15 years old at the time of writing, became so quickly part of the global infrastructures of science metrics and research evaluation. The uncanny rise of the H-index—the largest number h such that an author has h publications each cited at least h times—was not driven by zealous administrators seeking to extract ever more from scientists, but by scholars actively adopting it to better “judge the performance of researchers”. Most scholars, reads an editorial in Nature in 2005, “prefer an explicit peer assessment of their work. Yet those same researchers know how time-consuming peer assessment can be”[22]. And, indeed, discontent with what were often seen as less adequate measures of quality—citations, impact factors, journal rankings, and institutional pedigree—made the H-index appealing enough to be adopted by scholars across various fields, with a frenzy of academic articles in physics, biology, sociology, computer science and elsewhere testing its performance against previous metrics. Note that this wasn’t about rejection but calibration, acceptance. If the H-index made it, it was not because, as sociologist Roger Burrows suggests, it was part of a powerful autonomous assemblage that, market-like, sought to economize intellectual value[23]. Rather, if this form of quantification succeeded at all it is partly because, deep within our modern vocation, within our training, habituation, and disposition, scholars have clear affinities with measuring their prestige.
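To make the metric concrete, the following is a minimal sketch, written in Python and purely for illustration (it is not drawn from any of the evaluation infrastructures discussed in this book), of how an H-index can be computed from a list of citation counts:

def h_index(citations):
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # The H-index is the largest rank at which the paper in that
        # position still has at least that many citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # prints 4

Five papers cited 10, 8, 5, 4, and 3 times thus yield an H-index of 4: four papers have at least four citations each, but it is not the case that five papers have at least five.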

This is ever more complicated because, as happens with artists and other creative workers, scholars have difficulties in separating their personal and professional lives (that is, in putting their vocations on hold, treating them as a 9-to-5 job). Academic careers are fuzzy, mingling the personal and professional selves. When we think of scholars celebrated for their contributions to the understanding of culture, politics, economy, and society—the types of names peppered throughout most introductory classes in high school and college—we casually equate their works with the lives of their authors. We talk of Max Weber as we do of his works; we conflate Hannah Arendt and her essays; we shorthand Adam Smith for his foundational books. They are one and the same, the carefully considered, curated, crafted, edited words on paper and ink, and the messy, complicated, and contradictory lives of their authors, their bodies, and their careers. This is what we are trained to think, both as audience and as performers. As C. Wright Mills noted, in becoming a scholar he soon realized that most of the “thinkers and writers whom [he] admired never split their work from their lives”[24]. We are what we do. We embody our professions. The “strange intoxication” that Weber speaks of in relation to scholarly work is, indeed, an intoxication of mind, body, and soul. And in a vocational frame where criticisms and exaltations of our work implicitly reflect the hierarchies of our fields, these become, too, comments on ourselves, synonyms of either personal failure or merit that bleed into our identities. These are our careers which, amplified, refracted, and modulated by quantification, alter what we see and think of as objects worthy of our individual passions.

Reflexive knowledges

Such intense individuation of scholarship is what makes the episode with which I opened this book so extremely peculiar. Sitting around the green and orange Formica-clad tables, we were knowingly tracing the contours of our promised selves, committing to aspirational personas that would reflect back onto our work and our sense of vocation, accepting a daily target of 10,000 epistemic steps while knowing little about the terrain ahead[25]. The looping effects of rankings, lists, and quantification were not alien to those of us sitting in the room. On the contrary, paper after paper, study after study in our very own discipline suggested that counting things mattered in markets, organizations, and employment, a matter of life and death. And yet there we were, trying to balance control and bureaucratic dictum, internal consistency and external legitimacy, forging our own chains of knowledge, creating our own boundaries, establishing our own value.

Admittedly, this was not such a peculiar practice outside of our field. British sociologists have been adamant in stressing the importance of evaluating books and papers by their contents rather than the status of their publishers or the affiliation of their authors. This ethos is not shared equally across the board, though. In other fields like economics, hierarchies of value are readily accepted by most scholars, with clear tiers of journals (books are exceedingly rare) mapping onto evaluations of intellectual quality. In other domains, like political science (or politics and international relations, a more appropriate term for the field in Britain), some hierarchies do exist but are nevertheless noisy, defined by the multidirectional tensions between European and American traditions of political thought as well as varying methodological approaches. In a smaller field like anthropology, the onus was placed on the quality of texts, though a ranking of publishers often underpinned conversations, and references were made, now and then, to the longstanding institutional ‘golden triangle’ of the discipline (formed by elite departments in London, Cambridge, and Oxford). These different disciplinary cultures, as Michèle Lamont argues, vary in their definitions of excellence and so in how willingly they accept certain forms of self-quantification[26].

What these four disciplines share is a sense of reflexivity. Even economics, which seems to hold the most naturalized and individualistic view of the social, operates under the assumption that knowledge of its objects of study can be used to optimize or nudge them in particular directions. That is partly why I chose these four fields—anthropology, economics, politics, and sociology—for The Quantified Scholar. In addition to not having the more complicated fixed investments of disciplines where laboratories and other forms of equipment make careers stickier (this explains, in particular, why I excluded fields like psychology), the vocation of the social scientist is somewhat unique, inflected by a form of reflexivity that gives a different register to discussions about excellence, quality, and performance. Reflexivity also matters, of course, because of how scholars position themselves before quantification. An endemic feature of efforts to measure quality and productivity in science is how often metrics are shown to be empirically inadequate. Take impact factors, mentioned above, which were recently demonstrated to be as accurate at predicting future citations as a coin toss[27]. While unreliable, these metrics remain in use. What matters is not so much whether quantification is ‘actually accurate’ but, rather, how it is used and made sense of by those whom it counts.

The social sciences are also particularly flexible (or reactive to the use of metrics), making them ostensibly better targets for studying how quantification changes their practices of knowledge-making. Marion Fourcade’s exceptional work on economics shows, for example, that the organization and contents of this discipline—visible in its intellectual interests, theoretical approaches, and forms of institutionalization—varied greatly between France, the United Kingdom, and the United States, influenced by how economists were positioned with respect to the state and industry in their respective countries. A similar variation would be hard to find in the natural sciences—not because it can’t exist (national varieties of the ‘hard’ sciences have been well documented by historians) but because these fields have experienced greater levels of standardization across the world than the social sciences[28].

Within these four disciplines, I focus on academic careers and, primarily, on movements of scholars between institutions. While the product of communal efforts, knowledge is tied to the bodies that make it possible, so studying their movements through the institutional space offers some insights into how domains and disciplines changed over time in association with research evaluations. Similarly, the fact that social scientists do not require laboratories makes their institutional mobility, in principle, less uncommon than that of, say, biologists or engineers. This relative independence from research infrastructures also means we can better appreciate how quantification shapes the organizational strategies of scholars as they govern their immediate environments. At the end of the day, university scientists have some control over their labor via the standards and expectations of their embedding disciplines. This may not be a direct form of control, but it is a form of control nonetheless, one that is practiced in internal organizational processes (for example, in assessing colleagues for promotion, serving on external panels, or reviewing the work of others in the field). Control may be reduced in fields where equipment matters centrally to the production of knowledge (moving a laboratory is a fraught, costly experience). Attention to social scientists allows observing how, even in settings with greater relative autonomy, quantification comes to matter in the way knowledge is made, particularly in relation to the reflexive dispositions of scholars, the managerial logics of their institutions, and the prestige-based hierarchies of their fields.

But reflexivity is also important for the opportunity it offers for challenging quantification. The central lesson of The Quantified Scholar is that numbers, rankings, lists, and other valorimeters work only insofar as we allow them to define our worth. In looking at the spectrum of experiences under quantification in Britain, the book identifies cases and strategies where scholars actively resisted, as organized collectives, the most pernicious effects of quantification. These efforts are not really about knowledge but about the experience of being quantified, about challenging the vocational affinities that make numbers so dear to our profession. The objective of this exploration, then, is to give readers opportunities to reflect about how, by means of the knowledge we produce, we might be able to break some of the affinities between our vocation and the broader zeal to quantify excellence and performance, as part of a joint effort to produce more equitable, humane conditions of work. That is my key aim: to foster thoughts about solidarity as a balm against quantification.

Structure of the book

To answer the question of how quantification changes scientific knowledge, I adopt a multi-pronged approach that combines various computational techniques of text analysis, quantitative models of career mobility, and interviews with British scholars active in anthropology, economics, politics, or sociology, as well as union representatives. Each of these tactics provides forms of evidence that, jointly, suggest a process of increased homogeneity in the British social sciences driven by quantification’s effect on careers. (A fuller description of my methods and data analysis is available in the appendix.)

Understanding the logics of quantification requires first making sense of how research evaluations work in the United Kingdom. This is what we will turn to in chapter 2, where we explore the origins of research assessments and quantification in the context of key transformations of the British higher education system. The story there is one of the quantification of excellence as connected to the implementation of austerity measures across universities in the United Kingdom in the 1980s and the rise of what anthropologist Marilyn Strathern famously called ‘audit cultures’[29]. This chapter also provides an opportunity to explain how research evaluations quantify ‘excellence’ in practice.

In chapter 3, I turn to the effects of quantification on careers. There, I present evidence that links research evaluations with changes in the structure and organization of academic departments across time. This is followed by a discussion in chapter 4, where I analyze how language (and ostensibly knowledge) changed within different fields in response to disciplinary pressures towards conformity. These two chapters present the concept and mechanism of epistemic matching as central to the effects of quantification on academic labor and careers.

Chapter 5 takes a different perspective, looking at how quantification was experienced by academics. In particular, it stresses local managerial implementations of research evaluation exercises as key to understanding how scholars rethink their vocations. A key observation of this chapter is that quantification is moderated by different hierarchies: those at the top of their fields and their institutions feel the pressures of quantification less than those at the bottom. This interplay between quantification and prestige becomes an opportunity to discuss cases where scholars were insulated from the effects of research evaluations by their peers.

Finally, in chapter 6, quantification is pitted against the scholar’s vocation. There, I argue that the problem is not uniquely quantification but, as importantly, the way we deal collectively, through our disciplines, with the individualization of our professional worth. I insist on the importance of rethinking our vocation, centering it not on a devotion to scholarship as a calling but on scholarship as a lived, shared, multidimensional form of labor.


[1] British lecturers are roughly the equivalent of tenure-track assistant professors in the United States.

[2] Unlike the United States, where faculty is a distinct group, academics in the United Kingdom are referred to as ‘academic staff’. I will use this terminology throughout the book.

[3] Strathern, Audit Cultures.

[4] Wilsdon, The Metric Tide.

[5] Latour and Callon, “‘Thou Shall Not Calculate!’ Or How to Symmetricalize Gift and Capital.”

[6] Espeland and Stevens, “A Sociology of Quantification”; Porter, Trust in Numbers.

[7] Lepenies, The Power of a Single Number; Özgöde, “Institutionalism in Action”; Bland, “Measuring ‘Social Class’: A Discussion of the Registrar-General’s Classification”; Goldthorpe and McKnight, “The Economic Basis of Social Class”; Bukodi and Goldthorpe, “Decomposing ‘Social Origins’”; Verhulst, Eaves, and Hatemi, “Correlation Not Causation”; Norpoth and Lodge, “The Difference between Attitudes and Nonattitudes in the Mass Public”; Vaisey, “The ‘Attitudinal Fallacy’ Is a Fallacy”; Vaisey and Lizardo, “Cultural Fragmentation or Acquired Dispositions?”; Jerolmack and Khan, “Toward an Understanding of the Relationship between Accounts and Action”; Himmelfarb, “Measuring Religious Involvement.”

[8] The Journal Impact Factor is a proprietary metric developed by the Institute for Scientific Information (now Clarivate Analytics) that tries to approximate the visibility of publications by calculating the ‘average’ frequency of citations of papers published in peer-reviewed journals.

[9] Lotka, “The Frequency Distribution of Scientific Productivity.”

[10] Godin, “On the Origins of Bibliometrics”; Godin, Measurement and Statistics on Science and Technology; Abramo, D’Angelo, and Caprasecca, “Allocative Efficiency in Public Research Funding”; Cronin and Sugimoto, Beyond Bibliometrics; Gingras, Bibliometrics and Research Evaluation.

[11] “How to Improve the Use of Metrics.”

[12] “Assessment Criteria and Level Definitions: REF 2014.”

[13] Hacking, Representing and Intervening; Hacking, The Social Construction of What?; Barnes, Bloor, and Henry, Scientific Knowledge; Latour, Science in Action; Latour, The Pasteurization of France; Kuhn, The Structure of Scientific Revolutions; Lakatos, Proofs and Refutations; Feyerabend, Against Method; Popper, The Logic of Scientific Discovery.

[14] I owe this observation to Judy Wajcman and her pioneering work.

[15] Goffman, Asylums, 127.

[16] Long, “Productivity and Academic Position in the Scientific Career.”

[17] Weber, “Science as a Vocation.”

[18] Shapin, The Scientific Life.

[19] Gelber, Grading the College.

[20] Espeland and Sauder, Engines of Anxiety; Espeland and Sauder, “Rankings and Reactivity.”

[21] Berman and Hirschman, The Sociology of Quantification.

[22] “Ratings Games.”

[23] Burrows, “Living with the H-Index?”

[24] Mills, “On Intellectual Craftsmanship (1952).”

[25] Lupton, The Quantified Self.

[26] Much of The Quantified Scholar is inspired by Michèle Lamont’s How Professors Think, which highlights the tensions in interdisciplinary spaces (like grant and award review panels) where evaluations of quality and excellence are made across distinct and relatively insular academic fields. Fortunately, what I study here are mostly contained disciplinary struggles, which allows approximating in a slightly more idealized way the disciplinary cultures that characterize different fields. The reader should understand that these are always typifications of a much messier reality, and that while some commonalities may be present within fields, they are not, as a consequence, defining.

[27] Brito and Rodríguez-Navarro, “Evaluating Research and Researchers by the Journal Impact Factor.”

[28] Fourcade, Economists and Societies; Fourcade, “The Construction of a Global Profession.”

[29] Strathern, Audit Cultures.