A central element of the sociology of scientific knowledge à la Edinburgh is the principle of symmetry: accounts of success should also be able to make sense of failure; whatever explains why some things rise should also explain why others fall. Whatever social processes made neuroanatomy successful should also make sense of phrenology’s decline, without invoking some particularly magical attribute of their respective knowledge claims. Like most things Edinburgh, this seems an entirely reasonable sociological approach.

If anything describes the recent debates among academics in general, and social scientists in particular, about the use of Large Language Models, it is certainly not “symmetry”. Wherever folks stand, they fall into but a handful of categories in an increasingly polarized discussion. Some see LLMs and AI as the future, a fundamental challenge to the fabric of academia and knowledge-making in the modern world. Others drink some of the techie Kool-Aid, only to then decry the loss of the soul of research (i.e. these technologies sort of work, but that breaks something essentially good about our craft). And yet others see LLMs as nothing more than a gimmick or, as one notable researcher put it, a combination of “fraud, plagiarism, and cognitive offloading”. Symmetry? Nae way.

What we are missing is a discussion more productive than the one that characterizes much of the “public discourse” on AI. This applies in all directions. For those who see LLMs as a fundamental revolution, such a discussion might reveal important bottlenecks and organizational limits to how much AI can change things. For those concerned with the soul of our profession, it might highlight other pressures that are actually at work on academia and that provide the conditions of possibility for the use of AI in the first place. And for those who are and will forever be critical of LLMs, it might suggest better forms of action and organization than a generalized complaint about these technologies.

For example, while LLMs are still in their relative commercial infancy (they’ve been available for less than 5 years), we still really don’t know much about their overall usefulness. In some domains, like coding, LLMs seem to have become a breakthrough technology, allowing people to produce and test code at much higher rates than before and at notably lower cost. Whether these efficiency gains within a very specific segment of the industry translate into higher overall productivity is still unclear. The perceptions of business folks, for instance, don’t place much weight on AI as a fundamentally transformative general purpose technology. In fact, some evidence suggests that the use of AI may even harm both productivity and workers’ experiences, making its net effect on economy and society unclear. Estimates of AI’s impact are all over the place: from zero or even negative effects on productivity to productivity increases of 3.7% over the next half century. The fact is: we still don’t know nearly enough about how AI shapes labor and organizations in practice. Given the apparent wonders you can do with an off-the-shelf LLM, you can certainly see why some people would be optimistic! But some of those people were also optimistic about Apple’s Newton back in the day.

Those concerned that AI might erode the institution of knowledge also have some clear reasons for being pessimistic. We have all encountered far too many essays and papers that were obviously written by AI. There is a real risk that the way we consume and conduct research will also be fundamentally altered. AI-generated papers are bound to flood an already strained peer review system. Debates in metascience now contend with much lower barriers to conducting (not necessarily great) checks on published work. Students might no longer learn the way they did before, offloading their cognition onto machines that produce cheap simulations of knowledge. There is a lot of truth in these fears.

The problem with these specific concerns is that they seem to assume that LLMs are breaking something that was sort of working. However, dear reader, things were not actually working before. The peer review system that limped through the aftermath of the COVID pandemic was already in trouble before LLMs became as ordinary as a mediocre coffee. A system controlled by for-profit publishers that seek volume rather than quality–and to which we all contribute with publications and reviews–hasn’t changed despite courageous efforts to make knowledge more accessible and independent. Our institutions haven’t helped, either. Over the past 50 years, the number of publications per average researcher per year has more than doubled. There is, if anything, an over-proliferation of research that predates commercial LLMs (frankly, much of what gets published could have been seminar presentations or fireside chats). Yet the expectation of productivity is not something that we are actually, seriously, truthfully engaging with, given that pretty much our entire institutional logic continues to rely on a weird combination of status signals and publish-or-perish dynamics.

It’s also not clear whether LLMs have created a new problem in education or laid bare a longstanding issue. If students can cheat with LLMs, it’s partly because we deploy systems of assessment that can be gamed with these technologies. The multiple-choice exam and the short essay–canonical ways of assessing learning–reflect pedagogical choices made in the context of rapidly expanding enrollments and stagnant instructional support: in other words, these are the things we invented in order to deal with 200-student classrooms. Whether these are actually effective ways of assessing learning is something we don’t really ask ourselves (maybe we do, when we have to justify assessments in course proposals, and we use an almost LLM-like language to say “yes! I can guarantee that this 2,000-word essay is the best way of assessing learning objectives!”; LLMs are, in many ways, great bullshit detectors: if an LLM can do it reasonably meh, then maybe it was bullshit to begin with). Perhaps this should be a moment to think about what classrooms should look like given the constraints we currently have and the ones we will undoubtedly face. While any system we introduce may break with LLMs, the question really is about designing classrooms that give students few opportunities and incentives to cheat. Some students will cheat. But cheaters will be cheaters, and designing around them isn’t the most reasonable way of moving forward.

(I even wonder: are there legitimate use cases for AI in instruction? Could a Socratic chatbot, like the ones used in a pilot study at my institution, be of use? Is it possible to design a system that would create a ‘close-reading-like’ environment? Could tailored tutors that help students resolve questions without giving them the answers provide an additional resource that would make our 200-student lectures more effective, individually? Is AI more of an issue than, say, screens in classrooms? These are, above all, empirical questions.)

Which brings me to the critics. Concerns about the environmental impacts, intellectual property violations, and imbrications with surveillance infrastructures are all absolutely central. Don’t get me wrong: the companies commercializing LLMs are the same variety of predatory capitalists as those pushing fossil fuels into our energy systems. But that’s precisely the issue: the problems of AI aren’t intrinsic to AI alone but stem from its distinct corporate/capitalist logics. “Fraud, plagiarism, and cognitive offloading” describes much of capitalist enterprise today, within and without AI. Sure, it’s good to call out cases where AI is blatantly stealing, polluting, and making things dumber (though I would first check whether they weren’t dumb to begin with). But let’s try to tie the critique back to the broader critiques of how our economy works today. Maybe this exercise of excising the critique of the system from the critique of the affordances of the technology could even be useful in practice. If we were to imagine workers of the world coming together to produce an AI to deliver the Keynesian vision of a world of leisure and little work (a.k.a. Star Trek, but without space, or the military ranks, or the Borg), what would it look like? What things would remain? Which would be different? I’m sure the transformer architecture that underlies many LLMs would not be that different. But maybe the way it is used, and how information producers are compensated in the development process, would be radically different.

Symmetry doesn’t mean surrender; symmetry doesn’t mean the end of critique; symmetry does not mean wholesale acceptance. But maybe it does offer something that is missing from “the discourse”: a way of charting paths forward.