In between nervously freaking out about the future of higher education, and of education more generally, I’ve been thinking about possible tomorrows. Will we come back from this? Will we rebuild the institutions we are watching collapse before our eyes? Throughout the turmoil, I’ve been a pessimist. Entropy goes one way, and reversing its effects takes energy that, at least at this moment in time, is nowhere to be found.
My concerns have finally coalesced into a much more structural explanation, the kind of account I am partial to. Writing Science in Turbulent Times, I’ve relied on an argument about elite disruption that explains some of the shifts in patronage for science over the last 30 years. TL;DR: the reason science has received less public support, and the reason higher education has seen the strong state support of the past dwindle on a per capita basis, is that the elite alliances and networks that once underpinned politics in the US have unraveled, making the kind of collective action necessary to support big science largely impossible. This isn’t entirely my argument. It derives from combining Mark Mizruchi’s and Olúfẹ́mi Táíwò’s contributions on elite fragmentation and elite politics, respectively.
What makes my pessimism even stronger is that this structural explanation intersects with yet another broad structural process involving elites: the rise of AI. In recent years, economic elites have pivoted toward AI as a one-stop shop for the future. Everything can be solved by AI, from production chains and logistics to governance and education. Crucially, even science, the production of new knowledge not present in the training sets that feed LLMs and generative AI, can be captured by artificial intelligence. We’ve seen the first phases of this already with lab automation over the past two decades, which dramatically increased testing throughput in genetics and pharma through the introduction of new technologies. AI merely extends this one step further, into the production and testing of epistemic claims.
We all know that this automation is ultimately impossible, at least to the point where AI could substitute for human scientists. But this is not what technological elites believe. If they can replace radiologists, why not astrophysicists? AI offers the promise of eliminating the messy politics of having to deal with, and support, scientists and their institutions.
These elites are unlikely to change, whatever happens in future elections. Their absurd vernacular philosophies of knowledge are equally durable. Ballot-box politics may change support at the margins, a billion here, a billion there, but it is unlikely to produce the transformative relations between science and the state that we saw in the past.