It’s Existential Question Time for Academia
The rise of the machines means that academics must ask themselves “Why Do I Exist?”
A recent article in New York Magazine describes the existential crisis that academia currently faces. In the summary for social media, the role of large language models—otherwise known as “artificial intelligence”—is highlighted. “In only two years, ChatGPT has unraveled the entire academic project,” is how New York Magazine’s team put it on Bluesky, the new safe space for academics following Elon Musk’s purchase of Twitter. The general tendency in the comments is to denounce AI for what it has done.
Yet James D. Walsh’s article makes it clear that the new technology is not the principal cause of academia’s demise. “The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT,” Walsh writes. “In a way, the speed and ease with which AI proved itself able to do college-level work simply exposed the rot at the core.” The academics who respond to his article by denouncing the large language models are therefore missing the point.
The problem is that college education has become a necessary rite of passage for getting a high-paying job. As Bryan Caplan has argued in his book Against Education, it is simply a way for young people to signal to potential employers that they have some basic competence and are, crucially, willing to obey. From this perspective, universities are little more than preparatory schools that give students the cultural capital that may, if they are lucky, provide an entry to the elite. They are not there to teach critical thinking.
The large language models have merely pushed universities towards their endpoint of becoming a simulation of what an education should, in theory, be. Students pretend to learn, while academics pretend to teach them. But now, at least, this can be automated. “Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays,” Walsh explains. “Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots—or maybe even just one.” This is, then, very much the hyperreal world of Jean Baudrillard. It is the “desert of the real,” in which critical thinking only occasionally peeks through the simulacra of learning that have covered it.
And similar problems exist on the research side. Since World War II, a blind peer review system has become ever more institutionalized. It incentivizes what Thomas Kuhn called “normal science”: incremental progress through small modifications of dominant paradigms. Academics who challenge dominant paradigms will, by contrast, struggle to be published and perish. In this way, the institutions of academia also prevent the kind of critical thinking that universities are supposed to promote.
A spate of recent studies illustrates the results in the economics departments of American universities. Their faculty are increasingly likely to believe the same things—there has been a notable increase in the degree of consensus on key issues. Groupthink is taking over. Indeed, they are also far more likely to work in groups than in the past, with the share of articles being published by individual economists falling. Worryingly, they are also more likely to use their own data sources to prove their collective beliefs. And there is some evidence that they are filtering their data to get the results they expect. Confirmation bias rules supreme.
In this way, academics have themselves become like large language models, only made of flesh and bone. They have been trained to confirm what is expected from the existing literature, as a way to get published and climb the academic hierarchy. Those at the top are the ones who have mastered this game, and they ensure that its rules are enforced. Critical thinking, again, is to be discouraged.
It is likely, however, that the large language models made of silicon and various metals will soon be able to do normal science as well as, if not better than, their human equivalents. And, crucially, they will be cheaper, too. There are already signs that the most advanced models can produce research papers that are better than what is written by many academics. The slop produced by the machines is, in other words, becoming indistinguishable from the slop being produced due to the publish-or-perish imperative in academia.
Academics will therefore have to ask themselves “Why Do I Exist?” They will increasingly wonder whether androids dream of electric sheep.
The answer is, I believe, that academics should exist to promote and actively engage in critical thinking. Unlike the machines, they should not just be reiterating the existing literature. If they put their minds to it, they can do what the machines cannot. And they can train their students to do it as well. Yet many are, I suspect, out of practice due to the incentives they face. Something therefore has to change if academia is not to be made obsolete by the new technology.