A couple of days back, a listserv that I’m signed up to delivered the first example of a thing I’d heard whispers of from others: an invitation to a seminar aimed at teaching academics how to use the new crop of LLMs to make the writing of grant applications more “efficient”. This caused me to note publicly that I was
concluding that I’ve accidentally contrived to leave academia at the perfect moment, just before what laughably passes for its career advancement model goes full-on IT-augmented Thunderdome
On reflection, I suspect that the more likely result is that already cash-strapped departments will spuff away money to consultants for courses which will teach them little they couldn’t have learned in an hour of g**gling, and little of substance will actually change.
But there’s a deeper malaise in the sacred grove, and Alan Jacobs articulates it pretty well here:
I wrote, “If an AI can write it, and an AI can read it and respond to it, then does it need to be done at all?” Might we not ask the same question about our research, so much of which is produced simply because publish-or-perish demands it, not because of any value it has either to its authors or its readers (if it has any readers)?
Countless times in my career I have heard people talk about their need to publish research — to get tenure or promotion — in an AI-like pattern-matching mode: What sort of thing is getting published these days? What terms and concepts are predominantly featured? What previous scholarship is most often cited? And once they answer those questions, they generate the appropriate “content” and then fit it into one of the very few predetermined structures of academic writing. And isn’t all this a perfect illustration of a bullshit job?
Yes, I’m worried about what AI will do to academic life — but I also see the possibility of our having to face the ways in which our work, as students, teachers, and researchers, has become mechanistic and dehumanizing. And if we can honestly acknowledge the conditions, then maybe we can do something better.
Part of my decision to give up on the grant-money casino is right there in the middle paragraph. I suspect—though it may be giving myself too much credit—that my uncanny and unintentional life-habit of ending up working in industries during the period they are collapsing, disintermediating or being asset-stripped has made me better able to read the not-always-metaphorical writing on the wall.
Part of me would still like to be in a Jacobs-esque position, i.e. relatively securely employed, and therefore able to stay in the room as a contrary voice. There is a strong utopian dimension to the (post)humanistic academic ideal, beautifully (if complicatedly) extolled by Philip Wegner in his book Invoking Hope*; I believe very much in what academia could and should be, and which at times it even actually is. I would love to be a part of that.
Of course, it seems I can’t—and as such, perhaps this whole thing might be best (or at least fairly) dismissed as an enduring case of sour grapes. But I’m not so sure; discontent seems widespread, as do attacks from government on exactly those better elements of the academic project, which are unquantifiable, inefficient, or (when they’re honest) ideologically undesirable**. LLM-written grant bids, to extend Jacobs’s argument, will only serve to make it impossible to deny what everyone knows all too well anyway, but still feels honour-bound not to say aloud: the grant allocation system is stupid, unfair, and spectacularly wasteful of the expertise it is supposedly intended to evaluate and steer; it will be revealed as bullshit. And I can’t imagine I’m the only one who finds it impossible to do something so strenuous, wasteful and unrewarding when they know it to be bullshit.
In his piece on “AI” for Newsweek, Bruce Sterling leans hard on the metaphor of Mardi Gras, and observes that after Mardi Gras comes Lent. It seems reasonable to imagine that this boom-bust dynamic might extend beyond the specific issue of computing-at-scale marketed way beyond its actual competencies—if, indeed, computing-at-scale marketed way beyond its actual competencies is not itself something of a metonym for that still more systemic malaise, for which we all have so many different names right now.
I kind of hope so? I almost feel like I’m looking forward to Lent, to what Bruce characterises as a time of “gray shroud, ashes on your forehead”; it feels like what is needed now.
[ * — My review of Invoking Hope can be seen at Extrapolation by anyone with the necessary institutional access. That the vast majority of people who might end up reading this post have almost certainly not got that access is a deliciously ironic illustration of the problem under discussion. ]
[ ** Jacobs and I would almost certainly come to disagreement over the matter of exactly which ideologies are considered undesirable. But while we might see the content differently—a sort of seeing which, in a system of such size and complexity, is surely a function of where the observer is standing—I find it very telling that we both observe the same form, so to speak. ]