What better way to spend your Good Friday morning than putting together a blog post on the topic that you most love to hate, eh?

(Yes, it’s a post about “AI”, so feel free to skip it; I wish I could.)


I want to look at a couple of recent bits from Mark Carrigan, who is the only academic I’m aware of who has tried seriously to engage critically with the LLM phenomenon while at the same time going long on actually using the things. Carrigan recently published a book on academic LLM use, and in the last few days has jettisoned the best part of another, unfinished book “about cultivating care for our writing, as opposed to rushing through it with the assistance of LLMs” in the form of a long run of blog posts (listed here, should you want to go and read them all). His reasons for abandoning it are, I think, quite telling:

I felt I had something important to say about the personal reflexivity involved in working with large language models, but in recent months I’ve realised that I lost interest in the project…

[…]

Instead my plan is to focus on doing my best intellectual work by focusing, for the first time in my career really, on one thing at a time. I’ll still be blogging in the meantime as the notepad for my ideas, but I’d like to take a more careful and nuanced approach to academic writing going forward. I’m not sure if it will work but it’s a direct outcome of the arguments I developed in this book. It was only when I really confronted the rapid increase in the quantity of my (potential) output that I was able to commit myself in a much deeper way to the quality of what I wanted to write in future.

I have not read all of the chapters, but I have skimmed a few, and I read the entirety of chapter 16, titled “We urgently need to talk about the temptations of LLMs for academics”—a fitting read for Good Friday, perhaps, if you’re inclined to such resonances. The whole piece is worth reading for context, but I think the core of it is here (with my emphasis):

What’s at stake here isn’t just a question of research ethics or academic integrity in the formal sense. There’s something more fundamental about our relationship to the creative process itself. The constraints we face as writers (whether time, energy or our own cognitive limitations) create the conditions in which genuine intellectual work happens. Without that productive friction, something essential to scholarly identity may be lost.

The use of machine writing in knowledge production is still in its infancy and, even with detailed empirical investigation, there is a limit to how far we could answer these questions in relation to an issue which is developing so rapidly. In raising them I’m trying to highlight the questions, rather than take a stance as to the answers. The assumption that human authoriality underpins what we write in monographs, edited books and journals is so axiomatic that it is difficult at this stage to think through what knowledge production looks like when it can no longer be assumed.

Well, perhaps not so very difficult, when approached from a different direction? I literally wrote a science fiction story in which I tried to wrestle with exactly this question. (Spoilers: it looks pretty ugly, even when seen from a few hundred years hence.)

But that aside, I don’t think it’s difficult to foresee the effects on academic knowledge production at all, as the effects outside of the academy are glaringly obvious: I mean, have you tried searching the internet recently?

Carrigan makes much of the circumstances of academic knowledge production (which is to say: shitty, and driven by the ever more perverse incentive structures generated by publish-or-perish), and he is very right to do so; what staggers me is that he has only done so at so late a stage in this otherwise incredibly deep investigation. From the same post, further down:

My suggestion is that difficulty is at the heart of how academics will tend to relate to the possibilities of machine writing. Conversational agents provide us with new ways of negotiating difficulties in the writing process. They can offer new perspectives on what we have written, help us elaborate upon what we are trying to say and provide detailed feedback of a form which would have previously required a human editor. The attempt to eliminate difficulty from the writing process will have downstream consequences for our own writing practice, as well as the broader systems through which (we hope) our writing makes an intellectual contribution.

With the utmost respect, I think that many others have been advancing that suggestion for years already. (Chalk one up for the hermeneutics of suspicion, if you’re still keeping score.)


Carrigan has perhaps been blinded somewhat by a tendency to think the best of others, and in so doing to assume that the majority of academics take the same pleasure in the process of writing that has characterised his own career. I wouldn’t credit myself as being anywhere near so joyful a writer as he—though I would say that I became much more of one thanks to my own stint in the sacred grove, where the process required me to write through the difficulty of writing—but nonetheless I find it hard to credit as revelatory the notion that some, or even many, academics might find writing difficult, and therefore be tempted by the short-cut of “machine writing”.

With few exceptions, most of my doctoral cohort discussed writing as difficult almost by definition; a clear majority of faculty of my acquaintance, also. And that’s only counting the social science and humanities people; I spent a lot of my time in an engineering department, where the writing part of producing papers was very much seen as a necessary but deeply unpleasant evil—an obstacle to the “real” work, rather than a part of it.

Returning to the question of “what knowledge production looks like when it can no longer be assumed that human authoriality underpins academic publications”, the temptation of difficulty reduction has long been manifest in practices such as paper-milling, and the palming off of writing tasks on postdocs, doctoral students and RAs. To start worrying, so late in the game, that LLMs might degrade the situation further can only be taken as evidence of a soul far less cynical than my own.

As I understand it, Carrigan’s argument for engagement with these systems is aimed at seeing them as supports to human writing rather than substitutions for it; intentionally or not, it’s a preempting of the tiresome Mister Gotcha routine, where someone pipes up to observe that the tech-genie won’t go back in the bottle. (OK, Ethan Mollick, take your bow—and then fuck off.)

That may well be the case—though as I’ve noted before, the thing with the genie stories is that the genie always goes back in the bottle eventually, usually at a point where the protagonist has realised that they should have thought a lot more carefully about their earlier wishes. If it is the case, though, then the only thing that’s going to prevent the rapid and hyperincentivised slopification of academia is a complete refiguring of not only the incentive structures of that sector, but a revolution in moral attitudes to technology in general, let alone LLMs in particular. Put another way: the decision to not avoid the difficulty of writing will require a widespread understanding and acknowledgement that the difficulty of writing is that which makes the writing that results worthwhile.

You’ll forgive me, I hope, for not holding my breath.


My personal exhaustion with the whole business aside, perhaps there’s hope. After all, Carrigan himself has come to a realisation that quality trumps quantity—and academia seems, from my outsider’s perspective at least, to be among the more staunch redoubts of LLM rejection. I like to think that there’s an instinctive revulsion there, an identification of the unheimlich in the instant of its appearance; that this should be labelled retrograde, as counter to the unquestionable imperatives of “innovation”, seems to me to be an endorsement of the position rather than a discrediting of it. But then, I’ve been out of step with the marching band of efficiency and optimisation for quite some time now.

So have many others, and there’s some comfort to be found in that—as well as in the steady encroachment of questions of ethics and morals into the heretofore “apolitical” realm of technology. Here’s Christopher Butler, for instance, channeling a little bit of Jay Springett:

I’m using loaded moral language here for a purpose — to illustrate an imbalance in our information-saturated culture. Idleness is a pejorative these days, though it needn’t be. We don’t refer to compulsive information consumption as gluttony, though we should. And if attention is our most precious resource — as an information-driven economy would imply — why isn’t its commercial exploitation condemned as avarice?

As I ask these questions I’m really looking for where individuals like you and me have leverage. If our attention is our currency, then leverage will come with the capacity to not pay it. To not look, to not listen, to not react, to not share. And as has always been true of us human beings, actions are feelings echoed outside the body. We must learn not just to withhold our attention but to feel disgust at ceaseless claims to it.

I clipped that because it’s an astonishingly uncanny echo of a line I noted down while reading Tom Chatfield’s Wise Animals a few nights ago, a book which itself feels like a step-change in writing and thinking about technology. “[The] manipulation of attention is about someone else trying to define the terms of your relationships with the world,” Chatfield writes, before observing that there is therefore “a fundamentally moral element” to the attention that we pay, and to the attention which is demanded of us (p84, emphasis in original).


All this was presumably part of my decision to stop reading Venkatesh Rao. This was a surprisingly hard decision to make—turns out it’s remarkably hard to close off a voice you’ve listened to for more than fifteen years—but the kicker was not so much his embrace of LLMs as central to his writing process, but rather his positioning of that decision as morally superior.

He’s not entirely wrong to note that attacks by writers on LLMs from the angle of copyright breach are self-serving and to some extent hypocritical, but I was disappointed that he seemed unable to understand that people who quite reasonably—and, I suspect, accurately—see their livelihoods and their art under threat might reach for the tools that have traditionally been used to protect said livelihoods and art. It felt to me like scoffing at factory workers going on strike in the face of mass layoffs due to automation—which is to say it felt cheap, and lacking the essential human sympathy that once made Rao that rarest of things, namely a tech consultant worth reading.

Well, no one ever thinks that the leopards are going to eat their face, do they? I wish him the very best of luck with his full-throttle self-centaurification, but I have decided not to indulge it further with my attention; I suspect he has already surrendered to the temptation to avoid the difficulty that was generative of his best work.
