cultural-fracking-as-a-service (or: an abjuration of “artificial intelligence”)

I’m currently in the nuts-and-bolts phase of booting up my consultancy practice: doing stuff like establishing the business as a legal entity, starting bank accounts, buying domain names and (re)making websites. It’s the sort of stuff that makes a heretofore conceptual goal very concrete, very quickly: my business is actually a thing-in-the-world now. As a result, I’m finding myself confronting the practical ethical questions attendant on being in charge of an entity that does things in the world. I need to decide not just what that entity will do, but also how it will do those things.

One of those decisions has been that I will abjure the use of any so-called “artificial intelligence” tools in my practice, in perpetuity.


This is a pretty consequential decision, but in many regards it was easy to make. I’m going to try not to over-psychoanalyse it, but I strongly suspect that my identification with writing as central to what I do and who I am is a big part of it. My initial reaction to the current wave of chatbots and image generators was, to be frank, one of genuine revulsion: not just at the “content” they produce, which seems to me generic in both senses of that term, but also at the way they work.

There are definitely a range of positions on this stuff, and I’m not going to relitigate the still-ongoing debates in this post. Rather, I’m just going to state my own position: “artificial intelligence” is a sleazy misnomer that plays on legacy skiffy imaginaries for its discursive stickiness, and it is being used to legitimise and monetise a form of highly automated plagiarism-at-scale. More generally, it is the latest in a succession of extremely cynical gold-rushes, the winners of which—as in every gold-rush—are the guys selling shovels and/or dubious land deeds, who will be long gone by the time the mania fades and the loans come due.

But it’s that plagiarism-at-scale that causes the deepest revulsion in me, I think. Friend-of-the-show Jay Springett coined the concept of cultural fracking as a way of explaining how contemporary culture seems unable to do anything other than reboot and jazz up ideas from a pre-internet past. “AI”, then, is cultural-fracking-as-a-service: sucking up pulverised granules of recent cultural production and regurgitating them as a samey slurry of “content” whose appeal—which I suspect, or perhaps just hope, will be short-lived—is rooted not in originality (which is, after all, a very vexed concept anyway, per Walter Benjamin et passim) but in a sort of uncanny familiarity: the queasy libidinal appeal of the simulacrum.

I do not believe “AI art” to be art. I do not believe “AI writing” to be writing. These are less ontological claims than ontological choices; they are made with my heart rather than with my head.

No, this is not a rational argument. But my choice to make this irrational argument is, paradoxically perhaps, very rational.


This question is particularly important, I believe, given my work is concerned with futures. One reason I’ve always been leery of the world of business in general is a deep-seated belief that, to quote Saint Ursula, “how you play is what you win”. Another way to put it might be the line attributed to Gandhi: “be the change you want to see in the world”, right?

Well, there’s a corollary or restatement of that which is perhaps even more important: “don’t be the change you don’t want to see”. I don’t want to see a future in which artists and writers struggle even harder than they do today to get paid for their work. As such, I’m not willing to participate in a present which enacts that future as a done deal.

This is therefore also a refusal of a still-prevalent technological determinism: the argument that says that capital-P Progress is a) an unalloyed and unquestionable good, and b) pretty much inevitable anyway, so why fight it? This is closely related to the schism in futures practice between, on the one hand, the rationalist and deductive scenarios-based approach, which focusses on probable and plausible futures, and, on the other hand, the imaginative and inductive foresight approach, which is (or at least can be) more focussed on desired futures.

The problems with plausibility as a guide to futuring should be obvious: how’s the weather where you are right now, hmm? This warming world was understood to be plausible—nay, even probable—round about the year I was born; however, alternatives to the profit to be made from that warming were deemed to be less plausible. I’m not interested in plausible, probable futures any more, because the probable and plausible futures on offer are ones where we basically carry on as we are, with added handwringing. I’m interested in possible futures—the futures we actually want, rather than the ones we’ve been convinced to accept as inevitable.

I am working to envision and enact futures in which we don’t chew through the planet on which we evolved for the sake of a few more decades of comfort for those who’ve always been the most comfortable. In much the same way, I am working to envision and enact futures in which artists and writers can make a living from doing what they’re good at. Fracking, whether cultural or mineralogical, is monstrous to me. I will do whatever I can to avoid partaking in its outputs.

(And yeah, yeah, “there’s no ethical consumption under capitalism”, okay—but if you’re reading that cliché as justification for giving up on ethics, then I humbly submit that you’ve misinterpreted it entirely. YMMV, &c &c.)

As well as the capital-P Progressive narrative of inevitability, there’s also the more defeatist crushed-Left narrative. You know this one: “AI” is Chekhov’s Gun, right? Like, maybe we’ll manage to push for some regulations or what-have-you, but it’s not going to go away, so the best thing writers and artists (and musicians, and whoever else) can do is grudgingly submit to the march of technocapital: either you integrate this stuff into your practice, or you get your lunch eaten by those who bite the bullet sooner than you. That gun’s gonna go off in the final act, regardless of your high-falutin’ ethics; might as well be the one at the trigger end rather than the one looking down the barrel!

Well, maybe. But stories do not always end in the way that is most obviously implicit in their opening set-up. Wouldn’t it be a much more satisfying story—subversive of exactly this grim, dull determinism, of which we’re all so tired—if the gun were fired in the final act, but at a bear (or even a robot) run amok, rather than at a human being? What if the trigger were pulled, and the gun thereby revealed as a theatrical prop, a concretised-metaphor punchline to a joke about how we thought about guns in a time not so distantly past?

(It’s interesting how often vaguely literary metaphors come up in these debates, actually. Another favourite form of the defeatist narrative is “the genie won’t go back in the bottle”—though that conveniently overlooks a common version of that story, wherein the genie always goes back into the bottle after you’ve (mis)used your allotted quota of wishes, never emerges again, and leaves you mired in the consequences of your myopia. You want a literary metaphor that fits better? The fingers on the cursed monkey’s paw will never uncurl. That one’s on the house.)


The more pragmatic critique of this position would be to say I’m doing something akin to a horse-buggy manufacturer circa 1910, doubling down on a decision to not shift over to producing carriage-work for Henry Ford’s factory down the road. “Some futurist you are, mate, if you can’t see where things are going!”

But that’s exactly wrong. I can see very clearly where we’re going, and I don’t want to go there. That’s why I’m not being the change I don’t want to see in the world.

That’s why my business will be founded on a pledge never to use “artificial intelligence” tools, nor to outsource to anyone else who does.

Perhaps that makes its failure more likely than not. But not to try makes its failure—and the failure of those possible futures—inevitable.
