
Fables of the deconstruction: Salmon (2020), An Event, Perhaps

Nice little biography of Derrida, this. A more manageable size than many of the man’s own books, it does a neat job of relating the philosopher and the philosophy, without being either a hagiography of the former or a full-bore “reading” of the latter. Which makes it perhaps the ideal introduction to Derrida’s thought for someone (such as myself) who has read fragments here and there, and has a vague idea of where ideas like deconstruction sit (both philosophically and pop-culturally), but who has yet to actually tuck into the texts themselves. Core ideas and themes are situated in the context of Derrida’s life and times, and of twentieth-century philosophy in general; that these are simplifications is inevitable, particularly with a thinker as Gordian and self-referential as Derrida. But that seems a fair price for what might stand as a rough map to prepare oneself for the exploration of a vast continent of ideas whose originality (and threat) are still manifest in the fear and loathing associated with his name—despite, as Salmon patiently explains, the complete absence of the relativist nihilism which is supposedly sourced in his work. This particular passage provides a succinct rebuttal to such accusations:

Of all the accusations, what seemed to sting most of all was the notion that his thinking was relativist, anything goes, and thus nihilistic. ‘Deconstruction’, he had reiterated in Memoires: For Paul de Man, ‘is anything but a nihilism or a scepticism. Why can one still read this claim despite so many texts that, explicitly, thematically and for more than twenty years have been demonstrating the opposite?’ Nihilism is an ontological claim that there is no truth. Deconstruction has no opinion on this. Nor does it on, say, pink elephants. What it does say is that we cannot know whether there is truth or not, which is an epistemological claim. So any assertion that there is truth is unprovable, and therefore whatever truth is offered should be analysed for the reasons why it is being offered.

Chapter 9, “Before the Law”

That these accusations originated with small groups of conservative academics in rival schools of philosophy and scholarship is a reminder that the monstering of challenging ideas so prevalent in the present, for all its arguably increased intensity, is not new, and neither are its methods. One is tempted to suggest that the hazard Derrida’s ideas presented to rationalist and analytical hegemony explains their repeated misrepresentation—though as Salmon notes, and as my limited experience in the academy also suggests, misparsings based upon shallow readings, or indeed upon no readings at all, may be a significant part of the problem, too: to paraphrase Salmon, dismissing Derrida as a prolix relativist charlatan saves one the challenge of actually trying to read him.

I was particularly intrigued by the thread of Derrida’s work which aimed to demonstrate that “philosophy” is to some extent a genre of writing—which is not at all to dismiss or denigrate it, nor to elevate, say, literature to a higher plane, but rather to argue that style and rhetoric are inextricable, and that metaphor is the root of all discourse. The parallels between analytical philosophy’s insistence on a very limited notion of truth in language and the “academic style” of writing (which, to belabour a point, is not a style which is taught, but rather a culture inculcated through osmosis, and just as opaque and frustrating as Derrida’s prose to anyone who has not normalised and internalised it) are notable: in both cases, a doctrinaire positivism masquerading as a principled refusal to dirty one’s hands with “theory” or epistemology. While I plan to go to the source for the full experience, Salmon’s exploration of this theme has served to validate my prior attempts to push against (if not actually avoid) the “academic style”, and to encourage me to bring more literary techniques to bear in my work to come. That’s unlikely to make things easy for me, of course… but hey, nothing worth doing is ever easy. Salmon’s story of Derrida—which, as he points out in Derrida’s own terms, is partial, in both senses of that word—doesn’t gloss over the difficulties and missteps (such as the de Man defence), but that serves to underline a consistency and fidelity which I find admirable, and worthy of some effort to emulate.

(I’d like to imagine I could emulate his terrifying levels of productivity, too, but, well, yeah, no. I wonder if that would even be possible now, to develop that sort of utter immersion in one’s work while caught between the relentlessness of the attention economy on the one hand and the neoliberalisation of the academy on the other? The sheer privilege of having the time to study deeply, without interruption from the demands of self-documentation and bureaucratic hoop-jumping, from the ubiquitous business ontology of modern scholarship… well, things are what they are, and one ends up where—and when—one is, and I’d do well to remember that in many respects I’ve rocked up to the plate with plenty more privilege than Derrida had when he started. The attitude is the thing to emulate, I guess, rather than the results.)

rocket from the crypto / two dead letters

Chairman Bruce appears to be repubbing longreads from the now-defunct Beyond The Beyond blog. This is a weird experience for me—distinctly atemporal, to use the man’s own term—because I recall reading this stuff at the time. And so it’s familiar and just-like-yesterday, but also so alienated and impossibly historical… I mean, I can’t recall the last time I saw anyone so much as mention the New Aesthetic, but I certainly remember a time when it seemed like everyone was talking about it. (That feeling of atemporal synchronicity is no doubt being compounded by my having spent the last couple of weeks going through some of my own published material from the same period… with the added irony that said act of retrospection was to the end of writing a chapter about Sterling for an academic collection.)

TL;DR—middle age is a headfuck. I kind of understand why my parents went so weird in their forties, now… though I’m not sure I yet forgive the particular direction in which they went weird. And they didn’t even have the internet!

Anyway, the essay in question is the Chairman’s response to the New Aesthetic panel at the 2012 SXSW, and the bit I’m clipping is less about the New Aesthetic than a side-swipe at AI that reads just as true (and just as likely to be ignored) today:

… this is the older generation’s crippling hangup with their alleged “thinking machines.” When computers first shoved their way into analog reality, they came surrounded by a host of poetic metaphors. Cybernetic devices were clearly much more than mere motors and engines, so they were anthropomorphized and described as having “thought,” “memory,” and nowadays “sight” and “hearing.” Those metaphors are deceptive. These are the mental chains of the old aesthetic, these are the iron bars of oppression we cannot see.

Modern creatives who want to work in good faith will have to fully disengage from the older generation’s mythos of phantoms, and masterfully grasp the genuine nature of their own creative tools and platforms. Otherwise, they will lack comprehension and command of what they are doing and creating, and they will remain reduced to the freak-show position of most twentieth century tech art. That’s what is at stake.

Computers don’t and can’t make sound aesthetic judgements. Robots lack cognition. They lack perception. They lack intelligence. They lack taste. They lack ethics. They just don’t have any. Tossing in more software and interactivity, so that they’re even jumpier and more apparently lively, that doesn’t help.

It’s not their fault. They are not moral actors and they are incapable of faults. It’s our fault for pretending otherwise, for fooling ourselves, for projecting our own qualities onto phenomena that we built, that are very interesting to us, but not at all like us. We can’t give them those qualities of ours, no matter how hard we try.

Pretending otherwise is like making Super Mario the best man at your wedding. No matter how much time you spend with dear old Super Mario, he is going to disappoint in that role you chose for him. You need to let Super Mario be super in the ways that Mario is actually more-or-less super. Those are plentiful. And getting more so. These are the parts that require attention, while the AI mythos must be let go.

AI is the original suitcase word; indeed, “suitcase word” is the term Minsky came up with for words that carry a whole jumble of different meanings packed inside them, and coming up with the term and identifying the problem didn’t get him anywhere nearer to solving it. I was writing a report on AI last year in a freelance capacity (for a foundation in a location whose commitment to the Californian Ideology is in some ways even greater than that of California itself, despite—or perhaps because of—its considerable geographical, historical and sociopolitical distance from California), and tried to make this point, drawing on the tsunami of critiques of AI-as-concept and AI-as-business-practice that have emerged since then, both within the academy and without… but, well, yeah.

I guess we just have to conclude that the sort of person who decides to make Super Mario their best man is not the sort of person who’s going to take it well when you point out that Super Mario is a sprite… no one wants to be the first to concede that the emperor is naked, particularly not when they’ve stripped off in order to join the parade. Nonetheless, given the residual enthusiasm for peddling that particular brand of Kool-Aid which persists among the big global consultancies, the McKinseys and their ilk, there are probably a few more years left in business models offering “Super Mario solutions” before smarter, faster-moving players start focussing on practical applications without the pseudo-religious wrapper. Or, I dunno, maybe not? Seems like people will believe whatever the hell makes them feel like a winner these days, and the unfalsifiable nebulousness of “AI” might make it all but bulletproof for exactly that reason. Every era has its snake-oils.

interrupt your text

McKenzie Wark interviewed at Bomb Magazine:

I’m interested in writing that engages with the way people read now. If you are a literary person, perhaps you and your friends are on Twitter or Instagram and share photos of favorite passages from the books you happen to be reading. I certainly do. So, I wanted the text to read like a feed. I think we read texts in juxtaposition now. I make those juxtapositions intentional. I interrupt my text with my favorite writers who sometimes seem to comment or provide a contrast or who describe what I am failing to describe and do it better.

Interesting observation from a writer whose work I’ve long been inspired by. That said, I think this nascent tradition had its foundations laid in the golden age of blogging, which was often heavy on the blockquotes as well as the hyperlinks… and that was in turn surely influenced by the telos of academic texts, if not necessarily their style. A dialectics of style, perhaps?

Also wonder if this isn’t perhaps a way of short-circuiting the notorious “anxiety of influence”… instead of flinching from the inescapability of the megatext, make your way through it like a forest, hacking through undergrowth or racing through clearings as necessary, dodging wolves and befriending other adventurers along the way.

(The emerging genre of “theory fiction” appears to be one expression of this instinct… I’m thinking particularly of Sellars’s Applied Ballardianism here, but mostly because that’s the only example of the genre I can confidently claim to have encountered on its own terms. One might counterclaim that theory fiction is just autofiction for the overeducated, I suppose… but what else are we meant to do with the multiple self-subjectivities that our scholarship has cursed us with, eh?)

a cranky aspiration

Chairman Bruce on AI ethics at LARB:

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.