The medium is the message: why I’m sick of Twitter

I’ve been thinking a fair bit about McLuhan’s famous aphorism lately, and I’ve decided it explains why I am, in a very literal sense, sick of Twitter.

The point of McLuhan’s riff, as I understand it, isn’t that the content delivered by any given medium is irrelevant, but that the way in which any given chunk of content impacts on your sensorium is inevitably shaped by the form in which it is constrained. The form of Twitter is hypercompressed, caught up in a 140-character limit that even the SMS message from which it was inherited has largely transcended at this point; it is also, by default, a one-to-many broadcast format, a bullhorn in the town square. To be clear, that compression is a huge part of Twitter’s appeal and effectiveness, as is the bullhorn thing. The problem is the way in which the individual elements of massive ecosystems are obliged to evolve behaviours optimised for survival in said ecosystem. In the context of Twitter, or at least Twitter’s default public one-to-many mode, the optimal behaviour is the grabbing of attention, but that’s arguably true of any public medium; it was certainly just as true of the blogging era I pine for, and of newspapers, broadsides, and the popular ballad.

But the medium shapes the message: the innate terseness of Twitter inevitably requires the stripping away of nuance, the boiling down of any argument to a single sharp point; meanwhile, the ephemerality of Twitter means that one not only has to grab attention, but has to grab it RIGHTFUCKINGNOW, before someone else comes along with something equally grabby. As such, I think the polarisation of Twitter — which is not so much a monolithic Left/Right thing covering the entire userbase as a polarisation specific to each and every topic or event — is an inevitable consequence of the medium’s form, per McLuhan.

That said, I think this has been exacerbated by slower media deciding to plug themselves into Twitter in order to garner more eyeballs for their “proper” content. Most major media brands have an established political polarity already, and had become very adept at grabby compression long before Twitter; this is the art of the headline, of the sound-bite. What Twitter brought to that party was the ephemerality mentioned above: it’s not just about grabbing attention, it’s about grabbing attention RIGHTFUCKINGNOW. With money and metrics to throw at the problem, the brands have optimised this behaviour very quickly indeed — and individual users have absorbed many of the techniques involved by osmosis, much as one learns a local vernacular in order to remain part of the discourse. Level up, or get drowned out.

(Ironically, the corporate brand has never found Twitter as congenial a medium as the personal brand has, which — or so I’d suggest — is exactly why corporate brands are trying so hard, and often so laughably or grotesquely, to act more like personal brands, even as personal brands ape the corporate. The medium is the message; a crowded niche supports a limited range of physiological and behavioural adaptations. Evolve or die.)

This probably sounds more than a little bit “things ain’t what they used to be”, but y’know what? Things *aren’t* what they used to be. That’s how temporality works — and if noticing that difference and expressing a preference for the previous state of affairs is nostalgia, then fuck you, I’m nostalgic. However, I recognise that time’s arrow only points one way, and there’s no putting the genie back in the bottle. Twitter used to be a rhizome of watercooler conversations, and it still is — but the big numbers and fierce competition for attention, exacerbated by the monetisation of said attention, mean that Metcalfe’s Law has kicked in. Winner takes all; either you go big, or you go home.
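(A quick gloss for anyone unfamiliar with Metcalfe’s Law, with the caveat that this is my own back-of-envelope rendering rather than anything Metcalfe set down in exactly this form: the value of a network is held to scale with the number of possible connections between its n users, i.e.

V(n) ∝ n(n−1)/2 ≈ n²/2

so doubling a userbase roughly quadruples the gravitational pull of the biggest network in the niche, and the big fish eat the little ones.)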

There are backwaters and oxbow lakes, of course: Black Twitter, for instance, clearly provides a vital space for mobilisation for a demographic which desperately needs more such spaces, and the way in which messages from there can leak out into the global town square is clearly beneficial. But there is no avoiding the fact that those speech-acts are also polarised by definition, and hence attract speech-acts of the opposite polarity with all the inevitability of anions attracting cations. Compressed communications are highly reactive and volatile, to continue the chemistry metaphor, just as boiling down a solution will tend to push its pH further toward acid or base. One of the great joys of Twitter — because make no mistake, it is a space that has brought me a lot of joy and good friends and interesting information over the years — is the way in which it gives everyone a voice. But as anyone with a marginal opinion will tell you, that is also its great horror; for every SJW, a G*merG*tor.

(And as repulsive as you might find either one of those two tribes, know for sure that the tribe that revolts you feels an almost identical revulsion toward your tribe. The medium is the message; you don’t have the bandwidth to be anything more than the affiliation ((or lack thereof)) in your biog-blurb, and they don’t have the bandwidth to look any further than it. Black hats versus white hats is the only game in town. You are Other, and that’s that.)

There are also attempts to ameliorate the problem: private and/or alt accounts, curated lists, so on and so forth. But this reminds me a lot of what it was like to live in a compound in a foreign city, as I did for a few years as a child; the compound is quite literally an oasis of comfort and familiarity, but that only serves to enhance the fear of what’s outside. This seems a particularly cruel irony in the case of Twitter, where in order to flee the echo-chamber of the town square, we simply try to build a smaller echo-chamber with a more exclusive guestlist… and the hypothetical end-game of that paradigm, if you think about it, is a return to a non-town-square form. In order to “fix” Twitter, we’re trying to make it into not-Twitter. But even though the compound doesn’t feel like the city outside, it is still constrained by being a polder; it is inherently defined by what it is trying to exclude. The compound is a contradiction, and living in a contradiction is exhausting; the walls of the dyke must always be maintained and strengthened, even as that which they hold back is studiously ignored.

But like I say, maybe it’s just me, or just people with whom I share some significant psychological overlap. Lots of folk I know seem to be able to manage that contradiction, or find the town square vibe thrilling and congenial, and I wish them luck — hell, I think I maybe even envy them, in a way. But I’m prone to anxiety and depression; large crowds have always made me nervous, and mob phenomena are terrifying — although it is a function of my white male Anglo privilege that I’m much more likely to be part of a mob than its victim, and I fully acknowledge that I have less to lose by giving up on any given medium than those who lack the luck of birth and circumstance I have.

Nonetheless, I’ve had enough. The literature on CNS stimulants such as amphetamines or MDMA talks about the “law of diminishing returns”, whereby as one becomes habituated to a stimulant, one needs ever larger doses to recapture the incredible high of the first few hits; at the same time, the lows of the comedown become ever deeper, and arrive more swiftly. I am sick of Twitter like an addict eventually becomes sick of speed or pills, and I do not have the psychological fortitude to carry on regardless of the increasingly obvious cost to my mental health.

I’m not saying “Twitter = bad” — though that’s exactly how this post will be tweeted if anyone decides to pick it up out there in the Twittersphere. Twitter’s just another extension of the human sensorium, another cybernetic part of us — and like us, it contains both good and bad, contains the potential to enact both good and bad. But I do not believe it to be determinist to suggest that the form of Twitter, per McLuhan, means that it is inevitably a polarised black-and-white space… and I crave the detail and nuance that only come when there’s at least some bandwidth for greyscale, if not full colour.

Nor am I claiming that some mass renunciation of Twitter and a return to the slower, longer conversations of blogging would return us to some idyllic cultural golden age. The lid on Pandora’s box can never be closed; we can never go back, only forward. Perhaps Twitter will evolve into a slower, less brutally competitive ecosystem; perhaps a new ecosystemic niche will emerge; perhaps (and most likely, IMHO) social media will turn out to be yet another of the periodic new-medium fads our civilisation has been prone to, like the letter, the telephone, and so on. Only time will tell.

But I’ll be waiting the time out somewhere else, I think. As Michael Franti once reminded us, hypocrisy is the greatest luxury, and I’ll be keeping my Twitter account for announcing blog posts like this one — in the wider ecosystem of which Twitter is merely a subsystem, I literally cannot afford to disappear entirely, just as many do not have the luxury of even the partial renunciation which this essay announces. But privilege is at its worst when it is wasted, and the Skinner box that is Twitter is a demonstrable waste of whatever it is that I am.

So I’m done with it. Thanks for the memories, and I’ll be here if you need me.

The ghosts of infrastructures past

Somewhere along Brincliffe Edge Road in Sheffield, you can still see this wonderful infrastructural relic:

That’s a sewer-gas destructor lamp, of which there are maybe a dozen or so remaining in the city, though only a very few of them are a) undamaged, and b) still lit. Destructor lamps took a tricky infrastructural problem (the way in which noxious gases would accumulate in sewer sections near the tops of hills) and solved it in a way that served a useful secondary function (mixing said sewer gas with town-gas and burning it to light a street). I have a particular soft spot for this one because of the way it has been incorporated into the wall.

Rhyme vs. Reason

The why of my wanting you differs each time.
(The wanting, returning, is always the same.)
So strangle my reason and drown it in rhyme:
to query the telos of love is a crime.
(And I know there’s only one crook in the frame.)

The why of my wanting you differs each time;
this quiddity mocks me. Intense and sublime,
the language of love is revealed as a game
that strangles my reason and drowns it in rhyme —
so reason must die, then be buried in lime
and rise like a phoenix on feathers of flame.

The why of my wanting you differs each time;
in doing so, wanting refuses regime,
revealing the heart as a phoenix to tame.
I’ll strangle my reason and drown it in rhyme,
have faith in love’s meter and tempo, and chime
the bell in my chest at the sound of your name.

The why of my wanting you differs each time.
You tangle my reason; I crown you with rhyme.

The Metamedium

From a review at the Los Angeles Review of Books:

“Zielinski argues that what he calls ‘media’ (a dense composite notion encompassing both discourse and its material supports) has vanished from the horizon because it is now ubiquitous.”

Obviously I need to read the whole book to ground this claim more solidly, but nonetheless: it chimes with a chunk of my own infrastructural theory, in which I claim that what we think of as “media” – themselves highly complex and increasingly emergent socio-technical systems – flow over and through a medium-of-media, a metamedium. That metamedium is the tangle of infrastructural socio-technical systems to which I refer as “the metasystem”, which has also been pulling a very effective disappearing trick over the last century or so.

Indeed, these two systems are effectively the public and private faces of a single coin. The metasystem is the screen upon which the Spectacle is projected; it is the conceptual veil which allows the enduring Western fiction of the social/natural dichotomy to persist, the discursive prestidigitation which distracts us from the (spatially) distant consequences of our technologically mediated consumption.

Imaginable alternatives

Tobias Revell takes the mic at AmateurCities to give a designer’s take on critical futures and the SmartCity!* shibboleth:

“Too often we are confronted with visions and stories of the future that say: ‘In the future everyone will live this way or that way. In the future everyone will have these things. In the future everyone will want that thing.’ This can often lead to acceptance of the idea that the future has been predetermined by powers greater than us. We need to imagine instead, what futures might bring. There are dozens of other small, niggling but significant alternatives that can challenge the theoretical basis for how the future might open up to a plethora of possible imaginable alternatives. Take for instance; domestic solar power, crypto currencies, end-to-end encryption or personal manufacturing. They are but a few that have the potential to either become incredibly empowering or to be sucked into our current continuous monument.”

In the essay linked above, Tobias is wrestling with a problem that I’ve been facing in two different settings, namely science fiction criticism and futures studies. I’m working on one paper for Futures and another for Foundation which are (at the nuts’n’bolts level) an attempt to explain and analyse the structural rhetorics of narrative as used to portray the future; what they’re really about is the telos of telling stories about the future — the purposes for which we create narratives of futurity, and the purposes for which those narratives end up being used. That distinction is important: the whole point of the argument is that even the most thoughtfully structured narrative will be read, by some audiences, in a manner orthogonal or outright opposed to that intended by its creator.

What interests me most about speculative design and critical futures is what happens when they are misparsed, or shorn of their original context. Dunne & Raby make the point that speculative designs usually require some sort of framing (e.g. by exhibition notes or labels) in order not to be “misread” as either a real product proposal or a purely artistic piece. I can remember plenty of times when I cheerily blogged at Futurismic about some design-lab smartphone prototype as if it were a viable product, if not an actual production model, and I was far from alone in doing so; once those images were cut free from their original press releases or webpages, they became free-floating signifiers, which we would gamely situate into our (admittedly already hyperreal) cultural context. And therein lies the problem: it is human instinct to incorporate new narrative elements into our own ontological metanarratives, to make new things fit into the world as we already understand it. In times of great change and upheaval such as these, this is a constant process of revision and upgrade, like a *nix server automatically applying patches without ever needing to reboot.

That ontological integration effect is the thing that effective science fiction operates upon, I think — and, by extension, the thing that critical futures and speculative design operate upon; this is maybe what Suvin was on about with his “cognitive estrangement” riff – the jarring (thrilling? horrifying?) realisation that there is an ontological discontinuity between the world of the reader and the world of the reading. (Please note that Other Less Exclusive or Monolithic Theories of SF are Available; Suvin’s thing is just one piece of the puzzle.) Once the discontinuity is realised, it becomes a feature of the world of the reading, and thereby performs a sort of commentary or gloss on the reader’s world by proxy; this commentary is what we’re gesturing at when we try to describe what a science fiction novel or movie is “about”, at a level beyond a simple recounting of the main plot points.

This is also the mechanism by which the “flatpack futures” of glossy tech ads — and, in fact, almost all ads — work; in this case, the discontinuity created is the absence of the featured product or device in the viewer’s reality, a vacuum which is filled by a desire which assumes that possession of the diegetic prototype depicted in the foreground (e.g. a MacBook as thin as a fag-paper) will necessarily reproduce the implicit background features of the world of the text (a spacious, airy and seemingly pristine open-plan Californian home in summer, populated by healthy happy white people with time to consume conspicuously) in the world of the viewer. Advertising is notoriously ineffective in terms of shifting specific products, but far less thought is expended on the cumulative psychosocial effects of swimming in an amnion of unattainable futures, as we all do; perhaps the contemporary struggle to even imagine utopia, as identified by Fredric Jameson, is correlated with the sheer ubiquity of the utopian narratives of futurity with which we are bombarded perpetually, whether as ads, political manifestos, economic forecasts or whatever else.

So you see the problem, I hope: designers, critical designers, fiction writers, movie makers, copywriters and ad-makers, urbanists, architects and economists, futurists and critical futurists and all manner of related professions use exactly the same set of tools, but for very different ends. What I’m interested in is how the specific deployments of those tools, and the precise strokes or techniques with which they are applied, create desire and/or apprehension in the reader, regardless of intention. Answering this question will make it easier not only to choose the right tools to increase the likelihood of the desired reading, but also to identify exploitative narrative strategies; it’s the first analytical step toward an ethics of futurism, if you like.

[ * Readers in the academy will be aware that “Smart” (whether referring to cities, or seemingly anything else) is approaching the status of Infuriatingly Ubiquitous Funding-Call Buzzword, to the point that even the people promoting the funding streams in question end up making self-deprecating jokes about its inclusion. As frustrating as this is in the short term, it suggests that the word’s lifespan may be nearing its end; however, it further suggests that Smart has every chance of becoming the new Sustainable — a knee-jerk password, a hollowed sign with everything (and hence nothing) to signify. Selah. ]
