Tag Archives: algorithms

Temporal delamination

This piece by Katherine Miller on (a)temporality in the age of the algorithm has been doing the rounds, and with some justification; it’s a strong piece of writing, and it’s grasping toward something important. I’d be lying if I said I didn’t find its implicit attempt to situate Trump as a sort of synecdoche for the state of the States somewhat wearying, but it’s eminently understandable, not least because life under 45 for anyone on the left-hand side of the fence is clearly very wearying also. (Furthermore, I imagine that anyone outside of the UK who reads UK-written essays of a similar thrust is pretty sick of everything magically boiling down to Brexit. Hell knows I am… and still I keep writing the fucking things.)

But ignore my carping, which is more in the nature of a stylistic note-to-self than a dig at Miller. It’s a good piece — though there’s a further irony in its being hosted at Buzzfeed, and accompanied by the sort of busy-but-pretending-not-to-be web design which sample-and-holds the very same temporal (gl)itchiness that the article describes.

The touch and taste of the 2010s was nonlinear acceleration: always moving, always faster, but torn this way and that way, pushed forward, and pulled back under.


The 2000s were a bad decade, full of terrorism, financial ruin, and war. The 2010s were different, somehow more disorienting, full of molten anxiety, racism, and moral horror shows. Maybe this is a reason for the disorientation: Life had run on a certain rhythm of time and logic, and then at a hundred different entry points, that rhythm and that logic shifted a little, sped up, slowed down, or disappeared, until you could barely remember what time it was.

I feel like the missing word in this piece is delamination: time hasn’t shattered so much as peeled apart, the shear layers shearing off one another under the centrifugal force…

I guess we can chalk up another point for Chairman Bruce on the prolepsis leaderboard. When did he first start talking about atemporality? It seems like a lifetime ago, but at the same time just yesterday…

Cold equations in the care vacuum

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.

I tend to agree with much of what she says in this piece, but for me at least the worry isn’t artificial intelligence taking over, but the designers of artificial intelligence taking over — because in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute. If algorithmic systems were simply discrete units, this might not be such a threat… but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there’s not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely. As a result — while it may perhaps have been genuinely well intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help — it’s horrifically dehumanising, as positivist systems almost always turn out to be when deployed “at scale”.

Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.

Systematized instrumental rationality

So AI and capitalism are merely two offshoots of something more basic, let’s call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view it as a next-level extension of processes that are already underway.

This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.

Mike Travers. Reading this rather wonderfully reframes Elon the Martian’s latest calls for the regulation of artificial intelligence… you’re so right, Elon, but not in quite the way you think you’re right.

Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems pretty convenient, given that AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?

Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when in fact we’re green-lighting the further marketisation of absolutely everything.