Category Archives: Infrastructural Theory

Roamin’ roads, redux

The WaPo [via the good folk at Moving History] reports on some interesting research which comes to a conclusion that (I hope) no regular reader here would be surprised by: current geographical levels of population and prosperity in Europe correlate strongly with the Roman road network laid down around two millennia ago.

Dalgaard and his colleagues marshal convincing pieces of evidence to argue in favor of a causal link that runs from ancient roadbuilding to modern-day prosperity. For starters, Roman roads weren’t typically built with trade in mind: their primary purpose was to move troops and supplies to locations of military interest. Trade was an afterthought.

“Roman roads were often constructed in newly conquered areas without any extensive, or at least not comparable, existing network of cities and infrastructure,” Dalgaard and his colleagues write. In many instances, the roads came first. Settlements and cities came later.

Just because I’m not a quant doesn’t mean I don’t like to see someone run the numbers and do the GIS work; indeed, it’s a pleasure to see an instinctive qualitative conclusion bolstered by solid research. As such, it’d be nice for someone to run a more detailed study of the same correlation focussed on Britain (for which some fine person did a tube-map style plot of Roman roads a while back)… and as an imminently unemployed self-employed researcher with experience in matters infrastructural-historical, I stand ready should anyone decide they’d like to fund such a study. Our operators are waiting for your call, etc etc.

In the meantime, have you read Jo Guldi’s Roads to Power? Because, by whatever gods (or the lack thereof) you may believe in, you really should — not only because it’s a brilliant book about exactly how those Roman roads formed the basis of the road network we have now (as well as how the civil engineer came to be a thing, and the relationship of infrastructural provision to the projection of domestic state power, and much more), but also just because it’s a brilliant book, full stop.

Cold equations in the care vacuum

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.

I tend to agree with much of what she says in this piece, but for me at least the worry isn’t artificial intelligence taking over, but the designers of artificial intelligence taking over — because in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute for such. If algorithmic systems were simply discrete units, this might not be such a threat… but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there’s not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely, with the result that — while it may perhaps have been genuinely well-intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help — it’s horrifically dehumanising, as positivist systems almost always turn out to be when deployed “at scale”.

Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.

No such thing as magic: misinterpreting Clarke’s Third Law

Over the weekend John Naughton at Teh Graun provided some much-needed deflation regarding the religion of machine learning and “AI”. I am in full agreement with much of what he says — indeed, I have been singing from that songsheet for quite a few years now, as have a number of other Jonahs and Cassandras.

However, I feel the need to take polite objection to Naughton’s misrepresentation of Clarke’s Third Law. (You know the one: “any sufficiently advanced technology is indistinguishable from magic”.) While it’s quite correct to say that the thought-lords of Silicon Valley (and their PR people) have peddled Clarke’s Third as justification for and endorsement of whatever it is they’ve decided they’re trying to do this week, to assume that’s how Clarke meant it to be used is to do the man a disservice, and indeed to misparse the aphorism in exactly the same way that the techies have. (This seems to happen surprisingly often.)

The thing is, no one believed less in magic than did Clarke; those of a similar age to myself may recall him as a dogged debunker of woo and myth, both in books and on television. Firstly, Clarke’s Third does not conflate magic and technology; on the contrary, it merely points out that to anyone not initiated into either mystery-system, both mystery-systems are equally opaque with regard to cause and effect. Or, in other words, both magic and technology seem miraculous unless you have an understanding of how the trick is performed.

Which leads us to the second point: when Clarke said “magic”, he meant stage magic: illusion, prestidigitation, misdirection. He didn’t believe in the supernatural (though he took a while to come to that position, admittedly, after an early fascination with the paranormal), but he understood the power of showmanship when combined with a lack of knowledge in an audience — and he recognised that technology’s appeal lies exactly in its seeming magicality, its something-out-of-nothingness; that’s how you sell it.

It was true in the time of Edison and Tesla, and it’s still true now, that “technology” (which is itself a suitcase word that has come to refer to shiny consumer products rather than sociotechnical systems of practice) is largely an obfuscatory front-end to the provisioning capacities of infrastructure. That’s why Edison, cunning bastard that he was, worked so hard on developing usable light-bulbs: he understood that infrastructure is too abstract a proposition, but that applications are an easy sell. As such, Clarke’s Third Law is best understood as a proleptic critique of solutionism — though I suspect Clarke himself might have balked at that characterisation. (He was rather more an optimist than I am.)

There’s a lot more to this riff, and I’m currently rather too busy trying to find some gainful employment to write about it at length — but if you’ve 45 minutes to spare, and you’d like the full unpacking of Clarke’s Third Law as it relates to technology and infrastructure in the 21st Century (all wrapped up in a furious critique of transhumanism, which is basically Clarke’s Third elevated from mere business model to the status of a religion without a god), then y’all might want to watch this video of a talk I gave in Munich last year:

It’s about data and smugness.

In practice, I don’t know that mainstream economists really care that much about the “ends” side of things. For instance, when they talk about “demand,” they aren’t talking about how many people actually want something or how badly they want it. For these guys, “demand” is the quantity of a commodity that people are willing and able to pay for, at a given market price. If ten thousand people in a wasteland are dying of thirst, and they have no money and no way of getting any money, what’s the “demand” for a sip of water in this particular market? It’s zero.

I’m talking about mainstream economics here. Since the so-called marginalist revolution at the end of the nineteenth century, the discipline has tended to ignore idle speculation about why we value this or that. There are exceptions, like hedonic shadow pricing, or research on entrepreneurship, or maybe some market design stuff. But mostly we’re just too weird and ornery. And besides, everybody’s different! Friedrich von Hayek is the big cheerleader for this perspective. And that shift was part of a bigger shift whereby mainstream economics became increasingly mathematical and “scientific.” The word “science” appears in Robbins’s definition, for instance. Much of the discipline, some would argue, also became increasingly less grounded in reality.

By contrast, science fiction — and other kinds of literature — is obviously extremely interested in getting inside people’s heads and hearts, and figuring out not only what people desire, but also why and how, and what it feels like. And how desires might change. And the deeper significance of those changes. When you write a novel, you’re not going to start off saying, “Okay, I am going to assume that my characters’ preferences will remain fixed.” So maybe that’s one reason the meeting between science fiction and economics can be quite fruitful. Science fiction has the same love for abstraction and modelmaking, and shares a certain sense of what “rigor” is … but it’s fundamentally about actual human experience in a way mainstream economics just isn’t.

The inestimable (and brilliant, and loquacious) Jo Lindsay Walton, interviewed on the intersection of economics and science fiction by Rick Liebling for The Adjacent Possible; a long read, but full of gems.

The above recapitulates, albeit in JLW’s own style, the argument I’ve been making for narrative prototyping in my own academic work: a model must be exposed to the social dimensions which it has necessarily externalised. Human behaviour is inherently unquantifiable — and indeed, the more we attempt to quantify it (and “manage” it on that basis), the more inhumane the results become.

What applies to economics applies equally to infrastructures; it’s wicked problems all the way down, and solutionism is a wicked problem in and of itself (as Keller Easterling also appears to be arguing). Until we understand the role of desire — in the DeleuzoGuattarean sense, but also to some extent in the weaponised-behavioural-psychology-AKA-marketing sense — in sociotechnical change, we will achieve nothing but an accelerating accretion of “solutions” which turn out to be new and intractable problems in their own right.

(See also Tainter on increasing complexity as a strategy for addressing problems arising from existing complexity; to paraphrase very broadly, it works, but it works ever less effectively every time, and only until it no longer works, at which point you’re wandering around the ruins of your civilisation wondering where it all went wrong.)

Dispositionally or structurally retrograde

… typically as designers, and in broader culture, we’re looking for the right answer. As designers we’re still very solutionist in our thinking; just like righteous activism that pretends to have the right answer, dispositionally, this may be a mistake. The chemistry of this kind of solutionist approach produces its own problems. It is very fragile. The idea of producing a ‘master plan’ doesn’t have a temporal dimension, and is not a sturdy form.

Having the right answer in our current political climate only exacerbates the violence of binary oppositions. Our sense of being right escalates this tension. I’ve been trying to think instead of forms which have another temporal dimension that allow for reactivity and a branching set of options—something like a rewiring of urban space. They aren’t vague – they’re extremely explicit – but they allow for responses to a set of changing conditions.

[…]

Regardless of spectacularly intelligent arguments, the bending of narratives towards ultimate, teleological ends – and the shape and disposition of these arguments – doesn’t work for me. Dispositionally or structurally it seems slightly retrograde.

I just don’t see change as singular or ultimate. It doesn’t come back to the one and only answer, or the one and only enemy that must be crushed.

There are many forms of violence, and it almost seems weak to train your gun on one form of it. There isn’t one singular way in which power and authority concentrate, and there’s not one giant enemy. Such thinking leaves you open to a more dangerous situation.

Keller Easterling interview at Failed Architecture, riffing on her latest book, Medium Design (which is apparently only available in print if you get a copy mailed from Moscow). Easterling is among the brightest of lodestars in my personal theoretical pantheon; her Enduring Innocence not only rewired how I thought about space, but also rewired my conception of how an academic text could be written.