Category Archives: Sociology

The aesthetics of decentralisation

Despite the cacophony of political conjecture, the story of blockchain so far is a tale of financial speculation, in which the cash rewards reaped by bankers and venture capitalists are largely a result of the techno-utopian hype. Plus ça change, plus c’est la même chose. The prospect of decentralizing control does not absolve us of the hard work of politics, and blockchain has so far failed to transfer power to ‘We, the people’, whatever the white papers might suggest. Political economy cannot be replaced by technology alone. Today, technological wealth produced by and for society largely oils the machinery of capitalist accumulation. While we have yet to witness the decentralization of control, the collective wealth produced by the decentralization of production — that is, the ‘sharing economy’, the big data industry, and other platforms that monetize our daily social interactions — remains firmly in the service of exploitative (centralized) corporations. Whether in logistics or social media, it is not so difficult — nor even particularly radical — to imagine decentralized, peer-to-peer services which produce value by and for the commonwealth. Nonetheless, it would require governance, by nationalization or other means: the network is not identical to the commons, and nor should we hope for it to be.

A super-chewy long-read [via Jay Springett]: “Systems Seduction: The Aesthetics of Decentralisation” by Gary Zhexi Zhang, one of ten winners in the Journal of Design & Science “Resisting Reduction” essay competition.


Cold equations in the care vacuum

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.

I tend to agree with much of what she says in this piece, but for me at least the worry isn’t artificial intelligence taking over, but the designers of artificial intelligence taking over — because in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute. If algorithmic systems were simply discrete units, this might not be such a threat… but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there’s not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely, with the result that — while it may perhaps have been genuinely well-intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help — it’s horrifically dehumanising, as positivist systems almost always turn out to be when deployed “at scale”.

Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.

Stating the bloody obvious

… those tech creators and tech billionaires who are influenced by Science Fiction seem to assume that because things in Science Fiction work in the society and culture of those created future-set universes, there is an expectation bias that they will work in our real life and present, without much testing or oversight.

Gadgets, services, and technologies work in Science Fiction because it is fiction. They work because it is a narrative, and as such, their authors or filmmakers showed them working. They work because in fiction, it is very easy to make things work, because they aren’t real and don’t need to actually work.

Realizing the unreal from fiction will not make that realization work in the same way in real life. It can’t. The context, timeframe, and people are different. Most importantly, Science Fiction is fiction.

Astonishing, really, that this even needs to be said — though it clearly does need to be said.

However, the author’s relentless capping of Science Fiction betrays what is likely the same superficial engagement with the genre demonstrated by those they are criticising: there’s plenty of science fiction in which the tech doesn’t work, and indeed which is totally about the tech not working, or working in ways orthogonal to its makers’ and users’ original (or at least originally stated) intentions; it’s also hard to square this piece with the effectively mainstreamed (but nonetheless totally wrongheaded) punditry to the effect that science fiction has gone too far in the tech-negative dystopian direction. But hey, when your research needs publicising and a venue has an obvious hook for your pitch, well, we’ve all been there, amirite?

That said, the author’s call for companies to hire social scientists to deal with these sorts of issues is something I’d support — though yer man Damien Williams makes the case far more effectively (not to mention eloquently). Meanwhile, re: science fiction, the distinction between the technological utopian mode and the critical utopian mode was old theory when I picked it up back in 2014, but it’s as relevant as ever. If people are going to turn to narrative forms as spaces of inspiration and reflection — and they clearly are, and clearly always have done — then we might as well use critical narrative form to counter the uncritical stuff, no?

It’s about data and smugness.

In practice, I don’t know that mainstream economists really care that much about the “ends” side of things. For instance, when they talk about “demand,” they aren’t talking about how many people actually want something or how badly they want it. For these guys, “demand” is the quantity of a commodity that people are willing and able to pay for, at a given market price. If ten thousand people in a wasteland are dying of thirst, and they have no money and no way of getting any money, what’s the “demand” for a sip of water in this particular market? It’s zero.

I’m talking about mainstream economics here. Since the so-called marginalist revolution at the end of the nineteenth century, the discipline has tended to ignore idle speculation about why we value this or that. There are exceptions, like hedonic shadow pricing, or research on entrepreneurship, or maybe some market design stuff. But mostly we’re just too weird and ornery. And besides, everybody’s different! Friedrich von Hayek is the big cheerleader for this perspective. And that shift was part of a bigger shift whereby mainstream economics became increasingly mathematical and “scientific.” The word “science” appears in Robbins’s definition, for instance. Much of the discipline, some would argue, also became increasingly less grounded in reality.

By contrast, science fiction — and other kinds of literature — is obviously extremely interested in getting inside people’s heads and hearts, and figuring out not only what people desire, but also why and how, and what it feels like. And how desires might change. And the deeper significance of those changes. When you write a novel, you’re not going to start off saying, “Okay, I am going to assume that my characters’ preferences will remain fixed.” So maybe that’s one reason the meeting between science fiction and economics can be quite fruitful. Science fiction has the same love for abstraction and modelmaking, and shares a certain sense of what “rigor” is … but it’s fundamentally about actual human experience in a way mainstream economics just isn’t.

The inestimable (and brilliant, and loquacious) Jo Lindsay Walton, interviewed on the intersection of economics and science fiction by Rick Liebling for The Adjacent Possible; a long read, but full of gems.

The above recapitulates, albeit in JLW’s own style, the argument I’ve been making for narrative prototyping in my own academic work: a model must be exposed to the social dimensions which it has necessarily externalised. Human behaviour is inherently unquantifiable — and indeed, the more we attempt to quantify it (and “manage” it on that basis), the more inhumane the results become.

What applies to economics applies equally to infrastructures; it’s wicked problems all the way down, and solutionism is a wicked problem in and of itself (as Keller Easterling also appears to be arguing). Until we understand the role of desire — in the DeleuzoGuattarean sense, but also to some extent in the weaponised-behavioural-psychology-AKA-marketing sense — in sociotechnical change, we will achieve nothing but an accelerating accretion of “solutions” which turn out to be new and intractable problems in their own right.

(See also Tainter on increasing complexity as a strategy for addressing problems arising from existing complexity; to paraphrase very broadly, it works, but it works ever less effectively every time, and only until it no longer works, at which point you’re wandering around the ruins of your civilisation wondering where it all went wrong.)

A certain hermetically sealed quality

Like nightmares, dystopias have a certain hermetically sealed quality. By their nature, they are inescapable—a dystopia you can escape from is not a dystopia, it is the third hour of Love, Actually. The circumstances that create any brave, new world simultaneously cauterize its edges and destroy memories of the world before. In Nineteen Eighty-Four, as near as Winston can recall, “He had first heard mention of Big Brother… at some time in the sixties, but it was impossible to be certain. In the Party histories, Big Brother figured as the leader and guardian of the Revolution since its very earliest days. His exploits had been gradually pushed backwards in time until already they extended into the fabulous world of the forties and the thirties, when the capitalists in their strange cylindrical hats still rode through the streets of London… ” To an extent, this is also how history works, as unlikely ephemera like Donald Trump fluke their way into awful existence and, in doing so, retroactively annihilate our former, lingering sense of other possibilities. For instance: remember when it seemed inevitable we’d have our first female president? Remember when public racism resulted in an exile from public life? Remember when we still had a functioning EPA? Disasters are amnesiac in nature.


… the best, maybe only, way of resisting dystopias, is to keep in mind that it was not always thus. What has happened is an aberration, and the world worked a different way for a very long time. Dystopias—fictional and real—are perhaps unavoidable, but not irreversible. The cliché goes that those who forget the past are doomed to repeat it. Maybe it would be truer simply to say that those who forget the past are doomed.

Adam O’Fallon Price at The Paris Review. Not entirely sure he isn’t himself somehow relocating an uncritical liberal utopia to the past in this piece — in fact, I’m fairly sure he is doing so, though perhaps unwittingly, and that’s just as big a mistake as dystopianism — but the point about the amnesia of disasters is solid, and says something quietly profound (and profoundly disturbing) about our experience of temporality. Guy Debord might implicate the Spectacle in this phenomenon, and I’d be very willing to back him up on it.