Category Archives: Sociology

Head like a holist

From a Timothy Morton interview at Orion Magazine:

If you’re just a droplet in an ocean, and that ocean is more real than the droplet, well—poor little droplet. You totally don’t matter. I’m sorry to say this evil-sounding thing in an ecology magazine, but quite a lot of how we talk about the Gaia concept means, when you strip the nice, leafy imagery away, you’re just a component in a gigantic machine, and so are polar bears, and so polar bears are replaceable. Who cares if they go extinct? Mother Nature will evolve something else, another component. The normal holism is very often a form of mechanism.

But you have to be a holist to be interested in ecological beings such as meadows and coral. A meadow is a whole with lots of parts. Coral has lots of things in it that aren’t coral, like DNA and little striped fish. If you say there’s no whole, or that parts are more real than whole, then you’re agreeing with Margaret Thatcher that “society does not exist, it’s just individuals.” There is no biosphere. There is no Mother Earth. That’s not such a great pathway.

For me, if a thing exists, it exists in the same way as another thing. If there are such things as football teams, they exist in the same way as football players. They’re not more or less real than football players. So, there’s one football team. There’s lots of players on that team. Therefore, the whole is always less than the sum of its parts.

The holism that Morton describes here is far from being limited to our thinking about ecology; the sciences, and indeed the social sciences, are riddled with it. (The inverse of Thatcher’s nihilism, in which “the social” becomes both the source of and answer to every challenge, a sort of sociological alpha-and-omega, is still very prevalent — though I’d argue it’s slightly preferable.) As Morton points out, the problem is rooted in language, but perhaps more particularly in narrative; the systemic is difficult to narrate, because narrative — at least in its most popular and prevalent forms — needs heroes and villains, black hats and white hats, causes and effects. Climate change is particularly sticky in this regard. As I like to put it: no one’s to blame, but everyone’s complicit.

(Cf. Bruno Latour’s re-reading of Lovelock’s Gaia theory against the greater-than-the-parts holism of Earth Systems Science. As I understand it, a lot of the OOO philosophers regard Latour’s work as being quite close to their own thought; Harman in particular refers to Latour frequently, and has even written a book on him (which is still somewhere in my TBR pile). I’ve found what OOO I’ve read (which still isn’t much) to be interesting, but it lacks utility, a sense that I might use it to think with purpose beyond simply thinking; that utility is exactly what I get from Latour, and is presumably also the aspect of Latour’s work that makes him “close, but not close enough” for Harman. When I discussed this with an academic philosopher, he suggested to me that “social theory” was “a category created to contain would-be philosophers who in some way subscribed to Marx’s dictum that the point of philosophy was not to interpret the world, but to change it”; I was quite delighted by that, even after it was made very clear that it wasn’t meant as a compliment.)

The last days of the Next Big Thing

Today, social media enables young people to engage with culture and politics in all kinds of ways that have nothing to do with music; from the 1960s to the 1990s, music was pretty much all there was. It seems likely that, in the broad sweep of cultural history, the period circa 1955 to circa 2000 will be treated as a discrete epoch, and the cultish fanaticism that drove its successive countercultural waves – from Beatlemania to grunge, via punk, post-punk, New Romantics et al – will be seen as an analog-era curio. The regime of production and dissemination was the defining characteristic of the four-and-a-bit decades of its hegemony; the demise of that regime has led, ultimately, to the obsolescence of that particular iteration of pop culture.

TFW someone produces a good and coherent version of a vague theory you’ve been kicking around for a few years and done nothing with.

(Please read the whole thing before criticising it; one can acknowledge nostalgia without necessarily taking that feeling as an indication that things were actually and objectively “better” during one’s own salad days.)

The aesthetics of decentralisation

Despite the cacophony of political conjecture, the story of blockchain so far is a tale of financial speculation, in which the cash rewards reaped by bankers and venture capitalists are largely a result of the techno-utopian hype. Plus ça change, plus c’est la même chose. The prospect of decentralizing control does not absolve us of the hard work of politics, and blockchain has so far failed to transfer power to ‘We, the people’, whatever the white papers might suggest. Political economy cannot be replaced by technology alone. Today, technological wealth produced by and for society largely oils the machinery of capitalist accumulation. While we have yet to witness the decentralization of control, the collective wealth produced by the decentralization of production — that is, the ‘sharing economy’, the big data industry, and other platforms that monetize our daily social interactions — remains firmly in the service of exploitative (centralized) corporations. Whether in logistics or social media, it is not so difficult — nor even particularly radical — to imagine decentralized, peer-to-peer services which produce value by and for the commonwealth. Nonetheless, it would require governance, by nationalization or other means: the network is not identical to the commons, and nor should we hope for it to be.

A super-chewy long-read [via Jay Springett]: “Systems Seduction: The Aesthetics of Decentralisation” by Gary Zhexi Zhang, one of ten winners in the Journal of Design & Science “Resisting Reduction” essay competition.

Cold equations in the care vacuum

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.

I tend to agree with much of what she says in this piece, but for me at least the worry isn’t artificial intelligence taking over, but the designers of artificial intelligence taking over — because in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute. If algorithmic systems were simply discrete units, this might not be such a threat… but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there’s not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely. The result is that — while it may perhaps have been genuinely well-intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help — it’s horrifically dehumanising, as positivist systems almost always turn out to be when deployed “at scale”.

Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.

Stating the bloody obvious

… those tech creators and tech billionaires who are influenced by Science Fiction seem to assume that because things in Science Fiction work in the society and culture of those created future-set universes, there is an expectation bias that they will work in our real life and present, without much testing or oversight.

Gadgets, services, and technologies work in Science Fiction because it is fiction. They work because it is a narrative, and as such, their authors or filmmakers showed them working. They work because in fiction, it is very easy to make things work, because they aren’t real and don’t need to actually work.

Realizing the unreal from fiction will not make that realization work in the same way in real life. It can’t. The context, timeframe, and people are different. Most importantly, Science Fiction is fiction.

Astonishing, really, that this even needs to be said — though it clearly does need to be said.

However, the author’s relentless capitalisation of “Science Fiction” betrays what is likely the same superficial engagement with the genre demonstrated by those they are criticising: there’s plenty of science fiction in which the tech doesn’t work, and indeed which is totally about the tech not working, or working in ways orthogonal to its makers’ and users’ original (or at least originally stated) intentions; it’s also hard to square this piece with the effectively mainstreamed (but nonetheless totally wrongheaded) punditry to the effect that science fiction has gone too far in the tech-negative dystopian direction. But hey, when your research needs publicising and a venue has an obvious hook for your pitch, well, we’ve all been there, amirite?

That said, the author’s call for companies to hire social scientists to deal with these sorts of issues is something I’d support — though yer man Damien Williams makes the case far more effectively (not to mention eloquently). Meanwhile, re: science fiction, the distinction between the technological utopian mode and the critical utopian mode was old theory when I picked it up back in 2014, but it’s as relevant as ever. If people are going to turn to narrative forms as spaces of inspiration and reflection — and they clearly are, and clearly always have done — then we might as well use critical narrative form to counter the uncritical stuff, no?