
partially-automated bi-utopian communism

I’ve been quietly impressed by the ubiquity of Aaron Benanav across a variety of venues as he promotes his recently-published book Automation and the Future of Work, of which I received a copy a while back. Benanav’s been a guest on blogs and podcasts aplenty, and I’m glad to have read and listened to some of them, despite not yet having gotten round to the book itself. I suppose it’s a mark of a successful promotion drive that these encounters have encouraged me to bump the book higher in my TBR queue—though that of course assumes that the message of the book itself is of interest.

Which it very definitely is. His main point, all the more persuasive (for me, at least) for its lack of spectacle and hyperbole, is that the commonplace thesis that “robots are coming for our jobs!” is wrong. Benanav’s refutation starts from the erroneous use of unemployment rates as a proxy indicator for weak labour demand; the weakness of demand is very real, he argues, but the unemployment framing misrepresents the issue in a way that leads to mistaken conclusions, via a focus on the techno-utopian spectacle of OMG ROBOTS. The actual situation, he says, is “that 45 years of economic stagnation and welfare state retrenchment, rather than workplace automation, are the forces making for a severe global jobs problem. It is a problem that long predates recent high tech innovations.” But there’s an interesting bit of intellectual judo here, in that Benanav then goes on to say that we can achieve something like the fully-automated-luxury utopia promised by the automation evangelists without the need for the automation: instead, we reorganise and redistribute the work that still needs doing.

What I didn’t expect—but maybe should have?—was that Benanav draws a fair bit on utopian theory (an interest rooted in a life-long engagement with science fiction). He talks about two models of workers’ emancipation, the first being the old autonomy/worker-controlled-workplace vision, and the second being the perhaps more modern (and more utopian?) vision of being free of work, in the sense of being able to quit and do something else “beyond work”. The former is more appealing to those of us who do what we might call non-bullshit jobs, who value what we do but wish we were able to do it for a reasonable number of hours a week, without being steered by MBAs who don’t understand the work they’re trying to manage; the latter is more appealing to someone loading the dishwasher at Wetherspoons, or pushing pedals for thin tips on Deliveroo. Benanav’s point is that a successful vision of a reconfigured society needs to accommodate both of these utopian urges:

People within emancipatory politics are going to have to think about these two visions of emancipation and the way that they relate to work and the possibilities within them. The inspiring vision of the future will likely be one that speaks to both experiences: on the one hand, transforming meaningful work to be done better—with greater worker (and consumer) control—and, on the other hand, working less. There is a connection, although not a direct one, between these different concrete experiences of work and the sorts of places people find it easier or more meaningful to engage in struggle and conflict. My sense is that engaging with utopian literature, even the misguided techno-utopianism of the automation literature, is worthwhile as a way to build a stronger emancipatory movement.

Further on in this interview, he hints that this distinction is mirrored in a comparison of Morris’s News from Nowhere and Kropotkin’s The Conquest of Bread: in the former, work becomes the true fulfilment of life, while in the latter, the lack of work provides the space in which fulfilment can be found (or created).

So, yeah: while I can’t yet recommend the book on the basis of direct experience, I’m pretty sure that I will be able to do so once I get round to reading it. In the meantime, maybe take the opportunity to listen to the man make his own arguments? This episode of New Left Radio is ideal:

Cold equations in the care vacuum

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.

I tend to agree with much of what she says in this piece, but for me at least the worry isn’t artificial intelligence taking over, but the designers of artificial intelligence taking over — because in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute. If algorithmic systems were simply discrete units, this might not be such a threat… but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there’s not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely, with the result that — while it may perhaps have been genuinely well-intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help — it’s horrifically dehumanising, as positivist systems almost always turn out to be when deployed “at scale”.

Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.