Tag Archives: artificial intelligence

a metrics of labour other than time

Very interesting long paper by Matteo Pasquinelli; going back through Marx’s notion of the general intellect, he shows that none other than yer man Babbage theorised computing systems not only as a concretisation of labour but also as a crystallisation of preexisting biases in the workforce. Everything old becomes new again.

… the distinction between manual and mental labour disappears in Marxism because, from the abstract point of view of capital, all waged labour, without distinction, produces surplus value; all labour is abstract labour. However, the abstract eye of capital that regulates the labour theory of value employs a specific instrument to measure labour: the clock. In this way, what looks like a universal law has to deal with the metrics of a very mundane technology: clocks are not universal. Machines can impose a metrics of labour other than time, as has recently happened with social data analytics. As much as new instruments define new domains of science, likewise they define new domains of labour after being invented by labour itself. Any new machine is a new configuration of space, time and social relations, and it projects new metrics of such diagrams. In the Victorian age, a metrology of mental labour existed only in an embryonic state. A rudimentary econometrics of knowledge begins to emerge only in the twentieth century with the first theory of information. The thesis of this text is that Marx’s labour theory of value did not resolve the metrics for the domains of knowledge and intelligence, which had to be explored in the articulation of the machine design and in the Babbage principle.

Following Braverman and Schaffer, one could add that Babbage provided not just a labour theory of the machine but a labour theory of machine intelligence. Babbage’s calculating engines (‘intelligent machines’ of the age) were an implementation of the analytical eye of the factory’s master. Cousins of Bentham’s panopticon, they were instruments, simultaneously, of surveillance and measurement of labour. It is this idea that we should consider and apply to the age of artificial intelligence and its political critique, although reversing its polarisation, in order to declare computing infrastructures a concretion of labour in common.

Staring down Roko’s basilisk

Pete Wolfendale:

We have consistently overestimated what computation is capable of throughout history, whether computation was seen as an algorithmic method executed by humans, or a process of automated deduction realised by a machine. The fictional record is crystal clear on this point.

Instead of imagining machines that can do a task better than we can, we imagine machines that can do it in the best possible way. When we ask why, the answer is invariably some variant upon: it is a machine and therefore must be infallible.

This is absurd enough in certain specific cases: what could a ‘best possible poem’ even be? There is no well-ordering of all possible poems, only ever a complex partial order whose rankings unravel as the many purposes of poetry diverge from one another.

However, the deep, and seemingly coherent, computational illusion is that there is not just a best solution to every problem, but that there is a best way of finding such bests in every circumstance. This implicitly equates true AGI with the Godhead.
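
Wolfendale’s partial-order point is easy to make concrete. Here’s a minimal sketch in Python (mine, not his, using three entirely made-up scoring criteria) of Pareto dominance: one poem outranks another only if it’s at least as good on every axis, so most pairs simply can’t be ranked against each other, and “the best possible poem” dissolves on contact.

```python
from typing import NamedTuple

class Scores(NamedTuple):
    # Hypothetical criteria; real poetry has indefinitely many.
    clarity: float
    musicality: float
    novelty: float

def dominates(a: Scores, b: Scores) -> bool:
    """a dominates b iff a is at least as good on every criterion
    and strictly better on at least one (Pareto dominance)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Three hypothetical poems, with made-up scores.
haiku    = Scores(clarity=0.9, musicality=0.4, novelty=0.3)
ballad   = Scores(clarity=0.5, musicality=0.9, novelty=0.5)
doggerel = Scores(clarity=0.8, musicality=0.3, novelty=0.1)

print(dominates(haiku, doggerel))  # True: at least as good on every axis
print(dominates(haiku, ballad))    # False: neither dominates the other...
print(dominates(ballad, haiku))    # False: ...so the pair is incomparable
```

Any claim to have found “the best” smuggles in a particular weighting of the axes; change the weights and the ranking reshuffles.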

Cold equations in the care vacuum

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.

I tend to agree with much of what she says in this piece, but for me at least the worry isn’t artificial intelligence taking over, but the designers of artificial intelligence taking over — because in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute. If algorithmic systems were simply discrete units, this might not be such a threat… but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there’s not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely. It may perhaps have been genuinely well-intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help, but the result is horrifically dehumanising, as positivist systems almost always turn out to be when deployed “at scale”.

Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.

There is no meaningfully superhuman way to install a ceiling fan

In the history of both technology and religion, you find a tension between two competing priorities that lead to two different patterns of problem selection: establishing the technology versus establishing a narrative about the technology. In proselytizing, you have to manage the tension between converting people and helping them with their daily problems. In establishing a religion in places of power, you have to manage a tension between helping the rulers govern, versus getting them to declare your religion as the state religion.

You could say Boundary AI problems are church-building problems. Signaling-and-prayer-offering institutions around which the political power of a narrative can accrete. Even after accounting for Moravec’s paradox (easy for humans is hard for machines/hard for humans is easy for machines), we still tend to pick Boundary AI problems that focus on the theatrical comparison, such as skill at car-driving.

In technology, the conflict between AC and DC witnessed many such PR battles. More recently VHS versus Betamax, Mac versus PC, and Android versus iOS are recognized as essentially religious in part because they are about competing narratives about technologies rather than about the technologies themselves. To claim the “soul” of a technological narrative is to win the market for it. Souls have great brand equity.

A proper brain-hoser of a longread from the latest episode of Venkatesh Rao’s Breaking Smart newsletter*; religion, sociotechnical change, artificial intelligence, societal alienation, ceiling fans. So much to chew on that it took me an hour to pick a pull-quote; this sort of wandering between big-concept topics in search of connections and comparisons is completely typical of Rao, and it’s why I started reading him a long, long time ago.

* It appears you can’t see the latest episode in the archives, presumably until it is no longer the latest episode, because [newsletters]. Drop me a line if you want me to forward the email version on… or just trust me when I say that if you’re intrigued by the pull-quote, you should just subscribe anyway. Not like it’ll cost you anything, beyond a bit of cognitive bandwidth.

Systematized instrumental rationality

So AI and capitalism are merely two offshoots of something more basic, let’s call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view it as a next-level extension of processes that are already underway.

This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.

Mike Travers. This piece rather wonderfully reframes Elon the Martian’s latest calls for the regulation of artificial intelligence… you’re so right, Elon, just not in quite the way you think you’re right.

Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems awfully convenient, given that AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?

Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when really we’re green-lighting the further marketisation of absolutely everything.