In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.
Very interesting long paper by Matteo Pasquinelli; going back through Marx’s notion of the general intellect, he shows that none other than yer man Babbage theorised computing systems not only as a concretisation of labour but also as a crystallisation of preexisting biases in the workforce. Everything old becomes new again.
… the distinction between manual and mental labour disappears in Marxism because, from the abstract point of view of capital, all waged labour, without distinction, produces surplus value; all labour is abstract labour. However, the abstract eye of capital that regulates the labour theory of value employs a specific instrument to measure labour: the clock. In this way, what looks like a universal law has to deal with the metrics of a very mundane technology: clocks are not universal. Machines can impose a metrics of labour other than time, as has recently happened with social data analytics. As much as new instruments define new domains of science, likewise they define new domains of labour after being invented by labour itself. Any new machine is a new configuration of space, time and social relations, and it projects new metrics of such diagrams. In the Victorian age, a metrology of mental labour existed only in an embryonic state. A rudimentary econometrics of knowledge begins to emerge only in the twentieth century with the first theory of information. The thesis of this text is that Marx’s labour theory of value did not resolve the metrics for the domains of knowledge and intelligence, which had to be explored in the articulation of the machine design and in the Babbage principle.
Following Braverman and Schaffer, one could add that Babbage provided not just a labour theory of the machine but a labour theory of machine intelligence. Babbage’s calculating engines (‘intelligent machines’ of the age) were an implementation of the analytical eye of the factory’s master. Cousins of Bentham’s panopticon, they were instruments, simultaneously, of surveillance and measurement of labour. It is this idea that we should consider and apply to the age of artificial intelligence and its political critique, although reversing its polarisation, in order to declare computing infrastructures a concretion of labour in common.
We have consistently overestimated what computation is capable of throughout history, whether computation was seen as an algorithmic method executed by humans, or a process of automated deduction realised by a machine. The fictional record is crystal clear on this point.
Instead of imagining machines that can do a task better than we can, we imagine machines that can do it in the best possible way. When we ask why, the answer is invariably some variant upon: it is a machine and therefore must be infallible.
This is absurd enough in certain specific cases: what could a ‘best possible poem’ even be? There is no well-ordering of all possible poems, only ever a complex partial order whose rankings unravel as the many purposes of poetry diverge from one another.
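The incomparability claim above can be made concrete. A minimal sketch, in which the two "purposes" of poetry and all the scores are invented for illustration: when items are ranked along several diverging criteria at once, the natural ordering is the product order, and two items can each fail to dominate the other, so no single "best" exists.

```python
# Two hypothetical "poems" scored under two diverging purposes.
# The purposes and the scores are invented purely for illustration.
scores = {
    "poem_a": {"clarity": 0.9, "ambiguity": 0.2},
    "poem_b": {"clarity": 0.3, "ambiguity": 0.8},
}

def dominates(x, y):
    """x dominates y iff x is at least as good on every purpose
    and strictly better on at least one (the usual product order)."""
    return (all(x[k] >= y[k] for k in x)
            and any(x[k] > y[k] for k in x))

a, b = scores["poem_a"], scores["poem_b"]

# Neither poem dominates the other: they are incomparable elements
# of a partial order, so "the best possible poem" is undefined.
print(dominates(a, b), dominates(b, a))  # False False
```

As soon as the criteria diverge, you get a Pareto frontier of mutually incomparable candidates rather than a single maximum, which is the "complex partial order" in question.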
However, the deep and seemingly coherent computational illusion is not just that there is a best solution to every problem, but that there is a best way of finding such bests in every circumstance. This implicitly equates true AGI with the Godhead.
“For Singularity to have a positive outcome requires a belief that, given enough power, the system will somehow figure out how to regulate itself. The final outcome would be so complex that while we humans couldn’t understand it now, “it” would understand and “solve” itself. Some believe in something that looks a bit like the former Soviet Union’s master planning but with full information and unlimited power. Others have a more sophisticated view of a distributed system, but at some level, all Singularitarians believe that with enough power and control, the world is “tamable.” Not all who believe in Singularity worship it as a positive transcendence bringing immortality and abundance, but they do believe that a judgment day is coming when all curves go vertical.
Whether you are on an S-curve or a bell curve, the beginning of the slope looks a lot like an exponential curve. An exponential curve to systems dynamics people shows self-reinforcement, i.e., a positive feedback curve without limits. Maybe this is what excites Singularitarians and scares systems people. Most people outside the Singularity bubble believe in S-curves: nature adapts and self-regulates, and, for example, when a pandemic has run its course, growth slows and things adapt. They may not be in the same state, and a phase change could occur, but the notion of Singularity—especially as some sort of savior or judgment day that will allow us to transcend the messy, mortal suffering of our human existence—is fundamentally a flawed one.”
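Ito’s point about the slopes is a standard mathematical fact, and easy to check numerically: early on, a logistic (S-curve) is nearly indistinguishable from a pure exponential, and only later does the carrying capacity bite. A minimal sketch, where the function names and the parameters (growth rate `r`, carrying capacity `K`) are mine, not Ito’s:

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Pure exponential growth: positive feedback with no limits."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=1000.0):
    """Logistic (S-curve) growth: the same early dynamics, capped at K."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable...
for t in (0, 2, 4):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:8.2f}  logistic={s:8.2f}  ratio={s/e:.3f}")

# ...but much later the S-curve has saturated near its carrying
# capacity K, while the exponential has run off without limit.
print(f"t=30  exp={exponential(30):.0f}  logistic={logistic(30):.0f}")
```

Which is exactly why an extrapolation made from the foot of the slope cannot, by itself, tell you whether you are looking at a runaway or at a system that will self-regulate.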
Over the weekend John Naughton at Teh Graun provided some much-needed deflation regarding the religion of machine learning and “AI”. I am in full agreement with much of what he says — indeed, I have been singing from that songsheet for quite a few years now, as have a number of other Jonahs and Cassandras.
However, I feel the need to take polite objection to Naughton’s misrepresentation of Clarke’s Third Law. (You know the one: “any sufficiently advanced technology is indistinguishable from magic”.) While it’s quite correct to say that the thought-lords of Silicon Valley (and their PR people) have peddled Clarke’s Third as justification for and endorsement of whatever it is they’ve decided they’re trying to do this week, to assume that’s how Clarke meant it to be used is to do the man a disservice, and indeed to misparse the aphorism in exactly the same way that the techies have. (This seems to happen surprisingly often.)
The thing is, no one believed less in magic than did Clarke; those of a similar age to myself may recall him as a dogged debunker of woo and myth, both in books and on television. Firstly, Clarke’s Third does not conflate magic and technology; on the contrary, it merely points out that to anyone not initiated into either mystery-system, both mystery-systems are equally opaque with regard to cause and effect. Or, in other words, both magic and technology seem miraculous unless you have an understanding of how the trick is performed.
Which leads us to the second point: when Clarke said “magic”, he meant stage magic: illusion, prestidigitation, misdirection. He didn’t believe in the supernatural (though he took a while to come to that position, admittedly, after an early fascination with the paranormal), but he understood the power of showmanship when combined with a lack of knowledge in an audience — and he recognised that technology’s appeal lies exactly in its seeming magicality, its something-out-of-nothingness; that’s how you sell it.
It was true in the time of Edison and Tesla, and it’s still true now, that “technology” (which is itself a suitcase word that has come to refer to shiny consumer products rather than sociotechnical systems of practice) is largely an obfuscatory front-end to the provisioning capacities of infrastructure. That’s why Edison, cunning bastard that he was, worked so hard on developing usable light-bulbs: he understood that infrastructure is too abstract a proposition, but that applications are an easy sell. As such, Clarke’s Third Law is best understood as a proleptic critique of solutionism — though I suspect Clarke himself might have balked at that characterisation. (He was rather more an optimist than I am.)
There’s a lot more to this riff, and I’m currently rather too busy trying to find some gainful employment to write about it at length — but if you’ve 45 minutes to spare, and you’d like the full unpacking of Clarke’s Third Law as it relates to technology and infrastructure in the 21st Century (all wrapped up in a furious critique of transhumanism, which is basically Clarke’s Third elevated from mere business model to the status of a religion without a god), then y’all might want to watch this video of a talk I gave in Munich last year: