Via Chairman Bruce comes news that various ongoing driverless car experiments are quietly leaving town while everyone’s busy worrying about other things. If such solutionisms are even temporary casualties of the pandemic, then we’ve already found a silver lining to this particular cloud… as Sterling notes, it’s likely that the circumstances are providing a convenient excuse for pulling the plug on something that was massively overpromised in order to attract venture capital investment (and the innovation budgets of those cities lucky enough to actually have one). Might we see the “smart city” go the same way? We can but hope.
(Of course, there are good odds that the same grifters behind driverless cars etc. will now pivot to pandemic “solutions”… but as many have already noted, individualist solutions look absurd against a pandemic backdrop, which inevitably highlights the necessity of collectivist systems.)
Anne Galloway on more-than-human design:
… I’m not a believer that technology under capitalism will be the planet’s salvation, and I tend to part ways with (commercial?) designers and technologists who aim to design more “precision” agriculture through “intelligent” machines, and I’m constantly watching for bad omens. The ethos of the More-Than-Human Lab draws on Donna Haraway’s “staying with the trouble” and tries to go beyond the design of human-nonhuman interactions to reimagine human-nonhuman relations. For me, this means not trying to “fix” the world, and resisting both purity and progress to live well together through uncertain and difficult circumstances.
The deep irony (?!) is that indigenous cultures all around the world and many non-Western religions have always understood that nature and culture aren’t separate, and that humans aren’t superior in our abilities or experiences. Western intellectual history and industrial capitalist societies have not allowed this kind of thinking to take hold except for amongst a fringe few, and I think this has played a pivotal role in the current climate crisis and the impoverished range of corrective measures on offer.
Chairman Bruce on AI ethics at LARB:
In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.
Interesting interview with Anna Wiener, The New Yorker‘s woman-on-the-ground in Silicon Valley. Her critique is informed by having actually spent a number of years in the trenches of tech, always on the non-coding side of the payroll.
Today’s iteration of Silicon Valley seems ahistorical, anti-intellectual, irreverent in a way that is more reflective of the current phase of capitalism than of any unique industry value. I feel the industry needs to be more closely tied to both the government and academia, and better integrated—not in the current NSA, Stanford-pipeline sort of way. We’ve lost, for example, the tradition of research labs. In the late twentieth century, the countercultural idealism hardened into a libertarian ethos, an anti-institutional, anti-government stance, and also this new form of hubris that was legitimized by venture capital. I think people are incredibly reluctant to surrender that underdog identity, regardless of how true it was, then or now.
Like the interviewer here, I read (and was blown away by) her memoir piece at n+1 back in 2016; if her just-about-to-drop book manages to sustain that same tension and vibe, it’ll be a great (but also enervating) read.
Very interesting long paper by Matteo Pasquinelli; going back to Marx’s notion of the general intellect, he shows that none other than yer man Babbage theorised computing systems not only as a concretisation of labour but also as a crystallisation of preexisting biases in the workforce. Everything old becomes new again.
… the distinction between manual and mental labour disappears in Marxism because, from the abstract point of view of capital, all waged labour, without distinction, produces surplus value; all labour is abstract labour. However, the abstract eye of capital that regulates the labour theory of value employs a specific instrument to measure labour: the clock. In this way, what looks like a universal law has to deal with the metrics of a very mundane technology: clocks are not universal. Machines can impose a metrics of labour other than time, as has recently happened with social data analytics. As much as new instruments define new domains of science, likewise they define new domains of labour after being invented by labour itself. Any new machine is a new configuration of space, time and social relations, and it projects new metrics of such diagrams. In the Victorian age, a metrology of mental labour existed only in an embryonic state. A rudimentary econometrics of knowledge begins to emerge only in the twentieth century with the first theory of information. The thesis of this text is that Marx’s labour theory of value did not resolve the metrics for the domains of knowledge and intelligence, which had to be explored in the articulation of the machine design and in the Babbage principle.
Following Braverman and Schaffer, one could add that Babbage provided not just a labour theory of the machine but a labour theory of machine intelligence. Babbage’s calculating engines (‘intelligent machines’ of the age) were an implementation of the analytical eye of the factory’s master. Cousins of Bentham’s panopticon, they were instruments, simultaneously, of surveillance and measurement of labour. It is this idea that we should consider and apply to the age of artificial intelligence and its political critique, although reversing its polarisation, in order to declare computing infrastructures a concretion of labour in common.