Tag Archives: AI

rocket from the crypto / two dead letters

Chairman Bruce appears to be repubbing longreads from the now defunct Beyond The Beyond blog. This is a weird experience for me—distinctly atemporal, to use the man’s own term—because I recall reading this stuff at the time. And so it’s familiar and just-like-yesterday, but also so alienated and impossibly historical… I mean, I can’t recall the last time I saw anyone so much as mention the New Aesthetic, but I certainly remember a time when it seemed like everyone was talking about it. (That feeling of atemporal synchronicity is being compounded, no doubt, by my having been going through some of my own published material from the same period over the last couple of weeks… with the added irony that said act of retrospection was to the end of writing a chapter about Sterling for an academic collection.)

TL;DR—middle age is a headfuck. I kind of understand why my parents went so weird in their forties, now… though I’m not sure I yet forgive the particular direction in which they went weird. And they didn’t even have the internet!

Anyway, the essay in question is the Chairman’s response to the New Aesthetic panel at the 2012 SXSW, and the bit I’m clipping is less about the New Aesthetic than a side-swipe at AI that reads just as true (and just as likely to be ignored) today:

… this is the older generation’s crippling hangup with their alleged “thinking machines.” When computers first shoved their way into analog reality, they came surrounded by a host of poetic metaphors. Cybernetic devices were clearly much more than mere motors and engines, so they were anthropomorphized and described as having “thought,” “memory,” and nowadays “sight” and “hearing.” Those metaphors are deceptive. These are the mental chains of the old aesthetic, these are the iron bars of oppression we cannot see.

Modern creatives who want to work in good faith will have to fully disengage from the older generation’s mythos of phantoms, and masterfully grasp the genuine nature of their own creative tools and platforms. Otherwise, they will lack comprehension and command of what they are doing and creating, and they will remain reduced to the freak-show position of most twentieth century tech art. That’s what is at stake.

Computers don’t and can’t make sound aesthetic judgements. Robots lack cognition. They lack perception. They lack intelligence. They lack taste. They lack ethics. They just don’t have any. Tossing in more software and interactivity, so that they’re even jumpier and more apparently lively, that doesn’t help.

It’s not their fault. They are not moral actors and they are incapable of faults. It’s our fault for pretending otherwise, for fooling ourselves, for projecting our own qualities onto phenomena that we built, that are very interesting to us, but not at all like us. We can’t give them those qualities of ours, no matter how hard we try.

Pretending otherwise is like making Super Mario the best man at your wedding. No matter how much time you spend with dear old Super Mario, he is going to disappoint in that role you chose for him. You need to let Super Mario be super in the ways that Mario is actually more-or-less super. Those are plentiful. And getting more so. These are the parts that require attention, while the AI mythos must be let go.

AI is the original suitcase word; indeed, “suitcase word” is a term that Minsky came up with to describe the way the goal of “AI” kept drifting, and coining the term and identifying the problem didn’t get him anywhere nearer to solving it. I was writing a report on AI last year in a freelance capacity (for a foundation in a location whose commitment to the Californian Ideology is in some ways even greater than that of California itself, despite—or perhaps because of—its considerable geographical, historical and sociopolitical distance from California), and tried to make this point, drawing on the tsunami of critiques of AI-as-concept and AI-as-business-practice that have emerged in the intervening decades, both within the academy and without… but, well, yeah.

I guess we just have to conclude that the sort of person who decides to make Super Mario their best man is not the sort of person who’s going to take it well when you point out that Super Mario is a sprite… no one wants to be the first to concede the emperor is naked, particularly not when they’ve stripped off in order to join the parade. Nonetheless, given the residual enthusiasm for peddling that particular brand of Kool-Aid which still persists among the big global consultancies, the McKinseys and their ilk, there’s probably a few more years in business models offering “Super Mario solutions” before smarter, faster-moving players start focussing on practical applications without the pseudo-religious wrapper. Or, I dunno, maybe not? Seems like people will believe whatever the hell makes them feel like a winner these days, and the very unfalsifiable nebulousness of “AI” might make it all but bulletproof for that very reason. Every era has its snake-oils.

a cranky aspiration

Chairman Bruce on AI ethics at LARB:

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

a metrics of labour other than time

Very interesting long paper by Matteo Pasquinelli; going back through Marx’s notion of the general intellect, he shows that none other than yer man Babbage theorised computing systems not only as a concretisation of labour but also as a crystallisation of preexisting biases in the workforce. Everything old becomes new again.

… the distinction between manual and mental labour disappears in Marxism because, from the abstract point of view of capital, all waged labour, without distinction, produces surplus value; all labour is abstract labour. However, the abstract eye of capital that regulates the labour theory of value employs a specific instrument to measure labour: the clock. In this way, what looks like a universal law has to deal with the metrics of a very mundane technology: clocks are not universal. Machines can impose a metrics of labour other than time, as has recently happened with social data analytics. As much as new instruments define new domains of science, likewise they define new domains of labour after being invented by labour itself. Any new machine is a new configuration of space, time and social relations, and it projects new metrics of such diagrams. In the Victorian age, a metrology of mental labour existed only in an embryonic state. A rudimentary econometrics of knowledge begins to emerge only in the twentieth century with the first theory of information. The thesis of this text is that Marx’s labour theory of value did not resolve the metrics for the domains of knowledge and intelligence, which had to be explored in the articulation of the machine design and in the Babbage principle.

Following Braverman and Schaffer, one could add that Babbage provided not just a labour theory of the machine but a labour theory of machine intelligence. Babbage’s calculating engines (‘intelligent machines’ of the age) were an implementation of the analytical eye of the factory’s master. Cousins of Bentham’s panopticon, they were instruments, simultaneously, of surveillance and measurement of labour. It is this idea that we should consider and apply to the age of artificial intelligence and its political critique, although reversing its polarisation, in order to declare computing infrastructures a concretion of labour in common.

Staring down Roko’s basilisk

Pete Wolfendale:

We have consistently overestimated what computation is capable of throughout history, whether computation was seen as an algorithmic method executed by humans, or a process of automated deduction realised by a machine. The fictional record is crystal clear on this point.

Instead of imagining machines that can do a task better than we can, we imagine machines that can do it in the best possible way. When we ask why, the answer is invariably some variant upon: it is a machine and therefore must be infallible.

This is absurd enough in certain specific cases: what could a ‘best possible poem’ even be? There is no well-ordering of all possible poems, only ever a complex partial order whose rankings unravel as the many purposes of poetry diverge from one another.

However, the deep, and seemingly coherent computational illusion is that there is not just a best solution to every problem, but that there is a best way of finding such bests in every circumstance. This implicitly equates true AGI with the Godhead.

Against the asymptote

Joi Ito:

For Singularity to have a positive outcome requires a belief that, given enough power, the system will somehow figure out how to regulate itself. The final outcome would be so complex that while we humans couldn’t understand it now, “it” would understand and “solve” itself. Some believe in something that looks a bit like the former Soviet Union’s master planning but with full information and unlimited power. Others have a more sophisticated view of a distributed system, but at some level, all Singularitarians believe that with enough power and control, the world is “tamable.” Not all who believe in Singularity worship it as a positive transcendence bringing immortality and abundance, but they do believe that a judgment day is coming when all curves go vertical.

Whether you are on an S-curve or a bell curve, the beginning of the slope looks a lot like an exponential curve. An exponential curve to systems dynamics people shows self-reinforcement, i.e., a positive feedback curve without limits. Maybe this is what excites Singularitarians and scares systems people. Most people outside the Singularity bubble believe in S-curves: nature adapts and self-regulates, and, for example, when a pandemic has run its course, growth slows and things adapt. They may not be in the same state, and a phase change could occur, but the notion of Singularity—especially as some sort of savior or judgment day that will allow us to transcend the messy, mortal suffering of our human existence—is fundamentally a flawed one.