Category Archives: General

Announcements, comments, sideswipes, whatever

Unpacking the suitcase words

Half a dozen different people sent me this article for slightly different reasons; one has come to dread the listicle format, but this example is excellent, with every point well worth passing on. My talk in Munich last week was an extensive riff on Clarke’s Third Law, so I’ll not reprise that now; instead, I’ll highlight this bit:

Marvin Minsky called words that carry a variety of meanings “suitcase words.” “Learning” is a powerful suitcase word; it can refer to so many different types of experience. Learning to use chopsticks is a very different experience from learning the tune of a new song. And learning to write code is a very different experience from learning your way around a city.

[…]

Suitcase words mislead people about how well machines are doing at tasks that people can do. That is partly because AI researchers—and, worse, their institutional press offices—are eager to claim progress in an instance of a suitcase concept. The important phrase here is “an instance.” That detail soon gets lost. Headlines trumpet the suitcase word, and warp the general understanding of where AI is and how close it is to accomplishing more.

I hadn’t heard Minsky’s coining before, but I sure as hell know suitcase words when I see them; I tend to call them “hollow signifiers”, myself, but suitcase words is a far better formulation.

I’m less sanguine than Brooks regarding the intentionality of suitcase words, however; I have long been of the opinion, and am increasingly so, that the energetic trumpeting of under-paid, under-trained and under-pressure journalists that results in this semiotic inflation is not seen as a bug by the “artificial intelligence” industry, but is in fact seen as (and quite possibly nurtured as) a feature to be relentlessly exploited. This would be why Elon Musk takes every opportunity to position “artificial intelligence” as a potential threat, even as his own companies are sinking billions into R&D programs; so long as people are talking about a suitcase word, whether positively or negatively, said suitcase word becomes a lever for attention, and thus for funding. Sell it as an angel, sell it as a devil… don’t matter how you sell it, so long as you’re selling, right?

Five theses for the future

(Or: what I did on my holiday, by Paul Graham Raven, aged 40 ¾)

Many thanks to the lovely people at Bayerischer Rundfunk for inviting me to their annual conference in Munich, putting me up in what looks to be possibly its most characterful hotel, and giving me a stage from which to expose the noxious back-stage ideologies of transhumanism to a receptive and insightful audience. Doing little video bits like this is a small price to pay for such a privilege… but let’s be frank, that’s a face made for radio.

Systematized instrumental rationality

So AI and capitalism are merely two offshoots of something more basic, let’s call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view it as a next-level extension of processes that are already underway.

This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.

Mike Travers. Reading this rather wonderfully reframes Elon the Martian’s latest calls for the regulation of artificial intelligence… you’re so right, Elon, but not quite in the way you think you’re right.

Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems pretty convenient, given how AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?

Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when in fact we’re green-lighting the further marketisation of absolutely everything.