Category Archives: Technology

Smaller, better, faster, more!

The aesthetics of decentralisation

Despite the cacophony of political conjecture, the story of blockchain so far is a tale of financial speculation, in which the cash rewards reaped by bankers and venture capitalists are largely a result of the techno-utopian hype. Plus ça change, plus c’est la même chose. The prospect of decentralizing control does not absolve us of the hard work of politics, and blockchain has so far failed to transfer power to ‘We, the people’, whatever the white papers might suggest. Political economy cannot be replaced by technology alone. Today, technological wealth produced by and for society largely oils the machinery of capitalist accumulation. While we have yet to witness the decentralization of control, the collective wealth produced by the decentralization of production — that is, the ‘sharing economy’, the big data industry, and other platforms that monetize our daily social interactions — remains firmly in the service of exploitative (centralized) corporations. Whether in logistics or social media, it is not so difficult — nor even particularly radical — to imagine decentralized, peer-to-peer services which produce value by and for the commonwealth. Nonetheless, it would require governance, by nationalization or other means: the network is not identical to the commons, and nor should we hope for it to be.

A super-chewy long-read [via Jay Springett]: “Systems Seduction: The Aesthetics of Decentralisation” by Gary Zhexi Zhang, one of ten winners in the Journal of Design & Science “Resisting Reduction” essay competition.


No such thing as magic: misinterpreting Clarke’s Third Law

Over the weekend John Naughton at Teh Graun provided some much-needed deflation regarding the religion of machine learning and “AI”. I am in full agreement with much of what he says — indeed, I have been singing from that songsheet for quite a few years now, as have a number of other Jonahs and Cassandras.

However, I feel the need to take polite objection to Naughton’s misrepresentation of Clarke’s Third Law. (You know the one: “any sufficiently advanced technology is indistinguishable from magic”.) While it’s quite correct to say that the thought-lords of Silicon Valley (and their PR people) have peddled Clarke’s Third as justification for and endorsement of whatever it is they’ve decided they’re trying to do this week, to assume that’s how Clarke meant it to be used is to do the man a disservice, and indeed to misparse the aphorism in exactly the same way that the techies have. (This seems to happen surprisingly often.)

The thing is, no one believed less in magic than did Clarke; those of a similar age to myself may recall him as a dogged debunker of woo and myth, both in books and on television. Firstly, Clarke’s Third does not conflate magic and technology; on the contrary, it merely points out that to anyone not initiated into either mystery-system, both mystery-systems are equally opaque with regard to cause and effect. Or, in other words, both magic and technology seem miraculous unless you have an understanding of how the trick is performed.

Which leads us to the second point: when Clarke said “magic”, he meant stage magic: illusion, prestidigitation, misdirection. He didn’t believe in the supernatural (though he took a while to come to that position, admittedly, after an early fascination with the paranormal), but he understood the power of showmanship when combined with a lack of knowledge in an audience — and he recognised that technology’s appeal lies exactly in its seeming magicality, its something-out-of-nothingness; that’s how you sell it.

It was true in the time of Edison and Tesla, and it’s still true now, that “technology” (which is itself a suitcase word that has come to refer to shiny consumer products rather than sociotechnical systems of practice) is largely an obfuscatory front-end to the provisioning capacities of infrastructure. That’s why Edison, cunning bastard that he was, worked so hard on developing usable light-bulbs: he understood that infrastructure is too abstract a proposition, but that applications are an easy sell. As such, Clarke’s Third Law is best understood as a proleptic critique of solutionism — though I suspect Clarke himself might have balked at that characterisation. (He was rather more an optimist than I am.)

There’s a lot more to this riff, and I’m currently rather too busy trying to find some gainful employment to write about it at length — but if you’ve 45 minutes to spare, and you’d like the full unpacking of Clarke’s Third Law as it relates to technology and infrastructure in the 21st Century (all wrapped up in a furious critique of transhumanism, which is basically Clarke’s Third elevated from mere business model to the status of a religion without a god), then y’all might want to watch this video of a talk I gave in Munich last year:

There is no meaningfully superhuman way to install a ceiling fan

In the history of both technology and religion, you find a tension between two competing priorities that lead to two different patterns of problem selection: establishing the technology versus establishing a narrative about the technology. In proselytizing, you have to manage the tension between converting people and helping them with their daily problems. In establishing a religion in places of power, you have to manage a tension between helping the rulers govern, versus getting them to declare your religion as the state religion.

You could say Boundary AI problems are church-building problems. Signaling-and-prayer-offering institutions around which the political power of a narrative can accrete. Even after accounting for Moravec’s paradox (easy for humans is hard for machines/hard for humans is easy for machines), we still tend to pick Boundary AI problems that focus on the theatrical comparison, such as skill at car-driving.

In technology, the conflict between AC and DC witnessed many such PR battles. More recently VHS versus Betamax, Mac versus PC, and Android versus iOS are recognized as essentially religious in part because they are about competing narratives about technologies rather than about the technologies themselves. To claim the “soul” of a technological narrative is to win the market for it. Souls have great brand equity.

A proper brain-hoser of a longread from the latest episode of Venkatesh Rao’s Breaking Smart newsletter*; religion, sociotechnical change, artificial intelligence, societal alienation, ceiling fans. So much to chew on that it took me an hour to pick a pull-quote; it is completely typical of Rao to wander between big-concept topics like this, finding connections and comparisons as he goes, which is why I started reading him a long, long time ago.

* It appears you can’t see the latest episode in the archives, presumably until it is no longer the latest episode, because [newsletters]. Drop me a line if you want me to forward the email version on… or just trust me when I say that if you’re intrigued by the pull-quote, you should just subscribe anyway. Not like it’ll cost you anything, beyond a bit of cognitive bandwidth.

“Engineers try to do politics by changing infrastructure.”

From an interview with Fred Turner:

What are the “politics of infrastructure”? What does that phrase mean?

It means several different things. First, it involves the recognition that the built environment, whether it’s built out of tarmac or concrete or code, has political effects. I was joking earlier about reshaping the Forum, but I shouldn’t have joked quite so much, because the fact that the Forum was round encouraged one kind of debate.

Think about an auditorium where someone sits onstage and the audience watches, versus a Quaker meeting where everyone sits in a circle. They’re very different.

So, structure matters. Design is absolutely critical. Design is the process by which the politics of one world become the constraints on another. How are those constraints built? What are their effects on political life?

To study the politics of infrastructure is to study the political ideas that get built into the design process, and the infrastructure’s impact on the political possibilities of the communities that engage it.

Cited mostly because it’s something of a relief to hear a big-league talking head starting to come round to the ideas that a lot of my colleagues and friends have been working on for about the last decade or so. (But on the basis of personal experience, good luck trying to convince engineers that infrastructure is political; it’s among the discipline’s Great Unthinkables.)

And on that note, here’s a bonus snip from the same piece, on the (perceived?) libertarianism of the Valley:

… I think that the vision of the Valley as a libertarian space is a combination of actual libertarian beliefs held by people like Peter Thiel and a celebration of libertarian ideals by an East Coast press that wants to elevate inventor types. Steve Jobs is the most famous. East Coast journalists want to rejuvenate the American hero myth—and they’re going to find a world to do it in.

In order to make these heroes, however, they have to cut them off from the context that produced them. They can’t tell a context story. They can’t tell a structure story. They have to tell a hero story. Suddenly the heroes themselves look like solo actors who pushed away the world to become the libertarian ideal of an Ayn Rand novel. So I think it’s a collaboration between actually existing tech leaders and the press around a myth.

I have, for quite some time, been inclined to agree.

Unpacking the suitcase words

Half a dozen different people sent me this article for slightly different reasons; one has come to dread the listicle format, but this example is excellent, with every point well worth passing on. My talk in Munich last week was an extensive riff on Clarke’s Third Law, so I’ll not reprise that now; instead, I’ll highlight this bit:

Marvin Minsky called words that carry a variety of meanings “suitcase words.” “Learning” is a powerful suitcase word; it can refer to so many different types of experience. Learning to use chopsticks is a very different experience from learning the tune of a new song. And learning to write code is a very different experience from learning your way around a city.

[…]

Suitcase words mislead people about how well machines are doing at tasks that people can do. That is partly because AI researchers—and, worse, their institutional press offices—are eager to claim progress in an instance of a suitcase concept. The important phrase here is “an instance.” That detail soon gets lost. Headlines trumpet the suitcase word, and warp the general understanding of where AI is and how close it is to accomplishing more.

I hadn’t heard Minsky’s coining before, but I sure as hell know suitcase words when I see them; I tend to call them “hollow signifiers”, myself, but suitcase words is a far better formulation.

I’m less sanguine than Brooks regarding the intentionality of suitcase words, however; I have long been of the opinion, and am increasingly so, that the energetic trumpeting of under-paid, under-trained and under-pressure journalists that results in this semiotic inflation is not seen as a bug by the “artificial intelligence” industry, but is in fact seen as (and quite possibly nurtured as) a feature to be relentlessly exploited. This would be why Elon Musk takes every opportunity to position “artificial intelligence” as a potential threat, even as his own companies are sinking billions into R&D programs; so long as people are talking about a suitcase word, whether positively or negatively, said suitcase word becomes a lever for attention, and thus for funding. Sell it as an angel, sell it as a devil… don’t matter how you sell it, so long as you’re selling, right?