Amend Malthus

So one is left with the thought that Malthus might just have been unlucky with his timing. It would have been hard for him to know that the small workings of coal he might have been able to observe were in fact a foretaste of the large scale mining of the 19th and 20th centuries, or that we’d stumble across more more-or-less free energy in the shape of oil in the 20th century.

[…] Malthus casts a doubt over the whole notion of progress and growth that has been our dominant discourse for the past 150 years, certainly in the countries that did well out of the Industrial Revolution. More: it has been our only permissible mainstream discourse. And if Malthus was unlucky in his timing, his argument still implies that we might, as a species, have been lucky rather than clever in stumbling across all of that easy energy. Which, in turn, casts a doubt over a large part of the story about human capacity and human development that is the story of the Enlightenment.

Andrew Curry at The Next Wave. Note that ragging on Malthus is a classic strategy of Wizards, as Malthus is arguably the pioneering Prophet.

Cold equations in the care vacuum

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.

I tend to agree with much of what she says in this piece, but for me at least the worry isn't artificial intelligence taking over so much as the designers of artificial intelligence taking over: in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute. If algorithmic systems were simply discrete units, this might not be such a threat... but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there's not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely. The result is that, while it may perhaps have been genuinely well intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help, it's horrifically dehumanising, as positivist systems almost always turn out to be when deployed "at scale".

Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.

Anthropo(s)cene

“Most of us have been or will be tourists at some point in our lives. We will travel to someplace at some moment in time in which we are visitors and are not planning to settle. It might be a trip to the coast or to the mountains or to a city, but we will be touring. Disliking tourists, therefore, is really a way to express a dislike for ourselves, our culture, and who we have become. Tourists dislike tourists because people dislike people. We dislike the fact that we always appear to want to consume more.”

From Phaedra Carmen Pezzullo’s Toxic Tourism: Rhetorics of Pollution, Travel, and Environmental Justice, cited in this bleak but important article on the super-toxic timebomb that is the Berkeley Pit of Butte, MT.

No such thing as magic: misinterpreting Clarke’s Third Law

Over the weekend John Naughton at Teh Graun provided some much-needed deflation regarding the religion of machine learning and “AI”. I am in full agreement with much of what he says — indeed, I have been singing from that songsheet for quite a few years now, as have a number of other Jonahs and Cassandras.

However, I feel the need to take polite objection to Naughton’s misrepresentation of Clarke’s Third Law. (You know the one: “any sufficiently advanced technology is indistinguishable from magic”.) While it’s quite correct to say that the thought-lords of Silicon Valley (and their PR people) have peddled Clarke’s Third as justification for and endorsement of whatever it is they’ve decided they’re trying to do this week, to assume that’s how Clarke meant it to be used is to do the man a disservice, and indeed to misparse the aphorism in exactly the same way that the techies have. (This seems to happen surprisingly often.)

The thing is, no one believed less in magic than did Clarke; those of a similar age to myself may recall him as a dogged debunker of woo and myth, both in books and on television. Firstly, Clarke’s Third does not conflate magic and technology; on the contrary, it merely points out that to anyone not initiated into either mystery-system, both mystery-systems are equally opaque with regard to cause and effect. Or, in other words, both magic and technology seem miraculous unless you have an understanding of how the trick is performed.

Which leads us to the second point: when Clarke said “magic”, he meant stage magic: illusion, prestidigitation, misdirection. He didn’t believe in the supernatural (though he took a while to come to that position, admittedly, after an early fascination with the paranormal), but he understood the power of showmanship when combined with a lack of knowledge in an audience — and he recognised that technology’s appeal lies exactly in its seeming magicality, its something-out-of-nothingness; that’s how you sell it.

It was true in the time of Edison and Tesla, and it’s still true now, that “technology” (which is itself a suitcase word that has come to refer to shiny consumer products rather than sociotechnical systems of practice) is largely an obfuscatory front-end to the provisioning capacities of infrastructure. That’s why Edison, cunning bastard that he was, worked so hard on developing usable light-bulbs: he understood that infrastructure is too abstract a proposition, but that applications are an easy sell. As such, Clarke’s Third Law is best understood as a proleptic critique of solutionism — though I suspect Clarke himself might have balked at that characterisation. (He was rather more an optimist than I am.)

There’s a lot more to this riff, and I’m currently rather too busy trying to find some gainful employment to write about it at length — but if you’ve 45 minutes to spare, and you’d like the full unpacking of Clarke’s Third Law as it relates to technology and infrastructure in the 21st Century (all wrapped up in a furious critique of transhumanism, which is basically Clarke’s Third elevated from mere business model to the status of a religion without a god), then y’all might want to watch this video of a talk I gave in Munich last year:

Stating the bloody obvious

… those tech creators and tech billionaires who are influenced by Science Fiction seem to assume that because things in Science Fiction work in the society and culture of those created future-set universes, there is an expectation bias that they will work in our real life and present, without much testing or oversight.

Gadgets, services, and technologies work in Science Fiction because it is fiction. They work because it is a narrative, and as such, their authors or filmmakers showed them working. They work because in fiction, it is very easy to make things work, because they aren’t real and don’t need to actually work.

Realizing the unreal from fiction will not make that realization work in the same way in real life. It can’t. The context, timeframe, and people are different. Most importantly, Science Fiction is fiction.

Astonishing, really, that this even needs to be said — though it clearly does need to be said.

However, the author’s relentless capitalisation of Science Fiction betrays what is likely the same superficial engagement with the genre demonstrated by those they are criticising: there’s plenty of science fiction in which the tech doesn’t work, and indeed which is totally about the tech not working, or working in ways orthogonal to its makers’ and users’ original (or at least originally stated) intentions. It’s also hard to square this piece with the effectively mainstreamed (but nonetheless totally wrongheaded) punditry to the effect that science fiction has gone too far in the tech-negative dystopian direction. But hey, when your research needs publicising and a venue has an obvious hook for your pitch, well, we’ve all been there, amirite?

That said, the author’s call for companies to hire social scientists to deal with these sorts of issues is something I’d support — though yer man Damien Williams makes the case far more effectively (not to mention eloquently). Meanwhile, re: science fiction, the distinction between the technological utopian mode and the critical utopian mode was old theory when I picked it up back in 2014, but it’s as relevant as ever. If people are going to turn to narrative forms as spaces of inspiration and reflection — and they clearly are, and clearly always have done — then we might as well use critical narrative form to counter the uncritical stuff, no?