Roamin’ roads

Via kottke, this tube-map-style atlas of Roman roads lands foursquare in a gloriously tangled Venn intersection of Things I Really Love:

Subway-style map of Roman roads in Europe by Sasha Trubetskoy

If that’s whetted your appetite, the Stanford ORBIS Geospatial Model of the Roman World will take you all the way down the rabbit-hole. Those with a more parochial bent may prefer the tube-map atlas of Roman roads in the British Isles. (There are more of them than you think.)

Systematized instrumental rationality

So AI and capitalism are merely two offshoots of something more basic, let’s call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view it as a next-level extension of processes that are already underway.

This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.

Mike Travers. Reading this rather wonderfully reframes Elon the Martian’s latest calls for the regulation of artificial intelligence… you’re so right, Elon, but not in quite the way you think you’re right.

Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems pretty convenient, given how AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?

Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when in fact we’re green-lighting the further marketisation of absolutely everything.

Consider the possibility

I’ve spent more time than I’d like to admit hanging around the online communities of the kind of people we are worried about reaching here, and I am here to tell you: They are using their critical thinking skills.

They are fully literate in concepts like bias and in the importance of interrogating sources. They believe very much in the power of persuasion and the dangers in propaganda and a great many of them believe that we are the ones who have been behaving uncritically and who have been duped. They think that we are the unbelieving victims of fraud.

Which is not to set up some kind of false equivalency between sides. But I do want us to consider the possibility that we don’t need to talk across that barrier, and that it might not be possible to talk across it. That we need to consider that if it’s true that vast swaths of the voting populace are unbelieving victims of fraud, there’s not much we can do for them. That we may need instead to work to invigorate our allies, discourage our enemies, and save the persuasion for people right on the edge.

But, again, I’m saying all of this to you as someone who has not figured this out.

Tim Maly.


Glad to see the debate on UBI is starting to get beyond the surface gosh-wow. From a bit at Teh Graun:

In their incendiary book Inventing the Future, the authors Alex Williams and Nick Srnicek argue for UBI but link it to three other demands: collectively controlled automation, a reduction in the working week, and a diminution of the work ethic. Williams and Srnicek believe that without these other provisions, UBI could essentially act as an excuse to get rid of the welfare state.

W & S are smart to suggest those provisions, but I’d suggest there are a few others necessary to avoid the trap that the aspiring nosferatu of the Adam Smith Institute are so keen to spring.

So, look: the state sets a standard rate of UBI, presumably on the basis of some basic standard of living; perhaps they even put it on an inflationary ramp so it increases over time. Lovely: everyone can afford the basics, and you can work to level up from there if you want to.

However, if housing provision is still predominantly handled by the private sector, rents would rapidly rise to the highest point that the UBI would bear, coz rentiers gonna extract rents. Ditto privatised medicine. Ditto food production. Ditto infrastructural provision. In an unreformed market economy, whatever the set rate of UBI was would be inadequate very quickly — like, a matter of years rather than decades, if not faster. Because when we talk about markets being efficient, that’s what we really mean: their rapid maxing out of all possible rent extraction in any given system. (Yeah, you thought efficiency was all about using less, didn’t you? That’s a useful illusion, which is why you’re encouraged to keep it. But no: market efficiency is exactly the opposite, in that the efficient market leaves nothing unused.) In a nation of legitimised thievery and tollbooth economics, putting money in the poor man’s pocket serves only to enrich the thieves over the long run; hence the poorly-disguised boners around the C-suite table at the ASI, no doubt.
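To make the “years rather than decades” intuition concrete, here’s a toy sketch of the dynamic: a UBI indexed to general inflation versus essentials whose prices are bid up faster by rent-seeking sectors. Every number here is invented for illustration — the growth rates, the starting figures, and the function itself are assumptions, not anything from the quoted sources.

```python
def years_until_inadequate(ubi, essentials,
                           ubi_growth=0.02, rent_growth=0.10,
                           max_years=100):
    """Toy model: return the year in which the cost of essentials
    (housing, medicine, food, infrastructure) first exceeds the UBI,
    assuming the UBI tracks general inflation while rentier sectors
    raise prices faster to capture the new purchasing power.
    Returns None if the UBI stays adequate for max_years."""
    for year in range(1, max_years + 1):
        ubi *= 1 + ubi_growth              # UBI on its inflationary ramp
        essentials *= 1 + rent_growth      # rents rising toward what UBI will bear
        if essentials > ubi:
            return year
    return None

# Invented example: UBI of 1000/month against essentials of 800/month.
print(years_until_inadequate(1000, 800))  # → 3
```

Even starting with a 20% cushion, an 8-point growth gap eats the surplus in three years — which is the whole point: the failure mode isn’t the level of the UBI but the relative growth rates, and only intervening in the systems of provision changes those.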

This is not to dismiss UBI, to be clear; it’s a rational and achievable reform of state welfare systems. But in the absence of land reform, significant regulation of businesses, and the partial or total renationalisation of infrastructure and housing, it will fail, and fail fast. If you want to provide the basics to everyone, you’re going to have to intervene in the systems of provision… and you can bet your bottom dollar that the ASI won’t be genuflecting to that idea any time before the heat-death of the universe.

A threshold phenomenon

This whole fake news phenomenon is hugely important and historically significant. At the moment I’m completely captivated by the strength of an analogy between the Gutenberg era and the internet era, this rhythmic force coming out of the connection between them. Radical reality destruction went on with the emergence of [the] printing press. In Europe this self-propelling process began, and the consensus system of reality description, the attribution of authorities, criteria for any kind of philosophical or ontological statements, were all thrown into chaos. Massive processes of disorder followed that were eventually kind of settled in this new framework, which had to acknowledge a greater degree of pluralism than had previously existed. I think we’re in the same kind of early stage of a process of absolute shattering ontological chaos that has come from the fact that the epistemological authorities have been blasted apart by the internet. Whether it’s the university system, the media, financial authorities, the publishing industry, all the basic gatekeepers and crediting agencies and systems that have maintained the epistemological hierarchies of the modern world are just coming to pieces at a speed that no one had imagined was possible. The near-term, near-future consequences are bound to be messy and unpredictable and perhaps inevitably horrible in various ways. It is a threshold phenomenon. The notion that there is a return to the previous regime of ontological stabilization seems utterly deluded. There’s an escape that’s strictly analogous to the way in which modernity escaped the ancien régime.


My tendency is not to draw a huge distinction between [scientists and artists]. In all cases one’s dealing with the formulation or flotation of certain hypotheses. I am assuming that every scientist has an implicit science fiction. We all have a default of what we think the world is going to be in five years’ time, even if it’s blurry or not very explicit. If we haven’t tried to do science fiction, it probably means we have a damagingly conservative, inert, unrealistic implicit future scenario. In most cases a scientist is just a bad science fiction writer and an artist, hopefully, is a better one. There is, obviously, a lot of nonlinear dynamism, in that science fiction writers learned masses from scientists, how to hone their scenarios better, and also the other way around. Science fiction has shaped the sense of the future so much that everyone has that as background noise. The best version of the near future you have has been adopted from some science fiction writer. It has to be that science is to some extent guided by this. Science fiction provides its testing ground.

Nick Land.

Science fiction, science fact, and all that's in between …