Tag Archives: regulation

theatre of expertise / expertise of theatre

This one’s been doing the rounds in infrastructure-wonk circles, and deservedly so. I’m usually distrustful of any organisation that includes the term “governance innovation” in its moniker, and the fact that CIGI is a Canadian thinktank founded by the guy who helmed RIM does little to allay that instinctive suspicion. But this is nonetheless a serious, nuanced and in-depth piece on the tech/policy interface, the likes of which is vanishingly rare in the era of the Hot Take. This is the nut of it:

First, the digitization of public institutions changes the balance of government power, by shifting a number of political issues out of public process and framing them instead as procurement processes. Whereas questions around executive authority were historically defined in legislation, they’re often now defined in platform design — and disputes are raised through customer service. This shift extends executive power and substitutes expert review for public buy-in and legitimacy, in ways that cumulatively result in a public that doesn’t understand or trust what the government does. Importantly, the transition from representative debate to procurement processes significantly changes the structures of engagement for public advocates and non-commercial interests.

The second structural problem results when nuanced conversations about the technical instrumentation of a publicly important governance issue are sensationalized. For example, focusing on COVID-19 contact-tracing apps instead of the large institutional efforts needed to contain infection frames the issues around the technology and not the equities or accountability required to serve public interest mandates. One of the reasons for this is that experts, like everyone else, are funded by someone — and tend to work within their own political, professional and economic perspectives, many of which don’t take responsibility for the moral or justice implications of their participation. Consultants tend to focus on technical solutions instead of political ones, and rarely challenge established limits in the way that the public does.

Said differently, technologies are a way to embed the problem of the political fragility of expertise into, well, nearly everything that we involve technology in. And public institutions’ failure to grapple with the resulting legitimacy issues is destabilizing important parts of our international infrastructure when we need it most.

I don’t agree with all of it, but my disagreements are productive, if that makes any sense: there’s a language here for legitimation via expert discourses (or the lack thereof) which is worth engaging with in more detail. Reading it alongside Jo Guldi’s Roads to Power would be interesting, if time permitted: one of the many things that marvellous book achieves is to explain the (surprisingly early) establishment of the technological expert as not just a political actor, but more particularly an actor in the more formalised theatre of statecraft, thus sowing the seeds of what McDonald is discussing in this piece.

(Damn, I really need to re-read that book… though it seems I loaned it to someone and never got it back. Guess it’s time to hit the requisition system again…)

Qui autem temperet moderatores?

… [UK] consumers have overpaid for the natural monopolies and other networks underpinning many of these markets for at least the past 15 years. Because of patchy reporting from regulators, it’s impossible to document the full extent of these overpayments. However, this research finds that regulators have systematically set prices too high, leading to consumers facing unnecessarily high bills – that is, bills well in excess of what is required to deliver the necessary investment in these essential services.

We’re able to put concrete figures on these overpayments for water, energy, telephone and broadband infrastructure. Our conservative estimate is that the excess figure is £24.1bn. We find that the errors in energy and water have cost consumers £11bn and £13bn respectively.
… just focusing on the technicalities would neglect a simpler explanation: regulators have been out-resourced and outgunned. If this was just a story of errors in financial modelling, the errors would sometimes fall in consumers’ and sometimes in investors’ favour. But this is not what we see: instead, the errors are biased. Indeed, as we show below, this has sometimes been a conscious strategy from regulators: fearing under-investment, they have ‘aimed up’ on capital costs, choosing higher values than their estimates indicated they should.

Monopoly Money report from Citizens Advice

Systematized instrumental rationality

So AI and capitalism are merely two offshoots of something more basic, let’s call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view it as a next-level extension of processes that are already underway.

This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.

Mike Travers. Read in this light, Elon the Martian’s latest calls for the regulation of artificial intelligence are rather wonderfully reframed… you’re so right, Elon, but not in quite the way you think you’re right.

Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems awfully convenient, given that AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?

Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when in fact we’re green-lighting the further marketisation of absolutely everything.