A while ago I referred to my reluctantly writing about “AI” with a very ugly simile, but I now realise that I was in fact doing a sort of reverse projection: both figuratively and literally, both as discourse and as technology, “AI” itself is the dog eating its own vomit.
This will be an old-school sort of blog post, in that it is a direct response to another blogger, though the post to which it responds might usefully be considered as an exemplar of a more general case. Nonetheless, I want to take the unusual step of addressing its author directly, because I am given to believe that the author is that rarest of creatures, namely someone who sometimes reads what I write here.
So, Matt: I know you to be a good man with good intentions, and you have a well-earned reputation as one of your generation’s great designers and technologists. I am replying to you directly because I think there is a chance you might listen. I hope you will take this as it is meant, namely as a sort of red-teaming response, a critique aimed at persuading you away from a destructive direction of travel. Your logic is internally consistent, but dangerously circular, and founded on a fatal assumption—and I think that, at some level, you know it, because it’s all right there in the post itself.
I’m going to start by summarising the argument, which begins with the claim that generative models can be assumed to be, or likely to become, on a par with GPS as a technological infrastructure. This leads to the proposition that “AI” should be seen as a national strategic resource.
Next comes the assumption that “we’ll be using AIs to automate business logic with value judgements […] and also to write corporate strategy and government policy”. You continue:
No, this isn’t desirable necessarily. We won’t build this into the software deliberately. But a generation of people are growing up with AI as a cognitive prosthesis and they’ll use it whether we like it or not.
I will return to this point, as it is crucial, but let me point out immediately that in the writing of this paragraph you are already enacting the automation of value judgements, already handing over the social to the technological.
It’s not that “they” will “use it whether we like it or not”; you are submitting, right here and now, via this capitulatory claim of technological inevitability. You have already enacted a surrender you claim to be “not necessarily desirable”. You are lamenting the departure of the horse through a stable door that you have decided must nonetheless remain open.
There is a choice here, but you have begun your argument by choosing in advance to throw that choice away.
The next point is hardly controversial: generative models regurgitate the ideologies implicit in their training data. This leads you to posit the likelihood of models with alignments that correspond to the ideological positions of powerful nation-states; your choice of China and the US as illustrative examples is telling, for reasons I will return to.
You then identify a consequence: that nation-states must “retain independent capacity to stand up new AIs”, which will require a corpus of appropriate training data.
However, as you go on to note, the body of potential training data is already polluted by the rapid introduction of derivative content generated by “AI”. Returning to GPS and mapping as an analogy, you mention the extraordinary cost of maintaining a global dataset that might stand against the risk of mapping becoming a corporate monopoly function. But then, in an unexplained logical leap, you argue that “AI” requires not a shared, open-source alternative to monopoly, but rather a retreat into state-managed national monopolies.
From the point of view of national interests, each country (or each trading bloc) will need its own training data, as a reserve, and a hedge against the interests of others.
How to achieve this? States should “take a snapshot of the internet and keep it somewhere really safe”, like the Svalbard seed vault, and put it in the care of “librarians and archivists” who understand the importance of “acquisition and accessioning”.
I have already pulled out what I consider to be the fatal assumption of this argument, namely the capitulation to technological inevitability: to paraphrase, “look, ‘AI’ is a thing now, so people are gonna use it in ways that I can recognise are going to be bad, but the genie is out of the bottle, so we might as well get good at asking it for stuff.”
(I think of this as a version of Stewart Brand’s hubristic claim of the Promethean mantle, remixed for a world that is increasingly unwilling to accept the notion of technology as an unquestionable good.)
You use the examples of parole assessments, mortgage decisions… you know, your children’s mortgage decisions; your neighbours’ parole assessments. There are literal decades of research exposing the ideologically-inflected bias of algorithmic decision-making just like this, and it is a very ugly history; scholars of such matters have repeatedly pointed out that such biases long precede the introduction of computerised automation, which has only served to make those decisions faster, crueller, and more inscrutable. (Seeing Like A State, innit.) You take this trend as an inevitability, a given: technology has won, and we might as well get used to it.
With zero apologies for channelling the ghost of Cromwell, Matt: I beseech you, in the bowels of Christ, think it possible you are mistaken.
Because this assumption of inevitability is the move that pushes you into a circular logic, an ant-mill in which the very knowledge you propose to protect and preserve is doomed to degradation and decay. Consider your own admission:
… the world’s data will never be more available or less contaminated than it is today.
In other words, the groundwater is already polluted, and that pollution is projected to continue, inevitably. But what is the source of that pollution? It is the very models whose continued validity you are proposing to protect.
But OK, we’ll take “a snapshot of the internet”. Which internet? Will we filter out the bits that don’t fit the national ideology? Are all the once-lauded ideals of hypertextual interconnection to be thrown out in favour of walled gardens of narrow knowledge that favour certain politically-informed worldviews? If so, whose politics, whose worldviews?
But OK, we take a snapshot of an internet, and stash it somewhere future-proof. Again, the choice of illustrative example here is telling, because the seed vault at Svalbard is threatened with flooding due to climate change. Preserving things for futurity is hard, because the world in which we attempt to do so has a tendency to surprise us with things that we hadn’t thought we needed to worry about, because we were too caught up in the assumption that our prevailing paradigms were inevitable, unending, and fully understood.
To your credit, you recognise the difficulty of preservation, and suggest that it be assigned to archivists and librarians. But those archivists and librarians, along with other scholars and knowledge-management professionals from the same side of the epistemological fence, would be the first to point out that your implicit assumption of “facts” as static, quantifiable resources which might be stockpiled in a vault somewhere is, at best, naive. The archive is a living thing, an open system; truth is made, not found; science is a practice, not a product.
Furthermore, those same archivists and librarians have been at the vanguard of the chorus of voices arguing against the capitulation to generative models, the first to recognise the risk of the groundwater pollution that you have accepted as inevitable and even necessary. If you truly value the civilisational-infrastructural work that they do, and have done for hundreds if not thousands of years, will you not listen when they warn you against systematically shitting in the village well? Will you not listen when they tell you that “AI” is the epistemological equivalent of nuclear weapons?
I will reiterate: your argument is circular, and contains an open admission that the circle in question is destined to shrink and diminish for as long as you remain committed to it. To illustrate my point, transpose the argument to climate change: “emitting more atmospheric pollutants is not necessarily desirable, but people have grown up with cars and datacentres, so they’re probably gonna do it anyway, and the atmosphere’s never going to be cleaner than it is now; therefore we might as well just lean in and get a jump on the competition”.
Knowledge is not a resource like coal, to be dug up, stockpiled and burned to power economic growth; it’s a commons, a shared resource like the atmosphere. We know what happens when you enclose a commons; furthermore, if you pollute any part of it, you pollute it all.
Perhaps you can buy your way out of the worst effects of that pollution, at least in the short term; you may even be able to do so using the same technologies that caused the pollution! But in the long run, those who cannot afford that luxury will know who to blame… as will those, looping back to your first capitulation, whose mortgages were refused, whose paroles were rejected.
This is the thing with building walls and fences: the folk you exclude tend to get a bit resentful about it, and history gives us a pretty good steer on how that resolves over the longer haul.
My choice of nuclear weapons as an analogy is not hyperbole: you have fallen into the same zero-sum game-theoretical fallacies that informed the architects of the nightmare of the Cold War. Yours, like theirs, is a rational argument—but rationality absent the counterbalance of interpretive subjectivity is its own form of insanity, as those terrifying decades (should have) taught us.
Your theory is seemingly happy to countenance the potential destruction of everything you claim to hold dear, in order to secure the chance to “win”—but to win what, exactly? Hegemony of the last nation standing? An algorithmically-managed future with USian characteristics rather than Chinese ones? Dominion over a metaphorically irradiated wasteland where no new knowledge can grow, and we merely recombine whatever can be statistically derived from our inevitably decaying and dwindling stockpiles of ideologically-pure thought-leadership screeds?
Back in the day, you played an important part in building an internet which—like any other utopia—never came close to realising its idealistic dreams, but which nonetheless represented a yearning and a sincere belief in the possibility of a world better than the one that came before.
What happened to those dreams, Matt? How did they get replaced with a future in which the best we can imagine is a return to feudal squabbling between regional factions muttering darkly about the blasphemies of their enemies, while they shovel the dwindling knowledge of the past into the insatiable maws of machines they’ve somehow mistaken for gods?
I want no part of that utopia. I hope that perhaps, if you think it through again, you might be able to see past your well-intentioned but occasionally blind faith in technology as a material manifestation of Progress, and recognise that you don’t really want it either.