Don’t tell my mother I said this*, but as time goes by I’ve really come to appreciate the value of paying attention to the experience of old hands.
Exhibit A comes with what I will gladly concede is a hefty slice of self-aggrandisement. Rodney Brooks has been working with robots and autonomous vehicles for almost as long as I’ve been alive; as he notes at the top of this post, he was Moravec’s experimental assistant during Moravec’s doctoral research. He’s seen the hype cycles fall and rise and fall again.
So it’s worth taking him seriously when he argues that the current hype cycle has actually been pretty damaging to the cause of autonomous vehicles in the long run. Furthermore—and this is the self-aggrandising bit—it’s worth taking him seriously on what he says about the way in which technological paradigm shifts necessarily involve a substantial change at the infrastructural layer. And that’s “infrastructure” not in the software-firm sense of “do I have enough virtual servers ready on AWS”, but rather in the literally and figuratively concrete sense of “infrastructure”:
They reinforced the idea that we would have one-for-one replacement of human drivers with driverless cars. In all other cases where mankind has changed how we transport people and goods we have had massive changes in infrastructure. These range from agrarian to empire and Roman roads (still the outline of most major road routes across Europe), wharves in ports, inland canals, railroad tracks, paved roadways, freeways, airports, and world-wide air traffic control.
The tech enthusiasts, used to large scale deployment of software rather than physical objects, assumed that the world would stay the same, and instead we would just have driverless vehicles amongst human driven vehicles. This assumption was the source of my two critiques in 2017.
I have also noted that autonomous trains are still not very widely deployed, and where they are they have different infrastructure than human driven trains, including completely separate tracks. I have ridden them in many airports and out in and under cities in Toulouse and Tokyo, but they are not widespread. In the US the only significant self-driving trains outside of airports are to the west of Honolulu in Oahu, still not quite making it into the downtown area.
The dis-service of self driving predictions is that for the last dozen years we stopped talking about how to instrument our roads to make autonomous vehicles safe. I think we could have had self driving cars much more quickly if we had made offboard changes to our infrastructure, rather than imagining that everything would be done onboard.
(This is a point that I came to realise during the course of my own doctoral work. It’s also one of the points that a former consultant for a well-known international design’n’architecture firm, charged with examining said PhD, assumed to be negated ipso facto, because drones! Upton Sinclair had the right of it: “It is difficult to get a man to understand something when his salary depends on his not understanding it.” The only train these hucksters care about is the one carrying the gravy.)
Exhibit B is from Filip Piekniewski, who’s not quite as old a hand as Brooks, but is an actual machine learning researcher, rather than a shark in a suit with a bundle of share options in a firm with the letters “AI” situated somewhere in its whimsical name. Piekniewski, who I imagine is very concerned about the imminent prospect of an “AI winter” that will be something more like an ice age, rolls out the metaphor of religion to address blind faith in “science” more generally, and in the ongoing suitcase-word apotheosis in particular. Pulling no punches:
AI scene is like a mixture of Vatican clergy and the Wizard of Oz – a group of devoted ecclesiastics and plethora of smoke and mirror machines to convince the populus to “prepare for the inevitable” or “buy my guide to chatGPT or you will be obsolete” kind of bullshit. Of course some of these “smoke-and-mirror” machines could be useful things, but the people who actually use them for something to benefit society rarely brag about it.
All of this religious activity is boosted by social media in a giant cacophony of irrational claims and an orgy of hype. Pertaining to AI this cacophony is full of cherry picked examples, survival bias, positivity bias, evaluating on training data, clever-Hans style prompting and generally a litany of things straight from the book on “how to lie with statistics”. Watching this strikes me as really no different from a religious turmoil, people screaming in panic that the lord is upon us and calling for any sacrifice to be put on the altar. Calling the skeptics heretics and ridiculing any skepticism as blasphemy.
The outro is worth quoting too—partly because it’s blazing with the genuine frustration of someone who wants to build good things but whose options for doing so are being stymied by misaligned priorities toward specious bullshit, but partly because it rather serendipitously aligns with Brooks’s point above:
… not all technology benefits society equally. There is an infinite number of things we could build, but most of them would be useless. Take self driving car for example – a seemingly fabulous technology, that under closer inspection fails to solve any of the problems it was advertised to solve:
- does nothing to alleviate city congestion
- does nothing to make transportation more affordable
- does nothing to make transportation faster and more efficient
In reality if we spent all the money we largely wasted developing that sort of tech, we could have built thousands of kilometers of high speed railway, subway lines and light rail systems to connect choked US suburbs. Yes perhaps it would not be as futuristic and amenable to techno-optimistic bullshit talk, but would our lives be better today being able to say take a high speed train from LA to Vegas, than having ever-promised and non-existent autonomous vehicle which at best may just be a more clumsy, less reliable and slow version of Uber?
One day we may wake up in this country and notice to our great surprise we’ve made a whole slew of stupid decisions in the name of irrational religions cloaked in modern looking technological fabric. That day will not be the highly anticipated technological singularity but might in fact turn into a day of judgement of our technocratic elites. And that day may indeed be coming soon.
I’m not so sure about that whole “day of judgement” thing, though I wouldn’t want to rule it out. But what’s interesting here—the weak signal, if you want to put it that way—is that the actual engineers and scientists who are supposed to be delivering the promised revolution are at the point of assuming that the dumb promises and priorities of the money-men are about to foreclose on the more realistic and genuinely useful possibilities of their work.
I had a conversation last week with someone who asked me whether I worried that I was doing myself a professional disservice by being so publicly critical and skeptical of “AI” and so forth. I replied that I was very sure I was leaving money on the table by doing so, but that in terms of professionalism as I conceptualise it, the unprofessional thing would be to follow the hype against both my instincts and my intellect, like a stray dog running after the butcher’s van.
A little longer ago—in the wake of this post, in fact—someone else came at me with the argument that it’s easy to sit on the sidelines and snipe, and that I should have more respect for people with “skin in the game”. I didn’t respond to that one, because I long ago stopped assuming I owed random hate-mail a rational response. But for the record, here’s my position: the skin I have in the game is my professional reputation, such as it is. I may not be investing in (or shorting) particular firms as an expression of my confidence, but that’s because I have no confidence in the market as a mechanism of the change I want to see in the world. If you want to measure my commitment to what I believe, then measure what I say against the far louder massed cacophony of self-described “futurists” who two years ago were peddling crypto and “the metaverse”, who are currently peddling “AI”, and who—I’m fairly certain—will be peddling whatever fresh suitcase of bullshit has rolled out of the Valley in two years’ time.
A common jab at tech skeptics is the old saw about “when the facts change, I change my mind”. But here’s the thing: the facts haven’t changed, so my mind hasn’t changed either. Much is made of how “postmodernists” are to blame for “the death of truth”, but as I’ve remarked before, blaming postmodernism for the death of truth is like blaming your oncologist for your metastasising cancer.
One final quote, then, to close with:
To be a strategist is to tell people what they don’t want to hear. It’s about making choices people don’t want to make. It’s about having conversations they don’t want to have. It’s about advancing unpopular ideas and going against the grain and making people uncomfortable, because, let’s face it, we humans prefer comfortable consensus to strategic coherence.

— M. L. Cavanaugh
If you want to think about strategy, rather than have smoke blown up your arse or your skiffy fantasies indulged, drop me a line.
[ * — On matters in which she is qualified and demonstrably competent, which is to say the care and rehabilitation of human feet, I trust my mother’s old-hand opinions completely. The other stuff, well, that’s rather more contentious. ]