I think of the Gartner Hype Cycle as a Hero’s Journey for technologies. And just like the hero’s journey, the Hype Cycle is a compelling narrative structure. When we consider many of the technologies in use today, we tend to recall that they were overhyped when they first arrived, but eventually found their way to mainstream usage. But … is that really how technologies emerge and gain adoption? After analyzing every Gartner Hype Cycle for Emerging Technology from 2000 to 2016 – all seventeen years of the post-dot-com era – I’ve come to believe that the median technology doesn’t obey the Hype Cycle. We only think it does because, when we recollect how technologies emerge, we’re subject to cognitive biases that distort our memory of the past…
Not at all incidentally, the Hero’s Journey is ubiquitous in the narratives of innovation studies and corporate foresight, and dominates the discourse in sociotechnical systems research. To quote briefly from my (very nearly finished) thesis, on the matter of the innovation model known as the Multi-Level Perspective:
… the MLP is, in effect, a generic story-form that relies on pre-established permutations of certain archetypal characters, settings and events. Much as with an airport thriller novel or superhero movie, you always end up with the same basic arc of plot: in the case of the MLP, that generic story is known as “transition”, and it follows the journey of a hopeful young innovation on its adventures through the sociotechnical landscape, struggling against the incumbent regime until it finally achieves the “market dominance” which was its destiny and birthright.
In other words, every new gadget is Frodo, setting out to disrupt the oppressive sociotechnical hegemon of Sauron. The corollary is that every “change agent” and “innovator” sees themselves as bloody Gandalf.
If sci-fi convincingly simulates another world, it gives the reader ways of imagining our world otherwise. Science fiction is more, not less, “realist” than literary fiction. It does not produce the fiction of a severed part of a world, as if the rest was predictable from the part. It produces a fiction of a whole different world as real.
VR/AR is ad-tech. Everything built in studios (except for experimental projects from independent artists) is advertising something. That empathy stuff? That’s advertising for nonprofits. But mostly VR is advertising itself. While MTV was advertising musicians, the scale and creative freedom meant that it launched careers for people like Michel Gondry, Antoine Fuqua, David Fincher, Spike Jonze, Jonathan Dayton and Valerie Faris, etc. A band from a town like Louisville or Tampa could get in touch with a local filmmaker and collaborate on a project and hope that 120 Minutes picks it up. There were entry points like that. And the audience was eager to see something experimental. But a VR audience is primed to have something like a rollercoaster experience, rather than an encounter with the unexpected. The same slimy shapeshifter entrepreneurs that could just as well build martech or chatbots went and colonized the VR space because they have a built-in excuse that it took film “fifty years before Orson Welles.” Imagine that. A blank check and a deadline in fifty years.
The always-insightful Joanne McNeil. Everything the Valley does is marketing; that they’re still flogging away at a horse two decades dead tells you everything you need to know about what the word “innovation” really means.
So AI and capitalism are merely two offshoots of something more basic, let’s call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view it as a next-level extension of processes that are already underway.
This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.
Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems rather convenient, given that AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?
Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when in fact we’re green-lighting the further marketisation of absolutely everything.
Science fiction, science fact, and all that's in between …