If sci-fi convincingly simulates another world, it gives the reader ways of imagining our world otherwise. Science fiction is more, not less, “realist” than literary fiction. It does not produce the fiction of a severed part of a world, as if the rest was predictable from the part. It produces a fiction of a whole different world as real.
VR/AR is ad-tech. Everything built in studios (except for experimental projects from independent artists) is advertising something. That empathy stuff? That’s advertising for nonprofits. But mostly VR is advertising itself. While MTV was advertising musicians, the scale and creative freedom meant that it launched careers for people like Michel Gondry, Antoine Fuqua, David Fincher, Spike Jonze, Jonathan Dayton and Valerie Faris, etc. A band from a town like Louisville or Tampa could get in touch with a local filmmaker and collaborate on a project and hope that 120 Minutes picks it up. There were entry points like that. And the audience was eager to see something experimental. But a VR audience is primed to have something like a rollercoaster experience, rather than an encounter with the unexpected. The same slimy shapeshifter entrepreneurs that could just as well build martech or chatbots went and colonized the VR space because they have a built-in excuse that it took film “fifty years before Orson Welles.” Imagine that. A blank check and a deadline in fifty years.
The always-insightful Joanne McNeil. Everything the Valley does is marketing; that they’re still flogging away at a horse two decades dead tells you everything you need to know about what the word “innovation” really means.
So AI and capitalism are merely two offshoots of something more basic, let’s call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view it as a next-level extension of processes that are already underway.
This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.
Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems pretty convenient, given how AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?
Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when in fact we’re green-lighting the further marketisation of absolutely everything.
I’ve spent more time than I’d like to admit hanging around the online communities of the kind of people we are worried about reaching here, and I am here to tell you: They are using their critical thinking skills.
They are fully literate in concepts like bias and in the importance of interrogating sources. They believe very much in the power of persuasion and the dangers in propaganda and a great many of them believe that we are the ones who have been behaving uncritically and who have been duped. They think that we are the unbelieving victims of fraud.
Which is not to set up some kind of false equivalency between sides. But I do want us to consider the possibility that we don’t need to talk across that barrier, and that it might not be possible to talk across it. To consider that if it’s true that vast swaths of the voting populace are unbelieving victims of fraud, there’s not much we can do for them. That we may need instead to work to invigorate our allies, discourage our enemies, and save the persuasion for people right on the edge.
But, again, I’m saying all of this to you as someone who has not figured this out.