In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’
Margaret Boden at Aeon, arguing that the inability of machines to care precludes the “robot takeover” scenario that’s so popular a hook for thinkpieces at the moment.
I tend to agree with much of what she says in this piece, but for me at least the worry isn't artificial intelligence taking over, but the designers of artificial intelligence taking over — because in the absence of native care in algorithmic systems, we get the unexamined biases, priorities and ideological assumptions of their designers programmed in as a substitute. If algorithmic systems were simply discrete units, this might not be such a threat… but the penetration of the algorithm into the infrastructural layers of the sociotechnical fabric is already well advanced, and path dependency means that getting it back out again will be a struggle. The clusterfuck that is the Universal Credit benefits system in the UK is a great example of this sort of Cold Equations thinking in action: there's not even that much actual automation embedded in it yet, but the principle and ideals of automation underpin it almost completely, with the result that — while it may perhaps have been genuinely well-intended by its architects, in their ignorance of the actual circumstances and experience of those they believed they were aiming to help — it's horrifically dehumanising, as positivist systems almost always turn out to be when deployed "at scale".
Question is, do we care enough about caring to reverse our direction of travel? Or is it perhaps the case that, the further up Maslow’s pyramid we find ourselves, the harder we find it to empathise with those on the lower tiers? There’s no reason that dignity should be a zero-sum game, but the systems of capitalism have done a pretty thorough job of making it look like one.
The science fictional project is mainly a historical project, and to the extent there is any such thing as a futurological project, that would also be a historical project, so this isn’t a good distinction to try to make. I don’t think there are any valid futurisms or futurologies. I think most people who describe themselves as futurists or futurologists are claiming too much, almost to the point of being scam artists, especially if they charge people fees for them to come in and do consultations, as sometimes happens in the business world, or as a form of “edutainment”. Because the future can’t be predicted […] it’s best to leave all this at the level of science fiction, which for me is mainly a literary genre.
For me, science fiction has a kind of double action as a genre, and the image I use to convey this thought is the 3-D glasses you wear at 3-D movies to create the false impression of three dimensionality. Through one lens, sf tries to describe one possible future in great detail; not a prediction, but a modeling exercise or scenario. Not "this Will happen," but "this Could happen." Then the other lens is simply a metaphorical or symbolic portrayal of what's going on right now. "It is as if we are all zombies being predated on by vampires"—this is my current candidate for the best metaphor for our times, even though people are too scared to write that one down, it seems. Anyway, more traditional examples are "it is as if the working class are robots who may revolt," or "it is as if cities are spaceships detached from Earth," both older sf metaphors. Cyborgs are great images of us now, as Donna Haraway showed long ago. On it goes that way through that lens, symbolist prose poems of great power. Then, when the images coming through the two lenses coalesce to a single vision in the mind's eye, what pops into visibility is History itself, often deep time, casting into the future as well as back to the past. That's how science fiction works and what it does.
Science fiction has been a marvelous escape from the dead end much "literary fiction" is in now, stirring the dead ashes of the great modernist works, and getting caught up in the narcissism of late capitalist bourgeois neurosis. SF is outsider art, looked down on by official literary culture, and that's such a great place to be. It's outside the MFA system, outside postmodernism, it's even replacing the postmodern with the Anthropocene, historicizing and politicizing everything, able to take on science and use science's exploding new vocabulary — well, there are many reasons why science fiction is the great realism of our time, and some of them are because of the traps it has avoided, either by its own efforts or by others misunderstanding and rejecting it.
Kim Stanley Robinson interviewed at Big Echo.
Rejecting the traditional Marxist idea that the working classes were the seedbed of change, [Deleuze and Guattari] wanted a broader umbrella under which to unite all marginalised groups. They claimed that those oppressed by patriarchy (women), racism (people of colour) and heteronormativity (what we’d now call the LGBT community) were all suffering thanks to the same machinery of despotic and imperial capitalism. It’s only by bringing together these ‘minoritarians’ that an anti-capitalist revolution could succeed. Because the philosophical image of the individual is based on the apparently autonomous figure of the white male subject, it is through a process of ‘becoming-woman’, and of ‘becoming-minoritarian’, that the spectre of individuality can finally be banished.
Instead of treating different fields of enquiry as cut off from one another, Deleuze and Guattari tried to show where one discipline seeps into another, challenging the centrality of any one of them. Ultimately, they aimed to open thought onto its outside, pushing against the tendency for theoretical work to close in on itself.
Short-ish essay at Aeon. For all the word-salad spilled in the name of “interdisciplinarity”, the academy — or rather, more fairly, the bodies which govern the academy through the distribution of funding — are still pretty determined to prevent that seepage between silos; D&G’s work makes it easy to understand why that might be the case.
(Well, OK, not easy, because reading D&G isn’t easy… but nothing worth doing ever is.)
Firms are best understood as political entities, rather than merely economic organizations. Of course they have economic dimensions. But saying that they are merely economic organizations would be as reductive as saying that states are merely economic organizations. A firm certainly contains the legal structures of capital investment – this is what the legal structures of the corporate charter are for. But a firm is much more than a corporation in the legal sense: it requires the contributions of those who invest their labour in the joint endeavour (the employees, but sometimes also independent contractors or suppliers or users). That whole institutional reality has been missed by economic and legal theories. My suggestion is that it is time to enter into a reconstructive and institutionalist perspective that makes it possible to recognize the firm beyond the corporation: as a political entity where labour investors, crucial actors in the common endeavour of the firm, have not yet been granted the same political rights (i.e. the rights to participate in governing the joint endeavour) as those granted to capital investors. In other words, it is a political entity owned by no one (shareholders only own their shares, as legal scholar Robé has so aptly kept reminding us) in need of being democratized.
Isabelle Ferreras interviewed at Justice Everywhere.
So AI and capitalism are merely two offshoots of something more basic, let's call it systematized instrumental rationality, and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that's certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can't possibly understand (which is what the term "AI singularity" implies), we view them as a next-level extension of processes that are already underway.
This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That’s where the real risks lie, and so any hope of containing the risks will require grappling with real human institutions.
Mike Travers. Reading this rather wonderfully reframes Elon the Martian’s latest calls for the regulation of artificial intelligence… you’re so right, Elon, but not in quite the way you think you’re right.
Of course, Musk also says the first step in regulating AI is learning as much about it as possible… which seems pretty convenient, given how AI is pretty much the only thing anyone’s spending R&D money on right now. Almost like that thing where you tell someone what they want to hear in a way that convinces them to let you carry on exactly as you are, innit?
Mark my words: the obfuscatory conflation of “artificial intelligence” and algorithmic data manipulation at scale is not accidental. It is in fact very deliberate, and that Musk story shows us its utility: we think we’re letting the experts help us avoid the Terminator future, when in fact we’re green-lighting the further marketisation of absolutely everything.