We’re reaching a point where it will be possible to fundamentally alter the nature of what it is to be human. But should we use these technologies to do the same things to members of the animal kingdom? In the aggregator today was a post at Betterhumans, where Simon Smith sums up a discussion he had with colleagues about the ethics of animal uplift. Now there’s a real sticky issue for futurists.
‘Uplift’ is a term used by transhumanists to describe the raising of a creature’s intelligence or abilities by technological means (e.g. pharmacological, biological gene-hacking, nanoware, whatever – the means are irrelevant, the results are the focus). The term has other potential uses; for example, a space-faring race that gave another race its technologies would be considered to be performing an uplift too, though of a different type. There is a commonality between the ideas though, namely the notion of inflicting change on others who are perceived to be in need of being ‘raised to your own level’.
I first encountered the term in David Brin’s ‘Uplift’ series of novels, where it is used in both contexts; the human race has been uplifted technologically, and now has spaceflight tech that allows it to partake in a wider galactic society, but the humans have also uplifted some of the animals of Earth, boosting the intelligence and sentience of dolphins and primates to a point where they are on a par with their human benefactors. All the implications of these changes are a large part of the books, especially at the character level, and there are definitely some factors to bear in mind, should we consider performing such procedures on the animals of the real, non-fictional world.
At this point I should say that, although transhumanism fascinates me, I wouldn’t describe myself as a transhumanist. That’s not to say I oppose their ideas – on the contrary, they are striving for things that have great meaning for me too. But I am more of a singularitarian, at least at the moment, and at least as far as I can tell. I’m not so interested in enhancing my human body as I am in transcending it, and getting the hell away from the whole messy necessities of physicality. But enough of my intrinsic body-hatred – the point I’m trying to make is that I am in no way opposed to the notion of augmentation of the human being, the creation of ‘man plus’. You want it? Have it. It’s your body, yours to risk and/or recreate as you see fit. I demand the same freedoms with my body; I’m just not yet sure I want to go the case-modder-performance-overclocking route. 😉
So, human ‘self-uplift’, no worries. But the uplift of other beings, especially beings over whom we already (in general) have an innate sense of superiority? There I’m not so sure. Smith’s post at Betterhumans explores a few arguments, and I shall expand on one here.
It’s the ‘when is a dog not a dog’ argument. In other words, you perform uplift on an animal to a point where it has a human-comparable intelligence – is it still the animal it was before? The obvious answer to that is in the negative, and I don’t think there would be much disagreement over that. The disagreement is over whether that would be a bad thing or not.
Smith’s companions say we have an obligation to raise others to our level, and compare it to the ethical duty they feel we would have to do the same to humans with learning disabilities. All of a sudden I’m getting major ‘Flowers for Algernon’ alarms going off in my head – which is odd for me, as I’m not prone to kneejerk responses to technologies. The mandatory uplift of disabled humans smacks slightly of eugenics – logical, yes, but somehow sinister. (A better response would surely be to use technologies to ensure that people weren’t born with those disabilities, and then give them the choice to make for themselves.) For this reason, the forced uplift of animals strikes me as being an act of astonishing hubris – well intentioned, but extraordinarily overconfident in the superiority of mankind over all other known forms of life.
For me at least, change is about choice. Obviously we can’t choose all the things that happen to us, but that makes it all the more important that the ones we can influence are left open for us to explore as we will. Uplift of ourselves definitely falls in this category. Make it available, let people pick their own path. If it’s as good as you say it will be, people will surely flock to the gates when they see the results.
But a dog, or an ape or a dolphin, doesn’t have the same context to work with. We can make at least some kind of informed choice about consciousness and our ability to change it, because we already partake in a form of consciousness that allows us to understand that there are different ways of being. As much as the mirror test that Smith’s companions mention may allow us to assume that an animal has some self-awareness, I think anyone would be hard-pressed to argue that they have the same understanding of different mind-states that we humans do. We can imagine being a dolphin (however unlikely and spurious those imaginings might be), we can imagine being a super-intelligent human; we can do comparisons like this because they are part of the way we have evolved socially. It’s software, not hardware.
Now, dolphins, apes and dogs all have a sense of society and community to a greater or lesser degree, and research is digging up more and more commonalities between the human condition and that of animals as time goes by. But I’m not sure that they have the software to even conceptualise being anything other than what they already are. Forcing uplift on them, if this were the case, could actually be an act of unimaginable cruelty. Given that the proponents of this idea are claiming it to be not only an ethical obligation but a way of demonstrating our care and respect towards these animals, I see a discontinuity.
Smith suggests a way of giving the animals the choice: placing the uplift vector into food, for example, and offering them the choice of regular or brain-boosting goodies at mealtimes. If they liked the experience of slight uplift, they would return à la Pavlov to the uplift bowl, and the choice would be in their hands/paws/flippers. This hinges on the idea of incremental uplift (which I find technologically unlikely – surely it’ll be an all-or-nothing procedure?), and on the notion that animals (or humans, for that matter) always choose what is best for themselves based on the limited evidence to hand. Many consciousness-expanding activities that humans engage in follow a similar model, but often end messily (the psychedelic movement is an illustration – not everyone can handle the results, no matter how enjoyable the initial journey may be). In other words, the animals might just carry on for the sake of the buzz they’re getting, without fully understanding what they are doing to themselves. Overall, this seems to me like an argument that supports animal uplift, but wishes to wash its hands of the responsibility for it.
I’m not ruling it out totally, of course. Maybe someday we’ll learn to communicate with animals in their own idiom. Then maybe we could explain what uplift was, and see if they were interested in the process for themselves, in their own context of thought. I think that, unless we make some significant changes to the human condition as it stands, they may not be too keen anyway!
It’s an interesting point for debate, but I’m pretty clear on where I stand. I’m all for human augmentation; it is our right. Some would even say it’s our destiny. But I don’t think we are in any position to make such choices for another race of lesser sentience than ourselves. Let the animals be, and concentrate on improving the human. As my father used to say, ‘when it comes to engineering you should stick to what you truly understand…and even then, only experiment on your own machines’.