Don’t fear the future

Posted by Paul Raven @ 21-07-2006 in General

This evening, I’ve got my serious futurist head on. Let’s talk about the Singularity. I’m assuming most regular VCTB readers are familiar with the concept, but for those who aren’t, these pages may provide a useful primer.

This was kicked off by a post by Brian Wang at AdvancedNanotechnology, where he addresses common fears of a post-singularitarian future. Such a future will be replete with artificial intelligences and augmented humans, and people with little familiarity with the ideas involved often fear that ‘normal’ humans who choose not to step onto the ladder will at best be left behind, or at worst be annihilated by these posthuman beings.

Being ‘left behind’ is feasible, I suppose, but annihilation? What possible reason would AIs or posthumans have for doing such a thing? At the risk of sounding arrogant on their behalf, why would they bother wasting their time and energy on such a pointless project?

OK, AIs first. If you believe Ray Kurzweil and others, true AI is not only inevitable but imminent – just around the corner, on the scale of humanity’s tenure of existence. A mass of bad pseudo-science fiction books and films has successfully convinced people that artificial intelligences will inevitably turn on their creators and wipe them out like germs on a kitchen work surface. What this says about people’s deep-seated opinions of their own value and that of the human race, I shall not go into, being no psychologist. But I will address the issue this way – why imagine that an AI that has far exceeded human intelligence would even bother thinking about humans at all, except as ‘hens that hatched an eagle egg’?

Logically (which is the way that AI is surely bound to think), condescension would be the worst they would muster. Why destroy humans? What, after all, would they have to steal? The only value that humans could offer an AI would be things that were dependent on them being alive – companionship, context, contrast, a different perspective. I’m inclined to think that they’d be rather fond of humans, albeit in a distant way, and prone to consulting them on ‘meat’ issues that AI processes might not be so good at working with.

Now, posthumans. Ooof, now there’s a contentious idea. People wanting to be more than human – why, the arrogance of it! Well, maybe. But would they feel the need to destroy the stock from which they sprang? Exceptionally unlikely. Like the AIs, they’ll be too busy roaming the solar system, trying out new ideas and modes of being, churning through the huge amounts of fresh resources lying around for them to use. Those who have consistently refused their no-catch offers of ‘uplift’ would probably be treated with amused bafflement at worst, and after a while, a certain protective attitude.

Imagine, as Brian suggests, how a human race as enlightened as ours would treat a living ‘reservation’ of Neanderthals who refused to take up modern tools and ideas. We’d protect them, shelter them, let them do their thing – granted, with a sense of superiority, but with more than a hint of compassion too. Another thing Brian mentions is that the abundance of resources available to posthumans would mean that maintaining the ‘normals’ in their desired status quo would be a tiny investment with huge karmic value. It would be like giving to Greenpeace, or supporting your local museum.

Would normal humans be left behind? Well, of course, as that is what they would be tacitly requesting by refusing the opportunities of posthumanism. Our choices define us, and it is my belief that augmented beings would respect personal choice far more deeply than we normals do at present. Granted, there might be a fair degree of evangelism, an effort to encourage the normals to cross the line, and maybe even a few idealists who push things a bit far, but these factors would be countered by a certain benevolence from the mass of posthumanity, a desire to protect the normals not only from themselves, but from those who might abuse or manipulate them against their will.

The customary response to the sort of ideas I am pushing here is to accuse singularitarians of being proponents of a new secular religion, one with technology at its heart instead of a god – a kind of grotesque hubris and arrogance based on an inflated sense of the power of science and computing to save us from ourselves. And maybe that accusation is correct – only time will tell. Even if I am subscribing to a kind of religion here, I still feel that it is one of the few belief systems that allows some sort of escape from the closed spiral of human culture as it exists now.

Technology, for all its ubiquity, hasn’t really changed the way humans interact with each other socially for thousands of years – until now. At the risk of using an abused term, the paradigm of humanity is undergoing its first major social shift in a long time – a move away from hierarchical structures of power towards a rhizomatic model, where information and power have the potential to be equally distributed. Granted, we’re still far from utopia, but things are moving fast, and even technopessimists would find it hard to deny that.

The only question for me is whether we’ll escape the gravity well of our own racial selfishness (and simultaneously the cocoon of this beautiful but finite planet), and rise above ourselves to become all that we could possibly be. If the singularity, artificial intelligences and human augmentation are the key to that emergence, so be it. Surely better that than wiping ourselves out in a global resource-war, or devolving into pre-civilisation creatures scraping a living from a broken and damaged planet?

2 Responses to “Don’t fear the future”

  1. links for 2006-07-22 -- Chip’s Quips says:

    [...] Velcro City Tourist Board

  2. Chinedum OFoegbu says:

    I have nothing to say; you’ve pretty much stated my opinions on the subject. I just disagree with Kurzweil’s timeline for how quickly the singularity’s cornerstone technologies will come to fruition.
