In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.
… the cities of the future won’t be “smart,” or well-engineered, cleverly designed, just, clean, fair, green, sustainable, safe, healthy, affordable, or resilient. They won’t have any particularly higher ethical values of liberty, equality, or fraternity, either. The future smart city will be the internet, the mobile cloud, and a lot of weird paste-on gadgetry, deployed by City Hall, mostly for the sake of making towns more attractive to capital.
Whenever that’s done right, it will increase the soft power of the more alert and ambitious towns and make the mayors look more electable. When it’s done wrong, it’ll much resemble the ragged downsides of the previous waves of urban innovation, such as railways, electrification, freeways, and oil pipelines. There will also be a host of boozy side effects and toxic blowback that even the wisest urban planner could never possibly expect.
*The “better future” thing is jam-tomorrow and jam-yesterday talk, so it tends to become the enemy of jam today. You’re better off reading history, and realizing that public aspirations that do seem great, and that even meet with tremendous innovative success, can change the tenor of society and easily become curses a generation later. Not because they were ever bad ideas or bad things to aspire to or do, but because that’s the nature of historical causality. Tomorrow composts today.
*Also, huge, apparently dispiriting disasters can burn off the ground for profound new growth, so the glum and morbid bad-future notion is just as false and silly as this kind of socially-engineered forced-optimism.
*This is not a counsel of despair. It’s atemporality, it’s like an agnosticism. People don’t really require any “better future” per se. Nobody ever receives such a thing. There’s no possibly utopian arrangement which is better for everybody, since society is composed of radically disparate elements with orthogonal needs. People can’t even permanently content their own personal selves. If a guy longs for an X-Prize and wins it, he doesn’t stay permanently happy. A guy with that personality type is gonna look around in near-desperation for something else to radically over-achieve.
One of the reliable bright lights in the gloom of my January is the annual Bruce Sterling and Jon Lebkowsky show, a.k.a. their State of the World conflab at The Well. All sorts of chewy futurism and near-field hindsight going on, as always, but sometimes it’s a minor aside that snags my mind, like this little zap at transhumanism:
“… you’re never going to put some magic cyberdevice inside your human body that has no human political and economic interests within its hardware and software. All human artifacts, below the skin or above them, are frozen social relationships. If you’re somehow burningly keen to consume a thing like that, you’d better, as William Burroughs liked to put it, have a look at the end of the fork.”
The great joy of my first semester of my PhD has been being formally introduced to the basics of sociological theory, and thus discovering that a lot of the woolly notions I’d arrived at independently had already been thought through far more thoroughly and comprehensively, by smart people who gave those ideas proper names. Through this lens it’s even more apparent than before that the echoing lacuna at the heart of Movement Transhumanism — the canonical ‘philosophy’ expounded by Dr Max Biggerbetterfastermore and friends, rather than the more personal morphological meddlings of the grinders and back-alley self-modders — is any notion of a system of social relations beyond the mechanisms of soi-disant anarchocapitalist “free market” economics.
If nothing else, it goes some way to explaining the overlap between MT and the Neoreactionaries: both seem to assume that inconvenient truths might be moved aside by dint of resetting the sociopolitical clock to a time before anyone had formulated them. Not just a river in Egypt, eh?