As mentioned previously, yours truly is off to a ‘Cafe Scientifique’ about conscious machines this evening. So what better time to discuss robot ethics? This involves not just the ethics of the boffins building the things, but the ethics wired into the robots themselves; the programmed-in codes of conduct that they will be constrained by. Luckily for us, the European Robotics Research Network (Euron) are already on the case, drawing up proposals for these constraints before it becomes too late. (Link pinched from Engadget)
And in their opinion, too late could be very soon. That’s not to say they predict a Terminator-style hostile take-over of the world by disgruntled servitor machines. But they do seem to think that there will be other concerns:
Other dilemmas may arrive sooner than we think, says Christensen [of Euron]. “People are going to be having sex with robots within five years,” he said.
Blimey. That’ll be a lot of geeks and bloggers with lower stress levels, then…
Joking aside, it is looking increasingly likely that we will have robots that can do a lot more than hoover the floor, and remarkably soon. And once they start having even a very basic sentience, and/or a level of autonomy of operations, and/or the ability to learn and develop on their own, there’s a whole can of worms waiting to burst open on a planet that hasn’t even decided whether or not animals should be given certain inalienable rights.
Science fiction author Isaac Asimov wrote his famous ‘3 Laws of Robotics’ way back in 1942, and they were later collected in the book ‘I, Robot’ (1950), which was recently butchered by Hollywood into an almost unrecognisable form. The laws he proposed were:
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by a human being, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
It has been said (by Adam Roberts in his ‘History of Science Fiction’) that these laws are more useful for generating good story plots than actually delimiting a robot’s behaviour, but it’s fair to say that they might make a good starting point for a more comprehensive codex.
But even if certain rules were programmed into a robot, if it has the ability to learn then it may have the potential to think its way around the restrictions. Of course, some robots might well develop their own forms of altruism and benevolence, but it’s just as likely that others might develop self-interest or nihilism. If we create artificial intelligences in our own image (which is, in my opinion, the only way we’ll be able to create them at all, at least at first), there is surely a possibility that our creations will come to possess our own capacities for both good and evil (however those may be defined at the time this occurs).
For example, imagine you have a robot that can surf the web, collate data and create content of its own. Will the intellectual property rights to that content belong to the robot, its owner, or its manufacturer? Would it be possible to sue a robot blogger for libel, or plagiarism?
Or maybe there’ll be robocops for riot control, able to make autonomous decisions on how to handle a situation. Could they be given the right to use force, and in what circumstances? Again, who is responsible for any legal repercussions?
And to raise the (currently) risible notion of robot sex again: would it be possible for a robot to press charges of rape or sexual assault against a real person, or even another robot? Or for a person to press the same charges against a robot? Until their status as citizens is established, there are huge avenues of potential abuse to be explored. It would be a terrible shame to see humanity revive the practice of slavery, using the artificiality of its subjects as justification – an argument uncomfortably similar to those once used to justify the slavery of humans.
You can’t rape an autonomous vacuum cleaner (although you could conceivably have sex with it, and knowing humans, people probably already have – the tales of people with vacuum-related injuries turning up in casualty departments are too common to be completely unfounded). But something with a mind of its own, however limited? That’s another question entirely. In some ways, it might be considered even worse to sexually abuse a robot with limited intelligence, if for instance it was considered analogous to abusing a human with learning disabilities.
As much as it’s interesting (and a little amusing) to know that people are working on these philosophical questions already, I am inclined to believe that, as with most human endeavours, we will end up making it up as we go along, with more than a few nasty screw-ups and mistakes on the way. It’s how we humans learn, after all. But who knows? Maybe we’ll see robotic rights rallies in my lifetime; or slums populated by robotic workers, kept away from the rest of the population by a new form of apartheid; maybe we’ll even see robot politicians. Oh, hang on a minute – we may be at that last point already!