Conscious Machines

A few months ago, I attended a ‘Café Scientifique’ event here in Portsmouth, organised and funded jointly by the local University and the City Council. The guest speaker was Professor Igor Aleksander of University College London. His field of study is Neural Systems Engineering – as the title of his talk (“Who’s afraid of conscious machines?”) suggested, he is attempting to create artificial consciousness; my report from that evening provides more details.

He has been interviewed on television and in a wide range of publications; he is also the author of a recent book entitled ‘The World in My Mind, My Mind in the World: Key Mechanisms of Consciousness in People, Animals and Machines’.

Prof. Aleksander was kind enough to agree to a follow-up interview on this most intriguing of subjects – what follows is the result of a few weeks of email exchanges.

***

AA: When I heard your presentation, you made a great effort to point out that what you do is a very different thing from ‘artificial intelligence’ research. I think the average man would consider the two fields to be the same thing. How would you define the difference between your field of study and the field of AI, and is there any common ground at all?

Prof. Aleksander: “AI is over 50 years old. It is aimed at producing systems that behave in an intelligent way. This is done by means of writing rules into the computer that achieve this behaviour. There have been successes – chess-playing machines, for example. But this tells us nothing about how the brain achieves its intelligence through evolution and learning rather than being programmed.

I try to model what is known of the brain on a computer, and check neurological hypotheses about what it is in the brain that makes us conscious. Rather than being a programmer who programs intelligence into a system, I try to build systems which support the emergence of something which, when it emerges in living organisms, we call consciousness. This is more basic than Artificial Intelligence. I believe that you have to be conscious to be intelligent.”

Are chess-playing computers really intelligent? It has been said that they are really just a product of brute-force calculation, that there is no real ‘intelligence’ involved, just algorithmic analysis at enormous speed.

“Are chess-playing machines over-rated? I don’t know, but I can’t beat Kasparov and a machine has done it. So it fulfils the description of AI as machine behaviour which, if done by humans, would be said to require intelligence. Personally, I don’t think that chess is a particularly high indicator of intelligence…chess requires a planning skill which most intelligent people need not have…we could argue for ages.

That’s why I don’t like the word ‘intelligence’ in AI. If we call a machine that recognises people ‘intelligent’, is my baby grandson intelligent because he recognises me? It’s just taken for granted. But if, someday, he becomes a great philosopher or a successor to Bill Gates, someone may say that he is intelligent. It seems that we use double standards for the word: one when applied to machines and another when applied to living things.”

What models of the mind are particularly influential on your work? Does Bohm and Pribram’s holographic model hold any water with you, for example?

“Bohm and Pribram, as well as Longuet-Higgins, have enjoyed presenting a holographic model where phenomena are hidden in a non-phenomenal medium like a hologram. That is, you look at it and it contains 3D information which only appears when it’s lit in the right way. It’s interesting, but it does not help us to relate our own inner sensations to what we know about the neurology of the brain.

Also, I am certainly not impressed by people who say “The mind? No problem. It’s the software of the brain”. That is, they have a brain-as-a-computer view of things. While the two could be said to be information-driven devices, they operate on completely different principles. The brain is a system highly specialised by evolution; specialised to be a sensitive perceiving, imagining, attending, planning and emoting object. It is intelligent because it is so highly geared towards learning to interact with a world, the complexity of which has to be managed and understood.

The computer is an all-purpose system that has to be programmed to behave, whether in stupid or intelligent ways. In a computer system, the only one who may have understood something is the programmer, not the computer. I am more attracted to models that take the brain for what it is: a network of networks of brain cells. It fascinates me to discover how this kind of system can actually represent the world, for the best purposes of the owner of that brain.”

What applications do you see your early successes having in the ‘real world’, if you are in fact seeking commercial applications at all?

“It is an error to look for ‘applications’ of the science of making machine models of conscious organisms too early. There may be some applications in the guidance of vehicles, explorers on distant planets and so on.”

So your work is more like a proof-of-concept mission?

“At the moment the big glittering challenge is to produce a clear model that makes me, and possibly those who listen to me, feel that some of the mysteries of ‘being conscious’ are beginning to be clarified. But if you had a robot driving your car, would you rather that it were conscious or unconscious? So the applications will come, but let’s establish the science first.”

I’m not sure that most people would want a conscious robot driver, to be honest, because we tend to associate consciousness with the possession of a personality. If you made a conscious car-driving robot, would it develop the sort of emotional responses that humans do? For instance, might some robots find themselves scared of great heights, and hence driving over-carefully on hairpin switchbacks? Might some of them get obsessive about driving slowly for safety…or conversely driving too fast for kicks?

“There’s one important principle involved in the computational modelling of consciousness: being conscious does not mean being a living human, or even a non-human animal. For an organism to be conscious is for it to be able to build representations of itself in a world that it perceives as being ‘out there’, with itself at the centre of it. It is to be able to represent the past as a history of experience, to attend to those things that are important to it, to plan and to evaluate plans – these are the five axioms.

So, given a human driver, we find that its attention could be diverted by all sorts of things, not usually too damaging to its survival mission. But a conscious robot car driver could have a consciousness that is much better focussed on driving than a human’s. So the emotional responses of the robot may be far less damaging than, say, road rage! Yes, it should be scared (to the right degree) of great heights, so as not to drive over cliffs. And if it drives too fast for kicks, it may not do so for very long, in an evolutionary sense!”
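For readers who like to see an idea made concrete, the five axioms can be read as a rough software architecture. The sketch below is purely my own illustration in Python – every class, method and parameter name is invented for this article, and it is in no way taken from Prof. Aleksander’s actual models:

```python
# A toy skeleton mapping the five axioms onto code. Entirely illustrative:
# the names and structure are invented here, not drawn from any real system.

from dataclasses import dataclass, field


@dataclass
class FiveAxiomAgent:
    # Axiom 2: the past kept as a history of experience.
    history: list = field(default_factory=list)

    def perceive(self, world_state: dict) -> dict:
        # Axiom 1: a representation of a world 'out there',
        # with the agent placed at the centre of it.
        representation = {"self": "centre", "world": world_state}
        self.history.append(representation)
        return representation

    def attend(self, representation: dict, salience) -> dict:
        # Axiom 3: attend only to the things that matter to this agent.
        return {k: v for k, v in representation["world"].items() if salience(k, v)}

    def plan(self, focus: dict) -> list:
        # Axiom 4: generate candidate plans from what was attended to.
        return [("act_on", item) for item in focus]

    def evaluate(self, plans: list):
        # Axiom 5: evaluate plans (trivially here: take the first candidate).
        return plans[0] if plans else None


# A mouse and a person in the same room would supply different salience
# functions; this agent cares only about strongly salient features.
agent = FiveAxiomAgent()
rep = agent.perceive({"cliff_edge": 0.9, "radio_chatter": 0.1})
focus = agent.attend(rep, salience=lambda k, v: v > 0.5)
print(agent.evaluate(agent.plan(focus)))  # ('act_on', 'cliff_edge')
```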

So, will it be necessary (or even possible) to place constraints on conscious machines and the way their consciousness expresses itself?

“The point I am making is that there are many methods and intensities of being conscious. A mouse and I in the same environment would be conscious of different things – the things that are important to us. The intensity of my consciousness varies during the day. So if I make a machine that adheres to the five axioms, it would do so with respect to its own needs. One such ‘need’ might be a programmed-in rule like ‘thou shalt not be reckless’ (a bit like ‘thou shalt not kill’ for humans).”
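A ‘programmed-in rule’ of that kind is easy to picture as a hard veto applied before plan evaluation. Continuing the hypothetical sketch above (again, my own illustration, not a description of any real system):

```python
# Hypothetical built-in prohibition, in the spirit of 'thou shalt not be
# reckless': plans on the forbidden list are vetoed before the agent
# evaluates them, whatever its learned preferences might say.

RECKLESS_ACTIONS = {"drive_fast_for_kicks", "overtake_on_hairpin"}


def apply_built_in_rules(plans: list) -> list:
    # Drop any candidate plan whose target action is forbidden outright.
    return [p for p in plans if p[1] not in RECKLESS_ACTIONS]
```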

Will we have to start developing legal rights for non-human entities?

“We should not stray onto the story-line of Čapek’s ‘Rossum’s Universal Robots’ (robots that are envious of the rights of humans). Robots, by definition, are things made by people. Procreation is not necessary for consciousness. So we will have to address ‘rights’ issues that begin to look a little like Asimov’s laws of robotics. In the real world, this would boil down to engineering practice.

For example, the design of aeroplanes is constrained by engineering practice not to be dangerous to those who use them, to protect the investment that has been placed in them, and not to damage things in general. It is odd to refer to this as the ‘rights’ of an aeroplane! Rules about robots must be more like those for an aeroplane than those for humans or other living creatures. In some sense, ‘rights’ go with biological life as protection.”

Where do you envisage science and technology heading in the next few decades? Are there any projects or ideas on the horizon you are looking forward to, or indeed, any that you dread to see?

“Science and technology in general? I know what should happen, but I also know it will not. In my ideal world, things would be called science only if they seek to discover things we do not know already: ‘how do neurons make minds’, for example, or ‘what causes a variety of diseases, particularly mental ones?’ Or, ‘how unique is planet Earth in its ability to support what we call life?’

But I worry about science and technology being more and more ‘project-led’, where the choice of project is driven more by the potential benefit to a company, or the not very far-sighted strategies of funding committees. I dread seeing publicity-seeking science (e.g. implanted chips that open doors), but feel that much can be done in really helping those who have suffered damage (e.g. implanted chips that restore movement).

I particularly worry about increasing gaps between the sciences. For example, neurophysiologists do not understand computational analysis, and computer scientists oversimplify the findings of neurology. I would like to see interdisciplinary education in the sciences and engineering as the basis of learning about science. Divisions into specialisms should be built on a general understanding of science.”

Comments and pingbacks


  1. Dave

    Thanks for an excellent interview with a truly interesting man. I’ve recently read a book called “Mind Wide Open” (I’ve forgotten the author’s name – sorry), which was all about how our minds work, and it’s really gotten me interested in the whole intelligence (artificial or otherwise) debate.

    It gives you a real sense of how difficult it will be to really create proper artificial intelligence.
