I can say with certainty that there were at least forty or so people in the UK last night who weren’t obsessing over the fate of twenty-two overpaid men in shorts and one air-filled leather sack. We had far more interesting things to think about. So, my second visit to a Portsmouth Cafe Scientifique, and I enjoyed it even more than the first one! (No offence, Dr. Nichol, should you be reading this; you were interesting too, but this was right up my alley, so to speak.)
The guest speaker was Professor Igor Aleksander of Imperial College, London; his specialist field is Neural Systems Engineering, and he is the author of a recent book entitled ‘The World in My Mind, My Mind in the World: Key Mechanisms of Consciousness in People, Animals and Machines’. As the title of his talk suggests, he is involved in attempts to create artificial consciousness. I will attempt to summarise his introduction to the topic here.
(Apologies for lack of pictures; I took some with my camera, but examination of the results has demonstrated it was plainly not up to the task involved…)
He firstly mentioned the recurrence of ‘sinister conscious machines’ in science fiction stories, to illustrate the notion that we are, generally, quite afraid of the concept. (I was tempted to run off a list of science fiction authors who are anything but afraid of conscious machines, but that would have spoiled the mood a little, wot? 😉 ) But this fear is irrational; as he says, we all know plenty of conscious machines already. We ourselves, and all human beings, are essentially conscious machines (henceforth abbreviated to CMs); biological ones, granted, but machines nonetheless. Machines are simply defined as systems of many interconnected parts that fulfil a certain set of functions; that definition covers us just as well as a washing machine or a computer. But that is an unpleasant concept for people to swallow, and this is where some of this fear of CMs may stem from – as well as from the fact that we don’t know any non-biological CMs (yet).
In the last decade or so, consciousness has finally entered the science world as an acceptable topic of serious study; previously, it was considered an unscientific spinoff, and there are still pools of resistance even now. This acceptance has stemmed from neuroscience – the study of how the brain works, partly advanced by Francis Crick’s ‘Astonishing Hypothesis’, which might be summarised as ‘consciousness must correlate to a physical process in the brain’. Prof. Aleksander believes that this doesn’t quite go far enough, and that we should be looking at neural causation of consciousness rather than simple correlation.
There is a philosophical problem here right away, David Hume’s Problem of Causation; to hideously oversimplify again, this states that ‘we cannot perceive causation’. The traditional get-around for this is that theory is the link that fills the gap between observed events and their causes. Prof. Aleksander believes theories are not enough; any theory of the brain must be almost as incomprehensible as the brain itself.
He uses computer simulations in his research, exposing his creations to virtual worlds that they will, hopefully, begin to perceive. Artificial intelligence, a branch of computer science that is just celebrating its 50th anniversary, has been an extraordinarily successful field, but its work is almost useless for the study of consciousness, largely due to the resistance to certain concepts by computer scientists. The concept that causes the hang-ups is phenomenology.
There are a number of meanings of the term, but in this context, phenomenology means that you start thinking about a philosophy of mind from what you feel internally. Prof. Aleksander breaks down his definition of consciousness into five parts, or axioms, as follows:
- Presence (‘an awareness of being oneself in an out-there world’)
- Imagination (the ability to recall and invent experiences)
- Attention (how we choose what we are conscious of from one moment to the next)
- Planning (the rehearsal of possible actions and their outcomes)
- Emotion (the evaluation of those plans)
He believes that one day it will be possible to build a machine that has the ability to announce that it is conscious of being a machine.
At this point, Prof. Aleksander’s introductory expositions ended; which was lucky for me, because my brain was full and my note-taking wrist was aching. After a break to allow everyone some bar/toilet access, there began a lengthy and fascinating question-and-answer-cum-debate session. I am unable to summarise this in detail for a number of reasons. Firstly, I was too busy listening and thinking to take detailed notes on each point raised. Secondly, the nature of the discussion (and to an extent the topic itself) doesn’t lend itself to simple quotable definitive answers. Thirdly, I had a couple of beers, and was too involved in the moment; a certain affirmation of the third axiom, you might say!
The overall point to make here is that although what Prof. Aleksander does is irrefutably scientific, it is science that wanders deeply into the territory of philosophy; and whenever you start asking philosophical questions, not only do definite answers go out of the window, but you also get a panoply of different yet equally valid ideas bubbling forth from the participants. A few cogent moments made it to paper, though:
When asked how he would know when he had succeeded in his goal of making a CM, Prof. Aleksander said that artificial consciousness hasn’t had the successes that artificial intelligence has, and that nine times out of ten an experiment fails. But a mark of success would be a machine that has a need for interaction with its world; it has to be asked what a CM is actually conscious of.
The notion of physical causation of consciousness was brought up, asking whether there was a need for a ‘new physics’ to explain the results, or whether one could rely on emergent properties. The answers here were complex, but seemed (to me) to say that, to an extent, the brain is treated in a ‘black box’ manner, and that replication of its properties is the important thing, not explaining mechanically how the brain produces them in the first place.
Other topics included: the notion that human language is an inherent limit and barrier to discussing consciousness, seeing as language is so integral a part of our conception of everything, including consciousness itself;
the question of the role of self-awareness and cultural identity (outside of the internal consciousness), which was largely ascribed to axiom 2 (imagination);
and whether the notion of a conscious machine was being confused or conflated with that of a social machine, which was largely ascribed to axiom 3 (attention).
There was much more lively discussion, which included yours truly mentioning the Turing test (and showing myself up as an utter layman among all the genuine scientists present), a mention of the need for a ‘taxonomy of consciousness’, and lastly the notion of attribution, in other words that to define a thing as being conscious is an act of observation that is largely based in the consciousness of the observer (or at least, that’s what it seemed to mean from where I was sat).
So, plenty of food for thought, especially for under-educated science fiction freaks like myself! The really excellent news, as far as I am concerned, is that Prof. Aleksander expressed an interest in doing an interview with me via email. Unless he thinks better of it in the meantime, I hope to be able to roll out some seriously fascinating material in the near future – I get the feeling I could quiz this guy for months and never run short of questions. As always, regular VCTB readers will be the first to know of any news on this front.
The next Cafe Scientifique is on July 18th; when I get the email telling me who’s talking about what, you’ll know where to look.