
Medical Hypotheses (2006) 66, 1165–1173

http://intl.elsevierhealth.com/journals/mehy

Interactive dualism as a partial solution to the mind-brain problem for psychiatry


N. McLaren*

Northern Psychiatric Services, P.O. Box 282, Sanderson, NT 0813, Australia


Received 7 November 2005; accepted 13 December 2005

Summary With the collapse of the psychoanalytic and the behaviorist models, and the failure of reductive biologism to account for mental life, psychiatry has been searching for a broad, integrative theory on which to base daily practice. The most recent attempt at such a model, Engel's biopsychosocial model, has been shown to be devoid of any scientific content, meaning that psychiatry, alone among the medical disciplines, has no recognised scientific basis. It is no coincidence that psychiatry is constantly under attack from all quarters. In order to develop, the discipline requires an integrative and interactive model which can take account of both the mental and the physical dimensions of human life, yet still remain within the materialist scientific ethos. This paper proposes an entirely new model of mind based in Chalmers' interactive dualism which satisfies those needs. It attributes the causation of all behaviour to mental life, but proposes a split in the nature of mentality such that mind becomes a composite function with two profoundly different aspects. Causation is assigned to a fast, inaccessible cognitive realm operating within the brain machinery, while conscious experience is seen as the outcome of a higher order level of brain processing. The particular value of this model is that it immediately offers a practical solution to the mind-brain problem in that, while all information-processing takes place in the mental realm, it is not in the same order of abstraction as perception. This leads to a model of rational interaction which acknowledges both psyche and soma. It can fill the gap left by the demise of Engel's empty biopsychosocial model.
© 2006 Elsevier Ltd. All rights reserved.

Introduction
From the theoretician's point of view, the last 20 years have not been kind to psychiatry. One by one, the major theories on which we have based our claim to specialist status have been shown to be seriously deficient. Psychoanalytic theory,

* Tel.: +61 8 8945 5399; fax: +61 8 8945 5866. E-mail addresses: info@futurepsychiatry.com, jockmcl@octa4.net.au.

behaviorism and biological models do not provide a general theory for psychiatry [1]. The last broad attempt to conceptualise psychiatry, Engel's biopsychosocial model, was empty [2]. Since then, there have been sporadic efforts [3,4], but these are often little more than semantic manipulations. Oddly enough, and despite the need, there is nothing in psychiatry like the human genome project, a huge, coordinated attempt to overcome an intractable problem. The end result is that psychiatrists now have nothing that amounts to an inclusive, integrative

0306-9877/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.mehy.2005.12.023

approach to mental disorder. At best, psychiatry is a protoscience, to use Kuhn's apt term [5]. In my view, it is inappropriate to expect psychiatry to be coequal with the other, more successful sciences, such as biology, physics and chemistry. They are part of the approach known as reductive materialism, but I have argued that mind cannot be reduced to its neurological substrate [6]. There cannot be a non-mentalist theory of mind. However, all mentalist theories suffer from the same failing: their inability to explain how the immaterial mind can act upon the material body (and, of course, vice versa). How can information about the physical world, which is perceived through the sense organs, be transferred to the soul for processing and then back again without breaching the laws of matter-energy conservation? At present, there are really only two starters in the race for a natural theory of mind, functionalism and dualism. In this paper, I will derive an hypothesis of interactive dualism which, I believe, goes a long way toward filling the conceptual gap that Engel identified years ago.

Functionalism

Functionalism is the view that mental states are defined by their causes and effects (Baker LR, in [7]). That is, the relationship between the cause, the associated inner states and the effect defines a mental state. To a functionalist, a pain is the type of state associated with pinpricks, burns, etc. (the input), which causes other inner states such as worry, and typically causes avoidant expressive behavior (the output). Functionalism defines common, folk notions such as beliefs, attitudes, ambitions, etc., in terms of their association with certain types of behaviour: if two people, on seeing a ripe banana, are in states with the same causes and effects, then, by functionalist definition, they are in the same mental state, say, having a sensation of yellow. Needless to say, this does not endow a mental state with anything like the quality we would like. One of the most influential functionalist philosophers, Daniel Dennett, has published a book boldly entitled Consciousness Explained [8]. In this, he defined the basic task of any model of consciousness as explaining the following phenomena:

1. Experiences of the external world, such as sights, sounds (he included language), smells, slippery and scratchy feelings, temperature, limb position, etc.
2. Experiences of the purely internal world: fantasy images, daydreaming and talking to oneself, recollections, bright ideas and sudden hunches.
3. Experiences of emotion and affect: bodily pains, tickles, sensations of hunger and thirst, intermediate storms of anger, joy, hatred, embarrassment, lust, astonishment and the least corporeal visitations of pride, anxiety, regret, ironic detachment, rue, awe, icy calm, etc.

Right at the end of his work, in the last paragraph, in fact, Dennett conceded that he hadn't explained anything at all: 'My explanation of consciousness is far from complete. One might even say that it was just a beginning...I haven't replaced a metaphorical theory, the Cartesian Theatre, with a non-metaphorical (literal, scientific) theory. All I have done, really, is to replace one family of metaphors and images with another...It's just a war of metaphors, you say...' (p. 455).

Given the task he set himself, it is my view that he could never have succeeded. In the first place, his explanation of consciousness says nothing about cognition or intelligence. He might defend this omission by saying that intellect has nothing to do with consciousness, but he had already indicated his theory was not just a theory of consciousness but a theory of mind. Section II is entitled 'An empirical theory of mind', while the title of Chapter 9 is 'The architecture of the human mind'. By Chapter 13, 'The reality of selves', his essay had degenerated into incoherence as he jumbled a huge range of mental functions into another, undefined intracranial entity called the Self. I believe that this confusion led to his failure to explain consciousness. Quite clearly, conscious experience is merely a subset of all mental events, as are decisions regarding the boundaries of self and the disorganised range of mental functions he set as his basic task. The larger theory which will account for all of these phenomena is what we call a theory of mind. There cannot be a stand-alone theory of conscious experience, separate from the phenomenologically distinct but closely related mental events subserving knowing (cognition). Attempting this task is similar to offering a theory of locomotion for the left leg only: it might make the theoretician's task easier, but it won't explain why the body stands up, let alone why it moves. In particular, there cannot be a stand-alone theory of conscious experience which mixes cognitive and experiential elements as though they were one and the same thing, yet ignores intellect, the mechanism of cognition.

His failure is effected by the functionalist aim of explaining consciousness away, of showing that it isn't really what it seems to be. Quite clearly, Dennett wasn't sure what he was trying to explain but, when he got there, he claimed it wasn't what it seemed to be. But sensory experience is only what it seems to be: take away the experience and there isn't anything left. It is that sense of what seems to be that a theory of consciousness must explain, but only as part of a larger theory of mind.

Natural dualism

This point leads directly to the major objection to the functionalists, that they do not take consciousness seriously. Accordingly, at the other side of the philosophical arena, we find one of the oldest of all concepts of mind, a resurgent dualism. Following the failure of the supernatural dualism of Popper and Eccles [9], the modern variant is in the form of what Chalmers [10] terms a natural dualism. Chalmers starts with a materialist ontology, i.e., the notion that there is nothing in the universe beyond matter and energy and their interactions. A complete understanding of the particles of the universe and their associated energy states would tell us everything there is to know about the universe, its past, present and future. But there is more to it than just things bumping in the dark, because some material bodies have the capacity to know, to decide and to feel. And it is this internal aspect, this awareness of being something, that both requires and defies a stock materialist explanation. Difficult as it may be, this inner experience must still be taken seriously. Chalmers argued that, within materialism, a natural dualism, the notion that conscious experience is both real and natural, is the only viable account of mind. The mind arises from the brain by some constant and ultimately definable relationship, probably as a product of brain organisation. He posited two aspects to mind, the experiential or conscious element, and the executive, knowledge-based or psychological realm. This formulation immediately leads to what he termed two mind-body problems and one mind-mind problem. What Chalmers calls the psychological realm, the realm of non-conscious decisions, knowledge, action, etc., is conceptually nothing new. These days, every house has dozens of dumb machines that can make decisions based in vast stores of information and massive data flows. However, experience, the other half of mind, remains an enigma: 'The structure of experience is just the structure of a phenomenally realised information space...' (p. 287). What this amounts to in practice remains unexplained by the end of the book. Necessarily, experience is still a raw given.

Tasks of a theory of mind

A basic theory of mind for psychiatry must explain a diverse range of mental phenomena:

1. Exteroceptive sensations such as sight, sound, smell, touch, pain, temperature, position and vibration sense, pressure, sexual sensations, balance, etc.
2. Interoceptive sensations such as hunger, thirst, tiredness, nausea, loss of breath, etc.
3. Emotions such as anxiety, anger, joy, humor, sadness, etc.
4. Compound emotions such as triumph, despair, suspicion, guilt, familiarity/novelty, yearning, etc.
5. Cognitive functions such as knowing that, calculating, deciding, judging, recalling, being aware of, believing, intending, hoping for, planning, being certain that, realising, meaning, implying, deceiving, getting a joke, detecting injustice, taking a hint, taking offence, etc.

These events fall naturally into two classes, the experiential (groups 1-4) and the knowledge-based (group 5). While the knowledge-based or cognitive functions can be reduced to their elements [11], experiences are irreducible. Clearly, a theory of the experiential or conscious element (wrongly named a theory of consciousness) is only part of the larger model of mental life. While the two classes of mental events have certain features in common, there are crucial differences between them that allow us to delineate a working model of mind. In the first place, experiences are immediate, irreducible, ineffable and have no informational content. We simply have them or experience them but we cannot analyse them further. Sensations are thrust upon us, complete in every detail, but are forever private. We talk about them as though they were public knowledge but they are not. Experiences can only be defined ostensively: 'It's red, like a ripe tomato.' Philosophically, they are brute facts or raw givens, something which cannot be explained to somebody who hasn't had them. The other class, of cognitive or knowledge-based functions, is entirely different. These are not experiences but are processes or executive functions which occur outside awareness. They are fast, silent (unconscious), reducible, communicable (as information) and have no experiential content.

Thus, we can never catch ourselves in the act of making a decision. We can think about lifting an arm but the instantaneous decision is forever a mystery: we can never see it in action. Similarly, I don't willfully decide something is familiar: that information is provided gratis by processes I cannot see in action. I will never know where my jokes come from; they are simply thrust upon me, complete, and I simply communicate them. This is true of all cognitive decisions. I decode an anagram but don't know how I do so. I jump to catch a ball, but monkeys can do it, too. I drive a car along a bush track, changing gears perhaps every 10 m or more often, without ever once saying to myself, 'Engine straining, go down to second.' Even if I say out loud, 'Better change down to second,' I have already made the decision before I say it. And so on. All of this takes place at a level I cannot introspect and, even though I can retrospectively reduce the processes to their substeps, I cannot catch myself in the act of actually doing them. Some people object to this conclusion on the basis that we often do consciously reflect upon what we will say and how we will say it, and that this therefore confounds my formulation of cognitive functions as non-conscious. My response is, firstly, that the vast majority of human decisions are effected without any such mental commentaries; this sort of reflection is in fact quite rare and cannot be used as a general model of human decision making. Secondly, this type of reflection is unnecessary. If I say to myself, 'No, otiose would be better than unnecessary,' then I have already decided to change the word. The commentary is otiose because the decision has already been reached. As an intellectual being, I find that knowledge arrives, answers are given, compositions are thrust upon me without my knowing where they come from. Knowledge is not random but is determined by organising principles, or rules. The clearest example of a rule-based, cognitive process is speech. I speak without any idea of how I will say it. I know what I want to say (roughly) but the actual speech as it eventuates may surprise me as much as it does you. I do not yet know the third next word I will use, but it will be in context, it will be grammatically correct (mostly) and you will understand it, all without any intervention of what might be called consciousness. I believe this shows that the experiential and the decision-making or executive realms, while intimately related, are nonetheless profoundly different in nature. When it comes to the informational or executive mental functions, we now have powerful working models of automated decision-making, including

very good biological models from animals. However, we have no models of conscious experience. When looking at the phenomena which require explanation, we are like a man trapped in a huge, greased glass bowl. He cannot even start to climb out as he can gain no purchase on his dilemma. No matter what he tries, he invariably ends up back where he started.

Turing's automated, non-conscious decision-maker


To understand automated decision-making, we need to go back 55 years to one of the seminal papers of the IT revolution, Alan Turing's paper entitled 'Computing machinery and intelligence' [12]. He showed that, as long as we can reduce the questions or decisions to an elemental form in which the machine simply has to answer yes or no, then a machine can mimic human intelligence. This established the concept of a universal computing machine but, conversely, he also showed that any human decision can be automated. What this means, but doesn't seem to have been widely appreciated, is that any observable human output state can, in principle, be reproduced in a suitable machine. Therefore, and this goes beyond the argument in his paper, there is no a priori reason to suppose that such output states are anything other than strictly non-conscious, blind processes. Needless to say, if we wish this statement to have any meaning, we need to define human output states. Dennett was perfectly explicit: human output states which can be reproduced in machines include any and all conscious experiences: 'If all the control functions of a human wine taster's brain can be reproduced in silicon chips, the enjoyment will ipso facto be reproduced as well' [8, p. 31]. Is this logically possible? I can see no compelling case against it, but I suggest that, for a working theory of mind for psychiatry, it doesn't actually matter. With no derogation of our sentience, we can define the private, experiential realm out of the equation. Turing showed the way. The essence of the difference between the experiential realm and the executive lies in the fact that, as argued, the experiential occupies no causative role in the generation of observable behavior, including emotions. The behaviorists noted this a long time ago in, for example, the aphorism that we do not run because we are frightened, we are frightened because we run. They wanted to write the conscious realm out of the causation of behavior. The decision that an event is dangerous takes place before the experience of

fear can be generated. Behaviorists assumed this type of decision was necessarily a biological event. Here, I am using the term decision in its broadest sense, that somewhere, somehow, the intact and healthy brain decides that an event constitutes a risk and activates a series of neurophysiological output states (physiological changes, emotions) which we experience as the fear complex. But I am already jerking away before I feel frightened, and I feel frightened before I can say to myself, 'Look out, that's a snake.' Indeed, if I had to rehearse that sentence in my head before I could move back, I would be bitten nine times out of ten. If I say to myself, 'There's a snake,' I have already decided what it is, and that decision generated the fear and motor reaction without so-called conscious intervention. The crucial point of any output state, including behavior and emotion, is that it is immediate and unconsidered, i.e., the experiential realm has no primary or causative role. My brain decides for me what emotions I will experience but, in this context, we have to be very careful just what we mean by brain. I mean some executive decision maker that is at once fast, silent and forever outside awareness. Something automated. This is where Turing's universal computing machine comes into its own. He reasoned that universal computing machines with large memory stores have an almost infinite output capacity and thus, they can mimic any other discrete state machine. Given sufficient computing capacity and a large memory store, he concluded, the question of whether a machine can think becomes otiose, as we won't be able to tell the difference between a machine and a human. With the proviso that we don't deny our private, non-causative experiential state, there is therefore no reason to suppose that, in making decisions, humans are doing anything more than machines do. That is, all that counts in a working model of mind is that humans follow rules while computing decisions that govern their output states (behaviour, emotions, etc.).

This can be rephrased in practical terms. A Turing machine consists of an input tape, a memory, a read/rewrite head and an output tape. The input tape is simplified to the point of inanity, such that the data can be in one of only two forms, a one or a zero. All the machine has to do is read each datum sequentially, compare it with the memory store and decide whether to leave it as it is or change it. Needless to say, the memory has to be in the same form as the input data, otherwise it can't be compared, and the output will also be in the same form because nothing has changed it. As long as the questions can be reduced to a form where they can be manipulated by a yes/no machine of this type, we can compute any output state. Logicians agree that only certain recursive questions cannot be answered in this form, but they won't concern us here.

Is there anything of this form in the central nervous system? In brief, yes, there is. The CNS is most definitely of a form which would support a universal computing function. I will go further than that by raising a challenge: nobody can ever show that the human central nervous system is not, in essence, a Turing machine. Turing's model was purely hypothetical, but can we identify elements in the brain which support this model? Yes, but we must first separate the output states into causative and non-causative, otherwise the model will break down. Turing's original model was only concerned with computing output states because it was on these that the features we class as uniquely human were based. Nobody can claim that seeing the color red is uniquely human, partly because we have no way of knowing whether we all have the same experience, and partly because birds seem to be very good at picking it, too. Thus, the experiential realm becomes a nuisance, standing in the way of a neat model that can explain everything we do without worrying about what we feel about it (which, of course, was the behaviorist ambition). Psychiatry needs a model that explains what people do: if experience is private, universal and non-causative, we can dispense with it. It doesn't matter whether or not we both experience red when we look at a tomato; all that counts is what we do about it, including what we say we feel about it. I experience the color red, but the experience is initiated for me by the color receptors in my retina, long before the visual input enters the brain: the decision that tomatoes are red is strictly non-conscious. In principle, the CNS meets the requirements of Turing's machine. It consists of receptors which receive energy inputs from the external world and convert them into a flow of digital data in the afferent nerves. The data flows are then manipulated at a series of points on their way back to the brain, where further manipulation takes place but, crucially, always in the same form as they left the receptors. That is, there is no place in the CNS where the color of a tomato is physically reproduced. The entirety of human mental function is symbolically denoted. There is not an identity relationship between mind and brain just because mind is a symbolic function, and symbols, by their very definition, cannot be reduced to the substrate which carries them [6]. There is therefore no conceptual gap between the input and output.
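To make the read/compare/rewrite cycle concrete, here is a minimal sketch in Python of the kind of elemental yes/no machine described above. The tape contents, state names and rule table are invented for the illustration; the point is only that the rules (the 'memory store') are coded in the same binary alphabet as the input, so the data never change form between input and output.

```python
# A minimal Turing-style machine: a tape of 0s and 1s, a read/rewrite head,
# and a rule table keyed on (state, symbol). Data, rules and output all stay
# in the same coded, binary form throughout.

# Rule table: (state, symbol) -> (symbol to write, head move, next state).
# This toy machine simply inverts every bit on the tape, then halts.
RULES = {
    ("scan", 0): (1, +1, "scan"),
    ("scan", 1): (0, +1, "scan"),
}

def run(tape, state="scan"):
    head = 0
    while head < len(tape) and (state, tape[head]) in RULES:
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write   # rewrite the cell under the head
        head += move         # move the head one cell
    return tape              # output is the rewritten tape, same form as input

print(run([1, 0, 1, 1, 0]))  # -> [0, 1, 0, 0, 1]
```

Any computable yes/no decision procedure can, in principle, be encoded as a (vastly larger) rule table of exactly this form, which is the sense in which such a machine can compute any output state.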

Since human memory is also in the form of coded impulses, we now have the essential elements of a universal computer within the structure and function of the CNS. There is an input state in coded form, a memory store which does not convert to a different realm (i.e., there is no breach of matter-energy conservation laws), a means of manipulating data in the same codes, and efferent tracts leading to effector organs which respond to exactly the same form of information, i.e., discrete impulses in nerve pathways. The crucial feature here is that at no point does the data flow move from one realm to another; it stays wholly within the physical realm. This is not to say that the symbols are in the physical realm, because they are not. By their very nature, symbols are irreducibly insubstantial. If it can ever be shown that the data flow itself does move, that it somehow jumps from the natural to the supernatural, then this model will break down and Eccles will have the last laugh. I think I am on safe ground. So far, what I have proposed is this: a split between the two great classes of mental events into a causative executive realm whose function is fast, silent and unreportable but whose outcome is behavioral (public), and an experiential realm which is wholly private, non-causative and irreducible. We have excellent grounds for supposing that the executive realm is instantiated in a form proposed many years ago and proven a myriad times since, that it breaks no rules of the material universe, and that it can achieve any definable human output (behavioral) state, including speech. We have a model of memory in which the instructions for manipulating the data are coded in the same form as the input and output data, and we have working models of unconscious decision making in every desktop calculator. In short, all that is missing is a means of accounting for the experiential realm. Once again, Alan Turing showed the way, although I'm not sure if he knew he did.


Generating conscious experience


In Consciousness Explained, Dennett spent hundreds of pages of diligent criticism panning the concept of the Cartesian homunculus, the soul, spirit or little man inside the man. He objected to this because he believed it necessarily involved ectoplasm or spirit stuff but, almost at the end of his work, he suddenly invoked a real, functional homunculus to complete his explanation (of mental function). In every respect, it functioned as a soul; it did everything a soul traditionally did as, without it, Dennett couldn't explain a thing. Of course, he didn't call it a soul or homunculus, that would give the game away; instead he called it a Self. His Self, he insisted, was as much a biological secretion of the brain as a bird's nest or a beaver's dam and therefore was scientifically acceptable even when souls aren't. Now the problem with the concept of the homunculus is not, as he supposed, that it necessarily invokes ghostly ectoplasm, but simply that it explains nothing. We have certain mental functions to explain. If we cannot explain them in the physical realm, it avails us naught to attribute them to a little inner man because they still have to be explained in his little head. If we can't explain them in the first head, then we can't explain them in a second, so we have to postulate a further little man inside the little man, i.e., we have started an infinite regress. This is the reason homunculi are non-scientific, not because they are forbidden stuff which we can't localise in space. Unfortunately, Dennett's Self is non-scientific just because he endows it with the capacity to make decisions. Eccles did this, too: he postulated a ghostly Self which poked its fingers in the cerebral pie, read what it wanted and then sent its decisions back to the brain. This is clearly an infinite regress. The only way out of this impasse is to propose, as I have done, that the experiential realm is entirely a causative dead-end, that it has no more executive powers than a cinema screen. However, the cinema analogy is potentially misleading because there is no audience of one in the head to view the screen. Any model of mind involving a stream, field or screen that the mental elements occupy, float in or are projected on, is necessarily an infinite regress and is ipso facto non-scientific. There is therefore no place in a scientific model for a stream or field of consciousness, nor are the mental contents invested in consciousness in a particular part of the brain or by bathing them in chemicals or inner light or whatever. All of this is non-scientific. The experiential realm, Consciousness, the Self, soul or spirit, just has to be a functional dead-end, otherwise it sets up an infinite regress. Necessarily, the conscious realm is pure experience with no capacity to observe or decide. Remove the experience, as in deep sleep, coma or anesthesia, and there is nothing. Fortunately, we have suitable models for a non-located, insubstantial, non-causative entity. It is a mystery why Dennett proposed a biological, executive Self when he had already given an example of such a model, which he called a virtual machine: 'Human consciousness...can be best understood as the operation of a von Neumannesque virtual machine implemented in the parallel architecture

of a brain that was not designed for any such activities. The powers of this virtual machine vastly enhance the underlying powers of the organic hardware on which it runs...' ([8, p. 210]; his emphasis; he refers to the Hungarian-born mathematician and logician, John von Neumann). I propose that the experiential realm is just one such virtual machine, but not a machine in any other than the most general sense, because it doesn't actually do anything. The experiential realm adds a fascinating dimension to life; it is mostly good fun and life would be very different without it. However, as blindsight shows, we could still get by without it, just because all that counts in observable (and therefore communicable) human affairs takes place at the fast, silent level of non-conscious decision-making. Somehow, by manipulating its informational input, the brain generates a sense that being alive is something which being dead is not, and this sense is over and above decision-making. Remember that unconscious people respond to noxious stimuli without any sense of pain, that sleeping people pull up the blanket when they get cold, that we decide to wake up to go to the toilet, and that mothers sleep through the TV but wake when they hear their babies cry. Decisions are both causally effective and unconscious in every sense of those words. Experience is fully conscious (even if it's not remembered) and causally ineffective. Nothing of causal importance takes place in the experiential realm but it can certainly hurt, because that's what hurt means. The next question is whether the CNS could generate a virtual machine of the type this model requires. For an answer, we need to go back to Turing. I don't know whether Turing explicated this point or it was done later, for him, but an important point of the universal Turing machine is that, with sufficient memory and computing power, it can simulate any finite or discrete state machine. That is, the computer can generate virtual machines, a property which has long been exploited. Most work on parallel computing, for example, is done on suitably programmed serial computers. The online auction house, eBay, and all internet banks are virtual machines. Remember, of course, that virtual machines are independent of their substrate, so Dennett's proposal for a sentient silicon wine taster is not as outrageous as it seems. I propose that the experiential or conscious realm is just that, a virtual discrete state machine generated in the computational space of the much more powerful universal computing machine, the brain. I suggest this would satisfy Chalmers' hypothesis: 'The structure of experience is just the structure of a phenomenally realised information space...'
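The sense in which a virtual machine is independent of its substrate can be illustrated with a toy example (mine, not Dennett's): a 'guest' discrete state machine defined purely as data, executed by a host program that knows nothing about what the guest is for. The parity task and all names here are invented for the sketch.

```python
# A toy virtual machine: the guest is a two-state parity machine defined
# entirely as data; the host merely steps it. The guest's behaviour depends
# only on its transition table, not on what hardware the host runs on.
GUEST = {
    "start": "even",
    "transitions": {
        ("even", 1): "odd",  ("odd", 1): "even",
        ("even", 0): "even", ("odd", 0): "odd",
    },
}

def host_run(guest, inputs):
    """Step the guest machine over the inputs; the host knows no 'parity'."""
    state = guest["start"]
    for symbol in inputs:
        state = guest["transitions"][(state, symbol)]
    return state

print(host_run(GUEST, [1, 0, 1, 1]))  # -> "odd" (three 1s seen)
```

The guest 'machine' does no work of its own; every step it 'takes' is really a step of the host. Loosely, that is the relationship this model proposes between the experiential realm and the brain's computing machinery.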

Is the physical brain of a form which could support such a mechanism? Most certainly, it is. Everything we know about the structure and function of the CNS supports the notion that it processes vast data inputs by cascades of stereotyped computation [13]. The basic cerebral unit, the cortical module, which is approximately 300 µm wide and 3 mm deep, contains about 10,000 neurones. There are about a million such modules, and each neurone has something of the order of 10,000 connections, so the concept of mechanised data processing fits in very neatly here. I would suggest, however, that the cortical module is not the minimal functional element; rather, each neurone should be seen as a microprocessor in its own right. With something of the order of one hundred trillion connections, it seems unlikely that the brain doesn't have enough computing power to generate a virtual machine. A quick glance at a diagram of the microstructure of the cerebral cortex or of the cerebellum leaves no doubt that the burden of disproving this hypothesis has well and truly shifted to the spiritualists.
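The connectivity figures quoted above can be checked with back-of-envelope arithmetic; a quick sketch:

```python
# Back-of-envelope check of the connectivity estimate quoted in the text.
modules = 1e6                  # ~1 million cortical modules
neurons_per_module = 1e4       # ~10,000 neurones per module
connections_per_neuron = 1e4   # ~10,000 connections per neurone

neurons = modules * neurons_per_module           # ~1e10 neurones
connections = neurons * connections_per_neuron   # ~1e14 connections

print(f"{neurons:.0e} neurones, {connections:.0e} connections")
# -> 1e+10 neurones, 1e+14 connections, i.e. about one hundred trillion
```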

Discussion
Since the collapse of the 19th century models (psychoanalysis, biologism and behaviourism), psychiatrists have been in search of a model which integrates the psyche and the soma. Indeed, so keen has their search been that they embraced the so-called biopsychosocial model without ever bothering to check its details. If, at any time over the past three decades, they had done so, they would have found it had none. This would have forced them into the embarrassing position of having to acknowledge that modern psychiatry is operating in a theoretical vacuum. The model outlined in this paper offers a means of closing the conceptual gap between the mind and the body by postulating a split between an effective cognitive realm and an ineffective experiential realm. In a sense, this amounts to epiphenomenalism, but not of the traditional type. Ordinarily, this term refers to models in which the mind is an epiphenomenon of the biological substrate. All effective activity takes place biologically, and adherents of these views are, in the main, dismissive of the mentality of human mental life. My model is totally different. It states that the mind has two irreducibly mental components, cognition and conscious experience, which together

account for the whole of human mental life. In this model, mere biology does not generate output states. One mental component, conscious experience, is a real but ineffective product of the other. Only this way can we avoid the trap of the infinite regress implicit in all models in which the conscious element has its own decision-making capacity. Above all, it allows us to rely on known principles of physically based data processing in accounting for the ability of minds, including those of animals, to make the myriad decisions on which daily life is based. This model is diametrically opposed to the biological approach which has gained the ascendancy in psychiatry over the past 25 years. Biologism was perhaps a necessary reaction to the unrestrained psychologism which gave psychiatry such a bad name, but it has its limits [6]. It cannot constitute the basis of a general theory for psychiatry just because of its inability to account for the most central elements of human mental life. The notion of a non-causative experiential realm seems to cause anxiety among some psychiatrists, but I believe this is due to a misconception of the nature of mental causation. Previously, scientific theorists dismissed all mental life as fanciful just because they could not account for it within their narrow concepts of science. But they still had to account for decision-making, so they split it away from mentalism, trying to reformulate it as mere biology or mere reflex. On the other hand, those who believed that mental life counts resisted the split because, if they lost control of decision-making, it seemed they also lost their claim to relevance. In human affairs, decisions count, whereas even the most devout mentalist has to concede that deafness or colour blindness does not diminish humans. The older generation need not fear that, by accepting a non-causative experiential realm, they are consigning mentalism to the epiphenomenalist rubbish bin. In suggesting that we can build a model of mind (and thence of mental disorder) without formally explaining the nature of experience, I am not implying that conscious experience doesn't exist or is irrelevant, nor am I slipping it off the table while nobody is looking. I am saying that disordered conscious experiences, which comprise the core of mental disorder as we define it, are secondary to disturbances in the cognitive realm. While we will use drugs, etc., to try to reduce the impact of those experiences, we don't need a theory of conscious experience to be able to explain the causation of mental disorder. I do not need a theory of perception to know that this experience is pain, its pattern indicates appendicitis and it should best be

managed this way. Similarly, I do not need a theory of perception to know that this experience is an emotion, its pattern indicates anxiety and it should best be managed this way. Psychiatry is, after all, a pragmatic discipline. In this approach, the explicit and implicit belief states of the individual govern mental life. The model states that humans are sentient, rule-governed creatures, not id-driven and certainly not mere organisms. Logically, we have to have rules before we can know anything: Homo iure stans, the rule-abider, comes before Homo sapiens. By this means, we can readily account for personality and personality disorder, while our understanding of formal mental disorder moves beyond unknown chemical imbalances of the brain. The application of this biocognitive model to practical psychiatry requires some reorientation of the current, categorical model. These points will be explored in further publications. Nothing in this model breaks any rules of the material universe. There is no ectoplasm floating around to bridge causative gaps, there are no infinite regresses, no irrefutable elements, no question-begging pseudo-solutions, no miracles and no hidden tricks. I have not relied on any models which aren't already in use or which other people haven't devised and implemented in other fields. This is not an irrefutable, Byzantine monstrosity like psychoanalysis, nor does it test one's credulity by saying, like the behaviorists, that consciousness doesn't exist. I haven't tried to reduce the mind to its substrate, nor tried to explain it away by legerdemain. Furthermore, I have indicated exactly where this model can break down. The critical element is that the coded information, including all the memory stores in which the rules are coded, flows from input receptors, through the computing machinery, to the effector organs while remaining wholly in the physical realm as discrete nervous impulses. The data flow does not at any point jump from the material to the immaterial realm, except with this proviso: information is coded, so it is never in the physical realm. It is always somewhere, floating in a private virtual space generated by the brain.

Conclusion
This is wholly and irreducibly a mentalist account of human behaviour, yet it is firmly based in the physical structure of the brain. As such, it is a model for psychiatry, rather than a psychological theory which does not take account of the

structurally defined limits of the CNS. It leads to an integrative model of mental function and dysfunction which can fill psychiatry's current intellectual vacuum. For the first time in the history of psychiatry, we have the outline of a model which offers realistic solutions to a number of major problems. As a general theory of psychiatry, it restores the essence of humanity, our mentalism, to rightful primacy.


References
[1] Available from: www.futurepsychiatry.com.
[2] McLaren N. A critical review of the biopsychosocial model. Aust NZ J Psychiat 1998;32:86–92. Available at [1].
[3] Kendler KS. Toward a philosophical structure for psychiatry. Am J Psychiat 2005;162:433–40.

[4] Greenfield SA. Mind, brain and consciousness. Brit J Psychiat 2002;181:91–3.
[5] Kuhn TS. The structure of scientific revolutions. 2nd ed. Chicago: University of Chicago Press; 1970.
[6] McLaren N. Is mental disease just brain disease? The limits to biological psychiatry. Aust NZ J Psychiat 1992;26:270–6. Available at [1].
[7] Audi R. Dictionary of philosophy. New York: Cambridge University Press; 1995.
[8] Dennett DC. Consciousness explained. London: Penguin Books; 1993.
[9] Popper KR, Eccles JC. The self and its brain. New York: Springer-Verlag; 1977.
[10] Chalmers DJ. The conscious mind: in search of a fundamental theory. New York: Oxford University Press; 1996.
[11] Dennett DC. Brainstorms: philosophical essays on mind and psychology. Hassocks, Sussex: Harvester Press; 1979.
[12] Turing AM. Computing machinery and intelligence. Mind 1950;59(236):433–60.
[13] Malenka RC, Nicoll RA. Long-term potentiation – a decade of progress? Science 1999;285:1870.
