THE MIND-BODY PROBLEM
Matter Exists, The Mental Does Not
Written by Dr. Nash Popovic
This view is known as reductive materialism or materialistic monism. It is based on the belief that the mind can be either identified with or reduced to the activity of the brain (or body). For a true materialist, the ‘mind’ is nothing more than a way of describing certain electrical impulses and chemical processes in the brain and the rest of the body. Thoughts and emotions are mere folk terminology; consequently, the laws of nature govern these processes, and freedom of choice is just an illusion.
Support for materialism does not amount to much. Some of its proponents admit that they are motivated by such considerations as Occam’s razor or a general belief that everything is reducible to one kind of entity. The reason why this perspective seems plausible to many is that brain injuries or certain chemicals can alter mind states. However, although this proves that the brain affects the mind, it is not evidence that the mind is the brain. To make a comparison, if a car breaks down or runs out of petrol, the driver is forced to stop her journey. This does not prove, though, that the driver does not exist, or that she is identical to or a product of mechanical processes in the car’s engine. Yet materialistic interpretations make a similar leap of faith in an attempt to explain the mind solely by brain processes. It is more or less left to philosophers to argue this point, as there is no scientific evidence that the mind can be reduced to the brain – only that they correlate. Hanfling (1980, p.52) illustrates that correlation and identification are not the same:
…connection is not enough. To say that rain is connected with a fall of the barometer is very different from saying that rain is a fall of the barometer; and the same is true of sensations and brain-processes – even assuming that they could be correlated in the same sort of way. To go from correlation to identification requires a further step. Is this further step also a matter of science?
Materialism still has an appeal because it offers an easy solution to the problem that bedevils dualism: how states of the mind (expectations, volitions, feelings) can initiate physical movements. If the mind is identified with the brain, this issue becomes trivial: one part of essentially the same system affects another, in the way that a car engine affects the wheels. This solution, however, creates even bigger problems and requires sacrificing much of what it tries to explain. The mental must be completely eliminated because, if it were acknowledged, it would have to be material, and therefore spatial and accessible from the third-person perspective. Yet this does not seem to be the case – hence the various attempts to side-track this essential aspect of human experience. The irony is that there is so much bickering about the best way to do so that materialists appear to be the most effective critics of each other. We shall consider the most prominent of these attempts: behaviourism, eliminativism, identity theories, and functionalism.
The eagerness of psychologists to present themselves as objective and to show that the mind can be studied, as it were, from the outside resulted in a behaviourist movement that reigned for fifty years (roughly between 1915 and 1965). A radical form of materialism, it reduces all mental states to phenomena that can be observed and measured (i.e. behaviour). A particularly influential form was the ‘logical behaviourism’ espoused by philosopher Gilbert Ryle in his book The Concept of Mind. He also coined a disparaging phrase for dualism – ‘the dogma of the ghost in the machine’. Once the mental is identified with behaviour, it is easy to discard the mind altogether. Ryle, for example, argues that one cannot expect to find a mind over and above all the various parts of the body and its actions, for the mind is just a convenient label for certain physical actions. He uses the metaphor of a university, which does not exist over and above the buildings and people that make it up. So, like the university, the mind does not really exist; it is just a different level of description of various dispositions and capacities. However, there are many objections to this perspective:
- Ryle may be right to compare the mind with the concept of ‘university’ in so far as the mind does not exist independently from its constituents (it is just a name for a sum of mental processes). However, identifying these processes (thoughts, images, feelings, desires, etc.) with their visible manifestations leads to some absurd consequences. For example, from this perspective, no distinction can be made between a person who is really in pain and a person who convincingly pretends to be in pain. Moreover, what about those who do not show any external expressions of mental events (e.g. a person who is paralysed or meditating)? Are they not feeling and thinking? Although public evidence for one’s thoughts or feelings must indeed come from observable behaviour, it is a real leap of faith to assume that they can be reduced to it.
- Identifying mental events with behavioural tendencies leaves qualia out of the equation. The experience of being in pain cannot be simply reduced to a disposition to scream, wince or say ‘I am in pain’. Feeling the pain is too important to be ignored. Philosopher Kripke argues that the behaviourist account of the mental fails because the subjective character of an experience (or, ‘its immediate phenomenological quality’, as he calls it) is the essential property omitted by such analyses.
- Behaviourism naturally does not allow the possibility that behaviour can be caused by mental events like beliefs. According to this view, such events do not exist independently of behaviour – they are just dispositions to behave in a certain way. For example, a person does not take an umbrella because she believes that it is going to rain, but because she has a disposition to take an umbrella. This, however, blatantly contradicts common experience and common sense.
- The behaviourist claim that people learn about their own beliefs by monitoring their behaviour and by listening to what they say also seems absurd. It would mean that we cannot have any beliefs that we have not first acted upon or verbally expressed.
- A logical consequence of such a position is that the mind is fully determined by its environment (‘nurture’), but this does not leave any room for choice, creativity and other common phenomena – note that if they did not exist in the first place, they could not be just invented, since invention involves creativity.
Not surprisingly, behaviourism was eventually rejected, even by those who subscribed to materialism, and other ways have been sought to dispose of the mental. We start with the most radical one, called Eliminativism.
Eliminativism is an extreme materialist doctrine that attempts to solve the ‘mind-body problem’ by completely eliminating the mind. It simply denies the existence of the phenomena that cannot be explained from a materialistic perspective. Eliminativists maintain that mentality is nothing more than folklore, and advocate replacing everyday psychological concepts (feelings, desires, beliefs, intentions, etc.) with neuroscientific ones. This position has been championed by Patricia and Paul Churchland and Daniel Dennett. The latter, in his rather over-confidently entitled book Consciousness Explained, asserts that consciousness – and our sense that we have a self – is an illusion. However, there are several reasons why these assertions cannot be justified:
- A claim that common mental phenomena are illusions – akin to optical illusions – is not only unsubstantiated but also misleading. Unlike perception, which is mediated by the senses that can play tricks on us, phenomenological experience is direct and cannot be an illusion. Only its interpretations can be, because they require a correspondence to events or objects outside one’s mind. But the sense of self, being aware, or intending are not interpretations (we do not project them onto or seek correspondence with something ‘out there’). They are prime examples of unmediated experiences. So, we may be wrong in believing that they are in the brain (or not in the brain), but they cannot be dismissed as an illusion. For instance, if a person says that she feels pain, she may be mistaken about many things (e.g. where the pain is coming from) but not that she is experiencing, that she is aware of pain. Unless it is suspected that the person is deliberately lying, saying something like, “No, you are wrong, you are not aware of any pain”, simply does not make sense.
- However complex, the organisation of the brain is still a mechanism, and a mechanism alone cannot be sufficient to account for what Chalmers calls the hard problem. He points out that ‘…it is conceptually coherent that brain processes could be instantiated in the absence of experience’ (1995, p.208). If they can happen without experience, referring to the physical processes alone cannot adequately explain why experience arises.
- Qualia cannot be eliminated without losing the essential quality of consciousness. A particular part of the brain may be active when one feels sad, for example, but this electrochemical process (even if it were the physical cause of the experience) is not the same thing as a phenomenological experience of sadness.
- If the mind is the brain, no discrepancies between these two should be possible. Yet, there are several empirical findings of dis-synchrony, such as temporal discrepancies between neural events and the related experiences apparent in backward masking, antedating and a commonly perceived slowing down of time in acute emergencies (Popper and Eccles, 1977, p.362).
The conclusion is that although eliminating the mind would make life easier for those who study consciousness, it does not seem to be a viable option. Roger Sperry, one of the most distinguished neuroscientists in the second half of the 20th century, declared in his paper ‘Changing priorities’:
Current concepts of the mind-brain relation involve a direct break with the long-established materialist and behaviourist doctrine that has dominated neuroscience for many decades. Instead of renouncing or ignoring consciousness, the new interpretation gives full recognition to the primacy of inner conscious awareness as a causal reality. (1981, p.7)
Identity theories, a variant of the above, claim that mental events and brain processes are identical – just different descriptions of the same thing. The oldest version is the so-called type-identity theory. It states that mental events and physical ones are just two ways of looking at the same thing (like H2O and water). The fact that many people have no knowledge of brain processes but know intimately their own thoughts and feelings is not a problem per se (they are contingently identical). However, there are several other reasons (besides those already mentioned) why this theory does not seem plausible:
- Identity theory fails to explain why a particular neural activity causes a certain experience (e.g. why the excitation of C-fibres is associated with pain, not itching).
- Logical differences between propositions about the mind and propositions about the brain pointed out by many philosophers cannot be ignored. Brentano’s intentionality is one example: mental states are about something, while brain states are not, they just are.
- Another problem is that the same kind of mental event can correlate with different neural mechanisms. It is hard to believe that the same mechanism underlies pain for all the different actual and possible pain-capable organisms. Thus, it seems that there is no single physical kind with which pain, as a mental kind, can be identified.
- If thoughts and brain states are identical, they should share the same properties and be located in the same place (as water and H2O do). However, mental events have attributes that physical events do not have (and vice versa). Brain processes are very different from mental images, thoughts, experiences, intentions, beliefs, etc. These differences cannot be seen as only different descriptions. A mental image, for example, cannot be described in terms of brain activity without losing the essential qualities of that image. Philosopher of mind, Thomas Nagel, writes:
The idea of how a mental and physical term might refer to the same thing is lacking, and the usual analogies with theoretical identification in other fields fail to supply it. They fail because if we construe the reference of mental terms to physical events on the usual model, we either get a reappearance of separate subjective events as the effects through which mental reference to physical events is secured, or else we get a false account of how mental terms refer (for example, a causal behaviourist one). (1981, p.401)
- If H2O is just a different name for water, water must always be present when H2O is present. This is not the case with the brain and the mind. There are brain processes that do not have corresponding mind events, and arguably vice versa, as Benjamin Libet, a pioneering scientist in the field of human consciousness, claims: ‘…it is possible that some mental phenomena have no direct neuronal basis’ (2004, p.184).
- It is hard to explain from this position why we are aware of some brain processes and not others, or how we can alternate between being aware and not being aware of certain neural events without altering them (e.g. the sensations associated with sitting).
- The implication of this theory is that the same brain processes would always produce the same mental events, and that the same mental events are always associated with identical brain processes. Yet this is unlikely. The same brain state may not always produce exactly the same thought, and the same thought may involve a somewhat different brain state today than yesterday. Mental events cannot be segmented and isolated from general experience as physical processes can. To overcome this problem, a so-called token-identity theory was developed. Unlike type-identity theory, it allows that mental events of the same type need not all be brain states of the same type. The theory, though, does not explain how this is possible, which makes the relationship between the physical and the mental even more mysterious than in dualistic interpretations.
Functionalism became a popular theory of the mind in the late 20th century, not only because it looked more promising than identity theories and behaviourism, but also because it could be linked to the development of Artificial Intelligence, which was gathering momentum at that time. It claims that a mental state is determined by its functional role, rather than by the intrinsic features of a system. Mental events can be described in terms of the relation between input, existing brain state and output. In other words, the mind is a name for the various functions that the brain performs, and the brain can be seen as a sophisticated and complex computer (with the consequence that the organic basis is not necessary and can be replaced, for example, with a silicon one). Although an improvement on the previous theories, functionalism can also be criticised on several counts:
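The functionalist picture described above can be caricatured in a few lines of code, where a mental state is exhausted by its causal role: what output and next state it yields for a given input. The states, stimuli and transition table below are invented purely for illustration, a minimal sketch rather than anyone's actual model.

```python
# A toy functionalist model: a 'mental state' is defined entirely by its
# causal role -- what behaviour and next state it produces for each input.
# All entries here are invented for illustration.

TRANSITIONS = {
    # (current state, input)   ->  (output behaviour, next state)
    ("calm", "tissue damage"): ("wince and say 'ouch'", "pain"),
    ("pain", "painkiller"):    ("relax",                "calm"),
    ("pain", "tissue damage"): ("withdraw hand",        "pain"),
}

def step(state, stimulus):
    """Return (behaviour, next_state) for the given stimulus."""
    return TRANSITIONS.get((state, stimulus), ("do nothing", state))

behaviour, state = step("calm", "tissue damage")
print(behaviour, "->", state)  # wince and say 'ouch' -> pain
```

Notice that nothing in the table says what pain feels like; the state labelled "pain" is just a node in an input-output network, which is precisely the gap the objections that follow press on.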
- Searle’s famous thought experiment, known as the ‘Chinese room’, highlights the difference between processing information, and thinking and understanding in human terms – a distinction that the proponents of functionalism tend to neglect. The person in the room does not understand Chinese. Through a letterbox come various Chinese characters printed on cards. On a table in the room is a book, and the task of the person is to match the Chinese character on a card with a Chinese character in the book. The book then indicates another, different, Chinese character, which is paired with the first one. The person takes this other character from the pile of cards that he has and pushes it back out through the letterbox. The cards coming into the room are questions written in Chinese. Those the person pushes back out are his answers, also in Chinese. Even though he does not understand Chinese, it appears from outside that he is giving intelligent answers in that language. Yet the person does not have any relevant experience or understanding; he is simply manipulating what for him are meaningless characters. Artificial Intelligence is in the same position as the person in the ‘Chinese room’. Like him, it just manipulates symbols without genuinely understanding what they refer to. Thus, the functionalist model does not capture a complete picture of the mind. Güzeldere writes:
Something essential to (at least our common sense conception of) consciousness, it was largely believed, was necessarily left out in characterizing consciousness only by specifying its functional role in the cognitive economy of human mentation and behaviour. (1995a, p.49)
- Another way to pinpoint the inadequacy of the functionalist account is to compare human behaviour with the behaviour of purely mechanical complex systems such as machines, robots or ‘zombies’ (a term used in this area of study to signify something that looks like a human but does not have conscious experience or freedom of choice). Any significant differences would indicate that a living organism cannot be reduced to the functions of its physical components. The obvious difference is the lack of two essential characteristics of life: experience and agency. Let us consider pain again. Pain is functional in many ways. In principle, a machine may be programmed to process or react to the same stimuli in a similar fashion without, in fact, experiencing pain. Yet human beings do not react only functionally to pain; they do a lot of things that are not functional: they may jump up and down, bite their hand, swear, or even masochistically enjoy their pain. These may not all be functional in respect to the body, but they serve some purpose for the one who is experiencing the pain (they are reactions to the experience, rather than to an informational aspect of the pain or its cause). There is no need for robots or ‘zombies’ to exhibit such reactions, and therefore they would not be present if living organisms were the same. Moreover, as Hodgson (1994, p.213) points out, human reactions are far less causally determined than what one would expect from machines:
If our brain operates like computers, appropriate behaviour to avoid harm and remedy damage could simply be the output of unconscious computation, and pain would be superfluous. As it is, pain gives us notice of the desirability of such behaviour, and a motive to engage in it; but it leaves us with a choice to do something else if we judge other considerations to be more important.
There are many other manifestations of human behaviour that are unlikely to be found in inanimate systems: self-determination, insecurity, anticipation, curiosity, intolerance of boredom, creativity, surprise, choice based on future perspective (e.g. expectations), guessing, indecisiveness, or desire to survive often irrationally expressed in acute situations (robots would have no use for panic, and such a reaction is too complex to spontaneously occur as a result of their malfunctioning). Of course, it may be theoretically possible to make a machine that could simulate these characteristics, but this would be only a form without a content. Writer Stanislav Lem (1981, p.306) makes the same point:
One can, to be sure, program a digital machine in such a way as to be able to carry on a conversation with it, as if with an intelligent partner. The machine will employ, as the need arises, the pronoun “I” and all its grammatical inflections. This, however, is a hoax! Nothing will amuse such a machine, or surprise it, or confuse it, or alarm it, or distress it, because it is psychologically and individually No One… One cannot count on its sympathy, or on its antipathy. It works towards no self-set goal; to a degree eternally beyond the conception of any man it “doesn’t care”, for as a person it simply does not exist… It is a wondrously efficient combinatorial mechanism, nothing more.
We can safely conclude that ‘the explanation of functions does not suffice for the explanation of experience. Experience is not an explanatory posit but an explanandum in its own right…’ (Chalmers, 1995, p.209). Functionalists, of course, may disagree, but, as a pioneer of nonlinear dynamics, Alwyn Scott, puts it, ‘computer functionalism is a mere theory of the mind, and our experiences of qualia and free will are widely observed facts of nature. What are we to believe – our own perceptions or a belief handed to us by [reductionist] tradition?’ (1998, p.76).
It seems that reductive materialism cannot find a plausible way to exclude the mental from the equation without losing what is essential:
None of these theories can count as solutions to the perennial mind-brain problem; they are rather, evasions of the problem. They provide various linguistic formulae for sustaining the pretence that there is no problem as traditionally supposed. However, since they all involve a denial that subjective experience constitutes an irreducible datum, they must be deemed to fall at the first post. (Beloff, 1994, p. 33)
It may be worthwhile to summarise the problems with this perspective in relation to the criteria of reasoning specified in the part ‘The Method’.
Incompleteness – materialism leaves out the essential elements associated with the mental: awareness (experience, qualia), intent (initiating mental acts that can affect the body too) and self (as that which is aware and intends). The philosopher Schopenhauer described such a view as the philosophy ‘of the subject who forgot to take account of himself’. More recently, the neuroscientist Nunn makes a similar comment:
Recognising the incapacity of ideas of this sort to account for qualia, Dennett took the desperate, and in our view mistaken, step of arguing that they do not really exist in quite the way that everyday introspection suggests (1994, p.127).
Materialism cannot acknowledge qualia because they would have to be material, which would mean that we should be able to find them. This would also imply that there are physical events that have that property and some that do not, which leads to property dualism that is hard for a materialist to accept and justify. Yet, the subjective cannot be reduced to the objective without losing its essential character. Nagel writes:
Any reductionist program has to be based on an analysis of what is to be reduced. If the analysis leaves something out, the problem will be falsely posed. It is useless to base the defence of materialism on any analysis of mental phenomena that fails to deal explicitly with their subjective character. For there is no reason to suppose that a reduction which seems plausible when no attempt is made to account for consciousness can be extended to include consciousness. Without some idea, therefore, of what the subjective character of experience is, we cannot know what is required of physicalist theory. (1981, p.392-393)
Creativity (e.g. finding original solutions, or simply formulating a new sentence) is another phenomenon that materialists cannot adequately explain, considering that the brain processes, according to them, should be solely governed by natural laws.
Inconsistency – biologist J. B. S. Haldane was the first to point out that materialism is self-defeating: it cannot claim to be supported by rational argument. If materialism is true, we cannot know that it is true. If opinions are nothing more than electro-chemical processes going on in the brain, they must be determined by the laws of physics and chemistry, not logic. Developing this argument further, Karl Popper writes:
Materialism… is incompatible with rationalism, with the acceptance of the standards of critical argument; for these standards appear from the materialist point of view as an illusion, or at least as an ideology. (1977, p.81)
Even illusions themselves are a distinctly mental category (computers do not have illusions), which is what materialists deny. In other words, they claim that having an illusion is itself an illusion – a contradiction in terms.
Incongruence – materialistic interpretation also cannot adequately account for some experimental data such as split-brain experiments (forebrain commissurotomy – the severing of the corpus callosum), blindsight, temporal discrepancies between neural events and mental experiences, intentional activation of a brain module without corresponding action, and binding (the integrated character of experience that is not reflected in the brain). An eminent scientist and brain surgeon, Wilder Penfield, remarks:
Because it seems to me certain that it will always be impossible to explain the mind on the basis of neuronal action within the brain, and because it seems to me that the mind develops and matures independently throughout an individual’s life as though it were a continuing element, and because a computer (which the brain is) must be programmed and operated by an agency capable of independent understanding, I am forced to choose the proposition that our being is to be explained on the basis of two fundamental elements… mind and brain as two semi-independent elements. (1975, p.80)
The irregular or chaotic dynamic of some natural systems cannot be equated with creativity as, unlike them, creativity is usually goal orientated.
These empirical findings will be addressed in more detail later on.
In summary, materialists are right that the brain has an essential role in mental processes, but they do not provide convincing support for the claim that the mind can be fully reduced to it. The above-quoted Alwyn Scott concludes, ‘…this chasm between the details of a mechanistic explanation of the brain and the ever-present reality of conscious awareness, has continued to yawn… Reductive materialism fails to bridge that gap’ (1995, p.101). We will now turn to the opposite perspective: that it is all in the mind.