Neuroscience wants to be the answer to everything. It isn’t
There are many reasons for believing the brain is the seat of consciousness. Damage to the brain disrupts our mental processes; specific parts of the brain seem connected to specific mental capacities; and the nervous system, to which we owe movement, perception, sensation and bodily awareness, is a tangled mass of pathways, all of which end in the brain. This much was obvious to Hippocrates. Even Descartes, who believed in a radical divide between soul and body, acknowledged the special role of the brain in tying them together.
The development of brain imaging techniques has given rise to the belief that we can look at people’s thoughts and feelings, and see how ‘information’ is ‘processed’ in the head. The brain is seen as a computer, ‘hardwired’ by evolution to deal with the long-vanished problems of our hunter-gatherer ancestors, and operating in ways that are more transparent to the person with the scanner than to the person being scanned. Our own way of understanding ourselves must therefore be replaced by neuroscience, which rejects the whole enterprise of a specifically ‘humane’ understanding of the human condition.
In 1986 Patricia Churchland published Neurophilosophy, arguing that the questions that had been discussed to no effect by philosophers over many centuries would be solved once they were rephrased as questions of neuroscience. This was the first major outbreak of a new academic disease, which one might call ‘neuroenvy’. If philosophy could be replaced by neuroscience, why not the rest of the humanities, which had been wallowing in a methodless swamp for far too long? Old disciplines that relied on critical judgment and cultural immersion could be given a scientific gloss when rebranded as ‘neuroethics’, ‘neuroaesthetics’, ‘neuromusicology’, ‘neurotheology’, or ‘neuroarthistory’ (the subject of a book by John Onians). Michael Gazzaniga’s influential 2005 study, The Ethical Brain, has given rise to ‘Law and Neuroscience’ as an academic discipline, combining legal reasoning and brain imaging, largely to the detriment of our old ideas of responsibility. One by one, real but non-scientific disciplines are being rebranded as infant sciences, even though the only science involved has as yet little or nothing to say about them.
It seems to me that aesthetics, criticism, musicology and law are real disciplines, but not sciences. They are not concerned with explaining some aspect of the human condition but with understanding it, according to its own internal procedures. Rebrand them as branches of neuroscience and you don’t necessarily increase knowledge: in fact you might lose it. Brain imaging won’t help you to analyse Bach’s Art of Fugue or to interpret King Lear any more than it will unravel the concept of legal responsibility or deliver a proof of Goldbach’s conjecture; it won’t help you to understand the concept of God or to evaluate the proofs for His existence, nor will it show you why justice is a virtue and cowardice a vice. And it cannot fail to encourage the superstition which says that I am not a whole human being with mental and physical powers, but merely a brain in a box.
The new sciences in fact have a tendency to divide neatly into two parts. On the one hand there is an analysis of some feature of our mental or social life and an attempt to show its importance and the principles of its organisation. On the other hand, there is a set of brain scans. Every now and then there is a cry of ‘Eureka!’ — for example when Joshua Greene showed that dilemmas involving personal confrontation arouse different brain areas from those aroused by detached moral calculations. But since Greene gave no coherent description of the question, to which the datum was supposed to suggest an answer, the cry dwindled into silence. The example typifies the results of neuroenvy, which consist of a vast collection of answers, with no memory of the questions. And the answers are encased in neurononsense of the following kind:
‘The brains of social animals are wired to feel pleasure in the exercise of social dispositions such as grooming and co-operation, and to feel pain when shunned, scolded, or excluded. Neurochemicals such as vasopressin and oxytocin mediate pair-bonding, parent-offspring bonding, and probably also bonding to kith and kin…’ (Patricia Churchland).
As though we didn’t know already that people feel pleasure in grooming and co-operating, and as though it adds anything to say that their brains are ‘wired’ to this effect, or that ‘neurochemicals’ might possibly be involved in producing it. This is pseudoscience of the first order, and owes what scant plausibility it possesses to the fact that it simply repeats the matter that it fails to explain. It perfectly illustrates the prevailing academic disorder, which is the loss of questions.
Traditional attempts to understand consciousness were bedevilled by the ‘homunculus fallacy’, according to which consciousness is the work of the soul, the mind, the self, the inner entity that thinks and sees and feels and which is the real me inside. We cast no light on the consciousness of a human being simply by redescribing it as the consciousness of some inner homunculus. On the contrary, by placing that homunculus in some private, inaccessible and possibly immaterial realm, we merely compound the mystery.
As Max Bennett and Peter Hacker have argued (Philosophical Foundations of Neuroscience, 2003), this homunculus fallacy keeps coming back in another form. The homunculus is no longer a soul, but a brain, which ‘processes information’, ‘maps the world’, ‘constructs a picture’ of reality, and so on — all expressions that we understand only because they describe conscious processes with which we are familiar. To describe the resulting ‘science’ as an explanation of consciousness, when it merely reads back into the explanation the feature that needs to be explained, is not just unjustified — it is profoundly misleading, creating the impression that consciousness is a feature of the brain, and not of the person.
Perhaps no instance of neurononsense has been more influential than Benjamin Libet’s ingenious experiments, which allegedly ‘prove’ that actions which we experience as voluntary are in fact ‘initiated’ by brain events occurring a short while before we have the ‘feeling’ of deciding on them. The brain ‘decides’ to do x, and the conscious mind records this decision some time later. Libet’s experiments have produced reams of neurobabble. But the conclusion depends on forgetting what the question might have been. It looks significant only if we assume that an event in a brain is identical with a decision of a person, that an action is voluntary if and only if it is preceded by a mental episode of the right kind, and that intentions and volitions are ‘felt’ episodes of a subject which can be precisely dated. All such assumptions are incoherent, for reasons that philosophers have made abundantly clear.
So just what can be proved about people by the close observation of their brains? We can be conceptualised in two ways: as organisms and as objects of personal interaction. The first way employs the concept ‘human being’, and derives our behaviour from a biological science of man. The second way employs the concept ‘person’, which is not the concept of a natural kind, but of an entity that relates to others in a familiar but complex way that we know intuitively but find hard to describe. Through the concept of the person, and the associated notions of freedom, responsibility, reason for action, right, duty, justice and guilt, we gain the description under which human beings are seen by those who respond to them as they truly are. When we endeavour to understand persons through the half-formed theories of neuroscience, we are tempted to pass over their distinctive features in silence, or else to attribute them to some brain-shaped homunculus inside. For we understand people by facing them, by arguing with them, by understanding their reasons, aspirations and plans. All of that involves another language, and another conceptual scheme, from those deployed in the biological sciences. We do not understand brains by facing them, for they have no face.
We should recognise that not all coherent questions about human nature and conduct are scientific questions, concerning the laws governing cause and effect. Most of our questions about persons and their doings are about interpretation: what did he mean by that? What did her words imply? What is signified by the hand of Michelangelo’s David? Those are real questions, which invite disciplined answers. And there are disciplines that attempt to answer them. The law is one such. It involves making reasoned attributions of liability and responsibility, using methods that are not reducible to any explanatory science, and not replaceable by neuroscience, however many advances that science might make. The invention of ‘neurolaw’ is, it seems to me, profoundly dangerous, since it cannot fail to abolish freedom and accountability — not because those things don’t exist, but because they will never crop up in a brain scan.
Suppose a computer is programmed to ‘read’, as we say, a digitally encoded input, which it translates into pixels, causing it to display the picture of a woman on its screen. In order to describe this process we do not need to refer to the woman in the picture. The entire process can be described in terms of the hardware that translates digital data into pixels, and the software, or algorithm, which contains the instructions for doing this. There is neither the need nor the right, in this case, to use concepts like those of seeing, thinking or observing in describing what the computer is doing; nor do we have either the need or the right to describe the thing observed in the picture as playing any causal role, or any role at all, in the operation of the computer. Of course, we see the woman in the picture. And to us the picture contains information of quite another kind from that encoded in the digitalised instructions for producing it. It conveys information about a woman and how she looks. To describe this kind of information is impossible without describing the content of certain thoughts — thoughts that arise in people when they look at each other face to face.
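The point can be made concrete with a minimal sketch of such a program (a hypothetical one, written here in Python, assuming a made-up raw format: a width-and-height header followed by RGB byte triples). Every step is a transformation of data into pixels, and nothing in the description needs to refer to what, if anything, the picture depicts.

# A minimal sketch under an assumed, made-up encoding: a header of two
# integers (width, height) followed by RGB byte triples.
import struct

def decode_image(data: bytes):
    """Translate a digitally encoded input into rows of (r, g, b) pixels."""
    width, height = struct.unpack(">II", data[:8])   # read the header
    pixels, offset = [], 8
    for _ in range(height):
        row = []
        for _ in range(width):
            r, g, b = data[offset:offset + 3]        # three bytes per pixel
            row.append((r, g, b))
            offset += 3
        pixels.append(row)
    return pixels                                     # data in, pixels out

# A 2x1 'image' encoded as bytes and decoded, with no reference to its subject.
encoded = struct.pack(">II", 2, 1) + bytes([255, 0, 0, 0, 0, 255])
print(decode_image(encoded))                          # [[(255, 0, 0), (0, 0, 255)]]

The description is exhausted by hardware and instructions of this kind; the woman enters only when a person looks at the screen.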
But how do we move from the one concept of information to the other? How do we explain the emergence of thoughts about something from processes that reside in the transformation of visually encoded data? Cognitive science doesn’t tell us. And computer models of the brain won’t tell us either. They might show how images get encoded in digitalised format and transmitted in that format by neural pathways to the centre where they are ‘interpreted’. But that centre does not in fact interpret: interpreting is a process that we do, in seeing what is there before us. When it comes to the subtle features of the human condition, to the byways of culpability and the secrets of happiness and grief, we need guidance and study if we are to interpret things correctly. That is what the humanities provide, and that is why, when scholars who purport to practise them add the prefix ‘neuro’ to their studies, we should expect their researches to be nonsense.
Roger Scruton’s The Face of God is out this week from Bloomsbury/Continuum.