From International Socialism 2 : 73, December 1996.
Copyright © International Socialism.
Copied with thanks from the International Socialism Archive.
Marked up by Einde O’Callaghan for ETOL.
Alex Callinicos is absolutely correct to recommend Daniel Dennett’s writings on the mind and evolution. [1] Dennett gives clear introductions to exciting areas of science and philosophy. Moreover, he is a consistent materialist, and socialists always have something to learn from consistent materialists. However, he belongs to a very different philosophical tradition from classical Marxism. Callinicos mentions this in passing, but identifies neither the root of the difference nor the effects it has.
The underlying problem is with how we understand cause in the natural world. Dennett, along with Richard Dawkins and the vast majority of other practising scientists, is an empiricist. This means that he ultimately agrees with David Hume’s definition of cause as simply a correlation between events: if you find that whenever A happens B follows, and that whenever A doesn’t happen B doesn’t, then that is all we mean by A causing B. For example, if you flick a switch the light comes on; if you don’t, it won’t. If this is the case then, by definition, the switch caused the light to come on. Richard Dawkins is quite explicit in his agreement – at least about how the concept of ‘cause’ is used in practice:
Philosophers, possibly with justification, make heavy weather of the concept of causation, but to a working biologist causation is a rather simple statistical concept. Operationally we can never demonstrate that a particular observed event C caused a particular result R, although it will often be judged highly likely. What biologists in practice usually do is to establish statistically that events of class R reliably follow events of class C ... Statistical methods are designed to help us assess, to any specified level of probabilistic confidence, whether the results we obtain really indicate a causal relationship. [2]
Dennett, being a philosopher, is never going to be tied down that simply, but he too basically agrees: ‘If one finds a predictive pattern of the sort just described one has ipso facto discovered a causal power – a difference in the world that makes a subsequent difference testable by standard empirical methods of variable manipulation.’ [3]
The problem with this definition of cause is that it gives us no way of looking beneath the surface appearances of events to their underlying reality. Marx once noted that if the world worked just as it appeared to, there would be no need for science. Only when we understand the processes going on beneath the surface can we understand how the situation can change. Take a simple example. Most people who bought a council house during the 1980s voted Tory in 1987; most who refused, or couldn’t afford to, didn’t. The empiricist must draw the conclusion that buying a council house causes voting Tory. There is certainly an element of truth in this, but we all know that the real situation is far more complicated. A specific economic and political situation, including rising property prices, caused both house buying and Tory voting; that is why they were correlated. If the situation changes – for example, if falling house prices produce negative equity – then the link between house ownership and voting intentions changes. These days it is those trapped with big mortgages compared to their income who most want to strangle Major. The empiricist definition of cause elevates a correlation between events into a real mechanism, and tends to obscure the complexities and dynamics of real life. [4] You can see why it has always been the favourite philosophy of the British ruling class.
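To see the point in miniature, here is a deliberately crude sketch, with invented probabilities rather than survey data (the ‘boom’ variable and all the figures are purely illustrative): a single underlying situation drives both house buying and Tory voting, so the two come out correlated even though neither causes the other, and the strength of the correlation shifts as the situation changes.

```python
# A crude illustration with invented numbers: 'boom' stands in for the
# underlying economic and political situation. It raises the chance of
# both buying a council house and voting Tory; buying never directly
# influences voting in this model, yet the two come out correlated.
import random

random.seed(0)

def simulate(prob_boom):
    buyers = buyers_tory = non_buyers = non_buyers_tory = 0
    for _ in range(100_000):
        boom = random.random() < prob_boom
        buys = random.random() < (0.6 if boom else 0.1)
        tory = random.random() < (0.7 if boom else 0.2)
        if buys:
            buyers += 1
            buyers_tory += tory
        else:
            non_buyers += 1
            non_buyers_tory += tory
    return buyers_tory / buyers, non_buyers_tory / non_buyers

# In 'boom' conditions buyers vote Tory far more often than non-buyers;
# when the underlying situation changes, the same comparison gives a
# much weaker link, although no causal mechanism was ever altered.
print(simulate(prob_boom=0.5))
print(simulate(prob_boom=0.05))
```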
The empiricist definition of cause tends to mix up the processes of description and explanation. A description of a series of events just picks out significant regularities in them. An explanation, on the other hand, starts from a description, then relates these regularities to forces and mechanisms that are not present in the data described, but rather go on beneath the surface. We only know a description is correct if we can ‘ground’ it in an explanation.
For example, before Darwin discovered natural selection, many biologists agreed that species evolved; they just argued about the mechanism that caused it. The most widely held theory was Lamarck’s. He believed that organisms passed on characteristics that they developed during their lives – for example, baby giraffes would have long necks because their parents had to stretch theirs in order to reach leaves on trees. This theory was fairly adequate at describing the biological data of the time, and certainly as adequate as Darwin’s theory. In fact, natural selection seemed to contradict two known facts. First, it would have meant that evolution worked incredibly slowly – it would have taken longer than the then known age of the earth. Second, it required characteristics of organisms to be passed on in an all-or-nothing fashion, when any child of a tall and a short parent knows that they tend to get averaged out. Natural selection wasn’t generally accepted until the start of the century, when Mendel’s work on genetics was rediscovered. Genetics provided an underlying mechanism for natural selection that resolved the problem of how characteristics get passed on. Until then the two theories fitted the known data equally well. According to an empiricist, they would be equally true. It is only once we look beneath the surface that we can see which one describes a real process.
Surface appearances do not consist of unarguable facts, but are shaped by current theories. For example, the age of the earth was calculated from the rate at which objects cool down and from the earth’s current temperature – in the same way that you can tell how long a pie has been out of the oven by how hot it is. It was later discovered that the earth generates its own heat, and so it would have taken far longer to cool down. The age of the earth was recalculated at 4.6 billion years, which left enough time for evolution through natural selection to take place. Darwin was right despite (some of) the then known facts.
This argument about cause has implications for our understanding of the mind. Callinicos is completely uncritical of Dennett’s notion of intentionality – the property of our thoughts that they are about something external to us. Intentionality is assumed every time we describe someone (including ourselves) as thinking that so and so is the case, or wishing that such and such would happen. This might all sound too obvious for words, but it isn’t. Many cultures throughout history have ascribed intentionality to all sorts of things – winds, trees, rivers – that we wouldn’t want to. Others would explain human actions, not in intentional terms like beliefs and desires, but in terms of spirits, humours, the hand of God, and so on. The philosophy of phenomenology makes a practice of trying to eliminate rational intentionality from explanations of human behaviour. The question is, how do we know we are right, and that these other explanations are wrong? Postmodern philosophers would argue that we can’t claim that our interpretation is any more correct, and that they are all just different ways of looking at the world. Do we have any better answer than this?
If we are to be consistent materialists, then we have to assume that intentionality is something to do with the way our brains and bodies work, and with their relationship to the things that we think about. To put it at its most crude: what makes our thoughts of a cat about a cat (rather than just a purely private, internal affair)? The simple answer is that our thoughts can cause our bodies to reach out and grab a cat, and so bring our thoughts into contact with their contents through our actions. Of course we have as yet very little idea about how all this works, [5] but we have to assume, as Engels did, that some kind of explanation is possible. This does not imply that mental phenomena will some day be reduced to the physical, but that some kind of relationship can be found. Until we find such a causal explanatory link, our intentional descriptions remain just that: descriptions. And, like any description unsupported by an understanding of the underlying mechanism, such as Lamarck’s description of evolution, they are susceptible to being found incorrect.
Dennett would disagree fundamentally with this. According to him, what makes intentionality real (what he calls ‘being a true believer’) is not its potential to be united with the rest of science, but rather just the fact that it works. If we describe people’s behaviour using beliefs and desires (what Dennett calls ‘taking the intentional stance’), then we can make incredibly accurate predictions. I can talk to someone on the phone and arrange to meet them somewhere. I could then predict that they would leave their house, that they would pick up an umbrella if it was raining, what route they would walk, what they would do if they had to cross a road and there was a car coming, and so on. All this belief and desire knowledge is very good at describing what people do, and for Dennett this means that it is true: when we say that someone believes something, all we actually mean is that it is useful to suppose that they believe it: ‘... all there is to really and truly believing that p is being an intentional system for which p occurs as a belief in the best (most predictive) interpretation’. [6] In other words, intentionality is a real, material, causal property simply because it is a very good description. This is the old empiricist equation of correlation and cause. This is dangerous. The ancient Greeks managed to build huge military and political empires, for which they needed a very good predictive description of human intentional behaviour. According to Dennett this would make their theory of humours and spirits as ‘true’ as our scientific approach. Does Callinicos really want to allow this?
This also has implications for how we think about artificial intelligence computer systems. Programs called ‘expert systems’ have been written that can answer questions about, say, cats. They can tell you what cats look like, what they eat, how long they live, and so on. If it were a person answering the questions, you would be convinced that they had a pretty good idea of what a cat was. But a cat could leap on top of the computer and it wouldn’t know anything about it; the computer’s knowledge exists in a form completely separated from the material world. Nonetheless, because the computer gives such a convincing impression of someone who knows what they are talking about, Callinicos is forced to follow Dennett in saying that it has real beliefs, even though it is completely cut off from the world that it is supposed to know about. This is a version of dualism – the claim that ideas are not part of the material world – and is something that Marxists have fought long and hard against.
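To make the point vivid, here is a toy sketch of the kind of thing involved – not any real expert system, and far cruder than the programs referred to above – in which the ‘knowledge’ amounts to stored sentences matched against keywords, with no connection to any actual cat.

```python
# A toy 'expert system' about cats: canned facts keyed by topic words.
# Its answers can sound knowledgeable, but nothing here is causally
# connected to any actual cat - the 'knowledge' is only stored text.
CAT_FACTS = {
    "look": "Cats are small furry mammals with whiskers and retractable claws.",
    "eat": "Cats are carnivores; domestic cats are usually fed meat or fish.",
    "live": "Domestic cats typically live for around 12 to 18 years.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keyword, fact in CAT_FACTS.items():
        if keyword in q:
            return fact
    return "I have no information about that."

print(answer("What do cats eat?"))
print(answer("How long do cats live?"))
print(answer("A cat just jumped on the keyboard, did you notice?"))  # it cannot
```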
Having failed to identify the real disagreements between Dennett and ourselves, Callinicos seems almost surprised when Dennett turns out to agree with the theory of the ‘selfish gene’. He shouldn’t be, since it is a direct consequence of Dennett’s empiricist view of cause. The heart of the selfish gene theory is what Francis Crick (who co-discovered the structure of DNA) calls the ‘central dogma’ of molecular biology. This states that every aspect of the physiology and behaviour of an organism (its traits) is caused by the particular genes it carries. Since traits affect how successful the organism is at reproducing – and so at passing on its genes to the next generation – natural selection is really a competition between genes to survive. It is this theory that allows its best known advocate, Richard Dawkins (and Dennett), to describe organisms, including humans, as lumbering robots programmed by their selfish genes.
Now it is true that changes in particular genes are correlated with changes in traits. Moreover, the chances of the organism successfully reproducing, and so passing on its genes to the next generation, are correlated with these traits. According to an empiricist this means that a specific gene caused a specific trait, and that the specific trait caused the gene to spread. Given this simple causal picture you can have an account of natural selection that doesn’t even mention organisms, just genes. [7] The real picture is far more complex. Traits are properties of organisms that are produced by a process of development. This process is the product of the interaction of the organism with its genes and the natural and social environment. We have to understand how all these things combine if we are to understand biology and human culture, just as we had to understand the forces underlying council house buying and Tory voting.
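As a crude illustration of why that matters – an invented model, not real biological data – suppose the trait that selection acts on is produced by the interaction of a gene with the environment the organism develops in. Then the same gene can be favoured in one setting and penalised in another, which is exactly the part of the story that a purely gene-level accounting leaves out.

```python
# A crude, invented model: the trait selection acts on is produced by a
# gene *and* the environment the organism develops in, so the 'fitness
# of a gene' is not a fixed number but depends on that interaction.
def trait(gene: str, environment: str) -> float:
    """Hypothetical developmental outcome for two alleles in two settings."""
    table = {
        ("A", "harsh"): 0.9,  # allele A does well in a harsh environment...
        ("A", "mild"): 0.4,   # ...but poorly in a mild one
        ("B", "harsh"): 0.3,
        ("B", "mild"): 0.8,
    }
    return table[(gene, environment)]

for env in ("harsh", "mild"):
    better = max("AB", key=lambda g: trait(g, env))
    print(f"In a {env} environment, allele {better} is favoured "
          f"(A={trait('A', env)}, B={trait('B', env)})")
```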
I have also got a couple of niggles about Callinicos’s account of natural selection. The basic problem is that he sees natural selection as being about the evolution of organisms adapted to given conditions. However, this ignores the fact that it is the products of evolution that themselves determine (albeit unconsciously) what those conditions will be. As Lewontin puts it, organisms are the objects and subjects of evolution – they construct their own environment.
As a result Callinicos gives a one-sided picture of natural selection as being primarily about competition for resources. It is equally true to say that evolution favours organisms that avoid competition. Fighting is very expensive; it is usually better to find ways to co-operate. Species such as flies, rats and humans are successful because of their flexibility in locating alternative resources, compared with butterflies, squirrels and chimps. [8] Instead of competing for a given set of resources, they find a new environment.
This also has implications for our understanding of determinism in evolutionary histories. Stephen Jay Gould claims that, if the tape of evolution were rewound and allowed to unfold again, it could look very different. Callinicos, following Dennett, argues that, since both versions face the same initial conditions, they are bound to follow similar paths. But once we understand that organisms define their own conditions, the variability in the initial population can quickly lead to very divergent histories. Although this might seem like a purely abstract argument, we can in fact test it in practice. It is now possible to reproduce a form of natural selection in computer simulations, known as ‘artificial evolution’. Even with the incredibly simple environments and organisms being simulated today, it is possible to see very different results from repeated evolutionary ‘runs’. Natural selection isn’t about finding optimal solutions to given problems. Organisms define their own problems and solutions simultaneously.
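For readers who want to see the idea in its simplest form, here is a minimal sketch in the spirit of such simulations – a toy, not one of the actual ‘artificial evolution’ systems referred to above. A population evolves on a fitness landscape with two equally good peaks; every run starts from the same population and differs only in its random mutations, yet different runs can settle on different peaks.

```python
# A minimal 'artificial evolution' sketch (a toy, not the simulations
# referred to above): bit-string organisms evolve on a landscape with two
# equal peaks, all ones or all zeros. Every run starts from the same
# population; only the random mutations differ, yet runs can converge on
# different peaks.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 200

def fitness(genome):
    ones = sum(genome)
    return max(ones, GENOME_LEN - ones)  # rewarded for nearing either peak

def evolve(seed):
    rng = random.Random(seed)
    # identical starting point for every run: alternating 0s and 1s
    population = [[i % 2 for i in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # pick parents in proportion to fitness, then mutate each child
        parents = rng.choices(population,
                              weights=[fitness(g) for g in population],
                              k=POP_SIZE)
        population = []
        for parent in parents:
            child = parent[:]
            pos = rng.randrange(GENOME_LEN)
            child[pos] = 1 - child[pos]  # flip one random bit
            population.append(child)
    best = max(population, key=fitness)
    return "mostly ones" if sum(best) > GENOME_LEN // 2 else "mostly zeros"

for seed in range(6):
    print(f"run {seed}: population converged on {evolve(seed)}")
```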
Times are tough for materialist philosophers of mind; and when times are tough, you can’t be too choosy about your friends. Nonetheless this doesn’t mean that you should be unclear about, or bury, your differences. The differences between Dennett and the tradition of this journal are not incidental; they are deep. But I hope that won’t stop anyone reading his books.
Thanks to Helen Crudgington for many useful discussions. The author is a researcher in the School of Cognitive and Computing Systems at the University of Sussex.
1. A. Callinicos, Darwin, Materialism and Evolution: a Review of Daniel Dennett’s Darwin’s Dangerous Idea, International Socialism 71.
2. R. Dawkins, The Extended Phenotype (W.H. Freeman 1982), p. 12.
3. D.C. Dennett, Real Patterns, The Journal of Philosophy 88 (3), 1991, p. 45.
4. For more on empiricism, materialism and cause, see A. Collier, Critical Realism: An Introduction to the Philosophy of Roy Bhaskar (Verso, 1994). Bhaskar’s original writings are incredibly hard to follow, and often almost mystical, but the first half of this book is a very good defence of the idea that science can break through surface appearances to understand underlying causes.
5. But for a good introduction to what we do know, see S.P.R. Rose, The Making Of Memory (Bantam 1992).
6. D.C. Dennett, The Intentional Stance (MIT Press 1987), p. 29.
7. This issue is dealt with more fully by the great Marxist biologist Richard Lewontin in Artefact, Cause and Genic Selection, Philosophy of Science 49, 1982, pp. 157–180.
8. For more on how competition is understood, see Competition: Current Usages, in E.F. Keller and E.A. Lloyd, Keywords in Evolutionary Biology (Harvard University Press 1992).