
International Socialism, December 1996

 

John Parrington

Computers and Consciousness

A Reply to Alex Callinicos

 

From International Socialism 2:73, December 1996.
Copyright © International Socialism.
Copied with thanks from the International Socialism Archive.
Marked up by Einde O’Callaghan for ETOL.

 

Alex Callinicos gives an excellent description of the revolutionary significance of Darwin’s theory of natural selection in his review of Daniel Dennett’s new book, Darwin’s Dangerous Idea, in International Socialism 71. However, while Alex is not uncritical of Dennett, pointing out some of the major flaws in his acceptance of sociobiology, I was surprised to discover no such criticism of what must be one of Dennett’s core ideas – the notion that the human mind can be compared to a computer.

Alex tells us that Dennett has been:

concerned to develop what might be described as a non-reductionist materialist theory of the mind. In other words, he has sought to find a way of treating the mind as a natural phenomenon, whose activities are continuous with those in the physical world, while at the same time recognising that human beings are ‘intentional systems’ whose behaviour cannot be explained without ascribing to them beliefs, desires and other mental states. [1]

I find such a statement hard to accept given that Dennett’s computer model of the mind is one of the primary forms of reductionism operating in psychology today. [2] According to this viewpoint, consciousness is an emergent property that appears whenever brains achieve the right degree of complexity. But a brain, according to the same school of thought, is just a machine that carries out computational processes. Computers, just like brains, could achieve consciousness, and we would have to regard the conscious self, in Dennett’s own words, as merely ‘the programme that runs on your brain’s computer’. [3]

There are a number of fundamental flaws in such an approach. Firstly, there is the assumption that we can ignore the physical differences between brains and computers. Alex expresses this in his statement that ‘mental activity is best understood less in terms of the physical hardware it depends on (the brain and nervous system in humans), but rather in terms of the functions which it realises’. [4] Sussex University philosopher Margaret Boden put it rather more bluntly in her claim that ‘you don’t need brains to be brainy’. [5]

Yet a straightforward comparison between the respective structures of brains and computers shows this assumption to be completely wrong. A crucial difference is that of determinacy. The units (chips, AND/OR gates, logic circuits, etc.) of which a computer is composed are determinate, with a small number of inputs and outputs, and the processes that they carry out with such impressive regularity are linear and error-free. They can store and transmit information according to set rules. [6]
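A minimal sketch, in Python and purely illustrative, shows what such determinate units amount to: a gate has a fixed handful of inputs and returns the same output for the same inputs on every single run.

```python
# A minimal sketch of the determinate units described above: logic gates
# with a small, fixed number of inputs, whose behaviour never varies.

def and_gate(a: bool, b: bool) -> bool:
    """Two inputs, one output: true only when both inputs are true."""
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    """Two inputs, one output: true when either input is true."""
    return a or b

# Determinacy in action: repeat the computation a million times and the
# answer is identical every time -- no noise, no history, no moods.
assert all(and_gate(True, False) is False for _ in range(1_000_000))
assert or_gate(True, False) is True
```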

Such determinacy is a necessary thing if computers are to work effectively. Imagine how you would feel if, when you typed ‘quit’ at a computer terminal, the computer instead displayed next week’s engagements – or printed your last file, not every time, but only when it felt like it. If you couldn’t debug that programme, you would junk it right away.

Brains, however, are not constrained by such rigid determinism. As Steven Rose explains:

The brain/computer metaphor fails because the neuronal systems that comprise the brain, unlike a computer, are radically indeterminate ... Unlike computers, brains are not error-free machines and they do not work in a linear mode – or even a mode simply reducible to a small number of hidden layers. Central nervous system neurons each have many thousands of inputs (synapses) of varying weights and origins ... The brain shows a great deal of plasticity – that is, capacity to modify its structures, chemistry, physiology and output – in response to contingencies of development and experience, yet also manifests redundancy and an extraordinary resilience of functionally appropriate output despite injury and insult. [7]
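By way of contrast with the gates sketched above, the following caricature – emphatically not a model of a real neuron, whose behaviour is far richer – illustrates what units with many thousands of weighted inputs, graded and noisy output, and experience-dependent weights look like:

```python
import random

# A caricature of a neuron, for contrast with the two-input gates above:
# thousands of weighted inputs, noisy graded output, and weights that
# drift with experience ('plasticity'). An illustration of Rose's
# contrast only -- real neurons are far richer than this.

N_INPUTS = 10_000
weights = [random.gauss(0.0, 0.1) for _ in range(N_INPUTS)]

def toy_neuron(inputs):
    # Sum many weighted inputs and add noise: the output is graded and
    # not exactly repeatable, unlike a logic gate's.
    total = sum(w * x for w, x in zip(weights, inputs))
    return max(0.0, total + random.gauss(0.0, 0.5))

def adapt(inputs, rate=0.01):
    # 'Plasticity': experience modifies the unit itself, so the same
    # input need not produce the same output tomorrow.
    for i, x in enumerate(inputs):
        weights[i] += rate * x

stimulus = [random.random() for _ in range(N_INPUTS)]
before = toy_neuron(stimulus)
adapt(stimulus)
after = toy_neuron(stimulus)   # same stimulus, different response
```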

If such features are a general property of animal brains, a qualitatively different level of complexity and indeterminacy is found in the human brain. Human beings are fundamentally different from animals in a number of important respects. [8] One is our ability to act upon and change the world around us by using hands and tools. [9] But another crucial difference is the fact that we are the only species that possesses the linked capacities of reason, reflection and self-conscious awareness. [10] Animals can feel pain, hunger, contentment and so on, but only we are conscious that we feel these things.

Understanding the nature of human consciousness is obviously a fundamental problem for psychology. Dennett is quite right to reject the approach of Descartes, who saw consciousness as some metaphysical essence within the material brain, the homunculus (little man) in our heads lording it over the rest of us. [11] But there is another way of understanding human consciousness which does not have to resort to such idealist notions. First put forward by the Marxist psychologist Lev Vygotsky in the 1920s and most recently by the American psycho-linguist Derek Bickerton, this alternative approach stresses the central role of language in structuring our minds. [12]

The distinctive feature of human language, as opposed to the sounds and gestures made by animals, is that we use words to refer to things and situations that are not actually present in front of us. We use them to abstract and generalise from the reality that confronts us and to describe other realities. But once we can do this to others, we can also do it to ourselves, using the ‘inner speech’ that goes on inside our heads to envisage new situations and new goals. Thus language creates the material basis for consciousness within the brain, without recourse to the internal homunculus Descartes envisaged.

It should be stressed that, while consciousness is based on language, they are not identical. [13] Unlike animals, we have the ability to reflect and reason, but we still share with animals our biological needs and our emotional responses which must also play a part in our thought processes. [14] Neither is the development of consciousness a passive process. It has been shown that children actively seek out the words and concepts that make sense of their everyday practical experience. [15] What is more, ‘inner speech’ itself appears to be a dialogue, reflecting opposing ideological perspectives drawn from society as a whole. [16]

It is this unique synthesis between the social element expressed in language and the individual element expressed in biological and emotional needs that makes human consciousness not only distinct from that of animals, but also something that would be impossible to replicate within a computer. Dennett’s claim that human consciousness can be compared to a computer rests on his assumption that computers are intentional systems, presumably meaning they possess ‘beliefs, desires and other mental states’ [17] like us. But surely this is just confusing the intentionality of the machine with that of its maker. [18] Our intentionality as human individuals stems from a self-conscious awareness of what our actions mean both within the context of society and in our interactions with other human beings. It is this infusion of human consciousness with meaning that introduces a further level of indeterminacy, one that is lacking in computers. [19] Computers are oblivious to the meaning of what they are doing – they merely process the information they are given according to the instructions of a set programme.

One way to illustrate the vast difference between humans and computers is to look at the question of memory. Computers, we are told, are similar to humans because they have ‘memories’ in which they store information. It is sometimes even suggested that computers have superior memories to ours because they do not forget things. In fact computer memory is no more like human memory than the teeth of a saw are like human teeth. To describe both as memory is to use loose metaphors that embrace more differences than similarities. [20] Computers ‘remember’ things as discrete entities. Each item is separable, perhaps designated by a unique address or file name, and all of them are subject to total recall. Unless the machine malfunctions, it can regurgitate everything it has stored exactly as it was entered, whether a single number or a lengthy document. That is what we expect of a machine. [21]
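What such ‘total recall’ by address amounts to can be sketched in a few lines (a toy illustration; the file names and contents are invented): computer storage is a mapping from discrete keys to exact values, returned just as they were entered.

```python
# A sketch of computer 'memory' as described above: discrete items, each
# filed under a unique address, recalled exactly as entered. The file
# names and contents are invented for the example.

storage = {}

def store(address, item):
    storage[address] = item

def recall(address):
    # Total recall: barring malfunction, the item comes back exactly as
    # stored, regardless of what it means or what else is in storage.
    return storage[address]

store('file_0001', 'a single number: 42')
store('file_0002', 'a lengthy document, word for word ...')
assert recall('file_0001') == 'a single number: 42'
```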

In contrast to this, the items of information that the human memory recalls are not selected as isolated units but by their meaning within the wider whole of consciousness. And because we inhabit a biological shell, memory can also involve the emotions, the senses, etc. One of the best descriptions I have read of the strange dynamics of human memory comes not from a scientist, but from the author Vladimir Nabokov:

A passerby whistles a tune at the exact moment that you notice the reflection of a branch in a puddle which in turn and simultaneously recalls a combination of damp leaves and excited birds in some old garden, and the old friend, long dead, suddenly steps out of the past, smiling and closing his dripping umbrella. The whole thing lasts one radiant second and the motion of impressions and images is so swift that you cannot check the exact laws which attend their recognition, formation and fusion ... It is like a jigsaw puzzle that instantly comes together in your brain with the brain itself unable to observe how and why the pieces fit, and you experience a shuddering sensation of wild magic. [22]

On a more mundane level, the importance of meaning for human memory is illustrated by the fact that in laboratory experiments, if given meaningless bits of information to remember, people can only remember 7 ± 2 items. [23] Obviously not on a par with most pocket calculators, never mind a computer. However, ask a Socialist Workers Party district organiser for members’ phone numbers, or a football supporter for the day’s scores (as in the experiment cited above), and they will undoubtedly be able to remember a good deal more!

I believe the flaws in Dennett’s computer model of the mind also help to explain why such an undoubtedly sophisticated thinker ends up supporting such a crude model of cultural evolution as sociobiologist Richard Dawkins’s theory of memes. [24] As Alex points out in his review, memes are an entirely idealist notion brought in to offset the crude mechanical materialism of sociobiology. [25] As such they tell us nothing about where ideas come from. But if we look at the properties of memes, described by Alex as a hopeless jumble of different categories functioning in isolation from each other, isn’t this a good description of the way we find information stored on a computer? Of course, the separate files on a computer are linked together in a different way – that is, in the head of the person who put them there. [26] Viewed in this way, Dennett’s enthusiasm for the concept of memes becomes not so much an anomaly as a logical extension of his computer model of the mind.

Finally, what relevance does all of this have for socialists? Firstly, I think we should be very sceptical of much of the hype surrounding Artificial Intelligence (AI). [27] From its origins in the Second World War [28] there has always been an intense military interest in AI. This reached a peak during the 1980s under the auspices of Ronald Reagan’s infamous Star Wars programme, but the interest continues to this day. As Steven Rose has observed: ‘It has become hard these days to attend a scientific conference on themes associated with learning, memory and computer models thereof, without finding a strongly hovering US military presence’. [29] The result is that AI has been one of the most richly funded fields of academic research over the last few decades.

In order to justify the billions of dollars invested in AI, it is important for its backers to be able to point to something more inspiring than the production of bigger and better weapons of mass destruction. So in 1958 Allen Newell and Herbert Simon, two of AI’s founding fathers, were predicting that we would soon have computers with problem-solving powers ‘coextensive with the range to which the human mind had been applied’. [30]

By 1970 no such machines had materialised, but that didn’t stop AI guru Marvin Minsky [31] from declaring:

In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months, it will be at genius level, and a few months after that, its power will be incalculable. [32]

Needless to say, the point at which this day of reckoning will occur has successively been pushed back with each passing decade.

If all this sounds like an outsider sniping, it is worth noting that a significant part of the criticism of AI has come from within the computer community itself. Some researchers are worried that the alliance between information technology and the Pentagon is distorting the priorities of the research. But more basically still, there are those who argue that AI has been oversold to a degree that flirts with outright fraud. [33] So it was an IBM employee, Lewis M. Branscomb, who observed that, with respect to AI, ‘the extravagant statements of the past few years have become a source of concern to many of us who have seen excessive claims by researchers in other fields lead to unreasonable expectations in the public’. [34] The charge was put even more pointedly by Herbert Grosch, formerly of IBM: ‘The emperor – whether talking about the fifth generation or AI – is stark naked from the ankles up’. [35]

The point is not to denigrate the enormous advances which have been made in computer science but to be rather more realistic about the limits of what computers will and will not be able to do compared to human beings in the coming decades. This is an important political point given continual reports in the media about how computers will soon be putting us all out of our jobs. [36]

The fact is, computers are only superior to human beings in those areas where we might expect them to be: that is, where rapid processing power and monotonous, regular and error-free behaviour are called for. So in my own line of work – genetics – it is much quicker to use a computer to scan DNA sequences than to do it manually (and much less monotonous for me!). Computers have also excelled at chess playing, a rule-based game where rapid computational skills are an asset. [37]
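As an illustration of the kind of task meant here – the sequence and motif below are invented for the example – scanning DNA for a short pattern is exactly the monotonous, rule-bound work a computer performs instantly and without error:

```python
# A minimal sketch of the DNA-sequence scanning mentioned above: rapid,
# monotonous, error-free pattern matching over a long string. The
# sequence and motif are invented for the example.

def find_motif(sequence, motif):
    """Return every position at which motif occurs in sequence."""
    hits = []
    start = sequence.find(motif)
    while start != -1:
        hits.append(start)
        start = sequence.find(motif, start + 1)
    return hits

genome_fragment = 'ATGCGTACGTTAGCATGCGT' * 1_000   # stand-in for real data
positions = find_motif(genome_fragment, 'TAGCA')
print(len(positions), positions[:5])   # instant; mind-numbing by hand
```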

However, when we turn to tasks that may seem much simpler to humans, requiring only ‘common sense’, it is precisely there that computers’ limitations have most visibly emerged. So computer scientists are still struggling to develop a robot system which can pile an orange pyramid onto a blue cube [38], or follow a simple set of instructions relating to everyday life. [39] And given what we have said about the importance of language in human consciousness, it is significant that no computer has yet been able to translate languages successfully – one of the first skills predicted by AI theorists.

The second problem with AI is the very negative influence it has had on psychology. Given that psychological theories inform debates on social matters as diverse as the question of working mothers, the treatment of prisoners, school testing and crowd control [40], socialists have an interest in the development of progressive theories of the mind and cannot ignore reactionary ones. The mainstream form of psychology until recently was behaviourism. [41] This is the idea that human behaviour can be explained as nothing more than learned responses to outside stimuli. Pavlov’s dogs became the accepted model for human behaviour. As a theory of the mind behaviourism was not only crude, mechanical and wrong – it also had some extremely reactionary practical consequences. Sensory deprivation, restricted diet, solitary confinement and loss of remission for prisoners, the ‘pindown’ methods used in children’s homes, even the idea that it is right to smack children – all come from behaviourism and the idea that people can be trained like animals. [42]

Since the 1960s there has been a reaction in psychology against the crudest ideas of behaviourism – the so-called ‘cognitive revolution’. [43] Initially the new cognitive psychology looked as though it was going to be a great improvement on behaviourism. For the first time in years psychologists were talking about consciousness, meaning and social context, not just about nerve reflexes. But, despite a promising start, cognitive psychology has turned out to be largely a dead end. The reason was the wholesale adoption of the mind-as-computer model. One consequence is that mainstream psychology today is full of flow charts which bear a distinct resemblance to computer diagrams. Uncertainties or ‘woolly’ concepts like context or background are often relegated to single input boxes (with an arrow feeding in, for example, ‘cultural factors’).

The result is that cognitive psychology has turned out to be as incapable as behaviourism of explaining how and why people behave as they do. As Huddersfield University psychologist Nicky Hayes puts it:

Cognitive psychology has sold itself, and the rest of psychology, short. It has failed to provide the alternative which was so badly needed after the dominance of behaviourism. The cognitive obsession with the computer metaphor meant that it fell into the same trap for which it criticised the behaviourists. By defining the human being using a metaphor, and then taking the metaphor as if that were the only possible reality, it rendered itself unable to respond to the real issues and challenges which were being thrown up. [44]

In summary, the problem with comparing human consciousness to a computer is that it is based on a gross underestimation of what it means to be human. Computers process information and so do humans, but, whereas that is all that computers do, information processing is only one side of human consciousness. The other side, and the side that also distinguishes us from animals, is that for humans all information is also infused with meaning.

So how does the confusion over what should be obvious differences between ourselves and computers occur? After all, it is not only AI theorists who believe computers will soon be able to do everything a human can – many ordinary people are worried about whether computers might one day take their jobs. I think it has a lot to do with what Marx called our alienation from society under capitalism. According to Marx’s theory [45], workers are alienated from capitalist society not simply because most don’t feel happy and satisfied with that society, but literally, because they have no control over their work and what happens to the things they produce. Under capitalism everything is defined by its value as a commodity, not by how useful it is. So an expensive car can become an unattainable object of desire. Products can appear to have a life of their own – or, as Marx put it, they become fetish objects – despite the fact that they were made by ordinary workers. With computers, because superficially they can seem to have characteristics similar to our own minds, this fetishisation seems to go one step further: machines that we make can nevertheless be seen as a threat to our jobs and potentially as our future masters. This peculiar reversal of roles could only occur in the upside-down world of capitalism. In a socialist society we will be able to view and utilise computers properly, for what they are – fantastically advanced tools, but tools nonetheless.


Notes

1. A. Callinicos, International Socialism 71, p. 101.

2. N. Hayes, Psychology in Perspective (Macmillan 1995), ch. 8, pp. 109–114.

3. D.C. Dennett, Consciousness Explained (Boston 1991), p. 430.

4. A. Callinicos, op. cit., p. 101.

5. M. Boden, in S. Rose and L. Appignanesi (eds.), The Limits of Science (Blackwell 1986).

6. S. Rose, The Making of Memory (Bantam 1992), p. 88.

7. Ibid., p. 89.

8. J. Parrington, What makes us human? Tape of talk at Marxism 96 available from Bookmarks.

9. F. Engels, The Role of Labour in the Transition from Ape to Man, in Dialectics of Nature (Progress 1954), pp. 170–183; C. Harman, International Socialism 65, pp. 83–142.

10. Since at least Aristotle humans have been defined as reasoning animals, but the distinction is developed most recently in D. Bickerton, Language and Human Behaviour (UCL 1996).

11. D.C. Dennett, op. cit., p. 313.

12. L. Vygotsky, Thought and Language (MIT 1986); D. Bickerton, op. cit. Note that, while Bickerton’s analysis complements the Marxist one, there are problems with his approach which will be tackled in my forthcoming International Socialism article on the Russian philosopher, Voloshinov.

13. Developed in L. Vygotsky, op. cit., ch. 7, pp. 210–256. Note that inner speech itself has a different, much more fluid structure from spoken language, leaving space for a great deal more flexibility than the latter.

14. Though it is important to realise that even the most ‘natural’ of our urges, such as hunger or the sexual urge, are also intimately linked to social values.

15. L. Vygotsky, op. cit., Chapters 5 and 6, pp. 96–209.

16. V. Voloshinov, Marxism and the Philosophy of Language (Harvard 1973).

17. A. Callinicos, op. cit., p. 101.

18. Derek Bickerton classifies Dennett as a victim of CAP syndrome (computer assisted pygmalionism). Pygmalion was the legendary Greek sculptor who carved a statue so beautiful that he immediately fell in love with it. To be able to do this he presumably had to believe that he and the statue were organisms of a similar kind. D. Bickerton, op. cit., pp. 123–124.

19. Ibid., p. 151.

20. The phenomenon is particularly prevalent in sociobiology with regard to animal versus human behaviour.

21. T. Roszak, The Cult of Information (Paladin 1988), p. 116.

22. V. Nabokov, The Art of Literature and Common Sense, in Lectures on Literature (Harcourt Brace Jovanovich 1980).

23. P.E. Morris et al., British Journal of Psychology, 72, pp. 479–483.

24. R. Dawkins, The Selfish Gene (Oxford University 1989).

25. A. Callinicos, op. cit., pp. 109–111.

26. I think Dennett’s idea of sub- or pseudo-intentionality as a lower level of consciousness arises from this source too.

27. T. Roszak, op. cit., is best for a general account.

28. AI has its origins in the need to develop effective servo-mechanical devices that could calculate the elevation and direction needed to fire anti-aircraft guns at rapidly moving targets. These techniques were developed by the US mathematician Norbert Wiener, who gave the new science its name: cybernetics.

29. S. Rose, op. cit., p. 79.

30. Quoted in T. Roszak, op. cit., p. 144.

31. Minsky’s influence on Dennett is apparent in his eclectic set of ‘categories of mind’ which sound very similar to Dennett’s memes. See M. Minsky, The Society of Mind (Simon and Schuster 1985).

32. Quoted in T. Roszak, op. cit., pp. 144–145.

33. Ibid., p. 146.

34. Quoted in ibid., p. 146.

35. Ibid., p. 146.

36. Not that one should ever be complacent about the threat to some jobs posed by increasing automation – the point is that the extent to which computers can successfully replace workers is often exaggerated for political purposes.

37. R. Penrose, The Emperor’s New Mind (Vintage 1989), pp. 16–17. Penrose, however, notes that computers ‘play’ chess in quite a different way from humans, going rapidly through every possible move, whereas humans use much more judgment to select alternatives.

38. S. Rose, op. cit., p. 91.

39. T. Roszak, op. cit., p. 147.

40. N. Hayes, op. cit. Also J. Parrington, Marxism and Psychology. Tape of talk at Marxism 96 available from Bookmarks.

41. N. Hayes, op. cit., ch. 2, pp. 19–33.

42. The classic satire of behaviourism – the film A Clockwork Orange – may have seemed shocking, but the real-life practical consequences are almost as bad, as documented in Rose, Kamin and Lewontin’s Not in Our Genes (Pelican 1984), pp. 175–178.

43. N. Hayes, op. cit., ch. 8, pp. 105–121.

44. Ibid., pp. 120–121.

45. A. Callinicos, The Revolutionary Ideas of Karl Marx (Bookmarks 1995), ch. 4, pp. 65–81.

 