Functionalism - Causal Role

 


 

Recommended Reading

 

 

D. Lewis, ‘Mad Pain and Martian Pain’, in the Course Reader.

H. Putnam, ‘The Nature of Mental States’, in (1975) Mind, Language, and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press.

N. Block, ‘Introduction: What is Functionalism?’, in (1980) Readings in Philosophy of Psychology, vol. 1, Cambridge: Harvard University Press.

R. G. Millikan (1984) Language, Thought, and Other Biological Categories, Cambridge: MIT Press.

N. Block, ‘Troubles with Functionalism’, in (1980) Readings in Philosophy of Psychology, vol. 1, Cambridge: Harvard University Press.

H. Putnam (1988) Representation and Reality, Cambridge: MIT Press, ch. 5.

 

The Hypothesis

 

 

Recall:

 

1.         Behaviourism defined a mental state in terms of the behaviours that were associated with that state. The behaviours themselves could be defined merely in terms of the output from the mentating creature, or they could also be dependent upon the stimuli that the creature would be presented with. In any case, their definition turned out to be non-trivial. Moreover, the whole idea came to be seen as unsatisfactory, because we really think that mental states – especially our own mental states – have something to do with what’s going on inside us when we’re in that state.

 

2.         The Identity Theory defined a mental state as being a particular state of the brain. This at least took account of the fact that there was some internal aspect to mental life, and could be seen as a recognition of the necessity for some physical state to ground the ‘dispositions’ that the behaviourists proposed. But it was too specific: there is resistance to the idea that only creatures that are extremely physiologically similar to ourselves – perhaps only those which are identical to us – can be said to be feeling pain.

 

There’s a classic example of a creature that we think should be able to feel pain but has a quite different physiology from ourselves. The Martian (Lewis p. 229a-b) is supposed to be such a person.

 

Now:

 

The Functionalist Theory attempts to combine the positive features of both of the above while avoiding the negatives. The theory was largely created by Putnam in his 1967 article ‘Psychological Predicates’ (in Art, Mind, and Religion, Pittsburgh: University of Pittsburgh Press; later reprinted as ‘The Nature of Mental States’, see above.) In general terms, the theory holds that a mental state is the mental state that it is because it stands in certain functional relationships with input stimuli, with output behaviours, and with other mental states. You can see what the advantages might be here. There is a connection to the behaviour of the creature in a context, this time defined in terms of a functional relationship, and there is a sense in which the functional aspect is an internal aspect of the creature. Moreover, it turns out that the problem of chauvinism might be solvable.

 

Exposition

 

 

We don’t always care about the specifics of the objects that we pick out in the world. Consider chairs for example. Within certain limits, they seem to be defined functionally. It doesn’t matter whether they have curved legs or straight, whether they’re made of wood or plastic, whether they’re painted or polished. What matters is that they be capable of supporting a seated human. (On the other hand, we do distinguish chairs from stools, benches, etc. so the definition can’t be entirely functional.) Maybe a better example is the gene. Before we even knew what chemical things – if they were chemical things – were responsible for the transmission and mutability of heritable characteristics of biological organisms, we had a concept of the gene. Now we talk about chromosomes and DNA and so on, but these are only the physical implementers of the functions that we had already determined. We don’t care at all – so far as genes per se are concerned – about this actual physical implementation. To prove this, merely note that it is quite possible for us to conceive that the genes turn out not to be located on DNA. The concept of a gene is untouched by this discovery.

 

By analogy, a functionalist holds that a mental state is to be defined in terms of its role in the production of intentional behaviour, and the concept of a mental state is to be likewise distinguished from any implementation of the function. Thus the mental state of pain (it’s always pain) is to be defined as the mental state of the creature that is caused by pinpricks, scratches, eviscerations, broken bones, and so on, when the creature is in a normal mental state, and which causes flinching, whining, screaming, etc.; and, when the creature is in a comatose state or very distracted, is caused by eviscerations and broken bones, and so on, and then causes screaming etc.; and so on for further conditions. The mental state of embarrassment is similarly defined in terms of stimuli and responses and other mental states, but they are different stimuli and responses and other mental states – or, at least, the functional relations between the elements are different. Now we can see how chauvinism is avoided by this theory. If a mental state is defined functionally but implemented physically, then there is room for any number of distinct physical realizations that are all performing the same functions. So our Martian can feel pain even though he’s entirely hydraulic and has no C-fibres to fire.
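The point about multiple realizability can be put in a toy Python sketch. Everything here is invented for illustration – the class names, the three-entry ‘role’ table, the attribute names – but it shows the shape of the claim: two creatures with entirely different internal mechanisms satisfy the same functional specification, and so both count as being in pain by functionalist lights.

```python
# A tiny fragment of the functional role of pain (invented for illustration):
# (stimulus, background condition) -> behavioural output.
PAIN_ROLE = {
    ("pinprick", "alert"): "flinch",
    ("broken bone", "alert"): "scream",
    ("broken bone", "comatose"): "scream",
}

class CFibreCreature:
    """Realizes the role with (toy) C-fibre firings as the internal state."""
    def respond(self, stimulus, condition):
        self.c_fibres_firing = (stimulus, condition) in PAIN_ROLE
        return PAIN_ROLE.get((stimulus, condition), "no reaction")

class HydraulicMartian:
    """Realizes the very same role with hydraulic pressure; no C-fibres at all."""
    def respond(self, stimulus, condition):
        self.cavity_pressure = 90.0 if (stimulus, condition) in PAIN_ROLE else 0.0
        return PAIN_ROLE.get((stimulus, condition), "no reaction")

# Functionally indistinguishable, physically nothing alike:
human, martian = CFibreCreature(), HydraulicMartian()
for probe in [("pinprick", "alert"), ("broken bone", "comatose"), ("pinprick", "asleep")]:
    assert human.respond(*probe) == martian.respond(*probe)
```

The functional definition mentions only the input-output profile, so nothing in it discriminates between the two internal mechanisms – which is exactly how the Martian escapes the chauvinism of the identity theory.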

 

Note that while we have been assuming that the implementation of the function defining a mental state is a physical state of some kind, this is not necessarily the case. It might be that the implementation is in a spooky substance such as Descartes had assumed. In this respect the functionalist theory is ontologically neutral. It doesn’t determine just by itself what sorts of things are allowed to be in the world.

 

Note also that the definition of any mental state is admitted to refer to other mental states. This makes it look like the definition of mental states is going to be circular. You define MS α in terms of MS β and MS γ, and MS γ in terms of MS δ and MS α, and so on. And you are never guaranteed to come to a point where the definitions can be carried out without reference to some other MS. In fact, it is most likely that there is no such point, because that would mean a mental state existed whose actions are independent of all other mental states. [Question: is there a real problem with circular definitions? Quine, I think, says somewhere that all definitions that aren’t ostensive are going to be circular eventually.] In recognition of this, functionalists claim that the necessary definitions are to be considered as being part of a comprehensive set of definitions that cover the whole of psychology. Thus, the matching that is done between the functional states and the mental states is done throughout the entire set of definitions, which is to say the entire causal network is mapped onto our psychology, rather than being done piecemeal.

 

Ramsey formulae

 

 

How do we go about constructing a suitable set of definitions? The Ramsey-Lewis method allows us to functionally define a structure of mental states formally without having to name any of the mental states – which would be to pre-empt the mapping.

 

We can start with a standard definition. Kim (p. 105) gives this for pain, for example, noting that it is only a fragment of what we would consider to be an adequate characterization of pain:

 

T:         For any x, if x suffers tissue damage and is normally alert, x is in pain; if x is awake, x tends to be normally alert; if x is in pain, x winces and groans and goes into a state of distress; and if x is not normally alert or is in distress, x tends to make more typing errors.

 

Note that there are two general classes of events that occur in the definition above – and, in fact, in any definition of the kind. There are the events that are external, and would be acceptable to behaviourist analysis, and there are the events that are internal, and are what the theory is supposed to be explaining. The Ramsey process (ramseification) involves quantifying existentially over all the internal events, so that we get something like:

 

TR:       ∃(M1, M2, M3) ∀x [if x suffers tissue damage and is in M1, x is in M2; if x is awake, x tends to be in M1; if x is in M2, x winces and groans and goes into M3; and if x is not in M1 or is in M3, x tends to make more typing errors.]

 

Here all the names of the internal states have been removed. Now we have only the external events named – and they aren’t controversial – and the functional relationships specified by which some states are related to some other states.

 

We’ll abbreviate this as:

 

TR:       ∃(M1, M2, M3) T(M1, M2, M3)

 

Note that T → TR but not vice versa. This is what we’d expect, because we’re making the functional relationships the defining feature of the theory of pain, so we’d have to allow that if there were other-named states that stood in the same functional relationships with the external states then they’d equally well be candidates for mapping onto this part of our psychology. (We’ll talk about this possibility a bit more later.) This means that there is possibly more than one theory, T, of pain that could be ramseified to yield TR. That means that TR can’t uniquely imply any particular T.

 

Note that T and TR make identical connections between all the named external states, so that any experimental test of the theory will give the same result whether you use T or TR. This we’d also expect.

 

We can use this ramseified formula to define pain, which, you will recall, was replaced by the predicate variable M2, thus:

 

            x is in pain iff [∃(M1, M2, M3) T(M1, M2, M3) & x is in M2]

 

and we can do the same for all the other predicates that were employed in the statement T. This means that if T is not a fragment of a psychology – as it is in our example – but a complete psychology, we will be able to define all the psychological predicates in terms of the theory. For example:

 

            x is happy iff [∃(M1, M2, …, Mn, …) T(M1, M2, …, Mn, …) & x is in Mn]

 

supposing that the predicate ‘is happy’ in the statement of our complete theory was replaced by the predicate variable Mn in the ramseification of that theory.
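The defining trick can be mechanized in a rough Python sketch. The clause encoding, the Martian labels h1–h3, and the function name are all invented for illustration, but the logic mirrors the Ramsey-Lewis definition: a creature instantiates the ramseified theory just in case some assignment of its own internal-state labels to the variables M1, M2, M3 makes every clause of the theory come out true of it – and whatever state then fills M2 is its pain.

```python
from itertools import permutations

# A fragment of Kim's theory T with internal states replaced by the
# placeholders M1-M3 (i.e. already ramseified). Each clause pairs a cause
# with its effect; items may be external events or placeholder states.
T_R = {
    (("tissue damage", "M1"), "M2"),
    (("awake",), "M1"),
    (("M2",), ("wince", "groan", "M3")),
}

def instantiates(theory, creature_clauses, placeholders, labels):
    """Return an assignment of the creature's internal-state labels to the
    placeholders under which every clause of the theory is a fact about
    the creature, or None if no assignment works."""
    def sub(item, m):
        if isinstance(item, tuple):
            return tuple(sub(i, m) for i in item)
        return m.get(item, item)  # replace placeholders, leave events alone
    for perm in permutations(labels, len(placeholders)):
        m = dict(zip(placeholders, perm))
        if all(sub(clause, m) in creature_clauses for clause in theory):
            return m
    return None

# A Martian whose internal states happen to be labelled h1, h2, h3:
MARTIAN = {
    (("tissue damage", "h1"), "h2"),
    (("awake",), "h1"),
    (("h2",), ("wince", "groan", "h3")),
    (("h3",), "hydraulic hiss"),  # extra facts about the creature don't matter
}

assignment = instantiates(T_R, MARTIAN, ["M1", "M2", "M3"], ["h1", "h2", "h3"])
# assignment["M2"] is "h2": the Martian state playing the pain role.
```

This also makes the earlier point about T → TR concrete: the check never asks what the h-states are made of, only whether the causal network has the right shape, so any number of distinct theories (and creatures) can satisfy the one ramseified formula.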

 

The Psychology in Question

 

 

Note that we need some sort of psychological theory to start this process.

a.         The sort of thing we need is not one that gives definitions of what pain is, but one that tells us what sorts of relationships can exist between the various internal/mental states.

b.         It’s going to be a long conjunction of descriptions like T.

c.         And we want the theory to be true because if it isn’t true then the ramseification is going to result in false definitions.

 

There are two possibilities.

 

1.         Folk Psychology. We could use the psychology that allows us to operate in everyday life. Note that some claim that the psychological predications that guide our everyday life are not based upon a folk psychological theory at all (the position they oppose is known as the ‘theory’ theory) but upon an operating principle of putting ourselves in the other fellow’s shoes. In either case it is not required that the theory be naturally epistemically available to us, any more than are the rules of the grammar that we ‘know’. But it does need to be ‘knowable’ in the normal sense, otherwise we won’t be able to state the theory in order to ramseify it.

 

Is our folk psychology strong enough to constitute a theory? Its typical statements are heavily hedged, and often narrowly circular (what counts as ‘normally alert’ in T above? – alert enough to feel pain?).

 

On the other hand, could we even begin to comprehend a psychology that was seriously divergent from folk psychology? Unless people have a belief-desire-action psychology they are incomprehensible to us. Mention the case of Chad Hansen’s theory that classical Chinese thought lacked a theory of mind.

 

2.         Whatever passes for the best Scientific Theory that we have. Folk psychology may be no more accurate or close to the truth than folk physics [Ask people about a railway carriage teetering on the edge of a cliff with a person inside and a pile of tomatoes to hand. What to do? See if they get it right.] or folk astronomy, or folk medicine, or folk almost anything. On the other hand, if the analogy to language is accurate we may be better off putting our faith in the folk story – compare our current understanding of grammar with that of the early 20th century.

 

In either case we have to wonder: if two theories both give the same answers isn’t there just one that’s going to be the ‘correct’ one? At this point we do have to introduce physicalism to the functionalist story because the physicalist will be able to distinguish correct from incorrect functional stories by the possibility or impossibility of discovering a physical instantiation of that function in the creature.

 

Objections from Qualia

 

 

As you would expect, I hope, the functionalist hypothesis is not unchallenged. In fact one of its principal antagonists is that same Hilary Putnam who began the whole thing. The principal difficulty, which was also brought against the behaviourist and the identity theories, is that this theory doesn’t seem to say why it is like something to have some mental states. (We’ll see later that this qualia objection can be taken to ground a general argument against the possibility of any reasonable physicalist explanation of mental states.) When we have a pain we have an experience with a particular quality to it; indeed, this quality is so marked that we even believe (before we have done any philosophy) that it is that quality that we are describing when we call something a pain, and not the functional role of the mental state – of which most of us would have only the very roughest idea.

 

In fact it looks as if the functional role story leaves room for any number of variations in the qualia story. Functions just don’t seem like the sort of thing that can explain qualia. There are at least two very well-known ‘intuition pumps’ to demonstrate this.

 

1.     Reverse spectrum. It looks like it’s perfectly conceivable that two mentators should have identical functional descriptions and yet have different qualities of experience. To make this altered experience orderly we talk of the possibility of a reversed spectrum, where the mentator A has the experience of ROYGBIV as the wavelengths vary from x to y, while mentator B has the experience of VIBGYOR. Of course, the same will hold for all other types of experience.

 

2.     Absent qualia. Suppose we had a thorough description of just a tiny part of the mentality of a mentator; enough, at least, to replicate exactly the functional relationships that hold for just one mental state, M. In that case we could build a machine that is a perfect physical implementation of the functional states involved. But who would claim that such a machine could really have the mental state M? Suppose that M was pain, for example: could we say that the machine was in pain? (If you doubt that any subset can be defined, then suppose we have a complete theory of the mind; it really makes no difference.) Does R2D2 have feelings? Does Data?

 

If, on the other hand, we suppose that qualia are produced by the internal states of a functionally defined system, then another puzzle arises. This puzzle arises from the supposed possibility of a cross-wired brain. Note that in the definition of a mental state such as pain we say that it is an internal state that is in a particular functional relationship with the appropriate inputs and outputs.

Any (physical, let us say) implementation of this functionally defined state we can imagine as a black box with connections to inputs and outputs. In the case of pain we could label this box the ‘pain’ box. Since the qualia of pain are surely produced internally, it is natural for us to assume that they are somehow a product of the ‘pain’ box (i.e. the physical implementation of the functional state). The same, of course, is true for the ‘tickle’ box with respect to the tickle qualia. It has therefore seemed to some that the input and output connectors could be switched between the two boxes, so that the pain-related stimuli are fed into the ‘tickle’ box, which is then connected to the pain-related responses, and the tickle-related stimuli are fed into the ‘pain’ box, which is then connected to the tickle-related responses. We would then have a situation where the behaviours of the creature when presented with pain-related or tickle-related stimuli would be – as they should be – pain-related and tickle-related respectively, but the qualia produced by the boxes would be the qualia we would normally associate with tickles and pains.
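The cross-wiring can be made vivid with a toy sketch. In this Python fragment (the boxes, the qualia strings, and the two-entry stimulus-response table are all invented for illustration), swapping which box a stimulus is routed through leaves the behaviour untouched while changing which quale gets produced:

```python
class Box:
    """A black-box realizer; the quale is, by the hypothesis in question,
    a product of the box itself, not of how the box is wired up."""
    def __init__(self, quale):
        self.quale = quale
    def activate(self):
        return self.quale

pain_box, tickle_box = Box("painful quale"), Box("ticklish quale")

def creature(wiring, stimulus):
    """Route a stimulus through whichever box the wiring assigns to it;
    the responses are rewired along with the inputs, so the right
    behaviour is always produced."""
    quale = wiring[stimulus].activate()
    response = {"pinprick": "scream", "feather": "giggle"}[stimulus]
    return response, quale

normal = {"pinprick": pain_box, "feather": tickle_box}
crossed = {"pinprick": tickle_box, "feather": pain_box}

# Behaviour is identical under both wirings...
assert creature(normal, "pinprick")[0] == creature(crossed, "pinprick")[0]
# ...but the quale produced by a pinprick differs:
assert creature(normal, "pinprick")[1] != creature(crossed, "pinprick")[1]
```

Functionally, the normal and cross-wired creatures are indistinguishable; the difference shows up only in which box does the work – which is just where the qualia are supposed to live.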

 

Notice that this puzzle depends upon our believing that qualia are produced internally in some part of the physical organism, and so it depends upon the idea that there was something fundamentally right about the identity theory.