3.       Common-Sense Functionalism

F-ist says M states are internal causes of behaviour

Functionalism: An internal state is the M state that it is in virtue of the functional relationships that hold between it and input stimuli, output behaviours, and other M states.

 

Some common ground for Functionalists

 

(a)                 A theory of Mind should have input clauses, output clauses, and internal role clauses.

(b)                 An M state is an inner state that fills the roles in those clauses

(c)                 A theory of Mind should allow multiple realizability

 

Multiple Realizability

 

Many things are defined functionally. The definition has nothing to say about what performs the function.

Similarly for F-ism about the Mind: what matters for an M state is the role occupied and not the occupant.

( = the function, and not the realizer.)
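
As a programming analogy, here is a minimal sketch of the role/occupant distinction (the class names and the 22-degree threshold are invented for illustration): the "thermostat role" is specified as an interface, and anything that fills it counts, however it is built.

```python
# Role vs occupant: the role is defined by what a thing does,
# not by what it is made of.
from typing import Protocol

class ThermostatRole(Protocol):
    def register(self, temp: float) -> str: ...

class BimetallicStrip:
    # One occupant: a curling metal strip.
    def register(self, temp: float) -> str:
        return "open circuit" if temp > 22.0 else "closed circuit"

class DigitalSensor:
    # A differently built occupant of the very same role.
    def register(self, temp: float) -> str:
        return "open circuit" if temp > 22.0 else "closed circuit"

def run_heating(t: ThermostatRole, temp: float) -> str:
    # Anything that fills the role counts as a thermostat here.
    return t.register(temp)

print(run_heating(BimetallicStrip(), 25.0))  # open circuit
print(run_heating(DigitalSensor(), 25.0))    # same role, different realizer
```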

 

We do not know what realizes mental states

 

(a)                 We ascribe M states by observing behaviour and don’t know anything about the realizers – nor do we need to.

 

We can imagine beings unlike us but with mental states

 

(b)                 We easily imagine aliens with recognisable M states but not with our realizers

We should not be chauvinists about the Mind.

 

Human brains may be very diverse

 

(c)                 There is evidence that human brains may normally use a variety of realizers for a function

 

If a different part of the brain takes over a job, we do not mind

 

(d)                 Human brains recovering from damage may use a variety of realizers for a function

 

We might replace part of our brains with artificial aids

 

(e)                 We can imagine prostheses for the Mind – i.e. substitute realizers

 

Which functional roles matter?

 

Since multiple realizability is so plausible, F-ism gains support from accommodating it so easily.

OTOH F-ists disagree about what roles are essential for any M state.

One response is to say that’s not a philosophical problem, but an empirical one.

This seems unlikely. F-ism says roles are what define M states, so it should have something to say about what roles matter. Only temperature-registering roles are essential to thermostats, not their paperweight roles. [That’s a conceptual claim, not an empirical one.]

 

Common-Sense Functionalism Expounded

 

The roles that matter are common knowledge

 

Everyone knows what roles are important for a thermometer. CSF says it’s the same for M states.

 

The meaning of mental state terms

 

The clauses of CSF give the meanings of M state terms. Since CSF is an analysis of the meaning of those terms it is sometimes called Analytical Functionalism (for the same reason, and appealing to some of the same evidence, that Analytical Behaviourism was so called).

So: the state M is the state that plays the M role in the network of role claims that are common knowledge about the Mind.

There is no specific functional organization that a Mind must have (any more than there is one for a bank), so there is allowable variation in the functional specification of an M state (just as there is for a bank teller). But this variation is not infinite.

 

Cluster concepts

 

This is not unusual. In a cluster concept there is a certain set of properties, and anything that has ‘enough’ of those properties will be taken as falling under that concept.
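
A toy sketch of how a cluster concept works (the property list and the 0.6 threshold are invented for illustration; Wittgenstein’s ‘game’ is the classic example):

```python
# Cluster concept: membership requires 'enough' of the associated
# properties, with no single property strictly necessary.
GAME_PROPERTIES = {"has rules", "has winners", "is played for fun",
                   "involves competition", "involves skill"}

def falls_under(concept_properties, thing_properties, threshold=0.6):
    shared = concept_properties & thing_properties
    return len(shared) / len(concept_properties) >= threshold

chess = {"has rules", "has winners", "involves competition", "involves skill"}
ring_a_ring = {"has rules", "is played for fun"}
print(falls_under(GAME_PROPERTIES, chess))        # True  (4/5 properties)
print(falls_under(GAME_PROPERTIES, ring_a_ring))  # False (2/5 properties)
```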

 

The crude understanding of conceptual analysis

 

A crude type of conceptual analysis would attempt to find a certain set of properties such that an object falls under the concept only if it has all of those properties. But this can’t be done in general. Language is vague, and analyses need to acknowledge that vagueness. But the vagueness isn’t infinite.

Three questions remain:

1.                    Is there bad circularity in the clauses defining M states?

2.                    How do we characterize behaviour in the output clauses?

3.                    How do we specify the clauses that are to itemise common knowledge?

 

Interconnections without Circularity

 

The mental terms in the clauses of an F-ist theory come as a package. They all refer to each other. Is this just circularity?

 

Machine tables

 

One way to see that it isn’t is by looking at the case of machine tables.

Consider a machine that does the following:

1.                    $1 in → coke out

2.                    Accept $1 and 50c coins

3.                    Allow 50c coins to be followed by $1 coins and react appropriately.

Machine needs two states

                S1 for when it is all square

                S2 for when it’s 50c ahead

Make a table:

                S1                          S2
Insert 50c      Go to S2                    Emit coke. Go to S1
Insert $1       Emit coke. Stay in S1       Emit coke and 50c. Go to S1

 

This is the sort of functional description that F-ists think can describe the Mind. Note S1 and S2 are defined in terms of each other. But there is no problem with circularity here.

If we inspected the machine we could identify machine states M1 and M2 (arrangements of levers, or voltages, or whatever) that would be described by a table like the above but with M1 and M2 in place of S1 and S2. Then we identify S1 with M1 and S2 with M2. We thus discover what S1 and S2 are. Since we can work out what the states are, they are not circularly defined in a ‘vicious’ way.
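
A minimal sketch of the table as a program may help (Python; the state and coin labels are just those from the table, and nothing beyond the transitions themselves is essential):

```python
# The coke machine's table, transcribed directly.
def step(state, coin):
    """Return (outputs, next_state) for one coin insertion."""
    table = {
        ("S1", "50c"): ([], "S2"),                # go to S2
        ("S1", "$1"):  (["coke"], "S1"),          # emit coke, stay in S1
        ("S2", "50c"): (["coke"], "S1"),          # emit coke, back to S1
        ("S2", "$1"):  (["coke", "50c"], "S1"),   # coke plus 50c change
    }
    return table[(state, coin)]

# 50c followed by $1: the machine emits a coke and 50c change.
state = "S1"
for coin in ["50c", "$1"]:
    outputs, state = step(state, coin)
    print(coin, "->", outputs, "now in", state)
```

Note that S1 and S2 appear only as interdefined labels in the table, yet the program runs perfectly well: the interdefinition is not vicious.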

 

*Explicit definitions and Ramsey sentences

 

Another way to see that they aren’t viciously circular is by constructing Ramsey sentences for M states.

 

We can start with a standard definition. Kim (p. 105) gives this for pain, for example, noting that it is only a fragment of what we would consider to be an adequate characterization of pain:

 

T:            For any x, if x suffers tissue damage and is normally alert, x is in pain; if x is awake, x tends to be normally alert; if x is in pain, x winces and groans and goes into a state of distress; and if x is not normally alert or is in distress, x tends to make more typing errors.

 

Note that there are two general classes of items in the definition above – and, in fact, in any definition of the kind. There are the events that are external, which would be acceptable to a behaviourist analysis, and there are the states that are internal, which are what the theory is supposed to be explaining. The Ramsey process (ramseification) involves quantifying existentially over all the internal states, so that we get something like:

 

TR:           ∃(M1, M2, M3) ∀x [if x suffers tissue damage and is in M1, x is in M2; if x is awake, x tends to be in M1; if x is in M2, x winces and groans and goes into M3; and if x is not in M1 or is in M3, x tends to make more typing errors.]

 

Here all the names of the internal states have been removed: M1 replaces all occurrences of ‘normally alert’, M2 replaces all occurrences of ‘pain’, M3 replaces all occurrences of ‘a state of distress’, and so on for all the named mental states in the theory. When this substitution is completed we have only the external events named – and they aren’t controversial – and the functional relationships by which some states are related to other states.

 

We’ll abbreviate this as:

 

TR:           ∃(M1, M2, M3) T(M1, M2, M3)

 

Note that T → TR but not vice versa. This is what we’d expect: since we’re making the functional relationships the defining feature of the theory of pain, we have to allow that if there were differently named states standing in the same functional relationships to the external states, then they’d equally well be candidates for mapping onto this part of our psychology. (We’ll talk about this possibility a bit more later.) This means that there is possibly more than one theory, T, of pain that could be ramseified to yield TR. That means that TR can’t uniquely imply any particular T.

 

Note that T and TR make identical connections between all the named external states, so any experimental test of the theory will give the same result whether you use T or TR. This we’d also expect. We shouldn’t be alarmed that the two theories are indistinguishable by experimental test. There are theories being proposed right now in advanced Physics that may have no experimental discriminators. I think I recall that String Theory and some of its competitors are in this position. It should not affect our attitude towards a theory to know that it can’t be made uniquely definable.

 

In any case, we can use this ramseified formula to define pain which, you will recall, was replaced by the predicate variable M2, thus:

 

                x is in pain iff ∃(M1, M2, M3) [T(M1, M2, M3) & x is in M2]

 

and we can do the same for all the other predicates that were employed in the statement T. This means that if T is not a fragment of a psychology – as it is in our example – but a complete psychology, we will be able to define all the psychological predicates in terms of the theory. For example:

 

                x is happy iff ∃(M1, M2, …, Mn, …) [T(M1, M2, …, Mn, …) & x is in Mn]

 

supposing that the predicate ‘is happy’ in the statement of our complete theory was replaced by the predicate variable Mn in the ramseification of that theory.
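
To make the existential quantification concrete, here is a minimal sketch (Python) that returns to the coke machine of the previous section, since it is finite and checkable. Everything machine-specific here – the ‘5V’/‘9V’ hardware states especially – is invented for illustration. The machine table plays the part of the theory T, and ramseification is rendered as a brute search for internal states of a physically described machine that satisfy the table: a finite analogue of TR.

```python
from itertools import permutations

# The machine table as theory T: for each (role_state, input),
# the outputs emitted and the role state that follows.
T_TABLE = {
    ("S1", "50c"): ((), "S2"),
    ("S1", "$1"):  (("coke",), "S1"),
    ("S2", "50c"): (("coke",), "S1"),
    ("S2", "$1"):  (("coke", "50c"), "S1"),
}

# A hypothetical physical description of one machine: the same
# transitions, but over "hardware" states (voltages, say).
PHYSICAL = {
    ("5V", "50c"): ((), "9V"),
    ("5V", "$1"):  (("coke",), "5V"),
    ("9V", "50c"): (("coke",), "5V"),
    ("9V", "$1"):  (("coke", "50c"), "5V"),
}

def realizes(assignment, physical):
    """T(M1, M2): does mapping S1 and S2 to these physical states
    make the physical machine satisfy the table?"""
    return all(
        physical.get((assignment[s], inp)) == (out, assignment[nxt])
        for (s, inp), (out, nxt) in T_TABLE.items()
    )

# Ramseification: "there exist M1, M2 such that T(M1, M2)" --
# here the existential quantifier is checked by exhaustive search.
states = ["5V", "9V"]
for m1, m2 in permutations(states, 2):
    if realizes({"S1": m1, "S2": m2}, PHYSICAL):
        print(f"Realization found: S1 = {m1}, S2 = {m2}")
```

Running this prints one realization (S1 = 5V, S2 = 9V); a second machine with quite different hardware states could satisfy the same table, which is multiple realizability again.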

 

 

Behaviour Characterized in Terms of Environmental Impact

 

Characterizing inputs and outputs

 

Characterizing inputs without referring to M states is easy: they are chairs, stings, sounds, etc.

Characterizing outputs without referring to M states is harder. A bodily movement is often characterized intentionally. The same limb movement can be hailing a taxi, reaching for a beer, patting a llama, etc.

Contrariwise, an infinite variety of movements may be ‘hailing a taxi.’

In order to make the Ramsey sentences work we need to suppose that behavioural outputs can be specified in non-mental terms.

Also: we come to know other minds by observing their behaviour. But if we can only identify behaviours by supposing mental states, that really is circular.

 

Behaviour non-intentionally characterized

 

There is a way to define behaviour non-intentionally.

Behaviour associated with M states is relational behaviour: it has a characteristic impact on the subject. Taxi-hailing behaviour is that which tends to result in the subject coming to be in a taxi.

(Better explications replace ‘tends to’ with ‘would tend to in normal circumstances’, and so on.)

Note that this approach also eliminates chauvinism about body types. Even tentacled Martians can hail a taxi.
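
A sketch of the idea (Python; the observations and the 0.5 threshold are invented for illustration): a movement type counts as taxi-hailing if, in the record of what follows it, it tends to land the subject in a taxi – whatever limb, or tentacle, produced it.

```python
# Outcome-based (relational) typing of behaviour: classify a movement
# by its characteristic effect, not by its physical description.
def tends_to(movement, outcome, trials):
    """trials: past (movement, resulting situation) observations."""
    relevant = [result for m, result in trials if m == movement]
    if not relevant:
        return False
    return relevant.count(outcome) / len(relevant) > 0.5

observations = [
    ("arm raised at kerb", "in taxi"),
    ("arm raised at kerb", "in taxi"),
    ("arm raised at kerb", "ignored"),
    ("tentacle waved at kerb", "in taxi"),
    ("tentacle waved at kerb", "in taxi"),
]

for movement in ("arm raised at kerb", "tentacle waved at kerb"):
    if tends_to(movement, "in taxi", observations):
        print(movement, "counts as taxi-hailing")
```

Both movement types come out as taxi-hailing: the classification tracks the impact on the subject, not the anatomy.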

 

What Does Common Sense Say about the Mind?

 

How do we construct formulae such as T above? Lewis suggests we just collect all the platitudes that are known by all to be platitudes.

Don’t include too much/little – it will make it too hard/easy for anything to satisfy the criteria.

Don’t confuse common opinions about M states with what is definitive of those M states.

                                Eg. Don’t assume physicalism

There may, however, be a problem if there just aren’t that many things that we ‘know’ about M states in the strong Lewis way.

Better to appeal to implicit or tacit knowledge.

Consider our knowledge of grammar: our implicit knowledge is shown when we identify grammatical sentences and understand them – but most of us have no idea of the formal rules involved.

Similarly, our implicit knowledge of M states is shown by our ability to use them predictively.

 

Implicit knowledge as predictively powerful

 

The ability to predict behaviour is remarkable, widespread, reliable. Inference to the best explanation indicates that our appeal to M states is probably the true explanation of our ability.

The fact that we have the ability indicates that there is an explicit formulation of our knowledge of M states to be had – even if we are not immediately capable of providing it.