3. Common-Sense Functionalism
The F-ist says M states are internal causes of behaviour.

Functionalism: An internal state is the M state that it is in virtue of the functional relationships that exist amongst it and input stimuli, output behaviours, and other M states.

Some common ground for Functionalists
(a) A theory of Mind should have input clauses, output clauses, and internal role clauses.
(b) A M state is an inner state that fills the roles in those clauses.
(c) A theory of Mind should allow multiple realizability.
Multiple Realizability

Many things are defined functionally. The definition has nothing to say about what performs the function. Similarly for F-ism about the Mind: what matters for a M state is the role occupied and not the occupant. (= the function, and not the realizer.)

We do not know what realizes mental states
(a) We ascribe M states by observing behaviour and don't know anything about the realizers – nor do we need to.

We can imagine beings unlike us but with mental states
(b) We easily imagine aliens with recognisable M states but not with our realizers. We should not be chauvinists about the Mind.

Human brains may be very diverse
(c) There is evidence that human brains may normally use a variety of realizers for a function.

If a different part of the brain takes over a job, we do not mind
(d) Human brains recovering from damage may use a variety of realizers for a function.

We might replace part of our brains with artificial aids
(e) We can imagine prostheses for the Mind – i.e. substitute realizers.
Which functional roles matter?

Since multiple realizability is so plausible, F-ism benefits from allowing it so easily. OTOH, F-ists disagree about what roles are essential for any M state. One response is to say that's not a philosophical problem but an empirical one. This seems unlikely: F-ism says roles are what define M states, so it should have something to say about what roles matter. Only temperature-registering roles are essential to thermostats, not their paperweight roles. [That's a conceptual claim, not an empirical one.]
Common-Sense Functionalism Expounded

The roles that matter are common knowledge
Everyone knows what roles are important for a thermometer. CSF says it's the same for M states.

The meaning of mental state terms
The clauses of CSF give the meanings of M state terms. Since CSF is an analysis of the meaning of those terms, it is sometimes called Analytical Functionalism. (Analytical Behaviourism was so-called for the same reason, and on appeal to some of the same evidence.)

So: the state M is the state that plays the M role in the network of role claims that are common knowledge about the Mind.

There is no specific functional organization that a Mind must have (any more than there is one for a bank), so there is allowable variation in the functional specification of a M state (just as there is for a bank teller). But this variation is not infinite.
Cluster concepts

This is not unusual. In a cluster concept there is a certain set of properties, and anything that has 'enough' of those properties will be taken as falling under that concept.
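A toy sketch of the idea (my own illustration, not from the notes; the property list and threshold are placeholders): a cluster concept can be modelled as a property set plus a threshold.

    # A cluster concept as a property set plus a threshold: anything with
    # 'enough' of the properties falls under the concept. The properties
    # and the threshold are illustrative only.
    BIRD_CLUSTER = {"feathers", "flies", "lays eggs", "has a beak"}

    def falls_under(properties, cluster=BIRD_CLUSTER, enough=3):
        """True if the object has at least 'enough' of the cluster properties."""
        return len(cluster & set(properties)) >= enough

    print(falls_under({"feathers", "lays eggs", "has a beak"}))  # True: an emu
    print(falls_under({"flies"}))                                # False: an aeroplane

No single property is necessary; what matters is having enough of them.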
The crude understanding of conceptual analysis

A crude type of conceptual analysis would attempt to find a certain set of properties such that only if an object has all of those properties will it be taken as falling under that concept. But this can't be done in general. Language is vague and analysis needs to acknowledge that vagueness. But the vagueness isn't infinite.
Three questions remain:
1. Is there bad circularity in the clauses defining M states?
2. How do we characterize behaviour in the output clauses?
3. How do we specify the clauses that are to itemise common knowledge?

Interconnections without Circularity

The mental terms in the clauses of a F-ist theory come as a package. They all refer to each other. Is this just circularity?
Machine tables

One way to see that it isn't is by looking at the case of machine tables. Consider a machine that does the following:
1. $1 in → coke out
2. Accept $1 and 50c coins
3. Allow 50c coins to be followed by $1 coins and react appropriately.

Machine needs two states:
S1 for when it is all square
S2 for when it's 50c ahead

Make a table:

              S1 (all square)           S2 (50c ahead)
    50c in    emit nothing; go to S2    emit coke; go to S1
    $1 in     emit coke; stay in S1     emit coke + 50c change; go to S1
This is the sort of functional description that F-ists think can describe the Mind. Note S1 and S2 are defined in terms of each other. But there is no problem with circularity here.

If we inspected the machine we could identify machine states M1 and M2 (arrangements of levers, or voltages, or whatever) that would be described by a table like the above but with M1 and M2 for S1 and S2. Then identify those states. We thus discover what S1 and S2 are. Since we can work out what the states are, they are not circularly defined in a 'vicious' way.
*Explicit definitions and Ramsey sentences

Another way to see that they aren't is by constructing Ramsey sentences for M states.
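In outline (this is Lewis's standard recipe; the formulation below is mine, not a quotation from the notes): conjoin the clauses of the theory of Mind into one long sentence T containing every mental term, then trade those terms for bound variables.

    T[m_1, \dots, m_n]
        % the conjoined clauses, mentioning mental-state terms m_1, ..., m_n

    \exists x_1 \dots \exists x_n \; T[x_1, \dots, x_n]
        % the Ramsey sentence: the same claims, with no mental vocabulary left

    m_1 =_{df} \text{the } y \text{ such that } \exists x_2 \dots \exists x_n \; T[y, x_2, \dots, x_n]
        % explicit definition of the first mental term as the occupant of its role in T

Each mental term is thereby defined all at once, using only input, output, and logical vocabulary, so the interdefinition in the clauses is not viciously circular.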
Behaviour Characterized in Terms of Environmental Impact

Characterizing inputs and outputs
Characterizing inputs without referring to M states is easy: they are chairs, stings, sounds, etc. Characterizing outputs without referring to M states is harder. A bodily movement is often characterized intentionally. The same limb movement can be hailing a taxi, reaching for a beer, patting a llama, etc. Contrariwise, an infinite variety of movements may be 'hailing a taxi.' In order to make the Ramsey sentences work we need to suppose that behavioural outputs can be specified in non-mental terms. Also: we assume knowledge of other minds by observing their behaviour. But if we can only identify behaviours by supposing mental states, that really is circular.
Behaviour non-intentionally characterized

There is a way to define behaviour non-intentionally. Behaviour associated with M states is relational behaviour: it has a characteristic impact on the subject. Taxi-hailing behaviour is that which tends to result in the subject coming to be in a taxi. (Better explications replace 'tends to' with 'would tend to in normal circumstances', or the like.)

Note that this approach also eliminates chauvinism about body types. Even tentacled martians can hail a taxi.
What Does Common Sense Say about the Mind?

How do we construct formulae such as the theory T above? Lewis suggests we just collect all the platitudes that are known by all to be platitudes.

Don't include too much or too little – that would make it too hard or too easy for anything to satisfy the criteria. Don't confuse common opinions about M states with what is definitive of those M states. E.g. don't assume physicalism.
There may, however, be a problem if there just aren't that many things that we 'know' about M states in the strong Lewis way. Better to appeal to implicit or tacit knowledge. Consider our knowledge of grammar: our implicit knowledge is shown when we identify grammatical sentences and understand them – but most of us have no idea of the formal rules involved. Similarly, our implicit knowledge of M states is shown by our ability to use them predictively.
Implicit knowledge as predictively powerful

The ability to predict behaviour is remarkable, widespread, and reliable. Inference to the best explanation indicates that our appeal to M states is probably the true explanation of our ability. The fact that we have the ability indicates that there is an explicit formulation of our knowledge of M states to be had – even if we are not immediately capable of providing it.