7. Four Challenges to Functionalism

Consider four things that are functionally like us but psychologically unlike us. As such, they constitute challenges to F-ism.

1. The China Brain

2. The Chinese Room

3. Blockhead

4. Zombie

For each consider:

a. Is the thing really psychologically unlike us?

b. Does it contradict F-ism, or just some kinds of F-ism?

 

The China Brain

 

Suppose each person in China is given the operating instructions for one neuron of your brain: whom to signal, and when, given the signals received. The population of China thus reproduces the input/output profile of each neuron, and hence all the states of your brain as a whole.

Let this China Brain (CB) be fed inputs from an exact android simulacrum of yourself, and let its outputs drive the android's limbs, etc. The android behaves just as you do, but all its brainwork is outsourced to China.
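To make the setup concrete, here is a minimal sketch in Python of the kind of instruction card a person might follow. The class name, the threshold rule, and the signalling mechanics are illustrative assumptions, not part of the original thought experiment:

    # One person's "neuron card" in the China Brain (toy model).
    # The threshold rule and all names below are invented for illustration.
    class NeuronCard:
        def __init__(self, threshold, downstream):
            self.threshold = threshold    # signals needed before "firing"
            self.downstream = downstream  # the people to signal when firing
            self.received = 0

        def receive_signal(self):
            # Follow the card: tally incoming signals; fire at threshold.
            self.received += 1
            if self.received >= self.threshold:
                self.received = 0
                return self.downstream    # signal these people next
            return []

Whether this input/output rule is executed by a cell or by a person with a telephone makes no difference to the functional profile, and the functional profile is exactly what F-ism says matters.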

Does CB have M states as you do?

If not, then F-ism must be false, for CB is, by construction, functionally just like you.

 

Denying the intuition

 

Why trust that intuition? It results from our being impressed by irrelevant differences such as sheer size or speed.

 

Consciousness

 

Note that we’re not necessarily concerned with consciousness here. We may only be talking about M states such as belief and desire. This seems easier to attribute to CB, since its actions are thus explicable.

 

Connection to the environment

 

Suppose the android is omitted. Then the functional states are not connected to the environment, and they are purely abstract. This is like the case we earlier said was in danger of excessive liberalism: mere number-crunching can't produce M states.

 

The Chinese Room

 

Suppose there is a room containing a person who knows only English. The room has a slot in the door, through which from time to time there comes a piece of paper with strange squiggles drawn upon it. The room also contains a large book with a set of instructions, in English, about what to do if someone sends you a scrap of paper with squiggles on it. The person follows those instructions. What is in fact happening is that he is participating in a conversation conducted in Chinese. Does the system of the person plus all the other paraphernalia understand Chinese?

 

The example embellished

 

Searle says NO: the person in the room is simply manipulating symbols; and yet the CR passes the Turing Test for thinking. (Note that we can elaborate the instruction book as required to produce memory, flexibility, etc.)

For understanding we need semantic capabilities that the CR doesn’t have. It is a syntactic engine only.
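On the purely syntactic reading, the person's job can be sketched as below. This is a toy stand-in: the tokens are placeholders, and a real instruction book would be vastly more elaborate:

    # Purely syntactic rule-following: incoming squiggle-patterns are
    # paired with outgoing squiggle-patterns; meaning never enters.
    RULE_BOOK = {
        "squiggle-squoggle": "squoggle-squiggle",
        "squoggle-squiggle": "squiggle-squoggle",
    }

    def follow_instructions(slip):
        # What the person does: look up the shape, copy out the reply.
        return RULE_BOOK.get(slip, "squiggle")  # default reply, equally meaningless

Nothing in the table or the lookup procedure mentions what any squiggle means; that is the sense in which the CR is a syntactic engine only.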

 

A further embellishment

 

Give the room causal connections to the world. Connect the room to an android as we did for the CB, and let the inputs be environmental data and the outputs be actions.

 

Does the system understand Chinese?

 

Now the intuition is that the CR plus the android body understands Chinese.

Searle would reply: let the man memorise the instructions. He now is the CR+body. Does he understand? Searle says no; but isn't this case pretty similar to multiple personalities, with one host running a second, Chinese-understanding system? We should say yes.

 

A computer analogy

 

We should think of this as being like a Mac emulating an MS Windows PC. The Mac itself doesn't know what it's producing in the way that a real PC would, but it produces the same thing nevertheless.

What we have now is a system that our intuitions tell us might understand Chinese (even if the person in the CR never does).

 

Blockhead

 

Input-output functionalism

 

We earlier saw that an E F-ism that is too restrictive about internal organization risks chauvinism. Can we make the only constraint on internals be that they are responsible for there being the right relationship between inputs and outputs? This we might call input-output (or stimulus-response) functionalism. It is like Behaviourism, but it makes M states internal causal states.

In fact, Ned Block's Blockhead example shows that this is not a possibility.

 

Good chess versus being good at chess

 

Copycat chess

 

Suppose that chess grandmasters create a look-up tree that gives the best moves in response to all possible moves by a chess opponent. Anyone with such a tree could play chess at grandmaster level without knowing anything about chess.
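As a sketch, the tree can be pictured as a nested table keyed by the opponent's moves. The moves below are placeholders, not real grandmaster analysis:

    # A fragment of a chess look-up tree (illustrative moves only).
    # Each opponent move selects a canned reply and a subtree for what
    # comes next; the player needs no chess knowledge to use it.
    LOOKUP_TREE = {
        "e4": {"reply": "c5",
               "then": {"Nf3": {"reply": "d6", "then": {}},
                        "Nc3": {"reply": "Nc6", "then": {}}}},
        "d4": {"reply": "Nf6", "then": {}},
    }

    def play(node, opponent_move):
        entry = node[opponent_move]
        return entry["reply"], entry["then"]  # reply, and where to look next

    reply, node = play(LOOKUP_TREE, "e4")     # plays "c5", knowing nothing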

 

The game of life

 

Look-up trees for life

 

Treat life as a game. For every person, at any time there are finitely many (effectively distinguishable) inputs and finitely many (effectively distinguishable) responses. Therefore, in principle, one can make a look-up tree covering all occasions that will perfectly describe any person's behaviour. Do it for Jones, say.

 

Blockhead

 

Now create Jones's blockhead twin (BJ): a Jones-bot whose actions are determined by the Jones look-up tree written to a chip in its head. The input-output relations for Jones and BJ are identical, and BJ's input-output relations are the result of its internal states. However, we're sure that BJ doesn't think or have real M states. So I-O F-ism can't be true.

 

Blockhead’s challenge to us all

 

Note that the example shows that I-O F-ism is wrong even about intelligence: we are sure that Blockhead isn't intelligent.

Since intelligence is, we often think, all about getting the right answers to problems, why do we think Blockhead isn't intelligent?

 

Some wrong turns

 

It's not because everything is written in advance: the same might be true of us if determinism is true.

It's not because of the practical/nomological/epistemological/… impossibility of the look-up tree: the conceptual possibility of the tree is all that is required.

Objection: our intuitions don't count in such impossible cases.

        Reply: but the case is so like similar, more nearly possible cases – someone who uses a chess book to play, say – that the intuitions should carry over.

 

Why Blockhead is not a thinker

 

Causal connections are important in many things. We see X because our perceptual states of X are properly causally connected to X; a person at t1 is the same person at t2 if there are the right causal connections between the two; etc. 

 

Rationality and causal history

 

Rationality depends upon beliefs developing in the right way from previous beliefs and perceptions. Ditto intelligence.

Part of being a belief is that it tends to develop in certain ways: X can only be the belief that 'if A then B' if having X together with the belief that A tends to cause the belief that B.

 

Blockhead’s causal peculiarity

 

Look-up-tree devices don't have the right causal relations. They use static trees in which, at any one time, there is just one active node; and the nodes at depth d1 do not generate the 'later' nodes at d2 – every node was written in advance. This is not how we think of thinking: for us to be thinking, our M states at t1 must generate the M states at t2.
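The contrast can be put in code (both agents are toy assumptions for illustration). In the Blockhead-style agent, nothing at one step produces the next step's repertoire – traversal just moves a pointer through nodes written in advance – whereas in the thinker-style agent the state at t1 generates the state at t2:

    # Blockhead-style: a static tree; earlier nodes don't generate later ones.
    def blockhead_step(tree, percept):
        action, subtree = tree[percept]
        return action, subtree               # merely moves a pointer

    # Thinker-style: the current state causes its successor.
    def update(state, percept):
        return state | {percept}             # beliefs at t1 + input -> beliefs at t2

    def act(state):
        return sorted(state)[0] if state else None

    def thinker_step(state, percept):
        new_state = update(state, percept)   # state at t1 generates state at t2
        return act(new_state), new_state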

 

Common-sense functionalism and Blockhead

 

Note that Blockhead doesn't affect C-S F-ism, since it already held that the right causal relations between M states are required. Blockhead is, however, an objection to I-O F-ism.

[Indeed it’s hard to see any intuitions blocking C-S F-ism, since that is supposed to be the sum of all our intuitions.]

 

The Zombie Objection

 

Zombies invade the physicalist paradise

 

The Zombie Objection is made against any form of physicalism. A zombie is a creature physically just like us, and identical in its behaviours, but with no inner 'feels': there is nothing it is like to be a zombie. So there is something non-physical that a zombie lacks and that is necessary for our feels. So physicalism is false.

Expand the argument:

P1. We can conceive of a minimal physical duplicate of this world (a world physically just like ours and containing nothing extra) in which people are zombies.

P2. Conceivability suggests possibility.

P3. We aren't zombies.

C4. Zombies are possible. (P1, P2)

C5. There is a minimal physical duplicate of this world that differs from it mentally. (P3, C4)

C. Physicalism is false. (C5 + definition of physicalism)

This is a valid argument, so a physicalist must deny a premise.
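The validity claim can be checked by regimenting the argument. The modal notation below is my gloss, not the notes': write Z for 'there is a minimal physical duplicate of the actual world @ whose people are zombies', \Diamond_c for conceivability, and \Diamond for metaphysical possibility.

    \begin{align*}
    \text{P1.}\ & \Diamond_c Z\\
    \text{P2.}\ & \Diamond_c p \rightarrow \Diamond p \quad \text{(for the relevant class of } p\text{)}\\
    \text{P3.}\ & \text{the people of } @ \text{ are not zombies}\\
    \text{C4.}\ & \Diamond Z \quad \text{(P1, P2)}\\
    \text{C5.}\ & \text{some possible minimal physical duplicate of } @ \text{ differs mentally from } @ \quad \text{(P3, C4)}\\
    \text{C.}\ & \neg\,\text{Physicalism} \quad \text{(C5, plus the definition: physicalism entails that every}\\
    & \text{minimal physical duplicate of } @ \text{ is a mental duplicate of } @)
    \end{align*}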

Analytic F-ism denies P1. It's a matter of the meaning of the words that you have M states when the right functions are instantiated; and here they are instantiated physically. There is no conceiving of the conceptually (a priori) impossible.

 

Analytic functionalism and ideal conceivability

 

But we think we can conceive of zombies – so is A F-ism false?

Most think that if zombies are impossible, this is a metaphysical fact, not a semantic one.

Perhaps our intuition of conceivability is incorrect. Compare getting a math problem wrong.

But what about the case of ideal conceivability, when all the facts are presented to a rational mind? Then we can’t get the math problem wrong.

Reply: maybe in the zombie case we are dealing only with non-ideal conceiving.

        But then what work do the additional clarity and rationality actually do for our intuitions?

Perhaps we should take strong intuitions of possibility as indicating that the A F-ist theory of M state terms is wrong.

 

 

Empirical functionalism and zombies

 

For E F-ists it seems possible to deny P2: it may be claimed that zombies are conceivable but impossible, because we can discover a posteriori that qualia really are physical.

This reply doesn't work for the one kind of E F-ism that is coherent, and the E F-isms for which it does work don't appear to be coherent.

 

Reference-fixing

 

Consider some forms of E F-ism in which folk roles, say, pick out M states, and we then rigidify on the internal features of those M states. If we reference-fix on brains and find neural feature X (NF X) playing the qualia role, then, rigidifying, we find that qualia are necessarily NF X.

So zombies, though conceivable, are a posteriori impossible: anything with NF X has qualia, and zombies have NF X, so zombies have qualia, so zombies aren’t zombies.
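Schematically (my gloss of the reference-fixing-plus-rigidifying move, in the style of a rigidified description):

    \text{`qualia' denotes, in every world } w, \text{ the state that actually plays the qualia role in } @ \text{ – i.e. NF } X.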

On the other hand, suppose we allow that 'qualia' refers to what plays the folk roles that we used to reference-fix – i.e. NF X. Still, we knew enough to reference-fix before we knew any neuroscience. Let 'qualic' mean having the roles played. We knew we were qualic and that qualia were the role players (but we didn't know what qualia were).

It is conceivable that there are physical duplicates of us that are not qualic: call these R-zombies.

But if being qualic is just having the roles played, then nothing can be a physical duplicate of us and fail to be qualic.

So there can't be R-zombies – and that is knowable a priori, so this (coherent) E F-ism ends up denying P1 rather than P2 after all.

Finally, there are E F-isms that deny that we should use folk roles. However, whatever they take to indicate a qualitative nature will also be vulnerable to the zombie objection.

 

A modification of analytic functionalism

 

The A F-ist might say that A F-ism is true if Dualism is false.

Then the conceivability of zombies is partly just the conceivability of Dualism.

        But if A F-ism itself were a priori true, zombies should still be ideally inconceivable.

So instead let the a priori truth be the conditional itself: if Dualism is false, then A F-ism is true.

So the analytic truth we grasp is this:

If there are dualistic states, then:

        in @ (the actual world), qualia are the D states, and in all PWs (possible worlds) all and only qualia are D states;

else:

        in @, qualia are the states that play the roles here, and in other PWs they are the states that play the roles there.

If so, then if Dualism is false zombies are impossible – but we can't know that a priori, since we can't know a priori that Dualism is false.

So the zombie intuition conflates two things: the possibility of Dualism and the possibility of zombies.