Functionalism - Computational Role

 


 

Recommended Reading

 

 

Turing, A. M. (1950) ‘Computing Machinery and Intelligence’, Mind, vol. LIX, no. 236; reprinted in Hofstadter, D. and Dennett, D. C. (eds) (1981) The Mind’s I, Brighton: Harvester Press, pp. 53-67.

Searle, J. R. (1980) ‘Minds, Brains, and Programs’, Behavioral and Brain Sciences, vol. 3; reprinted in Hofstadter, D. and Dennett, D. C. (eds) (1981) The Mind’s I, Brighton: Harvester Press, pp. 353-373.

 

Motivating Machine Functionalism

 

 

1.         Recall that the functionalist view of mental states, in general, claims that they are defined in terms of their functional relationships with input, output, and other inner states. In the causal functional treatment the inputs are stimuli and the outputs are behaviours, but these are just particular types of input and output. If we consider the more general case, the vocabulary we are using suggests the language of computational devices. We are quite accustomed to treating computers as general-purpose devices which are able to take input and produce output because we have specified the functional relationships between those elements and the internal states of the machine. It is not a great step from there to the proposition that mental states may be considered in terms of the computational model.

 

Note that the original paper by Putnam, which really started all this computational talk, was phrased in terms of computational processes.

 

2.         The sorts of things that the mind does, such as remembering, calculating, and sensing, look like the sorts of things that we have had success in getting computers to do. The comparison of the mind to a computer is a cliché, taking over from the earlier telephone-exchange model.

 

3.         The functionalist view is generally noncommittal with respect to the implementation of the functions. When we search for a type of implementation it is the mechanical form that suggests itself. We are accustomed to machinery being defined functionally, and we know that the most general-purpose machine is a computer.

 

4.         The whole point of functionalism is to allow the theory of mind to account for the possibility of multiple realisability of mental states. Machinery in general, defined functionally, is something we are used to seeing implemented in various ways. Computational functions in particular are the sorts of thing that can be implemented in many ways: recall Babbage’s wheels and cogs, vacuum tubes programmed by rewiring or punched cards, transistors programmed in machine language, GaAs chips and 4GLs. Simple hydraulic and pneumatic computing machines have also been designed.

 

Turing Machines

 

 

Abstractions

 

If we’re going to talk about comparing the mind to a computer, and we want this to be a serious discussion, we have to be very clear about what we mean by a computer. There is a set of basically intuitive notions about what it is to be a computer, or what it is to be a computable function, and these various notions all turn out to be equivalent. One of the fundamental results is that any computation that can be done by any reasonably defined ‘computer’ can be done on a particular, extremely simple, well-understood type of machine called a Turing machine (after Alan Turing). It is therefore reasonable and convenient to conduct the discussion of computationalism in terms of Turing machines.

 

(Note that quantum computers are not classical computers and are not described by the Turing machines we’re going to look at. Some people (Penrose, for example) think that the mysterious effects of the mind can only be explained by appealing to quantum mechanical effects in the brain. We won’t make that assumption, and it isn’t likely that the brain is a quantum computer anyway.)

 

Turing machines consist of:

 

1.                   an infinite tape divided into squares

2.                   a head that reads from and writes to the tape

3.                   a finite set of internal states: q0, …, qn

4.                   a finite alphabet of characters: b1, …, bm

 

The Turing machine’s operation is like this:

 

At time t the machine is in state q and positioned on square a of the tape.

It reads the character b on that square.

Either:  

it writes character b’ on that square

and moves one square left or right,

and goes into state q’,

or:

            it halts.

 

Just how a machine TM behaves in each situation is described by a finite table of instructions: its machine table.

 

It’s easiest to see how this works with an example.

 

First we’ll need a shorthand way of writing things down.

Suppose we had a machine that was in state q1 and was positioned on a square that contained the character b. Suppose that the description of the machine said that the machine in this situation wrote b’, moved left, and went into state q2. We could write this action as b’Lq2.

Since there are a finite number of characters in the alphabet that could be read, and a finite number of states that the machine could be in, a good way of listing all the instructions is by means of a table.

 

For example:

 

The following TM, TM1, is designed to perform additions.

 

 

 

          q0        q1
1         1Rq0      #Halt
+         1Rq0
#         #Lq1

 

 

 

We can see how this works by following the machine as it runs.

Note that the machine always starts at the leftmost non-blank character of the tape (# is the blank character) and that, by convention, the initial state is q0.

 

 

        #    #    1    1    1    +    1    1    #    #    …
                  ↑
                  q0

 

 

What TM1 does, in plain words, is move along the tape making no changes except that it rewrites the ‘+’ character as a ‘1’. When it reaches the first ‘#’ after the last number it goes back and erases the final ‘1’. Since the numbers are represented in unary (a number n is a string of n ‘1’s), the tape above represents 3 + 2, and when the machine halts the tape holds the answer, 11111, i.e. 5.
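To make the mechanics concrete, here is a minimal sketch of a Turing machine simulator in Python. It is an illustration only: the dictionary encoding of the machine table and the name run_tm are our own choices, not part of the formal definition, and the tape string is assumed to carry enough blank (‘#’) padding at both ends.

    def run_tm(table, tape, state="q0"):
        """Run a machine table on a tape string and return the final tape.

        table maps (state, character) pairs to instructions:
          (char, 'L' or 'R', next_state)  -- write char, move, change state
          (char, 'Halt')                  -- write char and halt
        """
        tape = list(tape)
        # By convention the head starts on the leftmost non-blank square.
        pos = next(i for i, c in enumerate(tape) if c != "#")
        while True:
            entry = table[(state, tape[pos])]
            if entry[1] == "Halt":
                tape[pos] = entry[0]
                return "".join(tape)
            char, move, state = entry
            tape[pos] = char
            pos += 1 if move == "R" else -1

    # TM1, transcribed from the table above.
    TM1 = {
        ("q0", "1"): ("1", "R", "q0"),
        ("q0", "+"): ("1", "R", "q0"),
        ("q0", "#"): ("#", "L", "q1"),
        ("q1", "1"): ("#", "Halt"),
    }

    print(run_tm(TM1, "##111+11##"))   # -> '##11111###', i.e. 3 + 2 = 5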

 

Another way to do this is to start by erasing the first ‘1’, then to move right over the remaining ‘1’s, rewriting the ‘+’ as a ‘1’ and halting there. The table for a machine to do this, TM2, is

 

 

 

          q0        q1
1         #Rq1      1Rq1
+                   1Halt
#
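Fed to the illustrative run_tm sketch above, TM2’s table looks like this (the empty cells of the table simply have no dictionary entry; on well-formed input those situations never arise):

    # TM2, transcribed from the table above.
    TM2 = {
        ("q0", "1"): ("#", "R", "q1"),   # erase the first '1'
        ("q1", "1"): ("1", "R", "q1"),   # pass over the remaining '1's
        ("q1", "+"): ("1", "Halt"),      # rewrite '+' as '1' and halt
    }

    print(run_tm(TM2, "##111+11##"))   # -> '###11111##': again five '1's, 3 + 2 = 5

Note that the two tables are quite different, yet both leave five ‘1’s on the tape; we return to this point under Instrumentalism below.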

 

 

 

 

Realizations

 

Of course, the TMs that we’ve specified are completely abstract, and the specifications are quite formal. But we never actually deal with abstract devices: the things that we call computers are all physical objects. Thus the characters of the alphabet have to be realised as some sort of physical object, and the reading and writing of the characters has to be somehow a physical act, which means an act that is explicable in physical causal terms. In general, the computational relationships among the objects of the abstract TM have to be translated into causal relationships in the realization of the TM. This observation indicates the sort of relationship that will exist between any computational functionalist story about the mind and the general causal functionalist story.

 

Obviously, there are any number of possible physical instantiations for any TM.

 

Statement of Machine Functionalism

 

 

The computational functionalist claim is that the mind can be thought of as a TM. What it means for something to have a mind (be a mind?) is that it is a realization of a suitable TM. The mental states of a critter are to be identified with the internal states (q0, …, qn) of its TM.

 

Psychology Supported by TMs

 

 

The idea is that the TM gives us a theory for the psychology of the creature by establishing some systematic relationship between stimulus inputs and behaviour outputs. Now we can look at a theory in two ways:

1.                   A theory provides an explanation for why, given any particular input, the creature will output some particular behaviour.

 

2.                   A theory predicts, given any particular input, what behaviour the creature will output.

 

Instrumentalism

 

Let’s consider the 2nd point of view first, which is known as Instrumentalism: a theory is properly to be viewed as nothing more than an instrument that gives good predictions. Then if we have two TMs that are predictively equivalent we cannot distinguish between them as the right TM for the creature. We gave two machine tables above for the function of addition. It’s an interesting fact that even though these machine tables are quite different, the functional description, at a certain level, is the same for both: they both perform the function ‘add two numbers’. Now suppose we had an actual device that did additions. If we took an instrumentalist approach to providing a theory for this device, it seems that we have no grounds to prefer the theory TM1 over the theory TM2 or vice versa. The only applicable criteria would be practical matters of what makes the theory a good instrument for prediction, such as ease of use and accuracy. On this point of view, the Ptolemaic theory was a better theory than the Copernican theory, because it gave better predictions.

 

We can see that there are situations in which we might find the retreat into instrumentalism tempting (for example, when we have theories that don’t seem to lend themselves to a realistic interpretation, as is the case in quantum physics, or even relativity), but this doesn’t seem to be the right way to think of theories. (We want our theories to be useful for more than just predictions. Consider how you would go about finding out how to build a radio if you knew nothing about anything, but you had an oracle that would always tell you just what would happen in any experiment.)

 

Realism

 

The alternative, the 1st point of view, points towards scientific Realism. It takes the view that a theory is a good theory if the terms that occur in the formal expression of the theory correspond to things that exist in the world, and those things play causal roles in the world that are just like (in the relevant ways) the formal roles that the terms play in the theory.

 

On the realist view we can discriminate between two machine descriptions of a psychology on the grounds that there are structures in the brain that are physical realizations of the inner states of its TM.

 

Some Difficulties

 

Identity

 

Recall that one of the ideas behind functionalism in general is that it has to explain the possibility of multiple realization: it has to be able to explain how two different types of creature are able to have the same mental states. You might think that this is easily explained by computational functionalism, but in fact it might not even be able to explain how two different creatures of the very same kind can be in the same mental state.

 

Consider what it means for A to have the same mental state as B. Let A be in the state q. q is completely defined by the entries in the machine table column that are under the label q. In the machine TM1, for example, the state q0 is only definable as the triple:

 

<<1, 1Rq0>, <+, 1Rq0>, <#, #Lq1>>

 

and we note that the state q1 occurs in this definition. Quite generally, we must expect that the definition of each state is going to involve references to other states, so it makes no sense to talk of a definition of a state other than in the context of a specification of an entire machine table. This seems to mean that it makes no sense to talk about comparing machine states that belong to different machines. But since each psychology is supposed to be defined entirely in terms of the machine table that it instantiates, this means that we can only talk about the same psychological states if we are talking about the same psychology. So no two different people (who, we assume, differ psychologically) can have comparable psychological states.
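The point can be made concrete with the dictionary tables from the simulator sketch above (the column helper is just an illustrative device):

    def column(table, state):
        """Collect the entries that define one state: one column of the table."""
        return {char: entry for (s, char), entry in table.items() if s == state}

    print(column(TM1, "q0"))
    # {'1': ('1', 'R', 'q0'), '+': ('1', 'R', 'q0'), '#': ('#', 'L', 'q1')}
    # q0's definition refers to q1, whose definition is another column of the
    # same table: a state is individuated only relative to a whole machine.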

 

This doesn’t seem right.

 

Simulation

 

A machine table refers to inputs and outputs, and we’ve allowed that for a comparison between two realizations of a TM we only need a suitable isomorphism from the abstract inputs/outputs to the realized inputs/outputs. A TM that is the theory for a person has input/output isomorphic to some description of stimulus/behaviour. This TM could be realised in a standard computer where the input and output are simply streams of digital electrical signals. According to the computational functionalist view this is just the same as the human case, and has to be attributed a psychology. But isn’t this better thought of as a simulation than a realization of a psychology? Do we think that a weather-simulation program actually has weather going on inside it? Or is weather?

 

The Turing Test

 

 

The Turing test isn’t actually related to Turing machines, and it’s not even particularly relevant to computational functionalism: it’s a proposal by Turing for a criterion which a machine has to meet if we are to attribute mentality to it. We consider it here because the idea of computational functionalism is obviously related to the idea that machines can think. How are we going to tell whether the machines that we build following the AI dream are successfully thinking or not? The TT takes the idea that we can’t look inside other people’s heads to determine whether they are thinking, so that we have to make a judgment on the basis of the behaviour that we observe, and extends this to things other than folks. Basically it claims that if a machine passes a test that we think could only be passed by a thinking thing, then we have no grounds to deny that the machine is thinking. The behaviour that is taken to be characteristic of human thinking is language, so the Turing test tests the linguistic power of the candidate thinking thing.

 

One form of the TT goes like this. The candidate is in one room, an actual person is in another room, and a series of testers occupy yet another room. Each tester holds a conversation with the candidate and with the person, and at the end of the conversations makes a judgment as to which is the person and which is the machine. (The testers communicate with the candidate and the person by teletype, so that the fact that the machine is buzzing and flashing doesn’t give a clue to the tester.) If the testers do no better than chance at judging which is the machine, then the machine, just like the person, must be said to have passed the test.

 

Of course, it means that a machine is going to have to be at least as intelligent as a human before it counts as thinking at all. That might be setting the bar a bit high. Also, language isn’t the only thing that humans do that makes them thinkers, so restricting intelligence to talkers seems unfair. And, in fact, talking and thinking like a person surely can’t be the only way to be intelligent either. Science fiction, a genre with which every philosopher should become familiar, is filled with stories of alien intelligences that go unrecognised by humans.

 

The Chinese Room

 

 

A Thought Experiment

 

Searle thought that the idea that mental states, or consciousness, could be identified with computational states was almost demonstrably incorrect. He used the analogy of ‘the Chinese room’ to make this point.

 

Suppose there is a room in which there is a person. The room has a slot in the door through which, from time to time, there comes a piece of paper with strange squiggles drawn upon it. In the room there is also a large book that contains a set of instructions in English about what to do if someone sends you a scrap of paper with squiggles on it. The person follows those instructions. It turns out that what is happening when the person does this is that he is participating in a conversation conducted in Chinese, though he doesn’t know Chinese. The instruction book plays the role of the TM’s machine table, and the conversation that occurs allows the Chinese room to pass the TT.

 

Searle says this is just silly: there is no understanding going on here; the person in the room is simply manipulating symbols; and yet the CR, operating as a computer, is passing the test for thinking. This just goes to show that we don’t understand in virtue of the formal computations that we perform; and this shows that computational functionalism is not correct.

 

Responses

 

Lots of people have been upset by this thought experiment. Their responses have been numerous.

 

1.                   Systems Reply: It’s the system that understands, not the man or the book or the room but the whole thing.

Rebuttal: Let the man memorise the book and step out of the room. Now he’s the system and he still doesn’t understand.

 

2.                   Robot Reply: The room needs causal connections to the world. Put the room on top of a robot and let the inputs be environmental data and the outputs be actions. Now the room understands.

Rebuttal: This is just an admission that computation isn’t all there is. But apart from this there is no improvement with respect to understanding.

 

3.                   Brain Simulator Reply: Suppose we have a room that imitates the actual processes of the brain. Call it the Chinese gymnasium, since basketballs are exchanged by a whole lot of non-Chinese persons to imitate neural operations.

Rebuttal: What has this to do with computations? And anyway, where’s the understanding in a Chinese gym?

 

Searle doesn’t disagree with the idea that machines can think. He thinks that only machines can think, i.e. brains. But they think because of the particular causal powers of brains (whatever those might be) and not because of the computations that they perform.