Introduction
The Plan, the Plan.
In previous lectures we’ve seen a couple of preliminary theories about what meaning could be; and we’ve also seen a couple of versions of a sceptical claim that there is no fact of the matter about meaning. In lectures to follow this one I intend to concentrate upon one particular version of what is to be understood by meaning in language – a version that focuses upon the relation of meaning and truth (though I don’t intend to spend any time puzzling about what Truth is supposed to be. We will take that concept as primitive for the time being.). We will see that it is a version that uses developments in logic to expand upon the propositional/intensional version that we’ve already looked at, and, as you might suspect from that, it is a version that will require some familiarity with logical notation. But don’t panic because I’ll tell you everything you need to know for the philosophical side of things. It is also a version of the story about meaning that makes good use of the characteristics of language that are of fundamental significance for modern linguistics. Those concepts will also be introduced in a simplified form.
This approach, however, is only one of several approaches to semantics for language. In this lecture I intend to survey the types of theories that are important in answering a different sort of question about language; to wit, how it is that something like a word can be said to mean something at all. Language, of course, is not the only thing that can ‘mean’ things. We know that a variety of objects, from pictures to brain activations, can also mean things. Language is a particular way of being meaningful which has its own peculiarities, but the mystery of meaningfulness, tout court, is quite general.
Perspectival Notes
With respect to the limited survey that we’ll now conduct, there are a couple of general points which are worth making in order to understand the point of view from which the survey is conducted.
Mental Content and Linguistic Meaning
The first point is related to an observation that was just made above, that meaning occurs in a variety of places. In particular, meaning is to be found associated with mental states: the content of those states plays the same role for them that meaning plays for tokens of language. We have talked about this sort of thing previously, and much of what we have said has been suggestive of a strong relationship between linguistic meaning and mental content. For example, in the discussion of propositional attitudes we noted that to believe that ‘the sky is blue’ is to have a particular attitude towards the proposition expressed by the sentence ‘the sky is blue.’ Or we might say that that sentence told us what it is that we were believing when we believed that the sky is blue – and we knew then what we were believing because the meaning of that sentence was the same as the content of that belief. The same of course was true for other propositional attitudes such as hope, desire, querying whether, doubting, worrying that, … etc. ad inf.
People have wondered whether the problem of meaning in language could not be solved by saying that language gets its meaning through its relationship to our thoughts and it is thoughts that have primary meaning. Only thoughts really mean things; words only mean things because they are associated with thoughts. Grice is a proponent of this view. Others have thought that language is primary: that we can only explain mental content if we have a prior notion of linguistic meaning. This is a view that is held by Dummett. There are also those who believe that neither is primary. Both must come first and both must have prizes. This is Davidson’s view. We’re not going to look into those views at all because the controversy is better treated as a part of the philosophy of mind. (If Deb’s not back next semester, maybe I’ll get to talk about it then.)
Now, if we take there to be some sort of relationship, as we’ve suggested, between meaning and content, then the sceptical arguments that we’ve just been looking at for meaning present us with an even greater problem. This is because it looks as if, if the Kripkensteinian argument is effective, it will be a sceptical argument against both meaning and content. Suppose, for example, that we were going to explain meaning in terms of content – language in terms of psychological state. Then an argument to show that there’s no good candidate for meaning would strongly indicate that there’s no good candidate for content – because if we had content we could get meaning. (Compare: if goodness is explained in terms of utility, and we couldn’t make sense of goodness, we’d have to suspect that we couldn’t make sense of utility either.) Similarly, if we were going to explain content in terms of meaning: if we couldn’t make sense of meaning, then we could not be justified in using meaning to solve our problem of content. (Using the same example: if goodness is explained in terms of utility, and we couldn’t make sense of utility, then we’d have no grounds to think that we could make sense of goodness.) So the Kripkensteinian argument against linguistic meaning is effective against both meaning and content.
This is a point that is made by Paul Boghossian:
There would appear to be no plausible way to promote a language-specific meaning scepticism. On the Gricean picture, one cannot threaten linguistic meaning without threatening thought content, since it is from thought that linguistic meaning is held to derive; and on the [Dummett] picture, one cannot threaten linguistic meaning without thereby threatening thought content, since it is from linguistic meaning that thought content is held to derive. Either way, content and meaning must stand or fall together.
It therefore becomes an urgent matter for us to find an appropriate solution to the sceptical problems offered by Quine and Kripkenstein, because we certainly don’t want to be forced to do without a notion of mental content. Just think what that would mean. We’d no longer be able to explain the actions of other people or even of ourselves in terms of their or our beliefs and desires. We couldn’t say that Bob worked hard on his essay because he wanted a good mark. We couldn’t say that because we couldn’t say that Bob wanted a good mark, because to want a good mark is to have some definite mental content (expressed by the sentence ‘I will get a good mark.’), and we don’t have any content. And to do something because of a desire requires us to have a belief that ‘if I do X then I will achieve Y’, and this is yet another content, and so on.
Inspired by the notion that the two problems are so closely related, we are going to treat the problem of the meaning in language from a more general point of view that includes the problem of mental content. For this reason, we’re going to talk about tokens rather than about words or thoughts. Tokens are going to be placeholders for the sorts of things that are used to convey meaning. Note that in what follows we’re not going to be particularly concerned to insist upon a distinction between types and tokens – though there is such a distinction, and the type of a word is quite different from any particular tokening of that type. The point of talking about tokens is that that term serves as a general term that suggests a physical event. Brain activations for thoughts that convey mental contents for example, or pressure variations for sentences that convey linguistic meanings.
Naturalisation
This brings us to our second point. We need to remind ourselves that, of course, what makes meaning mysterious is that it seems difficult to make it fit into the physicalist picture that we have of the world. It seems to be a principle well worth defending that there is only one kind of stuff out of which the world is made, and that stuff is physical. Because Fodor gives quite a nice motivation for this, I’ll quote him here.
I suppose that sooner or later the physicists will complete the catalogue they’ve been compiling of the ultimate and irreducible properties of things. When they do, the likes of spin, charm, and charge will perhaps appear on their list. But aboutness surely won’t; intentionality simply doesn’t go that deep. It’s hard to see, in face of this consideration, how one can be a realist about intentionality without also being, to some extent or other, a reductionist. If the semantic and the intentional are real properties of things, it must be in virtue of their identity with (or maybe their supervenience on?) properties that are themselves neither intentional nor semantic. If aboutness is real, then it must really be something else.[1]
As I recall having said a few times before, this puts a restriction on the types of things that we’re allowed to appeal to in explanations of meaning. Unfortunately, as I’ve also said before, the manner in which this constraint should be applied, and even the coherence of the constraint, is not obvious.
To take the latter point first: what shall we take to be the allowable ontological commitments of theses within Physicalism? Loewer/Rey[2] suggest that they are simply those of the most general successful theory of the world, which is Physics. But suppose that to be the case: Let Sc be that ‘most general’ theory with ontology O_1. Suppose sc is a science with a restricted subject matter not treated by Sc, and with a supposed ontology O_2, different from O_1. The theory Sc + sc is more general and more successful than either of its parts taken separately; therefore Physicalism as stated fails to constrain in this case. To be specific, the thesis of Physicalism as stated does not prevent us from taking auras and ectoplasm into our ontology of science as necessary for the inclusion of a science of spiritual therapy currently absent from medical science. Yes, very funny, you think; but a more realistic example might be the inclusion of egos, ids, and superegos as the required ontology for psychoanalysis. Should that be allowed or not? The Physicalist constraint doesn’t help us. Then, of course, there’s the example that I’ve used a couple of times before of the introduction of gravity into the physicalist ontology. That introduction was acceptable because it allowed the creation of a more powerful science, even though gravity itself was originally dismissed as spooky action at a distance – the reaction to it in early modern physics was rather as if someone today claimed that they were going to solve some theoretical problem by assuming that there is a superluminal force.
All this is to say that we have a set of constraints in mind, and we think that they’re important constraints, but we’re really not sure how to apply them.
Direct Reduction
Anyway, to begin. The first thing that looks worthwhile doing is to isolate a few facts that are generally agreed to underlie the phenomenon of content/meaning (I’m going to talk about content and meaning pretty much indiscriminately from now on). We’ve got a couple of linkages:
1. Meanings attach to certain physical events, their tokens (TT).
2. Meanings may determine things in the world, their referents (RR).
When confronted with phenomena requiring explanation, it is reasonable to begin by forming simple hypotheses, increasing complexity only as it is necessary. One form of simplicity is to locate the hypotheses at as low a level as possible, so that reduction to pure physical terms is made easier. Another form is to restrict the initial hypotheses to an explanation of a strict subset of the phenomena. This is what motivated us to begin talking about names in the past: we surmised that if a theory of meaning works generally it will work for names particularly – unless names are a special case, which some people actually believe is the case. Never mind that for now. It also seemed to us that the semantic properties of names are rather simpler than the properties of other semantic elements, so that there are likely to be fewer theoretical complications in trying to account for them. That’s why we began by trying to understand the meaning of names – and because the meaning of names seems to be so much involved with their pure referential power (that is the most obvious function of names, after all) we concentrated a good deal on the problem of reference.
I’m going to just very quickly review the sorts of theories that are relevant that we’ve already looked at. You will note that it’s pretty fast, and you should bear in mind that each of the theories and elaborations that we’ll be looking at after this review could easily be dealt with in as much detail as we know is necessary for a proper understanding of the reviewed theories.
Identification
So: let’s suppose that the first hypothesis about meaning to explain the facts listed is one that attempts an integration of the elements with the least possible structure. Thus Mill’s theory, by identifying the meaning of TT with its RR, proposes the limiting hypothesis, with no structure. It proved to be unsatisfactory. Quite apart from the difficulty of extending such a theory to the explanation of the meaning of derivative tokens – think of sentences or concepts – it has a number of problems even accounting for the behaviour of names within sentences. To see this it is only necessary to note the compositionality of meaning, by which the meaning of a whole is a product of the meanings of its parts. Then:
a. Identity statements between the same and different TT for the same RR are identical in meaning_Mill, though they shouldn’t be (compare ‘Hesperus is Hesperus’ with ‘Hesperus is Phosphorus’).
b. Statements concerning non-existents, and in particular negative existentials, have no meaning_Mill.
c. Substitution into opaque contexts fails to preserve meaning.
At least since Frege, these problems have been interpreted as making it clear that referential role does not exhaust meaning. More importantly in this context, Mill gives no clue as to how the identification is to be made or used, which is a sine qua non of the naturalising project.
History-Causation
By invoking causality, Kripke’s Historical-Causal Theory supplies just enough extra structure to repair that Millian deficit. Thus:
HCT A token (TT) means_HCT its referent (RR) iff there is an appropriate causal relationship between the two.
Together with the Fregean analysis of senses, this avoids the problems of Millian semantics, but has a problem of its own in the attempted reduction. This is the qua-problem, which is in two parts.
1. Consider first the causal chain that links a naming TT to a particular RR. The chain of causes extends beyond TT in one direction and beyond RR in the other, and all along the chain are events in no way less privileged than those two specific nodes. What is it that makes TT refer to RR rather than to any other event along the chain? At least one response to this depth-problem is suggested by the observation that there are indefinitely many discrete causal chains linking TT to RR – not only via many imaginable different paths but also by the extension in time of the dubbing and transmitting events. Suppose that we consider rather the family of chains so established and maintained between TT and RR. In that case we would be able to distinguish RR as being the point/event where all links must cross.
2. The second and more serious consideration arises when TT is a general term rather than a particular. When we set the reference for a general term like ‘dog’ we perform an ostensive act upon one or more exemplars of that general term – let us say they are members of a natural kind, like dogs. The term is supposedly connected to the object by a causal chain that links the object that features in the ostensive act to the term; but how is it that the term attaches to the natural kind and not to the particular exemplars of that kind that get pointed to in the dubbing ceremony? Why does ‘dog’ refer to the natural kind ‘dog’ and not to the set of all dogs to which I’ve ever pointed and said ‘you’re a dog’? This breadth-problem we thought might be solved by appeal to description-theoretic ideas, but the solution was only indicated in a hand-waving sort of fashion.
Interlude
So much for the review. Now let’s make a brief detour which will have the purpose of indicating where we might find certain resources that may turn out to be useful in continuing the refinement of our hypotheses beyond the HCT. (This is rather like the detour we took into Fregean theory, which showed us the sort of thing that might be required if we were to fix the Millian theory. Recall that part of the charm of the HCT was that it allowed us to explain what sort of thing could play the role of Fregean senses.)
In the last lecture we looked at the Kripkensteinian scepticism about meaning. But the scepticism was not Wittgenstein’s last word on the subject. Recall that the claim there was that:
WS There is no fact of the matter about what something means, and talk about meanings is not talk about facts but has quite a different purpose.
Language Games
The second part of the claim was not much explored last time, but it seems to indicate that Wittgenstein believed that talk of meanings was legitimate, but did not mean what we think it means. The purpose of meaning-talk is not the purpose that we thought it had. Wittgenstein came to his interpretation of what this purpose was from a consideration of what it was that we learnt when we learnt a language.[1] For him, what we learnt was no more than a very complex set of rules of social interaction. Apparently W. got this idea as he observed a football match being played. And, of course, there’s something right about this because language really is a behaviour that occurs in social settings, and has socially defined rules/conventions. But Wittgenstein thought that this was just about all there was to it. ‘In language we play games with words.’ Conversation was just like a chess game or a cricket game. There are certain moves or plays that are called for in certain situations and there are certain other moves and plays that are quite forbidden. Consequently, when we talk about meanings we are actually talking about the roles that words can play in their various social contexts.
Now, we can see that in some cases this is a reasonable description of what’s going on. Many of our rituals of greeting and departure are like games, and are not best understood through an analysis of the ‘meanings’ of the tokens being moved. When we are introduced to someone we say ‘How do you do’, and this is not a question no matter what it might look like, because the proper response is not ‘I have arthritis’ but ‘how do you do’ in return. Similarly, when taking our leave of someone we may say ‘See you later’, even when there’s not the slightest chance that we’ll ever see this person again. Similar things can be said about a wide range of other types of speech, like ‘thanks’, ‘excuse me’, ‘bless you’, ‘oops’, etc. These are moves in what Wittgenstein called ‘language games’, a type of object of which he identified several kinds – like a wedding language game, an arithmetic language game, and others.
He gave an example of how this was to be understood. He supposed that there could be a very primitive language with just a few builder-related words like ‘slab’ or ‘pillar’, or suchlike. In order to learn how to use these words builder children have to become familiar with what is expected of them when the adult builders use them, so that when the adult Bob says ‘slab’ baby Bob knows that a slab is needed on site and that he should carry it there. They also have to be able to make use of them in the same way as the adults do. Now, as Wittgenstein says, it may be that you can associate some words with some objects, but this is not at all the point of the game; the point of the game is to know how to behave when those words are used, and to know what to expect when one uses those words.
The fundamental difficulty with the use-theory as it stands is that it is quite clear that most of the sentences that we use are appearing for the first time and they have simply had no opportunity to become enmeshed in a net of socially defined relations. That sentence I just spoke, for example, is not one that you’ve ever heard before. Nor is it likely that you’ll ever hear it again. A critic would say that there are no conventions in which it features, and therefore it has no use in a game of any sort, and therefore it has no meaning. But this is of course nonsense. Can the theory be saved?
Wilfrid Sellars[2] and now Robert Brandom[3] think that it can be saved if we take inferring as a social act governed by conventions: there are rules that tell one what sort of thing gets said as a result of an inference from some other thing that has been said. Expanding this idea slightly we might come up with a claim that the meaning of a word is therefore to be taken as referring to the inferential role of the word. For example the word ‘and’ gets to have a particular ‘meaning’ because whenever we say that ‘grass is green and the sky is blue’ we are allowed to make the inference to ‘grass is green’ (and to ‘the sky is blue.’)
This suggests a quite different form of semantics that we might call Inferential Role Semantics. Rephrasing in terms of tokens we can define this as follows:
IRS The meaning_IRS of a token is a function of the inferential relations that hold between it and all other possible tokens which might be produced.
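To make the ‘and’ example a little more concrete: the inferential role of conjunction is standardly spelled out by its elimination and introduction rules. The natural-deduction rendering below is my gloss, not a formulation taken from Sellars or Brandom.

\[ P \wedge Q \vdash P \qquad P \wedge Q \vdash Q \qquad P,\ Q \vdash P \wedge Q \]

On the IRS reading, these licensed transitions, taken together, just are the meaning_IRS of ‘and’.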
Problems
Before we move on from this topic we’d better have a look at some of the problems that afflict the game view.[4]
1. It looks like the games on earth are going to be just the same as the games on twin earth, but the meaning is different because the referent that is determined by the meaning is different in the two cases – at least that’s the intuition that Putnam would urge upon us.
2. Without the introduction of IRS – which is basically a quite independent theory of meaning – the Use theory appears to be incapable of handling unique and complex sentences. Somehow the Use theorist seems basically to have ignored one of the fundamental characteristics of language that we have remarked upon time and again: that language is compositional. The meaning of a sentence is the compositional result of the meanings of the parts of the sentence. If the Use theory can find some way to incorporate compositionality we may be able to accept it as a theory, but if it can’t then we can’t.
3. What distinguishes games like chess from games like the greeting language game? There has to be something fundamentally different about them, because we don’t typically think of chess moves as having reference or of soccer games as being meaningful. They have no such content – and even Wittgenstein probably wouldn’t claim that we could talk about the meaning of a move like pawn to Q3 and be talking about the same sort of thing as the meaning of the sentence ‘the sky is blue.’ Perhaps it’s just that these games are too simple to capture the complexity of a game like language. Perhaps, if chess rules included reference to the colour of the sky to determine allowable moves, we might be inclined to say that some move meant that the sky was blue. Perhaps. But in that case it looks like we have to assume that something like reference is being snuck back into the rules of the game, and Wittgenstein keeps insisting that the point of words is not that they refer.
[1] Wittgenstein, L. (1953) Philosophical Investigations. Oxford: Blackwell, pp. 1 ff.
[2] Sellars, W. (1974) ‘Meaning as Functional Classification’, Synthese, 27, 417-37.
[3] Brandom, R. (1994) Making It Explicit. Cambridge, MA: Harvard University Press.
[4] Lycan (2000), pp. 93-98.
Indirect Reduction
Internal
Conceptual Role
Bearing in mind the above discussion we now have a range of possibilities. The first to consider is based upon the intuition that meaning is determined by use, which we have just shown to be unsatisfactory. You will, however, recall the suggestion by Sellars that Use theories could be salvaged by introducing an idea of IRS. But IRS is rather more specific than is required, so let us generalize it to include any sort of inter-token functional relationships. Such theories are commonly expressed in terms of the relations of concepts, but at this stage talk of concepts is overly limiting, so let Conceptual Role Semantics be expressed, without loss of generality, in terms of tokens, thus:
CRS The meaning_CRS of a token is a function of the relations that hold between it and all other possible tokens which might be produced.
Certainly this is able to treat particular and general terms indiscriminately, but in the form above it is open to three very significant objections.
1. First, as it stands, CRS has no ability to integrate RR into the definition of meaning for TT. This is contrary to the apparent sensitivity of meaning to the environment, demonstrated by Putnam in his twin-Earth examples.[1] There we are asked to imagine an environment on twin-Earth physically identical to the environment of Earth. In such a situation it is the case that TT and 2-TT participate identically in all inter-token relations, so that TT means_CRS the same thing as 2-TT; and it is contrariwise also the case that, since TT refers to RR whereas 2-TT refers to 2-RR, their meanings cannot be identical. In the general case this is the argument against any narrow semantics which attributes meaning to TT without reference to the environment.
2. Second, the independence of the system of tokens from the environment has a consequence which derives from the Löwenheim-Skolem-Tarski Theorem: that is, that there are alternative interpretations for any formal system in which all the appropriate truth valuations are preserved. This means that reference for TT in the system is completely undetermined by the purely formal features of the system; in fact, the theorem shows that each TT could mean_CRS some proposition concerning the positive integers, though we are not guaranteed to be able to discover any of these alternative interpretations. (A schematic statement of the relevant result is given just after this list.)
3. A third objection is to the holism which CRS implies. Quine has often made the point that all our concepts are in relationships with each other, be they ever so tenuous. Since the same is therefore true of our system of tokenings, it follows that, for two intenders, TT_1 in one means the same as TT_2 in the other only if the entire set of relationships is identical. This would make it practically impossible for two intenders to have any tokens with identical meanings and it certainly prevents any general intentional explanations of behaviour. The obvious fix for this problem is to restrict the relationships which are to count as relevant to the determination of the meaning_CRS of TT. There is a suspicion, however, that no such principled restriction is available. The basis of this suspicion is the fact that it seems possible to have arbitrary beliefs about a thing while still being said to have those beliefs about that thing: and the suspicion is reinforced by the lack of plausible restrictive criteria.
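As promised, here is a schematic statement of the model-theoretic fact that the second objection leans on. The formulation is my own gloss from standard logic texts rather than the lecture’s, and it runs the Löwenheim-Skolem idea together with the completeness theorem:

\[ T \ \text{a countable, satisfiable first-order theory} \;\Rightarrow\; T \ \text{has a model } \mathcal{M} \ \text{with } \mathrm{dom}(\mathcal{M}) \subseteq \mathbb{Z}^{+} \]

Since such a reinterpretation preserves every truth valuation, nothing in the purely formal relations among the TT settles whether they are about dogs, horses, or the positive integers.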
Two Factors
The referential lacuna in CRS might be repaired, albeit inelegantly, by proposing a two-factor theory of meaning – combining causal and conceptual role theories. A typical formulation for this would be
2F = HCT + CRS, i.e.:
2F (a) For TT a particular term, TT means_2F X iff TT means_HCT X.
(b) For TT a general term, TT means_2F X iff TT means_CRS X.
The inelegance mentioned arises from the combination of quite different types of content, the one truth-referential, and the other concept-functional. The happiness of this will depend upon such things as whether the class names covering particulars (as, say, ‘tiger’ covers ‘Shere Khan’) are held to be identical in meaning to class names covering twin-particulars (as they are identical in meaning_2F).
External
Teleology
In considering CRS we found it to be thwarted by its isolation from the environment, and the initial 2F repair appears to be unsatisfactory. The obvious next simplest response to make is to attempt to incorporate the environment into the set of relationships in such a way that reference is determinable therefrom. Such an integration may be available by considering the biological function of representation formation as a determinant of meaning for the formed, representing TT.[2] The intuition motivating this approach seems to be that the selection histories which lead to the ability to adaptively respond to the environment, via representations, are coherently describable only in terms of the subjects of representation. So to define it provocatively:
BF TT means_BF X if it is the biological function of TT to represent X.
There is here an appearance of ‘solution by definition’ of the representation problem – which motivates Fodor’s argument[3] that biological function cannot simply be assumed to categorize the world in the appropriate fashion, because selection histories are indifferent to the descriptions that we might give of them, whereas that description is supposed to determine the ‘representation’ of TT, and thus its meaning_BF. This objection amounts to a rejection of the admissibility of natural kinds of the appropriate type for biological theory. If we admit that the theory is well-formed then we must also admit that the ‘reduction’ proposed in BF is in line with the conditions upon reduction noted above, for it invokes supervention upon the objects of a level of explanation for which there is an acceptable ‘reduction’ to Physics. In the light of this observation, it is only necessary to note, as Sterelny does, that without those objects much of the generalizing and explanatory power goes out of biological theory. Those objects are necessary for biology, and biology, we believe, is valid.
BF seems to be a reasonable explication of the types of atomic referential tokens which are subject to direct selective processes, but that is only a small part of the totality of tokenings. A satisfactory BF must be extensible to the entire set and must account for the compositionality of meaning. Now, if BF is understood to posit originality in the attributed function of TT then it is strictly non-compositional, for a state which represented, say, A-&-B would have a selective history for A and B together but not necessarily for A or B separately, and the converse would also be true.
Millikan’s interpretation[4] of BF finesses this problem by supposing that the biological functions ascribable to TT are not original but are derived from the biological function of the tokening device, TD. Whether the particular functioning of the TD gives rise to compositionality in the TT it produces is then an empirical question, not decidable a priori, but it is very plausible that the functions are best met by implementing a ‘language of thought’ which then provides compositionality gratis. In order to pick out the relevant function of TT Millikan needs to determine not just that TD is behaving non-pathologically but that it is actually realizing the function for which its selection history has fitted it. That, and other (counterfactually inspired) considerations, force the conditions for ‘functionalising’ states to take a form like:
FS For TD with proper function DF, TT has the proper function FF if:
(a) TD in Normal operation produces TT.
(b) TT has the function FF.
(c) FF is an essential means for DF.
There is a problem of circularity for FS which arises from the normative sense in which Normal has to be understood in the statement of FS (a). It arises because
1. TT having FF depends upon TD operating Normally in producing TT.
2. TD producing TT is operating Normally if it is realising DF.
3. TD realising DF depends upon TT having FF (by FS(c)).
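Schematically, reading each arrow as ‘depends upon’, the loop runs as follows (the notation is mine, using the obvious abbreviations, not anything from Millikan):

\[ \mathrm{FF}(TT) \;\rightsquigarrow\; \mathrm{Normal}(TD) \;\rightsquigarrow\; \mathrm{Realises}(TD, DF) \;\rightsquigarrow\; \mathrm{FF}(TT) \]

The chain starts and ends with FF(TT), which is the circle.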
The circularity is most obvious when DF and FF are to be the same function, as is the case for representation, but note that the objection is not restricted to that case. This extension of the teleological explanation has not been able to solve the general problem of content.
Information
A quite different approach to repairing the referential ambiguity of HCT appeals to an intuition (e.g. of Dretske[5]) that a TT which represents RR does so just in so far as TT bears information about RR. It has proved difficult, however, to clarify this intuition in a manner appropriate to the reductive explanatory strategy. Its key problems can be quickly identified by criticizing the Dummy theory that
Dum TT means_Dum RR if there is a causal link (c-link) from RR to TT.
1. The first problem is that of misrepresentation (or disjunction). A TT which means RR is c-linked with RR, but it may also be c-linked with a strict superset PP which includes consistently mistaken identifications of RR, and that c-linkage may be the stronger of the two. In this case how can it be denied that TT means_Dum PP rather than RR?
2. A second problem is pan-semanticism. The condition expressed in Dum is a sufficiency condition but not a necessity condition. The event <lightswitch-is-down> is c-linked to, and thus means_Dum, the event <lightbulb-is-glowing>, but it surely has no such meaning.
3. Yet a third problem attaches to our ability to produce TT representing RR without there being RR in the causal history, as presumably for TT = ‘unicorn’. Variants of this theory in which c-linkages are replaced by (vague) ‘immediate causes’, or are supplemented by ‘reliable covariances’, appear to disallow meaning_Dum for TT unless RR is present. So that <lightswitch-is-down> couldn’t mean <lightbulb-is-glowing> unless a lightbulb were glowing. (Note, too, that if reliable covariation provides sufficiency then <lightswitch-is-up> would seem to mean_Dum the same thing as <lightswitch-is-down>.)
Teleology
In order to explain how misrepresentation of RR occurs, Dretske[6] proposed that the biological function of TT be incorporated into the theory of meaning in order to establish the connection between that which TT indicates (II) and that which TT represents. The explanation would go something like this:
T If the biological function of TT is to represent RR, and TT indicates II for II different from RR, then TT is misrepresenting.
In order for this connection to actually be established, conditions must be set in which II and RR reliably covary, but this has proven difficult to do in ways which do not fall foul of circularity. In fact some representational states may be resistant, in principle, to such identification if they do not only indicate that which they represent. It is possible that, where the costs of failure to represent RR are much higher than the costs of falsely representing RR, the biological function is best realised by an indicator of a much vaguer superset PP of RR. It is likely, therefore, that the teleological approach is not a satisfactory fix for indicator theories.
Asymmetric Dependency
Fodor has devised an alternative explanation for the misrepresentation problem which attempts to make specific the idea that being able to misrepresent depends upon being able to represent, but the converse is not true. In fact, of course, Fodor is looking more generally for a theory of meaning that fills in the blanks in
Th TT means_Th X iff [ ]
with things that don’t refer to intentionality. It turns out of course that Fodor has a version of the causal theory in mind so that his theory of meaning is something like:
F TT means_F X iff it is a law that X causes tokenings of TT.
And to be more specific we can say that
“horse” means horse iff it is a law that occurrences of horse cause tokenings of “horse”.
Just so, for the misrepresentation problem, Fodor wants to find ‘a difference between A-caused ‘A’ tokenings and B-caused ‘A’ tokenings that can be expressed in terms of nonintentional and nonsemantic properties of causal relations.’ What is it, we wonder, that makes it true that “horse” means horse even when we reliably misidentify horsey-looking cows in the dark as horses? Why doesn’t “horse” mean ‘horse in good light OR cow in bad light’? This is called the disjunction problem. In Fodor’s theory the asymmetry is attributed to the structure of the causal dependencies and therefore is isolated from particular causal stories. He claims that the tokening of “horse” in the case of cows at night is an incorrect tokening because the causal relationship between tokenings of “horse” and occurrences of cows at night is asymmetrically dependent upon the relationship between tokenings of “horse” and occurrences of horses. The tokenings of “horse” being caused by cows at night is dependent upon the tokenings of “horse” being caused by horses, because we think that if it weren’t for the fact that horses cause “horses” it would not be the case that cows at night cause “horses”. But contrariwise, we don’t think that tokenings of “horse” being caused by horses is dependent upon tokenings of “horse” being caused by cows at night, because we don’t think that if it weren’t for the fact that cows at night cause “horses” it would not be the case that horses cause “horses”. There is an asymmetry in this dependency, which is just what we want for misrepresentation to be possible.
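The two counterfactuals just described can be set out side by side; the rendering below is mine rather than Fodor’s own notation, with ‘□→’ as the usual symbol for the counterfactual conditional:

\[ \neg(\text{horses cause ``horse''}) \;\Box\!\!\rightarrow\; \neg(\text{cows-at-night cause ``horse''}) \quad \text{holds} \]
\[ \neg(\text{cows-at-night cause ``horse''}) \;\Box\!\!\rightarrow\; \neg(\text{horses cause ``horse''}) \quad \text{fails} \]

The cow-to-“horse” law is in place only because the horse-to-“horse” law is, and not the other way around; that asymmetry is what is supposed to allow the cow-caused tokenings to count as misrepresentations rather than as part of what “horse” means.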
Unfortunately, this strategy too has several problems. The most decisive of these (not a whit impaired by eschewing weird counterfactuals) questions the assumption about which dependencies prevail. Consider a possibly unnatural kind PP containing RR. It reliably correlates with TT, so that for RR’ different from RR, if RR’ is in PP and RR’ causes TT, there is no asymmetry.
[1] Putnam, H. ‘The Meaning of “Meaning”’.
[2] Millikan, R. G. Language, Thought, and Other Biological Categories.
[3] Fodor, J. Psychosemantics, ch. 4.
[4] Millikan, R. G. ‘Thoughts Without Laws’, Philosophical Review, 95.
[5] Dretske, F. Knowledge and the Flow of Information.
[6] Dretske, F. Explaining Behaviour.