Meaning and Truth Conditions

 

Primary:      Tarski, A. [1933] ‘The Concept of Truth in Formalized Languages’ in J. Woodger (ed., tr.) (1956) Logic, Semantics, Metamathematics, Oxford: Clarendon Press.

              [1944] ‘The Semantic Conception of Truth’ in L. Linsky (ed.) (1952) Semantics and the Philosophy of Language, Urbana, IL: University of Illinois Press.

              Davidson, D. [1967] ‘Truth and Meaning’ in R. M. Harnish (ed.) (1994) Basic Topics in the Philosophy of Language, Englewood Cliffs, NJ: Prentice Hall.

              [1970] ‘Semantics for Natural Languages’ in D. Davidson & G. Harman (eds.) (1972) Semantics of Natural Language, Dordrecht: Reidel.

              Chomsky, N. (1981) Lectures on Government and Binding, Dordrecht: Foris.

Secondary:    Lycan, W. G. (2000) Philosophy of Language, London: Routledge.

              Miller, A. (1998) Philosophy of Language, Montreal: McGill-Queen’s University Press.

              Chierchia, G. & S. McConnell-Ginet (1990) Meaning and Grammar, Cambridge, MA: MIT Press.

 

Orientation

 

You’ll recall that last time we took an overview of the theories, and the developments of those theories, that are intended to show how it is that something like a word can be said to mean something at all. This was a strand of theorizing that took as its point of departure several of the early approaches to reference and meaning that we covered in earlier lectures, like the historical-causal theory and the various types of direct reference theory. (You would have seen these connections if you’d read the notes to the end – as I’m sure you all did. I wasn’t actually able to cover all the material I wanted to cover in the last lecture.) Each section of the last lecture could have been the subject of a separate lecture, but we’re not going to look at that stuff any further. Instead we’re going to look in some detail at a type of theory about meaning that takes as its point of departure the sort of thing that inspired Russell to come up with his theory of descriptions.

 

As I said before, we shall see that this is a theory that focuses upon the relation of meaning and truth (and we will have something to say about what could be the conditions on a theory of truth, though I don’t intend to give any particular theory preference in the exposition. We will take truth for granted for the moment.) We shall see that it is a version that uses developments in logic to expand upon the propositional/intensional version that we’ve already looked at, and, as you might suspect from that, it is a version that will require some familiarity with logical notation. But don’t panic because I’m going to tell you everything you need to know for the philosophical side of things. I’m also going to give you a primitive theory of grammar for a toy language so that by using this toy we’ll be able to see how the theory of meaning uses the characteristics of language that are of fundamental significance for modern linguistics.

 

Davidson's Theory

 

Just so that you’ll know what the upshot of Davidson’s theorizing will be, we can state his general claim:

 

A theory of meaning for a language can be provided by a theory of truth for that language.

 

What a Theory of Meaning Must Do

 

Davidson’s argument for his claim about a theory of meaning begins with a couple of claims about what a theory of meaning has to be capable of accounting for if it is to be an adequate theory. We’ll start with just two of these adequacy conditions, though Davidson thinks that there are more. We may have to talk about some others later when we begin to test his theory for difficulties. Let’s list these conditions first and then go back to talk about them in more depth.

 

a.                    Extensional adequacy: a theory of meaning for a language has to be able to tell us what each sentence in the language means.

 

b.                   Compositional adequacy: a theory of meaning must show how the meaning of any sentence is constructed from the meanings of a finite number of basic elements by the application of a finite number of effective rules.

 

a.                    Extensional Adequacy

 

The first condition looks entirely trivial at first glance, but it is not. If you recall the theories that we had about senses and ideas and causal chains and asymmetric dependencies, it would be a distinctly non-trivial problem to derive from such theories just what any particular sentence meant. Perhaps we’ll have to be satisfied with claiming that, in principle, a theory of meaning for a language can generate, for any sentence in that language, a statement that gives its meaning.

 

So what sorts of statements will a theory of meaning have to generate? Our natural first thought would be that it would have to generate sentences like

 

(G)          ‘S’ means that M

 

where ‘S’ is the label of a sentence in the language, and M is another sentence, which gives the meaning of S. For example:

 

                ‘snow is white’ means that snow is white

 

where ‘snow is white’ is a label of the sentence, ahem, “snow is white.” In fact, you would certainly expect the theory of meaning to come out with sentences just like that, because that is the most succinct statement of just what ‘snow is white’ means. Note that we’re completely ignoring whatever theoretical machinery might be required in order to get us to such a sentence. It’s not relevant here. We should also note that, in fact, the theory of meaning for that language should be able to generate an infinite number of such sentences, which would all have the form of our example, in which case our theory would yield all instances of the M-schema

 

(M)          ‘S’ means that M

 

where ‘S’ is the label of a sentence in the language, and M is that sentence. For example:

 

                ‘the sky is blue’ means that the sky is blue

                ‘Frank is eating an ice cream’ means that Frank is eating an ice cream

                …

 

Now, before we go on to see why this won’t quite work the way it stands, I think we should observe that we’ve actually got two languages going on here. In the first place we’ve got the language that we’re trying to get a theory of meaning for – which we call the object language – and in the second place we’ve got a language in which we are stating our theories – which we call (defying all etymology) the metalanguage. In the examples both the object language and the metalanguage are English, but an example with German as the object language and English as the metalanguage would be

 

                ‘Der Schnee ist weiss’ means that snow is white

 

which looks a good deal less trivial, doesn’t it? Of course, this now spoils our little M-schema scheme because it no longer makes sense for us to claim as products of our theory of language a set of sentences like:

 

                ‘Der Himmel ist blau’ means that der Himmel ist blau

                ‘Frank ißt ein Eis’ means that Frank ißt ein Eis

                …

               

because now we don’t have explanations of the meaning in our metalanguage. What we’d really need, of course, is some way of generating translations and getting them into the RHSs of the M-schema instances. And this brings us to the reason why Davidson doesn’t think that sort of schema will work (not just the M-schema, but the original version, G, too).

 

The sentence ‘Snow is white’ has a semantic value – in this case it is true. That semantic value is the same as the semantic value of the sentence ‘the sky is blue.’ This should mean that if ‘snow is white’ is substituted for ‘the sky is blue’ in any context that is not opaque, then the truth value of the resulting sentence is unaffected. But look what happens if you make that substitution in those G-schema instances:

 

                ‘snow is white’ means that snow is white

 

becomes

 

                ‘snow is white’ means that the sky is blue

 

and this is just not true. Therefore it looks like the context of the RHS of the G-schema sentences is opaque. In order to make substitutions that preserve truth value we would have to restrict ourselves to phrases that mean the same as ‘snow is white’ – phrases like ‘frozen water crystal precipitation is the same colour as maggots’, and so on. In fact, what is required is some way to discover that sentences have the same meaning. The suspicion would have to be that a theory of meaning for a language that generated instances of the G-schema would have to make use of a notion of meaning in order to determine which sentences can be substituted into the RHS of the schema. What we really want is a theory of meaning that will not make essential use of intensional contexts. We would like, if I may apply excluded middle here, a theory that appeals only to extensional contexts.

 

Davidson then proposes that what we’re going to need is still some sort of equivalence between a sentence and a condition, but one that looks more likely to lead to such an extensional definition. What he proposes is that we have something like

 

                ‘S’ if and only if M

 

where ‘S’ is the label of a sentence in the language, and M is that sentence. But this won’t be quite right, because ‘S’ is only a name and there’s no good grammatical way to make substitutions into that schema. For example, we will get sentences like:

 

                ‘the sky is blue’ iff the sky is blue

 

which is, grammatically speaking, really no different from

 

                Steve iff the sky is blue

 

What we really need is to make a propositional phrase on the LHS, i.e. something in the nature of a subject-predicate claim with ‘S’ the name of the subject (the sentence), and, let us say, P, the predicate or property. Thus:

 

                ‘S’ is P if and only if M

 

And at this point Davidson says, ‘Hey, wait a minute. We’ve seen something very like this before. It looks just like Tarski’s truth schema, but with “‘S’ is P” in the place of “‘S’ is true”.’ Well, this means that the property P is going to cover all and only the cases that are covered by truth according to Tarski’s theory, so perhaps P just is truth. This would hardly be such an outlandish idea anyway, would it? We might have got to something very like it just by working on the basic insight that if we know the meaning of a sentence then we know what would be the case if that sentence were true; and similarly, if we know what has to be the case if a sentence is to be true then we are usually happy to claim that we know what that sentence means. So we have some evidence there that knowledge of the meaning of a sentence is strongly correlated – to put it no more pointedly – with knowledge of the truth conditions for that sentence.

 

Well, let us take it that the case for truth and meaning is pretty well proved. If it is, then the final schema we arrive at is just Tarski’s:

 

(T)           ‘S’ is true if and only if M

               

where ‘S’ is the label of a sentence in the object language, and M is that sentence if the metalanguage and the object language are identical, or a translation of that sentence if the metalanguage is different from the object language. (We’ll get back to that translation business in just a little while.) Examples of instantiations of this schema are statements like:

 

                ‘the sky is blue’ is true iff the sky is blue

                ‘Frank is eating an ice cream’ is true iff Frank is eating an ice cream

                …

 

and of course

 

                ‘Der Himmel ist blau’ is true iff the sky is blue

                ‘Frank ißt ein Eis’ is true iff Frank is eating an ice cream

                …
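
Just to emphasize how mechanical the output side of a theory is meant to be, here is a toy sketch in Python (my own illustration, nothing from Tarski or Davidson): given a translation manual pairing object-language sentences with metalanguage sentences, instances of the T-schema can be churned out automatically. The manual itself is, of course, precisely what a real theory has to earn rather than stipulate.

```python
# Churning out T-sentence instances from a stipulated translation manual.
translations = {'Der Himmel ist blau': 'the sky is blue',
                'Frank ißt ein Eis': 'Frank is eating an ice cream'}

for s, m in translations.items():
    print(f"'{s}' is true iff {m}")
# 'Der Himmel ist blau' is true iff the sky is blue
# 'Frank ißt ein Eis' is true iff Frank is eating an ice cream
```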

 

This material adequacy condition for a theory of truth was set out by Tarski in his very difficult monograph of 1933. And it was still difficult when it was translated out of Polish. A popularized version – which is all that most people ever read – was given in 1944, and there he explains the point of his T-schema:

 

[W]e wish to use the term “true” in such a way that all equivalences of the form (T) can be asserted and we shall call a definition of truth adequate if all these equivalences follow from it.

 

It should be emphasized that neither the expression (T) itself (which is not a sentence, but only a schema of a sentence) nor any particular instance of the form (T) can be regarded as a definition of truth. We can only say that every equivalence of the form (T) obtained by replacing [‘M’] by a particular sentence and [‘S’] by a name of this sentence, may be considered a partial definition of truth, which explains wherein the truth of this one individual sentence consists. The general definition has to be, in a certain sense, a logical conjunction of all these partial definitions.[1]

 

This is good, because Tarski’s theory of truth plays right into the second of the adequacy conditions that we’re interested in.

 

b.                    Compositionality

 

Here’s a quote from Davidson:

 

Since there seems to be no clear limit to the meaningful expressions, a workable theory must account for the meaning of each expression on the basis of the patterned exhibition of a finite number of features. But even if there were a practical constraint on the length of the sentences a person can send and receive with understanding, a satisfactory semantics would need to explain the contribution of repeatable features to the meanings of sentences in which they occur.[2]

 

Davidson is actually referring to several features of language that we’ve already taken note of many times. Recall in the very first lecture when we went through the list of all the features that might be important for a philosophical understanding of language, and amongst these was the productivity of language. The same reasons that convinced us of the productivity of language are used by Davidson to support the claim that language is necessarily compositional, because that’s the only way that we can imagine these characteristics of language coming about.

 

Just for the record, compositionality is supposed to account for the facts that:

 

1.             finite linguistic resources can be consistent with infinite linguistic capacity (see the sketch just after this list);

2.             novel sentences can be produced and understood;

3.             infinite languages can be learned with finite training.
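
To make the first of these facts vivid, here is a tiny Python sketch (my own illustration, not anything from Davidson): one recursive rule, applied to a single atomic sentence, already generates an infinite language from finite resources.

```python
# One recursive rule over a finite lexicon yields unboundedly many sentences.
import itertools

def sentences():
    s = 'the sky is blue'                     # one atomic sentence
    while True:
        yield s
        s = 'it is not the case that ' + s    # the rule S -> Neg S, reapplied

for s in itertools.islice(sentences(), 3):
    print(s)
# the sky is blue
# it is not the case that the sky is blue
# it is not the case that it is not the case that the sky is blue
```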


 


[1] Tarski (1952) p. 16.

[2] Davidson (1970) p. 18.

 

A Tarskian Truth Theory

 

As it happens, Tarski’s theory of truth provides a very simple form of compositional semantics for a strictly formal language – though Tarski was not sanguine about the prospect of getting a semantic theory for a natural language, because he thought, as did Frege, if you will recall, that natural languages were just too darn messy to do anything useful with. We shall see reason to think that they were both too pessimistic, but to begin with we should see what sort of theory of truth applies to formal languages.

 

Propositional Logic

 

The simplest sort of formal language is the language of propositional logic. I’ve already directed you to a little guide to understanding the intuitions behind such languages and I’ll assume that you have the general idea. Here we are merely interested in discovering how to assign truth to sentences in that language.

 

a.                   A Syntax for Propositional Logic.

 

We start by describing the language. Here’s a perfectly standard definition straight out of  a logic textbook:

 

Definition 1:         The language of propositional logic has an alphabet consisting of

1.                    proposition symbols:  A, B, …

2.                    connectives:                 ~, &, v, →.

3.                    auxiliary symbols:        (, ).

 

Definition 2:         The formulae of propositional logic are defined thus:

1.                    the proposition symbols are (atomic) formulae

2.                    if A and B are formulae then so are ~A, (A & B), (A v B), (A → B).

 

b.                   A Semantics for Propositional Logic.

 

As you can see, the sentences of the language, what we call the well-formed formulae (wff), are built up from the smaller elements of the language. In this respect the language is said to be recursively constructed – but never mind about that. We appeal to this feature of the language construction when we want to assign truth values to those sentences. We apply the following rules:

 

Definition 3:        

1.                    ~A                  is true iff                A is not true.

2.                    (A & B)          is true iff                A is true and B is true.

3.                    (A v B)           is true iff                A is true or B is true.

4.                    (A → B)         is true iff                A is false or B is true.

 

and from this definition it is now clear that under these rules of evaluation the truth value (i.e. the truth or falsity) of any wff in Prop. Logic is entirely dependent upon the truth values of the parts of the wff. This is just the sort of thing we’d like to see if we want to have a compositional theory of truth, which we do want because we want a compositional theory of meaning.
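
Definition 3 is easy to turn into a program, and doing so makes the compositionality tangible. Here is a minimal sketch (mine, not from any of the readings): formulae are nested tuples, and the truth value of a compound formula is computed entirely from the values of its parts, just as the definition says.

```python
# A toy truth evaluator for propositional logic.  Formulae are nested tuples:
# ('~', A), ('&', A, B), ('v', A, B), ('->', A, B); atoms are strings.
def true(wff, val):
    """val is the 'dictionary' assigning truth values to proposition symbols."""
    if isinstance(wff, str):                                  # atomic formula
        return val[wff]
    op = wff[0]
    if op == '~':
        return not true(wff[1], val)                          # clause 1
    if op == '&':
        return true(wff[1], val) and true(wff[2], val)        # clause 2
    if op == 'v':
        return true(wff[1], val) or true(wff[2], val)         # clause 3
    if op == '->':
        return (not true(wff[1], val)) or true(wff[2], val)   # clause 4
    raise ValueError(f'unknown connective {op!r}')

val = {'A': True, 'B': False}   # A: 'the grass is green', B: 'the sky is blue'
print(true(('->', ('~', 'A'), 'B'), val))   # True: ~A is false, so ~A -> B holds
```

Notice that the evaluator can do nothing until it is handed the dictionary val, which is exactly the point made next.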

 

On the other hand, this all relies upon there being a dictionary of truth evaluations for the propositional variables. Now, generally speaking, these variables are going to be intended to correspond to statements that feature in arguments – propositions, in fact. So that for example we will have a set of translations like:

 

                A             the grass is green

                B             the sky is blue

                …

 

We then get to assign truth or falsity to these propositions as the first step in deriving the truth value for a wff in which they occur.

 

Predicate Logic

 

Obviously, this is not very satisfactory. We want to go into a little more depth than this. In order to do so we have to start talking about propositions like ‘the grass is green’ and finding out what gives them their truth properties. These are statements that tell us that a thing has a property, and we have a standard form of logic that can deal with this. That is predicate logic. Again, I’ve already given you a guide to the language of the predicate logic, so I don’t need to dwell on those basic points of interpretation. Let’s just go straight on to a description of how to assign truth to sentences in that language.

 

a.                               A Syntax for Predicate Logic.

 

We start by describing the language.

 

Definition 4:         The language of predicate logic has an alphabet consisting of

1.                    variable symbols:                         x, y, z, …

2.                    constant symbols:                       a, b, c, …

3.                    predicate symbols:                       P1, P2, …

4.                    function symbols:                        F1, F2, …

5.                    connectives:                                 ~, &, v, →, ∀, ∃.

6.                    auxiliary symbols:                        (, ).

 

The constant symbols are supposed to correspond to names in the natural language of things in the universe of discourse. For example, in a wff that is supposed to represent the sentence ‘grass is green’ we need to have a symbol to stand for ‘grass.’ That is what the constants are for. So we could have ‘a’ stand for ‘grass.’

The predicate symbols are supposed to correspond to predicates or relations in natural language; for example, ‘is green’ is predicated of ‘grass’ in ‘grass is green’, and ‘is mightier than’ is the relation between ‘the pen’ and ‘the sword’. We can make the symbol P1 stand for ‘is green’, and P2 stand for ‘is mightier than’, and then we can formalize those statements as P1(a) and P2(b, c). This indicates that if we’re going to combine predicates and constants we’re going to need a way of saying how many constants – 1, 2, 3, or however many – a given predicate symbol is supposed to combine with. We’ll ignore that for now.

Functions are treated in much the same way as predicates. They’re supposed to stand for things like, say, ‘+’ or ‘square root’.

 

Now, how do we make up wff in predicate logic? We have to do it in a couple of steps.

 

Definition 5:         The terms of predicate logic are defined thus:

1.                    constant symbols and variable symbols are terms

2.                    if t1, t2, …, tn are terms, and F is a function symbol with n places, then F(t1, t2, …, tn) is a term

 

 

Definition 6:         The wff of predicate logic are defined thus:

1.                    if t1, t2, …, tn are terms, and P is a predicate symbol with n places, then P(t1, t2, …, tn) is a wff

2.                    if A and B are formulae then so are ~A, (A & B), (A v B), (A → B), (∀x)A, (∃x)B.

 

Because that’s a little complicated I’ll just show you some terms and wff.

 

Terms:                    a, x, a+x, …

Formulae:               P1(a), P1(F1(a, x)), P2(F1(a, x), y), (∀y)P2(F1(a, b), y).

 

You’ll notice that some of those formulae have variables in them. When a variable occurs outside the scope of any quantifier that binds it, the wff is said to be open; otherwise it is closed. For example (and see the small computational check after these examples):

 

Open wff:               P1(x), P1(F1(a, x)), P2(F1(a, x), y)

Closed wff:             P1(a), (∀y)P2(F1(a, b), y).
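
The open/closed distinction is easy to make computational. Here is a small sketch (my own, under the simplifying assumption that the variables are just the letters x, y, z): it computes the free variables of a formula, and a wff is closed just in case the result is empty.

```python
# Free variables of a predicate-logic formula.  Formulae are nested tuples:
# ('pred', P, t1, ..., tn), ('~', A), ('&', A, B), ('v', A, B), ('->', A, B),
# ('all', x, A), ('some', x, A); terms are names or ('fun', F, t1, ..., tn).
def term_vars(t):
    if isinstance(t, tuple):                         # a function term
        return set().union(*(term_vars(s) for s in t[2:]))
    return {t} if t in ('x', 'y', 'z') else set()    # crude variable test

def free_vars(wff):
    op = wff[0]
    if op == 'pred':
        return set().union(*(term_vars(t) for t in wff[2:]))
    if op == '~':
        return free_vars(wff[1])
    if op in ('&', 'v', '->'):
        return free_vars(wff[1]) | free_vars(wff[2])
    if op in ('all', 'some'):                        # the quantifier binds wff[1]
        return free_vars(wff[2]) - {wff[1]}
    raise ValueError(op)

open_wff   = ('pred', 'P1', 'x')                                   # P1(x)
closed_wff = ('all', 'y', ('pred', 'P2', ('fun', 'F1', 'a', 'b'), 'y'))
print(free_vars(open_wff), free_vars(closed_wff))                  # {'x'} set()
```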

 

b.                               A Semantics for Predicate Logic.

 

Again, we’ve got a language in which the larger formulae are built up recursively from smaller ones. As before, we’ll appeal to this feature of its construction when we want to assign truth values to those sentences. In this case, however, we can’t just get straight into talking about the truth values. We have to take a little detour through satisfaction. We can say that closed sentences are true or false, but we say of open sentences that they are satisfied or not satisfied by objects. An open sentence like P(x) is satisfied by an object if replacing x by the name of that object will make it a true sentence. For example, if we take P to stand for ‘is a philosopher’, then P(x) is an open sentence – neither true nor false. But P(x) is said to be satisfied by the object <Socrates>, because P(a) is true when ‘a’ stands for <Socrates>.[1]

 

Because we want to be able to talk about any number of variables in open sentences we don’t talk about satisfaction by individual objects, but rather by sequences of objects, in which the nth position of the sequence always substitutes for the nth variable in the language. For example, if we take P to stand for ‘is mightier than’, then P(x2, x3) is an open sentence – neither true nor false. But P(x2, x3) is said to be satisfied by any sequence like <unicorn, pen, sword, crown, …>, because P(a, b) is true when ‘a’ stands for <pen> and ‘b’ stands for <sword>. (It would also be satisfied by <Zebedee, pen, sword, Tower of London, …>, but not by <unicorn, sword, pen, crown, …>.)

 

With this motivation we can now give a more explicit definition of satisfaction.

 

Definition 7:         Suppose A is a sequence, then the object in the ith place in A is Ai:

                                Let ai stand for Ai

                                Let X, Y, Z be wff.

                                Let the free variables in X be x1, x2, …, xn.

1.                    X = P(…) for P a predicate; A satisfies P(x1, x2, …, xn) iff P(a1, a2, …, an) is true

2.                    X = ~Y;                          A satisfies X iff A does not satisfy Y.

3.                    X = (Y & Z);                  A satisfies X iff A satisfies Y and A satisfies Z.

4.                    X = (Y v Z);                   A satisfies X iff A satisfies Y or A satisfies Z.

5.                    X = (Y à Z);                 A satisfies X iff A does not satisfy Y or A satisfies Z.

6.                    X = (∀xi)Y;                     A satisfies X iff any sequence that differs from A in no more than the ith place satisfies Y.

7.                    X = (∃xi)Y;                    A satisfies X iff some sequence that differs from A in no more than the ith place satisfies Y.

 

Most of these are pretty obvious, but it’ll be well to give an example here of the last two:

 

(∀x2)P(x2, x3) is satisfied by A iff every sequence B that is just like A except that possibly b2 ≠ a2 makes P(b2, b3) true.

(∃x2)P(x2, x3) is satisfied by A iff some sequence B that is just like A except that possibly b2 ≠ a2 makes P(b2, b3) true.

 

And now we are in a position to define truth for predicate logic:

 

Definition 8:         A closed wff is true iff it is satisfied by all sequences.
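
To see the machinery of Definitions 7 and 8 in motion, here is a finite-universe sketch (my own; genuine Tarskian sequences are infinite, so this is a deliberate simplification). A ‘sequence’ is modelled as a dict from variables to objects, constants simply name themselves, and the quantifier clauses vary the dict at a single place, exactly as clauses 6 and 7 require.

```python
# Satisfaction and truth over a tiny finite universe.
import itertools

U = {'Socrates', 'pen', 'sword'}
PRED = {'P1': {('Socrates',)},             # P1: 'is a philosopher'
        'P2': {('pen', 'sword')}}          # P2: 'is mightier than'
VARS = ('x', 'y')

def satisfies(A, wff):
    op = wff[0]
    if op == 'pred':                                        # clause 1 (atomic)
        args = tuple(A.get(t, t) for t in wff[2:])          # variables looked up;
        return args in PRED[wff[1]]                         # constants name themselves
    if op == '~':
        return not satisfies(A, wff[1])                     # clause 2
    if op == '&':
        return satisfies(A, wff[1]) and satisfies(A, wff[2])       # clause 3
    if op == 'v':
        return satisfies(A, wff[1]) or satisfies(A, wff[2])        # clause 4
    if op == '->':
        return (not satisfies(A, wff[1])) or satisfies(A, wff[2])  # clause 5
    if op in ('all', 'some'):                               # clauses 6 and 7
        x, body = wff[1], wff[2]
        variants = ({**A, x: u} for u in U)                 # differ from A at most at x
        return (all if op == 'all' else any)(satisfies(B, body) for B in variants)
    raise ValueError(op)

def is_true(wff):                                           # Definition 8
    seqs = (dict(zip(VARS, vals))
            for vals in itertools.product(U, repeat=len(VARS)))
    return all(satisfies(A, wff) for A in seqs)

# (∃x) P2(x, sword) is true: <pen, sword> is in the extension of P2.
print(is_true(('some', 'x', ('pred', 'P2', 'x', 'sword'))))        # True
```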

 

Now, to grasp the exposition of the theory this is probably all you need to know, but to understand some of the criticisms I’ll have to mention some more details concerning the interpretation of formulae. You may have noticed that in the explanation I just gave I’d been saying that, for example, P(x2, x3) is satisfied by A=<A1, A2, A3, …> if P(a2, a3) is true where names a2 and a3 stand for objects A2 and A3. But this looks a bit like it’s going to end up begging the question of what makes something true. Of course, Tarski doesn’t actually beg this question. When I talk about the names a2 and a3 standing for objects A2 and A3, I’m actually talking about an interpretation of the formal language. In propositional logic the interpretation was just an assignment of truth values to propositional variables, but things are necessarily much more complex in predicate logic. An interpretation (v) for a predicate logic’s formal language does the following:

 

1.                    constant symbols (names) are interpreted as objects.

For example: v(a1) = Socrates, v(a2) = pen, v(a3) = sword

 

2.                    predicate symbols with n places are interpreted as sets of sequences of n objects.

For example: v(P1) = {Socrates, Plato, Aristotle, …} (here we think of P1 as meaning ‘is a philosopher’).

v(P2) = {<pen, sword>, <dog, cat>, <bulldozer, goldfish>, …} (here we think of P2 as meaning ‘is mightier than’).

 

3.                    function symbols with n places are interpreted as functions from sequences of n objects to objects.

For example: v(F1) = {<0,0>→0, <0,1>→1, <1,1>→2, …} (here we think of F1 as meaning ‘plus’).

 

Given this understanding of interpretation we can fix up the 1st condition in the 7th definition a bit by saying that

 

1.                    X = P(…) for P a predicate; A satisfies P(x1, x2, …, xn) iff P(a1, a2, …, an) is true

and P(a1, a2, …, an) is true iff <v(a1), v(a2), …, v(an)> is in v(P)

For example: P1(a1) is true           iff v(a1) is in v(P1)

        iff Socrates is in {Socrates, Plato, …}

and it is.

P2(a2, a3) is true                             iff <v(a2), v(a3)> is in v(P2)

        iff <pen, sword> is in {<pen, sword>, <dog, cat>, <bulldozer, goldfish>, …}

and it is.

 

and you can see how the rest of the definition’s clauses can fit with this.
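
As a quick sanity check, the interpretation can be written out as ordinary Python dictionaries (a sketch of the examples just given; I store 1-place predicates as sets of 1-tuples so that the ‘sequences of n objects’ picture stays uniform).

```python
# The interpretation v as plain dictionaries, plus the atomic truth clause.
v_const = {'a1': 'Socrates', 'a2': 'pen', 'a3': 'sword'}
v_pred  = {'P1': {('Socrates',), ('Plato',), ('Aristotle',)},   # 'is a philosopher'
           'P2': {('pen', 'sword'), ('dog', 'cat'), ('bulldozer', 'goldfish')}}
v_fun   = {'F1': {(0, 0): 0, (0, 1): 1, (1, 1): 2}}             # 'plus', partially listed

def atomic_true(P, *names):
    """P(a1, ..., an) is true iff <v(a1), ..., v(an)> is in v(P)."""
    return tuple(v_const[a] for a in names) in v_pred[P]

print(atomic_true('P1', 'a1'))         # True:  Socrates is in v(P1)
print(atomic_true('P2', 'a2', 'a3'))   # True:  <pen, sword> is in v(P2)
print(atomic_true('P2', 'a3', 'a2'))   # False: <sword, pen> is not
```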

 


 


[1] Note that this is a pedagogical presentation, and very far from a formally accurate one.

 

Semantics for Natural Language

 

That may be all very well for formal languages, but a language like English looks nothing like that sort of thing. How could we apply the semantic notions that Tarski has shown the value of to English? Well, I’m not going to show you how to do that, of course, but I can show you how you might go about doing it for a tiny fragment of English. And I will claim that there is every reason to think that the same sort of process could be applied to the whole of English. (Not that anyone knows how to do this yet, but there are plenty of people working on it.)

 

Logical Form as a Level of Linguistic Representation

 

The trick is to find a way to convert natural language sentences into something that is structurally analogous to the sort of formal language that we’ve seen Tarski’s theory being applied to. We saw that Russell did something like this when he claimed to uncover the logical form of statements which contained a definite description. The way Russell did this wasn’t entirely satisfactory, however, because it seemed to involve his simply declaring that the paraphrases that he devised represented the logical forms of the sentences. We’d like some system that is a little less open to subjective interpretation. In fact there is a candidate that appears in modern linguistic theory. This candidate is the so-called [‘ell eff’] LF, which first appears as a theoretical entity in Chomsky’s ‘Extended Standard Theory’ (EST) of Transformational Generative Grammar (TGG) (a now-outmoded theory that is nevertheless sufficient for our purposes).[1]

 

[Here’s a brief but important note. In the linguistic literature (and in Formal Semantics undertaken from a linguistic perspective) this LF is also called Logical Form (after all it is supposed to be a mere formalisation of that informal notion), but because I wish to distinguish the linguistic and the philosophical usages of ‘Logical Form’ I shall always use LF to refer to the linguistic object.]

At this point I need to give a quick sketch of what a TGG is like. As the name suggests, there are two distinct parts to the definition. In the first place the grammar is supposed to be ‘transformational’, which just means that there are several levels of linguistic representation for a sentence – each of which plays a different role in the language faculty – and these levels are translatable one into another by means of specific types of transformations. The best known such levels are the Deep-Structure, supposed to be the most basic or initial cognitive representation of a sentence, and the Surface-Structure which is derived from it. In a TGG the actual forms of these structures are restricted to structures generated by a phrase structure grammar, because this allows a finite number of rules to generate an infinite number of structures of the right sort. It is this restriction which is referred to when the grammar is described as ‘Generative’.

 

In the EST a D-structure (a type of Deep Structure) is generated from a Base consisting of the rules of a Phrase Structure Grammar (with some other constraints) together with a Lexicon. The S-structure (a type of Surface Structure) is projected from the D-structure by transformations. From S-structure further transformations produce the ‘Phonetic Form’ (PF), which tells us how to say the sentence. LF in this theory is the syntactic level representing properties relevant to semantic interpretation, and it too is produced from the S-structure by projection rules.

[Diagram omitted: the EST architecture, with the Base generating D-structure, transformations deriving S-structure, and PF and LF each derived from S-structure.]

Deriving the LF

 

For the sake of clarity I’ll just give an indication of how the LF is derived for a couple of sentences. We’ll start with “Bob loved Carol and Ted loved Alice” and then we’ll have a look at “Bob read a book”.

 

a.                                “Bob loved Carol and Ted loved Alice”

 

·         First we establish a language fragment sufficient to express what we regard as the important parts of the sentences. In this case I suppose that the necessary rules include things like

 

G1.                S → NP VP

G2.                S → S Conj S

G3.                S → Neg S

G4.                NP → N

G5.                N → Bob, Carol, Ted, Alice

G6.                VP → V NP

G7.                V → read, loved

G8.                NP → Det Nom

G9.                Det → every, a, the

G10.             Nom → book, man, woman

G11.             Conj → and, or

G12.             Neg → it is not the case that

 

·         Then we analyse the sentence in terms of TGG to get a tree diagram of the S-Structure:

[Tree diagram omitted; the structure is given by the labelled bracketing below.]

                This can be written in labelled bracketed form as

 

[S [S [NP [N Bob]] [VP [V loved] [NP [N Carol]]]] [Conj and] [S [NP [N Ted]] [VP [V loved] [NP [N Alice]]]]]
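
Since we will be manipulating these labelled bracketings repeatedly, here is a small helper (my own convenience, not part of the theory) that turns a bracketing into nested Python lists; the later sketches assume trees in this form.

```python
# Parse a labelled bracketing like '[S [NP [N Bob]] ...]' into nested lists.
import re

def parse(s):
    tokens = re.findall(r'\[|\]|[^\s\[\]]+', s)
    def walk(i):                           # parse one '[Label children...]'
        assert tokens[i] == '['
        node, i = [tokens[i + 1]], i + 2   # the first token is the label
        while tokens[i] != ']':
            if tokens[i] == '[':
                child, i = walk(i)
                node.append(child)
            else:                          # a bare word (a lexical item)
                node.append(tokens[i])
                i += 1
        return node, i + 1
    return walk(0)[0]

print(parse('[S [NP [N Bob]] [VP [V loved] [NP [N Carol]]]]'))
# ['S', ['NP', ['N', 'Bob']], ['VP', ['V', 'loved'], ['NP', ['N', 'Carol']]]]
```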

 

·         As it happens, there are no transformations required in order to get to the LF for this sentence. So it’s this representation of the sentence that we will try to interpret.

 

b.                                “Bob read a book”

 

Now, that example doesn’t involve a difference between LF and S-Structure, so it doesn’t give much of an idea of how the transformations into LF are supposed to work. Nor does it show us how the quantifiers get to be interpreted. So let’s have a look at the second example. We use the same toy language.

 

·         Then we analyse the sentence in terms of TGG to get a tree diagram of the S-Structure:

[Tree diagram omitted; the structure is given by the labelled bracketing below.]

               This can be written in labelled bracketed form as

[S [NP [N Bob]] [VP [V read] [NP [Det a] [Nom book]]]]

 

·         In order to get the LF, the semantic representation, for this sentence we have to apply a rule called Quantifier Raising, which takes quantified NPs to the front of the phrases in which they have effect and leaves behind a trace that plays the role of a bound variable.

 

The form of the rule is:

QR:        [S X NP Y] → [S NPi [S X ti Y]] where NP is Det Nom (as in rule G8).
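
Here is a rough sketch of QR as a tree operation (my own rendering over the nested-list trees produced by the parser above; real QR is stated over richer structures): find a quantified NP of the form [NP Det Nom], replace it with a trace, and adjoin the indexed NP at the front of the S.

```python
# Quantifier Raising over nested-list trees (toy version).
def quantifier_raise(tree, index=1):
    def swap(node):
        """Replace the first [NP [Det ...] [Nom ...]] below node with a trace."""
        for i, child in enumerate(node):
            if isinstance(child, list):
                if child[0] == 'NP' and child[1][0] == 'Det':
                    node[i] = ['NP', f'e{index}']      # leave the trace behind
                    return child
                found = swap(child)
                if found is not None:
                    return found
        return None

    np = swap(tree)
    if np is None:
        return tree                                    # nothing to raise
    return ['S', np + [index], tree]                   # adjoin the indexed NP

s = ['S', ['NP', ['N', 'Bob']],
          ['VP', ['V', 'read'], ['NP', ['Det', 'a'], ['Nom', 'book']]]]
print(quantifier_raise(s))
# ['S', ['NP', ['Det', 'a'], ['Nom', 'book'], 1],
#       ['S', ['NP', ['N', 'Bob']], ['VP', ['V', 'read'], ['NP', 'e1']]]]
```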

 

The resulting LF is:

[Tree diagram omitted; the structure is given by the labelled bracketing below.]

This can be written in labelled bracketed form as

 

[S [NP [Det a] [Nom book]]1 [S [NP [N Bob]] [VP [V read] [NP e1]]]]

 

Now, we can see that this form of the sentence is suggestively similar to the usual LPC formula:

 

(∃x) [ read(Bob, x) ]

 

which suggests that the linguistic derivation of LF will indeed allow a principled derivation of the logical form expressed in the style of predicate logic.

 

Interpreting the LF

 

So how do we go about applying an interpretation to the LF? We’ll look at those two example sentences again.

 

a.                    “Bob loved Carol and Ted loved Alice”

 

LF = [S [S [NP [N Bob]] [VP [V loved] [NP [N Carol]]]] [Conj and] [S [NP [N Ted]] [VP [V loved] [NP [N Alice]]]]]

 

·         Here’s a possible interpretation for the language.

 

M = <U, v>, where:

 

M1.              U =                          {Bob, Carol, Ted, Alice, Iliad, Aeneid}

M2.              v(Bob) =                 <Bob>, etc.

M3.              v(read) =                {<Bob, Iliad>, <Carol, Aeneid>, <Alice, Iliad>}

M4.              v(loved) =              {<Bob, Iliad>, <Carol, Aeneid>, <Ted, Alice>}

M5.              v(book) =               {Iliad, Aeneid}

M6.              v(man) =                {Bob, Ted}

M7.              v(woman) =           {Carol, Alice}

M8.              v(and) =                 {<1,1>→1,

                                                       <1,0>→0,

                                                       <0,1>→0,

                                                       <0,0>→0}

M9.              v(neg) =                 {<1>→0,

                                                       <0>→1}

 

·         Here are the rules we’ll need for interpreting the language.

 

R1.                v( [A B ] ) =                             v( [ B ] ) for categories A and B

R2.                v( [ S1 Conj S2 ] ) =                v( [ Conj ] ) (v( [ S1 ] ), v( [ S2 ] ) )

R3.                v( [ Neg S ] ) =                       v( [ Neg ] ) (v( [ S ] ) )

R4.                v( [ NP VP ] ) = 1 iff              v( [ NP ] ) is in v( [ VP ] ), otherwise 0.

R5.                v( [ V NP ] ) =                        {x: <x, v( [ NP ] )> is in v( [ V ] )}

 

·         Here’s how we go about applying the rules.

 

1.                    By R2, v( [ [S Bob loved Carol ] [Conj and ] [S Ted loved Alice] ] ) =

v( [ and ] ) ( v( [S Bob loved Carol ] ), v( [S Ted loved Alice] ) )

2.                    By R4, v( [ [NP Bob ] [VP loved Carol] ] ) = 1 iff        

v( [NP Bob] ) is in v( [VP loved Carol] )

3.                    By M2, v( [NP Bob] ) = Bob

4.                    By R5, v( [ [V loved ] [NP Carol ] ]) = {x: <x, v([NP Carol ] )> is in v([V loved ] )}

5.                    By M4 we find that this is the empty set.

6.                    Therefore Bob (3) is not in this set.

7.                    Therefore by (2) v( [ Bob loved Carol ] ) = 0

8.                    We similarly find that v( [ Ted loved Alice ] ) = 1

9.                    By (1) v( [Bob loved Carol and Ted loved Alice ] ) = 0

 

which means that the sentence is false.
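
The whole derivation can be checked mechanically. Here is a runnable sketch (my own encoding of the model M and rules R1, R2, R4 and R5; it handles only this little fragment) that evaluates the LF of ‘Bob loved Carol and Ted loved Alice’ and, as in step 9, returns 0.

```python
# The model M and interpretation rules, just enough for this fragment.
v = {'Bob': 'Bob', 'Carol': 'Carol', 'Ted': 'Ted', 'Alice': 'Alice',
     'loved': {('Bob', 'Iliad'), ('Carol', 'Aeneid'), ('Ted', 'Alice')},  # M4
     'and': lambda p, q: min(p, q)}   # min on {0, 1} is the truth table M8

def val(node):
    if isinstance(node, str):
        return v[node]                                    # lexical lookup
    label, *kids = node
    if len(kids) == 1:
        return val(kids[0])                               # R1: pass the value up
    if len(kids) == 3 and kids[1][0] == 'Conj':           # R2: [S1 Conj S2]
        return val(kids[1])(val(kids[0]), val(kids[2]))
    if kids[0][0] == 'NP' and kids[1][0] == 'VP':         # R4: subject in VP-set?
        return 1 if val(kids[0]) in val(kids[1]) else 0
    if kids[0][0] == 'V':                                 # R5: {x: <x, NP> in V}
        return {x for (x, y) in val(kids[0]) if y == val(kids[1])}
    raise ValueError(node)

lf = ['S',
      ['S', ['NP', ['N', 'Bob']], ['VP', ['V', 'loved'], ['NP', ['N', 'Carol']]]],
      ['Conj', 'and'],
      ['S', ['NP', ['N', 'Ted']], ['VP', ['V', 'loved'], ['NP', ['N', 'Alice']]]]]
print(val(lf))   # 0: nobody loved Carol in M, so the first conjunct fails
```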

 

b.                    “Bob read a book”

 

[S [NP [Det a] [Nom book]]1 [S [NP [N Bob]] [VP [V read] [NP e1]]]]

 

·         We’ll use the same interpretation, but now we’ll also need to have an assignment.

 

M10.       let g be a function from traces (ei) to U

 

                Let’s suppose that it’s a constant assignment, with g(ei) = Iliad.

 

·         We’ll need a few extra rules for interpreting the language.

 

R6.                For category A, if X is a trace then   v( [A X ] ) = g(X)

else         v( [A X ] ) = v(X)

R7.                v( [ [a X]i S ] ) = 1 iff              for some u in U, u is in v(X)

and, with assignment g[u | ei], v( [ S ] ) = 1

 

·         And here’s how we go about applying the rules.

 

1.                    By R7, v( [ a book ]1 [Bob read e1 ] ) = 1 iff for some u in U, u is in v(book)

and, with assignment g[u | e1], v( [ Bob read e1 ] ) = 1

2.                    In M only ‘Iliad’ and ‘Aeneid’ are in v(book) so we only test

v( [ Bob read e1 ] ) with g[Iliad | e1] and g[Aeneid | e1]

3.                    Consider the case for ‘Iliad’.

By R4, v( [ Bob read e1 ] ) = 1 with g[Iliad | e1] iff

v( [ Bob ] ) is in v( [read e1 ] ) with g[Iliad | e1]

4.                    Consider the RHS.

By R5, v( [read e1 ] ) with g[Iliad | e1]

= {x: <x, v( [e1 ] ) with g[Iliad | e1]> is in v( [ read ] )}

                = {x: <x, Iliad > is in v( [ read ] )}

5.                    We see that <Bob, Iliad> and <Alice, Iliad> are in v( read ) so

v( [read e1 ] ) with g[Iliad | e1] = {Bob, Alice}

6.                    Now consider the LHS of step 3.

Since v(Bob) = Bob, and Bob is in {Bob, Alice} we find

v( [ Bob read e1 ] ) = 1 with g[Iliad | e1]

7.                    From this we can say that for u = Iliad, u is in U and u is in v( book ) and

v( [ Bob read e1 ] ) = 1 with g[u | e1]

8.                    Thus by step 1, v( [ a book ]1 [Bob read e1 ] ) = 1
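
The existential step can be condensed into a couple of lines (again my own sketch, reading R7 directly off the model rather than rerunning the whole rule system): the sentence is true iff some u in v(book) makes ‘Bob read e1’ true under the assignment g[u | e1].

```python
# R7 in miniature: look for a witness among the books.
v_book = {'Iliad', 'Aeneid'}
v_read = {('Bob', 'Iliad'), ('Carol', 'Aeneid'), ('Alice', 'Iliad')}

witnesses = [u for u in v_book if ('Bob', u) in v_read]
print(bool(witnesses), witnesses)   # True ['Iliad']: the Iliad is the witness
```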

 


 


[1] Chomsky (1981)

 

Problems for Davidson's Project

 

1.                   Potential Circularity

 

Let’s cast our minds back to the Tarskian T-schema that Davidson took to describe a set of theorems or statements that an adequate theory of meaning for a language must generate. You’ll recall it is:

 

(T)           ‘S’ is true if and only if M

               

where ‘S’ is the label of a sentence in the object language, and M is that sentence if the metalanguage and the object language are identical, or a translation of that sentence if the metalanguage is different from the object language. As I mentioned, examples of instantiations of this schema are statements like:

 

                ‘the sky is blue’ is true iff the sky is blue

                ‘Frank is eating an ice cream’ is true iff Frank is eating an ice cream

               

 

and of course

 

                ‘Der Himmel ist blau’ is true iff the sky is blue

                ‘Frank ißt ein Eis’ is true iff Frank is eating an ice cream

               

 

where the object language is German and the metalanguage is English, and the phrases in the metalanguage on the RHS (corresponding to M in T) are supposed to be translations of the sentences in the object language on the LHS (corresponding to S in T.) At the time I mentioned that we’d have to go back to look more closely at this essential use of the notion of ‘translations’ in the definition of the T-schema. Now is the time to do so.

 

When we talk about M being a translation of S, what we really mean is that M is a statement in the metalanguage that means the same thing as the statement S in the object language. When Tarski came up with his T-schema it was perfectly reasonable for him to say that his definition of truth could appeal to a notion of meaning. There is no circularity there. But what Davidson wants to do is to find a definition of meaning via a definition of truth, and therefore there is at least a prima facie case that he cannot be allowed to make use of the notion of meaning in these T-schemas. It looks as if meaning is to be defined in terms of truth and truth is to be defined in terms of meaning. If that is the case then we would seem to have a really obnoxious circularity here, and the Davidsonian project must fail to contribute to an understanding of either problem.

 

Note that one can’t just ignore the whole question of M being a translation. If one declared that a definition of meaning for a language could only be stated in that language – surely unsatisfactory in itself – it would mean that Davidson would only be able to derive

 

(P)        ‘S’ is P if and only if M

 

where M is S, as the schema for statements that must be generated by an adequate theory of meaning for a language. But this means that the predicate/property P and the predicate/property ‘true’ are no longer necessarily coextensive. They can no longer be seen to hold of just the same objects. So there’s no reason to identify them.

 

Of course, Davidson knows all about this problem.

 

In Tarski’s work, T-sentences are taken to be true because the right branch of the biconditional is assumed to be a translation of the sentence truth-conditions for which are being given. But we cannot assume in advance that correct translation can be recognised without pre-empting the point of radical interpretation; in empirical applications, we must abandon the assumption. What I propose is to reverse the direction of explanation: assuming translation, Tarski was able to define truth; the present idea is to take truth as basic and to extract an account of translation or interpretation.

 

You can see how this is supposed to work. If we suppose that we’ve got a theory about truth, and we have a (possibly distinct) theory that allows us to derive a theory of translation (one that doesn’t appeal to meanings), then it is legitimate to use those two concepts to derive a theory of meaning. But if this is what Davidson thinks then he’s going to have to say that a theory of meaning of his sort has the resources required for correctly interpreting speakers of that language. And, in fact, this is one of the adequacy conditions that Davidson does mention for his theory.

 

The question now is whether we can derive a satisfactory theory of interpretation that may appeal to truth but not to meaning. Davidson, as you might expect, thinks that we can. He thinks that there are conditions that a potential interpretation has to satisfy in order that it can even count as an interpretation, and that these conditions can be defined by recourse to statements that may make use of the notion of truth but that do not make use of the notion of meaning. In this way the notion of translation that we need can be introduced by reference just to the conditions of interpretation and truth, and need not be presupposed.

 

Now, Davidson’s theory of interpretation, though he claims that it is essential to his theory of meaning, cannot detain us for too long. Eventually, everything turns out to be relevant in some way or another, and one has to draw a line somewhere. To be as succinct as possible, Davidson thinks that:

 

(I)            an adequate theory of meaning is one that establishes the principle of charity as a constitutive condition for a successful interpretation.

 

Davidson has in mind that the principle of charity is to be applied to claims about what the utterer of a sentence holds to be true. Thus the claim that the utterer of the statement ‘Der Himmel ist blau’ holds it to be true if and only if the sky is blue is taken to be evidence for the claim that:

 

                ‘Der Himmel ist blau’ is true iff the sky is blue

 

I’m sure you can fill out the rest of the details of this charitable principle to your own satisfaction.

 

2.                   Limited Applicability

 

There are several less important difficulties that are not particularly related to Davidson’s argument for the truth-conditional version of a semantic theory, but will apply, it would seem, to any such theory. The first of these is the simple observation that not all sentences are the sorts of things that are appropriately interpreted in terms of their truth conditions. The sorts of sentences that resist this treatment are:

 

1.                    interrogatives:      e.g. ‘What’s my name?’

2.                    imperatives:          e.g. ‘Shut the door!’

3.                    optatives:              e.g. ‘If only this lecture were over!’

 

This is the same sort of defect that Frege (or rather, Dummett interpreting Frege) noted in any interpretation of his theory that would take meanings to be senses. He was convinced that meaning was a vague term that could be divided into various subcategories – in particular, tone and force. The distinction that we’re talking about now is what he labelled the force of a statement. Perhaps proponents of truth-conditional semantic theories should be satisfied with this sort of dismissal of the problem. There is, after all, no particular reason to think that folk notions of what meaning is supposed to include are going to be scientifically coherent. Perhaps those theorists should be happy to claim that their semantic theory is only sufficient to treat what we might call the ‘content’ of statements.

 

3.                   Accidental Coextension of Truth Conditions

 

One of the most productive criticisms is the standard objection to extensional or referential theories of meaning. Consider the standard counterexample to the equivalence of meaning and truth, which I know you’ve seen before: ‘renate’ (creature with a kidney) and ‘cordate’ (creature with a heart) mean quite different things even though the things which are renates are just the things that are cordates. In that case the sentence

 

a.        ‘Bob is a renate’ is true iff Bob is a renate.

 

seems to be extensionally indistinguishable from the sentence

 

b.       ‘Bob is a renate’ is true iff Bob is a cordate.

 

Does this mean that Davidson’s extensionalist project is fundamentally limited?

 

Not necessarily. To begin with we have to note that the Davidsonian claim is just that a theory of meaning for a language will generate certain schematic statements as theorems. It is explicit that, in the case where the object language and the metalanguage are identical, the RHS of such a theorem will be identical to the sentence named in the LHS of the theorem. This is certainly not the case for sentence b above.

 

Of course, the identification of object language and metalanguage may be seen as somewhat artificial, especially since the definition of a T-sentence includes the condition that the RHS may be simply a translation of the sentence named in the LHS. But if that is the way that we’re going to look at things (and we can treat the identity of metalanguage and object language as the special case in which the two languages being translated between are the same language), then one option for the defence of the Davidsonian position is to claim that there is a possible – even a likely – charitable interpretation of ‘renate’ that distinguishes it from ‘cordate.’ We have established that the role of charity in this theory is indicated by the fact that the claim <<that the utterer of the statement ‘Bob is a renate’ holds it to be true if and only if Bob is a renate>> is taken to be evidence for the claim that:

 

                ‘Bob is a renate’ is true iff Bob is a renate

 

and not for the claim that

 

                ‘Bob is a renate’ is true iff Bob is a cordate

 

Note that the difference can exist because to say that someone ‘holds something to be true’ is not an extensional claim, as the claim that ‘something is true’ would be, but an intensional one, like the claims that I want, believe, or know that something is true.

 

Another way of looking at this problem is to say, as Davidson does say, that the meaning of a sentence is given by the theoretical derivation of the T-sentence in which that sentence replaces S. We get theorem b from a by applying normal logical rules to a and the added premiss “X is a renate iff X is a cordate.” In that case it might be possible to identify the added premiss as objectionable. Or we might note that the derivation – the entire proof procedure – for a is quite different from the proof procedure for b, and therefore the meanings that are defined by the two T-sentences are quite different.

 

Yet another way of handling this problem of intensional contexts, for that is essentially what is at issue here, is to expand the model theory for the language to include possible worlds. I will not inflict this improvement upon you, but I will remark that the Davidsonian story with this intensionalist improvement starts to look very like the propositional entity theory of meaning with propositions identified as sets of possible worlds. The expansion of this I leave for a final essay question. [see me for guidance.]