Paraconsistent Logic

 


 

Introduction

  

Aristotle had some comments about contradiction. He said that no contradiction could be true (see Metaphysics Γ), or more exactly, that "a thing cannot at the same time be and not be." This has been accepted pretty generally, and so has the corollary that any statement that a thing is the case and is not the case (simultaneously) must be false. I.e. (p & ~p) cannot be true, a formula you'll recognise from before as the Law of (Non-)Contradiction. This acceptance persists even though Aristotle's particular arguments for the claim have been challenged, by Łukasiewicz among others.

 

In one sense the question of whether or not a contradiction can be true depends upon the view that you take of the truth values. Are T and F exhaustive and exclusive over the range of truth values? If we're prepared to take seriously the idea of many-valued logics, with truth values like 'Indeterminate' and so on, then we have to accept that the two aren't exhaustive. We have not yet challenged the assumption that they are exclusive, but the idea that they are not is a very old one. Anyone who's had the opportunity of reading some Indian philosophy will be familiar with the catuskoti or 'four-cornered negation', where the Indian philosopher answers the question 'Is it the case that A?' by saying on the one hand yes, on the other hand no, and also neither yes nor no, and also both yes and no. The Buddha, for example, thought that there was a class of questions which were in some way defective or undetermined, so that to the question 'Does the Buddha continue after death?' ('What direction does the flame go when it goes out?') he would have to answer 'Not yes, not no, not neither yes nor no, and not both yes and no.' The suggestion here is that we can conceive of four truth values, arranged as a lattice:

 

                                 {T, F}
                                /      \
                             {T}       {F}
                                \      /
                                   ∅

 

This is also what we'd get if we thought of T and F as labels for overlapping sets of items (sentences). But can this really be made sense of?
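Taking the subset picture literally, the lattice can be sketched in a few lines of Python (a sketch of my own; the names are not from the text):

```python
# The four truth values modelled as subsets of {T, F}.
T, F = "T", "F"
n = frozenset()           # neither true nor false
t = frozenset({T})        # just true
f = frozenset({F})        # just false
b = frozenset({T, F})     # both true and false

# The lattice order in the diagram is simply subset inclusion:
# the empty set sits at the bottom, {T, F} at the top, and
# {T} and {F} are incomparable in between.
def leq(x, y):
    return x <= y
```
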

 

The Buddha's use also suggests that true contradictions, if there are such, are somehow related to the evaluation of sentences with problematic meanings; and we can see a way of justifying the proposal accordingly.

 

Consider again the Liar Paradox.

 

L2: This statement is false.

 

Previously we thought this was paradoxical because it seemed to be impossible to assign it one of our two truth values: if it was false then it was true, and if it was true then it was false. But did we consider that perhaps it could really be both? Here is possibly an example of a true falsehood, or a false truehood, or, anyway, a true contradiction - if we think that T and F are exhaustive.

 

Belnap's FDE

 

Another way of looking at it is via the purely formal properties of classical logic. The thing that is wrong with a contradiction, classically, is that if it is accepted then everything follows: the logic is explosive, or trivialising.

 

For example, consider this standard natural deduction derivation in PL:

 

     1.   p & ~p                         Assumption

     2.   p                              1, Simp

     3.   p v q                          2, Add

     4.   ~p                             1, Simp

     5.   q                              3, 4, DS

 

This is the principle of ex contradictione quodlibet (ECQ). Now a logic that says that everything is true or acceptable or whatever is not very useful. If a logic results in this sort of explosion then it is no use at distinguishing good from bad arguments, for example. For that reason contradictions are verboten in classical logic. But bear in mind that the only reason classical logic explodes in this way given a contradiction is the very particular laws of deduction that are said to hold. If we proposed other laws which did not result in explosion, then contradictions wouldn't, formally, be objectionable. Any such logic is said to be paraconsistent, and we'll have a look at an example of one now.
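The point about explosion can be checked by brute force: classically, p & ~p entails anything, because no valuation makes the premise true. Here is a small Python sketch (my own encoding of classical entailment, not anything from the text):

```python
from itertools import product

# Classical entailment by brute force over the four valuations of p, q:
# the premises entail the conclusion iff every valuation making all the
# premises true also makes the conclusion true.
def entails(premises, conclusion):
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# ECQ: p & ~p entails q -- vacuously, since no valuation makes p & ~p true.
explosive = entails([lambda p, q: p and not p], lambda p, q: q)
```

By contrast, p on its own does not entail q, so the check is not trivial.
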

 

Call this FDE (Belnap's 'First Degree Entailment'). It has the same language (wffs) as PL.

 

Whereas the semantics for classical logic is just a function from wffs in PL to {T, F}, the semantics for FDE is a function val from wffs in FDE to {∅, {T}, {F}, {T, F}}, or equivalently {∅, {1}, {0}, {1, 0}}, or {n, 1, 0, b}.

Here we're interested not just in True statements but in statements that are 'at least True'. So we designate the values 1 and b for this system and set D = {1, b}.

An interpretation is admissible if it fits the following tables:

 
   ~            & | 1  b  n  0        v | 1  b  n  0
   1 | 0        1 | 1  b  n  0        1 | 1  1  1  1
   b | b        b | b  b  0  0        b | 1  b  1  b
   n | n        n | n  0  n  0        n | 1  1  n  n
   0 | 1        0 | 0  0  0  0        0 | 1  b  n  0

 

and we can define a consequence relation as 'truth preservation' over all admissible valuations; i.e. if every premiss in an argument is at least true then the conclusion is also at least true. So

 

S ⊨ A iff (for all B ∈ S, val(B) ∈ D) implies that val(A) ∈ D

 

A is valid just when ∅ ⊨ A
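These clauses can be modelled by taking the four values literally as subsets of {1, 0}, so that n = ∅ and b = {1, 0}. The following Python sketch is one possible encoding (the function names are mine, not the text's):

```python
ONE, ZERO = 1, 0

def neg(v):
    # ~A is at least true iff A is at least false, and vice versa:
    # negation just swaps 1 and 0 inside the set.
    out = set()
    if ZERO in v:
        out.add(ONE)
    if ONE in v:
        out.add(ZERO)
    return frozenset(out)

def conj(v, w):
    # A & B is at least true iff both conjuncts are;
    # at least false iff either conjunct is.
    out = set()
    if ONE in v and ONE in w:
        out.add(ONE)
    if ZERO in v or ZERO in w:
        out.add(ZERO)
    return frozenset(out)

def disj(v, w):
    # The dual clauses, obtained here via De Morgan.
    return neg(conj(neg(v), neg(w)))

def designated(v):
    # D = {1, b}: exactly the values containing 1 ('at least true').
    return ONE in v
```

Running the four values through these functions reproduces the tables above; e.g. b & 0 = 0 and b v 0 = b.
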

 

There are no logical truths in this language, as you can see by imagining what happens if val is such that val(A) = n for all A: nothing is ever designated. We can get all the classical logical truths back by restricting the admissible valuations to the set {1, b, 0}, i.e. by dropping n, in which case we get the following tables for a logic called LP (Asenjo's 'Logic of Paradox'):

 
   ~            & | 1  b  0        v | 1  b  0
   1 | 0        1 | 1  b  0        1 | 1  1  1
   b | b        b | b  b  0        b | 1  b  b
   0 | 1        0 | 0  0  0        0 | 1  b  0

 

What we can't get back, however, is the Disjunctive Syllogism: A, ~A v B ⊨ B. As a consequence we don't have MP for the material conditional (A ⊃ B defined as ~A v B) either.
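The failure of DS can be read straight off the tables: take val(p) = b and val(q) = 0. A quick check, encoding the ~ and v tables as lookup dictionaries (my own sketch):

```python
# The FDE/LP tables for ~ and v, as given above.
NEG  = {"1": "0", "b": "b", "n": "n", "0": "1"}
DISJ = {("1", "1"): "1", ("1", "b"): "1", ("1", "n"): "1", ("1", "0"): "1",
        ("b", "1"): "1", ("b", "b"): "b", ("b", "n"): "1", ("b", "0"): "b",
        ("n", "1"): "1", ("n", "b"): "1", ("n", "n"): "n", ("n", "0"): "n",
        ("0", "1"): "1", ("0", "b"): "b", ("0", "n"): "n", ("0", "0"): "0"}
D = {"1", "b"}   # the designated values

# Counterexample to DS: val(p) = b, val(q) = 0.
p, q = "b", "0"
premise1 = p                     # p has value b: designated
premise2 = DISJ[(NEG[p], q)]     # ~p v q = b v 0 = b: designated
conclusion = q                   # q has value 0: undesignated
```

Both premises are designated but the conclusion is not, so DS is invalid here. Note the valuation uses only {1, b, 0}, so the counterexample works in LP as well as FDE.
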

 

Tree Rules for FDE

 

The tree rules for logics like this are quite similar to those for 'normal' logics. The biggest difference, which you'll notice immediately, is that for each formula to be decomposed there are different rules for the assumption that it is given a designated value and the assumption that it has an undesignated value. Accordingly, the rules are tagged with ⊕ and ⊖ respectively. So:

 

&            (A & B) ⊕               (A & B) ⊖

                 |                    /     \

               A ⊕                 A ⊖       B ⊖
               B ⊕


~&          ~(A & B) ⊕              ~(A & B) ⊖

              /     \                    |

          ~A ⊕       ~B ⊕             ~A ⊖
                                      ~B ⊖


v            (A v B) ⊕               (A v B) ⊖

              /     \                    |

           A ⊕       B ⊕               A ⊖
                                       B ⊖


~v          ~(A v B) ⊕              ~(A v B) ⊖

                 |                    /     \

              ~A ⊕                ~A ⊖      ~B ⊖
              ~B ⊕


~~             ~~A ⊕                  ~~A ⊖

                 |                       |

               A ⊕                     A ⊖

 

A branch of the tree is closed if A ⊕ and A ⊖ both appear in that branch. (Note: not just if A and ~A appear. Why?)

When we are testing an argument A1, ..., An, B for validity (see the definition above), we try the truth tree for A1 ⊕, ..., An ⊕, B ⊖.

When we are testing A for tautology in FDE we actually try the truth tree for A ⊖.

 

Counterexamples

 

A counterexample for the argument A1, ..., An, B is a valuation that gives A1 ⊕, ..., An ⊕, and B ⊖; that is, one that designates each premise but not the conclusion. Similarly, for the tautology A we seek a val such that val(A) ∉ D.

Construct the counterexample valuation as follows.

 

1.    Apply the tree rules until we have only propositional variables and their negations left: p, q, ..., ~p, ~q, ....

       Now climb an open branch.

2.    i.    If p ⊕ occurs then val(p) ∈ D. If ~p ⊕ occurs then val(~p) ∈ D.

       ii.   If p ⊖ occurs then val(p) ∉ D. If ~p ⊖ occurs then val(~p) ∉ D.

3.    i.    If val(p) ∈ D and val(~p) ∈ D then val(p) = b.

       ii.   If val(p) ∉ D and val(~p) ∉ D then val(p) = n.

       iii.  Assume that only one of val(p) ∈ D and val(~p) ∈ D.

             a.    If val(p) ∈ D then val(p) = 1.

             b.    If val(~p) ∈ D then val(p) = 0.

       iv.   Assume that only one of val(p) ∉ D and val(~p) ∉ D.

             a.    If val(p) ∉ D then val(p) = 0.

             b.    If val(~p) ∉ D then val(p) = 1.
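Steps 2 and 3 can be sketched as a little routine that reads a valuation off the signed literals of an open branch. This is my own encoding, not the text's: a literal is a triple of (variable name, whether it is negated, whether its sign is ⊕), and any literal not signed ⊕ on the branch is treated as undesignated, defaulting toward n:

```python
def read_valuation(branch):
    # branch: a set of signed literals (name, negated, designated_sign).
    val = {}
    for name in {lit[0] for lit in branch}:
        p_des  = (name, False, True) in branch   # p (+) on the branch
        np_des = (name, True, True) in branch    # ~p (+) on the branch
        if p_des and np_des:
            val[name] = "b"      # step 3.i: both designated
        elif p_des:
            val[name] = "1"      # step 3.iii.a
        elif np_des:
            val[name] = "0"      # step 3.iii.b
        else:
            val[name] = "n"      # step 3.ii: neither designated
    return val
```

For example, the open branch from testing DS contains p ⊕, ~p ⊕ and q ⊖, and the routine returns val(p) = b, val(q) = n, which is indeed a counterexample to that argument.
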

 

Further Reading

 

Beall and van Fraassen, Possibilities and Paradox (Oxford: OUP, 2003), ch. 7.