Explanations | |||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Introduction |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
In
this course we have seen how arguments may be considered, very roughly
speaking, as a series of statements that are intended to justify some
disputed statement. It is said that there are other uses for arguments
– as excuses or refutations or confirmations or what have you – but
all those other uses of arguments are probably best looked at as
derivative from this primary usage, and most of them don’t need to be
looked at any further than that. There is, however, another attitude
that we can take towards the conclusion of an argument which gives rise
to a form of argument (in some people’s view) which is really quite
distinct. What I have in mind is the class of explanatory
arguments – or explanations. I
have mentioned several times in the course of these lectures that
explanations are closely related to arguments: they often have the same
structure and employ many of the same techniques. But an explanation,
when it is called for, is asked to explain why
some state of affairs obtains, not to convince anyone that it obtains; and, consequently, in an explanatory
argument it is a description of that state of affairs that occupies the
place of the target proposition. For
example, if one asks why some computer is not working, in the context it
is assumed that the computer does not work and we are seeking some
account or explanation of this fact which, in the context, is
uncontested or accepted. We are, therefore, not seeking to justify
the phenomenon in question (e.g. the malfunctioning of the computer); we
are seeking reasons for it that will shed light on it. One way in which
reasons might be offered and light shed is by way of argument. We offer
reasons to explain an accepted state of affairs by constructing an
argument which has a description of the state of affairs as its
conclusion, and the reasons as premises of the argument. Thus we might
say that the computer is malfunctioning because
it is not plugged in and without power such machinery will not work.
This use of the connective ‘because’ indicates the presence of an
argument, but in the context it cannot be considered a justificatory
argument since, in the context, the conclusion is not in dispute. We are
dealing with an explanatory use of the term 'because'. As
explanatory arguments – arguments used to provide explanations, to
explain why some accepted fact or event has occurred – seem quite
distinct from other forms of arguments, it is normal to use a special
vocabulary when talking about them. And I'll start by saying that, from now on, I'm going to talk consistently about justificatory arguments as just plain arguments and about explanatory arguments as just plain explanations.

D1. The explanandum is that which is to be explained in an explanation.

D2. The explanans is that which does the explaining in an explanation.

The difference between explanations and arguments can be diagrammatically represented thus:

    Argument                  Explanation
    Reasons                   Explanans
       ↓                          ↓
    Conclusion                Explanandum
    (disputed)                (not disputed)
|
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Causal Explanations |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
There are several types of explanation:
1. Intentional explanations explain some fact about the world in terms of the beliefs and desires of some actor. (BDA psychology.)
2. Functional explanations explain things in terms of the ends which those things are supposed to serve.
3. Causal explanations explain things in terms of the things which cause them.
Causal
explanations are by far the most important. Just what justifies us in
calling something a/the ‘cause’ of something else is an ongoing
philosophical problem that we can't dwell too much on here. Let's
just take it for granted that we can say that event A is the cause
of event B in the real world if, in any possible world that is exactly like the real world except that A does not occur, B does not occur either. Such
arguments are often said to proceed from the explanans, consisting of
general principles or laws together with initial conditions, to the phenomenon to
be explained, the explanandum. That is, they are often seen as
instantiating the following form:

    General principles or laws (causal generalisations)
    +
    Initial conditions
    ------------------------------------------------
    ↓
    The explanandum (the phenomenon to be explained)

Given this form of explanation it's clear that an important part of the
search for an explanation of an event is going to be the search for
general principles which, in conjunction with other facts, would show
why it happened — these principles are causal
generalisations. Later on we’ll have to look more closely at
this part of the process of forming or discovering an explanation, but
let’s first get an idea of the sort of thing that we’re talking
about. Example
I find my pet dog dead in the yard, and testing
reveals the presence of funnel-web spider venom in the dog’s blood. I then explain
the dog's death by saying it was bitten by a funnel-web spider, which,
in conjunction with general principles about the effects of such toxins on
dogs, logically implies the dog's death. In this example the explanandum is the dog's death, and the explanans is the fact that the dog was bitten by a funnel-web spider together with the causal generalisation that dogs bitten by funnel-webs die. It
is important to note that we use the same general principles – causal
generalisations – in conjunction with other facts to predict
what will or may happen in relevant situations. This ability to
predict is one of the criteria that we use to distinguish good
explanations from not-so-good explanations. We’ll talk more about
these criteria later too, but let’s see an example of it now. Example
I find my pet dog being bitten by a funnel-web spider, and then predict the dog’s death. In
fact the prediction of the death proceeds from the fact of the dog’s
being bitten in conjunction with
the causal generalisation that all dogs die when bitten by
funnel-web spiders. In this example the same materials – the fact of the bite together with the causal generalisation – are used to predict the dog's death rather than to explain it.
|
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Causal Generalisations |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
So
what are these causal generalisations? Though contested, we shall treat
them in the following way. D3. Causal generalisations are general
conditionals asserting a causal
relationship between events. Example
Consider the causal generalisation above: ‘Any dog
dies if bitten by a funnel-web’.
It is of the form:
‘For any x, if x is a dog bitten by a funnel-web, then x is a
dog that will die’
or, in more English form, ‘For anything, if it is a dog bitten by a
funnel-web, then it is a dog that will die’
So: – it is a general conditional (‘For any x, if x is an F
then x is a G’) – that asserts a causal relationship (as opposed to, say, a mathematical or legal
one)
– between events (the event
of a dog’s being bitten and the event of its dying). I
believe I have remarked a few times that conditionals often assert causal
connections. I made this point, for example, when we briefly had a
look at propositional logical translations of English language
sentences. The so-called material conditional, which we took to
translate ‘if A then B’ was said to be true whenever A was false or
B was true. But this led to peculiar results like saying that ‘if 2 +
2 = 5 then giraffes are fish’ is a true statement. The problem was
that that interpretation of the conditional ignored much of the meaning
that the conditional has in real language. The possibility of causal
implications, for example, is ignored. A more common use of the
conditional which has such causal implications is something like: If air is removed from a solid closed container, the
container will weigh less than it did In
which case it is implied that the cause of the weight-loss is the
removal of air. Let
me emphasise, however, that there are not always such implications. For
example: If a shape is a square then it is a rectangle There
is no causal connection being alluded to, but rather a definitional
connection. But
suppose we do have a causal
generalisation, for example: For any x, if x is an F then x is a G Then
we can say that
x’s having feature F is a
causally sufficient condition for its having feature G and
x's having feature G is a causally necessary condition for its having feature F. Example
A dog's being bitten by a funnel-web is a causally
sufficient condition for its dying.
Its dying is a causally necessary condition of its being bitten
by a funnel-web. These
are actually fairly important concepts to get straight on if we’re
going to understand explanations, so let’s have a bit of a digression
here.
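
Before that digression, the content of D3 can be put compactly in logical notation. This is only a restatement of the prose above, using the schematic letters F and G already in play:

    % A causal generalisation is a general conditional:
    % 'For any x, if x is an F then x is a G'
    \[ \forall x\,(Fx \rightarrow Gx) \]
    % Read causally, this licenses two claims:
    %   F is a causally sufficient condition for G, and
    %   G is a causally necessary condition for F.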
|
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Necessary and Sufficient Conditions |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Sufficient Conditions

Necessary and sufficient conditions can be defined more generally as follows.

D4. A is said to be a sufficient condition for B just in case, if A is true then B is true as well.

Which you may recognise as being the same as saying that it is true that if A then B. So to assert the conditional 'If A then B' is to claim that A is sufficient for B. So:

    If A then B   ⇒   A is a sufficient condition for B

Example
If Phil is a wombat then he is a mammal. So, being a
wombat is a sufficient condition for being a mammal.

Necessary Conditions

D5. A is said to be a necessary condition for B just in case B is true only if A is true.

Which, again, you may recognise as being the same as saying that B is not true if A is not true, or: if B is true then A is true, or: if B then A. So to assert the conditional 'If B then A' is to claim that A is necessary for B. So:

    If B then A   ⇒   A is a necessary condition for B

Example
An argument is sound only if it is valid; if it is
not valid then it is not sound; if it is sound then it is valid. So,
being a valid argument is a necessary condition for being a sound
argument. I
guess the upshot of all this is that we can identify what is being
claimed as a sufficient or necessary condition for something if we can
identify the conditional being asserted. To
assist in this identification, note that the following are equivalent:

    If A is a sufficient condition for B then it is true that:
        If A then B   – which is to say –   B, if A.
    If A is also necessary for B then it is true that:
        If B then A   – which is to say –   B only if A.

Biconditionals

So, to say that A is both necessary and sufficient for B amounts to claiming:
    B if A and B only if A, or
    B if and only if A
    (sometimes abbreviated to 'B iff A')
of this last kind — i.e. ‘... if and only if ...’ – are known as
biconditionals.
Biconditionals give necessary and
sufficient conditions and so are often used to state the exact
conditions under which something is caused or the exact definition of a
concept. (Note that ‘B iff A’ means exactly the same thing as ‘A
iff B’.) Example
Consider the following definition using a biconditional:
An object is a square if and only if
(i) it is a rectangle
& (ii)
it has sides of equal length. It states that: (a)
An object is a square only if (i), it is a rectangle. Which is
the same as saying
If an object is a square then it is a rectangle. and
An object is a square only if (ii), it has sides of equal length. Which is the
same as saying
If an object is a square then it has sides of equal length. So
each of (i) and (ii) independently
is necessary for being a square (b) If both (i),
an object is a rectangle, and (ii), it has sides of equal length, then
the object is a square. So
(i) and (ii) jointly (though
not separately) are sufficient for being a square. Summarising then, we can say that conditions (i) and
(ii) are independently necessary for being a square, and are jointly
sufficient for being a square.
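
To make the structure of this example concrete, here is a small sketch in code (the function and its boolean arguments are my own illustration, not part of the lecture). It encodes the biconditional and checks that (i) and (ii) are each necessary on their own but only jointly sufficient:

    # Sketch: the square definition as a biconditional.
    # An object is a square iff (i) it is a rectangle and (ii) it has sides of equal length.

    def is_square(is_rectangle: bool, has_equal_sides: bool) -> bool:
        # (i) and (ii) are jointly sufficient; each is independently necessary.
        return is_rectangle and has_equal_sides

    # (i) alone is not sufficient: an oblong rectangle is not a square.
    assert is_square(True, False) is False
    # (ii) alone is not sufficient: a non-rectangular rhombus has equal sides.
    assert is_square(False, True) is False
    # Jointly, (i) and (ii) are sufficient.
    assert is_square(True, True) is True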
|
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
The Hypothetico-Deductive Method |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
So
much for the necessary and sufficient background, now let’s get back
to explanations. You will recall that I said that discovering causal
generalisations was an important part of the process by which we
discover explanations. Well it’s now time to have a look at what sort
of process I’m talking about there. Unfortunately, it’s no
mechanical process like doing long division, or filling in a truth table
to find out whether an argument is valid or not. There is no algorithm
for discovering explanations, no infallible procedure that we can apply
mindlessly. Explanations have to be invented by the ingenuity of the
explainer. That is why science is an adventure with many mistakes. Nor
is there an infallible procedure for deciding which explanations are
good ones – more on that later. Nevertheless there is a
well-known general method for coming up with reasonable explanations
that seems to be as good as any other, and it seems to describe what
scientists and others actually do when they follow reasonable procedures
to get to reasonable explanations. This
is the Hypothetico-Deductive Method. And it goes like this:
a. Invent an hypothesis to explain a fact.
b. Deduce testable consequences of the hypothesis.
c. Test whether those consequences are true.
d. Confirm or disconfirm the hypothesis.
If
the consequences that were derived from the hypothesis are observed in
the test then the hypothesis is confirmed.
If they are not observed then the hypothesis is disconfirmed. Note
the results of the testing have the structure (H is the hypothesis, A the deduced consequence):

    A.  If H is true then A
        +
        A
        ------------------
        ↓
        H is confirmed

    B.  If H is true then A
        +
        not A
        ------------------
        ↓
        H is disconfirmed

In
A, when H is confirmed we seem to have an example of the fallacy of
affirming the consequent. But this would only be a fallacy if we were
claiming that A proves H. We are not saying that. We are only
saying that H has survived a test that was intended to disprove it. The
intuition is that the more of these sorts of tests that H survives the
more likely it is to be true (or nearly true – whatever that means.) In
B, when H is disconfirmed we seem to have a perfectly good example of
modus tollens. However, we do not generally immediately conclude that H
is false, because it may be that there are plausible explanations for
why the test failed in that particular case. (Perhaps we have drawn the
wrong conclusions, made incorrect assumptions, gotten the hypothesis
just slightly wrong, etc.) In fact there are always excuses that
can be made: it is a matter of judgement whether we are to take these
excuses as legitimate or not. That is why we call it a disconfirmation
rather than a disproof. Notwithstanding
the cautions just mentioned, it is much more useful to seek
disconfirmations than confirmations. We can never come close to proving
H true but if we get enough disconfirmations we may come to be very
confident that H is false.
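
Purely as a schematic illustration of the method (the hypothesis, the invented observations and the function name below are mine, not the lecture's), the confirm/disconfirm logic can be put in code:

    # Sketch of the hypothetico-deductive pattern.
    # Hypothesis H: 'every dog bitten by a funnel-web dies'.
    # Deduced, testable consequence: every bitten dog in our records will have died.

    def test_hypothesis(cases):
        bitten = [c for c in cases if c["bitten"]]
        if all(c["died"] for c in bitten):
            # Surviving the test does not prove H; it merely confirms it.
            return "H is confirmed (not proved)"
        # A false consequence disconfirms H (modus tollens),
        # subject to the usual excuses about faulty assumptions.
        return "H is disconfirmed (not conclusively refuted)"

    # Invented observations, for illustration only.
    observations = [
        {"bitten": True, "died": True},
        {"bitten": True, "died": True},
        {"bitten": False, "died": False},
    ]
    print(test_hypothesis(observations))  # -> H is confirmed (not proved)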
|
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Fallacies in Explanatory Reasoning |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
There are two important fallacies of explanatory reasoning whose fallaciousness this model makes clear.

Confusing Confirmation and Proof

See the discussion just above.

Proposing an Unfalsifiable Hypothesis

If H is able to explain every possible event in the world then it is not possible to come up with any testable consequences and so we can never disconfirm the hypothesis. This is not to say that a good hypothesis must be false – falsifiable is not the same as false. The hypothesis that the Earth is round, for example, is falsifiable (by trying to sail around it and falling off the edge), yet we take it to be true. The
hypothesis that our actions are fated to be what they are is not
falsifiable at all – and therefore it can’t really be said to explain
anything. Example This is the standard objection to explanations that
invoke the agency of God. Since God (the one we usually invoke anyway)
is omnipotent and omniscient and omni all other desirable things, He can
be used to explain anything at all. Why is the sky blue? Because God
said so. Why is there a diversity of species upon the Earth? Because God
said so. Why do chickens
have four feet and a pink curly tail? Because God said so. The last example makes it clear that since no state
of affairs that could possibly obtain in the world is ruled out by the
explanation via God’s intention, it does not tell us anything about
the world as it actually is. Example A more plausible example is Freudian psychology. This
was once very popular but has now largely fallen out of favour. Science
has its fads too. The characteristic feature of Freudian psychology was
that there was no conceivable human behaviour that could not be
explained by it. Do you love your mother? It is an unresolved Oedipal
Complex. Do you hate your mother? It is a repressed Oedipal Complex with
a touch of resentment for alienated affections. Do you neither hate nor
love your mother? You are in denial. Etc. Once again, since no form of human behaviour that could possibly occur in the world is ruled out by the explanation via Freudian psychology, it does not tell us anything about human behaviour as it actually is. |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Testing Causal Generalisations - Mill's Methods |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
And
so we come to the problem of actually finding the causal generalisations,
which, as you now see, is only a part of the whole explanation
generating process – though it is an important one. How do we decide
what is a reasonable hypothesis to begin the hypothetico-deductive
method? Well, of course, there’s no infallible rule, but there is a
set of methods invented by John Stuart Mill in his System
of Logic (building on work done by the great Francis Bacon) which
have proven to be quite reasonable and which are known as Mill’s
Methods. We'll have a look at some simple varieties of Mill's methods now. Those who are keen to look at this in more detail are referred to Mill – or to Copi.[1]

The Sufficient-Condition Test

SCT: Any candidate condition that is present when condition G is
absent is eliminated as a possible sufficient condition of G. (Cf. Mill's Method of Difference.)

Example
Suppose that some of a group of students in college
have become ill after a meal and we want to test for conditions that
might be causally sufficient for their apparent food poisoning so that
we might thereby explain the poisoning. Three students are interviewed. The first got sick after eating Avocadoes (A),
Broccoli (B), Carrots (C) and Dumplings (D). The second didn't get sick
and ate Broccoli (B), Carrots (C), no Avocadoes (~A) and no Dumplings
(~D). The third didn't get sick and ate Avocadoes (A), no Broccoli (~B),
no Carrots (~C) and no Dumplings (~D). See Table I below.

    Table I
    Student    A     B     C     D     Sick?
    1          yes   yes   yes   yes   yes
    2          no    yes   yes   no    no
    3          yes   no    no    no    no

Applying SCT, we can eliminate – on the basis of
student 2 – Broccoli and Carrots as possible sufficient conditions for
the food poisoning. On the basis of student 3's testimony we can also
eliminate Avocadoes.
The only remaining possible sufficient condition is the Dumpling.
However, there may be some as yet unnoticed candidate which is another possible sufficient condition for getting sick, which further cases would bring to light. Or we might hope to find a fourth student who ate Dumplings without getting sick; such a case would eliminate Dumplings as well.
If however no such student is found and we have
reason to think that A-D are all
the possible candidates for the outbreak of food poisoning then we can
reasonably conclude that eating Dumplings is causally sufficient for
getting food poisoning, if anything is.

The Necessary-Condition Test

NCT: Any candidate condition that is absent when condition G is
present is eliminated as a possible necessary condition of G. (Cf. Mill's Method of Agreement.)

Example
Using another scenario like that above, let us try to establish a necessary condition for getting sick. On this occasion student 2 got sick despite not having eaten Avocadoes, and student 3 got sick despite having eaten neither Broccoli nor Dumplings; everyone who got sick had eaten Carrots.
    Student 2 shows eating Avocadoes cannot be necessary.
    Student 3 shows eating Broccoli or Dumplings cannot be necessary.
    Now the only remaining possible necessary condition for getting sick is the eating of Carrots.
Again, so long as we have reason for thinking one of
the four food-types was necessary for the illness, then we can
reasonably conclude that it was probably the eating of Carrots that was
causally necessary, if anything was.
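
The two elimination tests are easy to mechanise. The sketch below re-encodes the scenarios just described; the function names are mine, and the rows of the second scenario are one reading consistent with the description above:

    # SCT: a candidate present in a case where G (sickness) is absent
    #      cannot be a sufficient condition of G.
    # NCT: a candidate absent in a case where G is present
    #      cannot be a necessary condition of G.

    CANDIDATES = {"A", "B", "C", "D"}  # Avocadoes, Broccoli, Carrots, Dumplings

    def surviving_sufficient(cases):
        survivors = set(CANDIDATES)
        for ate, sick in cases:
            if not sick:
                survivors -= ate          # eliminated: present while G absent
        return survivors

    def surviving_necessary(cases):
        survivors = set(CANDIDATES)
        for ate, sick in cases:
            if sick:
                survivors &= ate          # eliminated: absent while G present
        return survivors

    # Scenario of Table I: only student 1 got sick.
    table_one = [({"A", "B", "C", "D"}, True),
                 ({"B", "C"}, False),
                 ({"A"}, False)]
    print(surviving_sufficient(table_one))   # {'D'} - only Dumplings survive

    # Second scenario (one consistent reading): all three students got sick.
    scenario_two = [({"A", "B", "C", "D"}, True),
                    ({"B", "C", "D"}, True),   # sick without Avocadoes
                    ({"A", "C"}, True)]        # sick without Broccoli or Dumplings
    print(surviving_necessary(scenario_two))  # {'C'} - only Carrots survive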
What the Tests Show

From Table I we might conclude that D is a causally sufficient condition for G; that is: Anyone who ate D got sick (G). Notice that we might further conclude that D is also a causally necessary condition for G; that is: Anyone who got sick (G) ate D.

(However,
we might subsequently find a student who did not eat the Dumplings, they
only picked the Raisins out and ate them, yet they too got sick. What
this would show – on the basis of the information available – is
that, whilst eating the Dumplings was still a causally sufficient
condition, it was not a causally necessary condition. The Necessary
Condition Test would rule out dumplings as causally necessary since a
sick student did not eat them.
It might then be concluded that the eating of raisins, contained within
the dumplings, was both a causally necessary and sufficient condition.) So
what have we got so far? i.
These
are general causal conditionals
inferred from the data presented. ii.
Of
course these conditionals do not follow with certainty from the
information and they can be undermined by subsequent information – thus, the argument for D as a causally necessary condition and the
argument for D as a causally sufficient condition for G are inductive
arguments. iii.
Notice
also that they are rather weak inductive arguments – further testing
of the conditions thought causally necessary and sufficient would be
needed before one could be confident in the conclusions. From the second scenario we might conclude that C was a causally necessary condition for G; that is: Anyone who got sick (G) ate C. Again
though, the conclusion is weak and we would need further data to
strengthen it. What
about causal claims we might
want to make on the basis of such tests? If, after extensive testing,
some event F can be reasonably said to be a causally sufficient and/or
causally necessary condition for an event E, would we say that it is the cause of E? There is no clear answer here, but F can
reasonably be said to be causally related to E – that much is
clear. Causes
are commonly considered to temporally
precede (i.e. come before) their effects. So given a causally
sufficient or causally necessary condition for some event, we would call
it the cause of the event only
if it came before the event. Also, it is typically that event or change
which stands out against background conditions that we identify as the
cause of an event. You can read more about this sort of thing in
your text.[2] a.
Sometimes we call a condition that is merely a causally sufficient
condition the cause. Example
A hammer hitting a glass window is a causally
sufficient (but not causally necessary) condition for the glass to break
and it would be cited as the cause. b.
Sometimes we call a condition that is merely a causally necessary
condition the cause. Example
Sometimes we will cite a spark as the cause of fire
(it is a causally necessary but not causally sufficient condition) but
not the presence of oxygen (which is also a causally necessary but not a
causally sufficient condition). Example
Sometimes we will cite the presence of oxygen, but not other contributing factors. (For example, if magnesium is glowing red-hot in an oxygen-free environment and oxygen is suddenly introduced, we would cite the introduced oxygen as the cause of the ignition.) c.
Usually it is that necessary condition whose presence triggers
the event in question (as opposed to those necessary conditions that are
more or less constant in the background) that we call the
cause. Example
The sudden presence of a spark in the presence of
background necessary conditions … like the presence of oxygen.

The Method of Concomitant Variation

The
previous tests enable us to eliminate certain conditions as causally
necessary and others as causally sufficient. With additional premises we
might then, with varying degrees of strength, argue inductively in
favour of some remaining condition as causally necessary or causally
sufficient. In this way we might argue inductively for particular causal
claims. Sometimes
however, such tests fail us. They rely on cases where a target feature
– G, say – is present or absent and other features (possible causes)
are present or absent. An alternative method is required for situations
in which features are always
present or absent to some
degree. Such
a method is The Method of Concomitant Variation. (i)
Some feature F is positively
correlated with a target feature G
iff (a)
increases in F are accompanied by increases in G
& (b)
decreases in F are accompanied by decreases in G. Example
The presence of money in my pocket is positively
correlated with the presence of smiles on my face. (ii)
Some feature F is negatively
correlated with a target feature G
iff (a)
increases in F are accompanied by decreases in G
& (b)
decreases in F are accompanied by increases in G. Example
The presence of alcohol in the blood is negatively
correlated with driver ability. Suppose
we observe after many trials that some feature F is positively
correlated with some feature G or F is negatively correlated with a
feature G. What might we conclude from such correlations? In
the case of (i) it seems a reasonable hypothesis that increases in F cause
increases in G and decreases in F cause
decreases in G (all other things being equal). So, in the particular
example: increases in the presence of money in my pocket cause
me to smile, and decreases in its presence cause decreased smiling. Similarly,
in the case of (ii), it seems reasonable to suppose that increases in F cause
decreases in G and decreases in F cause
increases in G (all other things being equal). So, in the particular
example: increases in the presence of alcohol in the blood
cause decreased driver ability, and decreases cause increased driver
ability. These
hypotheses are arrived at inductively.
They are strongly suggested but not certain. They could be undermined by
further evidence not yet considered.
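
To make 'correlated to some degree' concrete, here is a small sketch computing Pearson's correlation coefficient for paired observations. The figures are invented solely for illustration and are not data from the lecture:

    from math import sqrt

    def pearson_r(xs, ys):
        # Pearson's r: near +1 for positive correlation, near -1 for negative, near 0 for none.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Invented trials: blood-alcohol level (F) against a driving-ability score (G).
    alcohol = [0.00, 0.02, 0.05, 0.08, 0.10, 0.15]
    ability = [95, 92, 85, 70, 60, 40]
    print(round(pearson_r(alcohol, ability), 2))  # close to -1: negatively correlated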
[1] I. M. Copi, Introduction to Logic (many editions).
[2] Fogelin & Sinnott-Armstrong (2005), Understanding Arguments, pp. 289 f.
|
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
Fallacies in Causal Reasoning |
|||||||||||||||||||||||||||||||||||||||||||||||||||||||
There are some common mistakes that are made in proposing causal explanations. Here are a few of the most common:

Confusing Correlation and Cause

Take the case of the method of concomitant variation that we've just been looking at. The method involves going from observation of correlations to suppositions of causal relationships; but it is a fallacy to suppose too hastily, or without sufficient care, that if property F is observed to be correlated with property G then F is the cause of G. Even if the correlation is strongly supported by the data, thus establishing a strong inductive argument from the data to the correlation, the move from correlation to cause can be problematic. There are many ways in which this inference can be fallacious. a.
Coincidence
The correlation may be purely coincidental! Two features may track one another closely over a long run of data and yet have no causal connection at all; the correlation is purely accidental, and an inference from it to a causal claim would be fallacious.

Example
Is an inference from the strong positive correlation between an individual's level of literacy and their participation in volunteer work to a causal connection from the former to the latter fallacious or not? (In the inaugural lecture of the Keith Murdoch Oration, hosted by the State Library of Victoria, October 11, 2001, Rupert Murdoch claimed such a causal connection as part of an argument for the importance of higher education and corresponding need for increased funding. The enthymematic implication of the claimed causal connection was that higher literacy, achieved through education, causes a social good which, in conjunction with the suppressed premise that we should pursue social goods, implies that we should endeavour to increase literacy and thus ensure proper funding for education.) b. Symmetry
Correlation is symmetrical – F is correlated with G
if and only if G is correlated with F so the same correlation would
justify the conclusion that G causes F.
Example
Consider the positive correlation between time spent in hospital and risk of death. Does the risk cause one to spend more time in hospital or does spending more time in hospital cause an increased risk of death (due to non-treatable infections found in hospitals)? c.
Common
cause
It may be that neither is the cause of the other, but
that both are the effects of some other property H.
Example
Does the El Niño effect cause drought, or are both the effects of some third factor? Their positive correlation does not answer this question.
Example An inference from the strong positive correlation
between shoe-size and quality of handwriting to some causal connection
between them would be fallacious (and there is no accepted theory which
would suggest such a connection). In fact they are correlated by virtue
of their having a third common cause – maturity (see the simulation sketch after this list). d.
Reflexivity There may be a complex causal interrelationship of F
and G such that F is to some extent the cause of G and G is also the
cause of F. Example Is a chicken the cause of an egg, or is an egg the
cause of a chicken. There are any number of positive feedback loops in
the ecology. Any one of these could probably serve as an example here. e.
Insignificance The causal relation may be real but insignificant, in which case F may contribute to G but it could not be claimed to be the cause of G. A
similar fallacy can also arise in connection with uses of the Sufficient
Condition Test and the Necessary Condition Test. Example Suppose (as was claimed in the Australian yesterday)
that homosexual males are discovered to have a certain brain property X.
Having brain-property X is a necessary and sufficient condition for
being a gay male. (In this case it was found that women and homosexual
men registered a reaction in the anterior hypothalamus when tested with
4,16-androstadien-3-one, but heterosexual men did not.) The mere correlation of being a homosexual male with
the presence of brain-property X doesn't necessarily show that this
brain property causes sexual orientation (nor has it been suggested this
time). Only a correlation has been established and this might well be
explained by means of something other than a causal connection between
sexuality and the brain-property. It might turn out that both
homosexuality and the brain-property X have a common cause in biological
and environmental factors.
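Finally, as flagged under (c), a common cause by itself can manufacture a striking correlation between two things that do not cause one another. The sketch below simulates the shoe-size/handwriting case, with 'maturity' (age) driving both; all numbers are invented for the illustration:

    import random
    from math import sqrt

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    random.seed(0)
    ages = [random.uniform(5, 15) for _ in range(200)]           # the common cause: maturity
    shoe_size = [1.5 * a + random.gauss(0, 1) for a in ages]     # driven by age, not by handwriting
    handwriting = [4.0 * a + random.gauss(0, 5) for a in ages]   # driven by age, not by shoe size

    # Strongly positive, even though neither variable causes the other.
    print(round(pearson_r(shoe_size, handwriting), 2))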
Post Hoc Ergo Propter Hoc

Also known simply as Post hoc. The Latin
tag just means ‘After this therefore because of this.’ A fallacy of
supposing that, because event G follows event F in time, event F therefore causes event G. Of course, there is no such necessary
connection. Although, as I said earlier, we do expect that if F is the
cause of G then F will precede G temporally. Which is to say that if G
precedes F we think that it is absurd to suppose that F is the cause of
G. (Naturally none of this applies in advanced Physics.)

Evaluation

Bearing
in mind the very different function that an explanation has from an
argument, it is not surprising that the criteria which are of interest
to us in evaluating the two are quite different. What makes a good
explanation depends upon the particular use that the explanation is
intended for, but there are two principal factors to be considered:
1. Plausibility. How likely is it that the explanans is true?
2. Power. Can the explanans actually bring about the explanandum?
These
are pretty obvious criteria and don’t need too much explanation by me,
but there are many other factors that should be considered in deciding
whether we have a ‘good’ explanation or not. There are any number of
different lists of desirable criteria, and it’s an area of ongoing
dispute amongst philosophers of science, but here is a list that is
taken from a well-known introductory text.[1]
3. Relevance. Does the explanans actually bear on the explanandum? An explanation should cite factors that are genuinely relevant to the fact being explained.
4.
Simplicity. Is the explanation overly complex? We should prefer the simplest possible explanations. (Look up 'Occam's Razor'.)
5. Generality. Can the same explanation help us to understand a wide variety of other facts about the world? The more it explains the better – but if it tries to explain everything it may end up explaining nothing. (This is the problem of 'unfalsifiability' discussed above.)
6.
Modesty. Does the explanation require us to change too much of what we think we
already know about the world? The less we change the better –
intellectual laziness can be a virtue. Great changes require great
justifications, but they are possible; for example, it has been
suggested that to understand some quantum physical facts we need to
change our beliefs about logic. [1]
I. M. Copi Introduction to
Logic (many editions).
|