Inductive Arguments

Fundamentals
Argument Strength

If you think back to one of the very early lectures, you'll recall that I gave a very general theory of arguments in which an argument is good if it possesses a quality that we called 'effectiveness'; which is to say that an argument is good iff it causes the respondent to feel the so-called Argument Intuition, so that the respondent is more disposed to accept the conclusion if the other statements are accepted. I mentioned at the time that it follows from such a conception that arguments may be good to different degrees. If an argument is such that the respondent is disposed to some degree to accept the conclusion if the other statements are accepted, then we consider the argument to be good to that degree.

The Likelihood of Truth

At the time we took our judgments of the truth or
falsity of statements to be equivalent to our judgments of the acceptability or unacceptability of statements. We said that it is the same thing to say that a statement is true and that it's an acceptable thing to believe. (And we made the necessary comment that this 'acceptability' is to be understood in the context of our looking for the right sorts of thing to be in our store of statements about how the world is.) It was therefore natural for us to talk about a subset of
effective arguments in which the acceptability of the statements is
interpreted in terms of the truth of the statements. In that case the
scale of argument strengths could be interpreted as referring to the
likelihood (or the perceived likelihood
if you prefer) that the conclusion is true given that the premisses are
true. I then gave you an arbitrary division of arguments by strength on a scale that looks like this:

1. A useless argument is one in which the truth of the premisses has no effect at all on the truth of the conclusion.
2. A weak argument is one in which the likelihood of the conclusion's being true is not much affected by the truth or falsity of the premisses.
3. A moderate argument is one in which the likelihood of the conclusion being false if the premisses are true is quite low.
4. A strong argument is one in which the likelihood of the conclusion being false if the premisses are true is very low.
5. A valid argument is one in which it is just impossible for the conclusion to be false if the premisses are true.

So far we've looked only at the very last level,
and, in fact, we’ve only looked at a subset of those arguments,
because we’ve been concerned with arguments which are valid because of
the form of the statements that occur in them. Such arguments we called
formally valid. They are also sometimes called deductively valid because there’s a way to create a procedure
(called a deduction) by which one can go from the premisses to the
conclusion. That needn’t concern us now, however, because it’s the
sort of thing that a course in logic is concerned with, and this isn’t
a course in logic. Rather, what we’re going to be interested in now is
all the other arguments that aren’t valid and that aren’t quite
useless. We've already seen a few of these earlier in the course, but we didn't pay much attention to them. You'll recall, for example:

Bob is an Australian
Most Australians are happy
----------------------------------
Bob is happy

but a more common sort of example would be:

The sun has risen every morning for (at least) the past 500 years
-----------------------------------------------------------------------------
The sun will rise tomorrow

The general name for such arguments is inductive, and we'll define them thus:

D1. An inductive argument is an argument that claims that the likelihood of the conclusion is increased by the truth of the premisses, but is not made certain by their truth.

And the value of such arguments will be described in terms of their inductive strength, thus:

D2. An argument is Inductively Strong if and only if, whenever the premises are true, the conclusion is highly likely.

Some people have proposed that we can simplify our
discussion of inductive arguments by finding some appropriate terms and
giving them a technical meaning in this context similar to the terms
‘valid’ and ‘sound’ for deductive arguments. Soccio and Barry
propose the following definitions:

D3. An inductive argument is Justified if and only if, whenever the premises are true, the conclusion is highly likely.

D4. An argument is Cogent if and only if it is justified and the premisses are true.

Obviously, the most important question for an
inductive argument is going to be whether the premisses – even if they
are true – provide sufficient support
for the conclusion. Unlike the case of deductive arguments there are no
hard and fast rules about what is going to count as sufficient evidence,
and many of the rules of thumb that we’ll encounter are specific to
particular types of inductive argument. In fact the problem of
sufficient evidence can even be applied to the possibility
of induction at all. There is a notorious philosophical problem — Hume's
Problem — centering on the justification
of inductive reasoning in general, which is a part of broader concerns
focusing on justification in epistemology and Philosophy more generally.
(Knowledge requires justification of our beliefs and inductive reasoning
is typically invoked as a means of justifying certain beliefs. But the
question is whether inductive reasoning itself can be relied on or
justified. Those interested might like to read A. Chalmers, What
Is This Thing Called Science?, UQ Press, Ch 2.) We are not concerned
with its general justification, but rather, supposing its general use
can be justified, we shall be concerned with its characteristics and
criteria for its legitimate employment in particular cases. We now turn
to consider inductive arguments in more detail.

Induction and Deduction Compared and Contrasted

In order to bring the general characteristics of inductive arguments into sharp relief, consider the following contrast between a deductively valid argument and an inductively strong argument.

Misconceptions

a. Valid arguments are better than inductively strong ones.

One might well ask why we should employ arguments or
reasoning which can only, at best, give strong yet not conclusive
grounds for the conclusion. Aren't valid arguments always better? No.
They are better only in the sense that if
the premises are true then a valid argument will ensure the
conclusion — whereas an inductively strong one will only strongly
support it. But getting true premises that are strong or powerful
enough to ensure the conclusion may be difficult or even impossible.
Inductively strong arguments have the advantage of only needing premises
powerful enough to strongly support the conclusion.

Valid argument:
All ravens are black
----------------------------------------------
Any raven on Pikes Peak will be black

Inductively strong argument:
All observed ravens have been black
-----------------------------------------------
Any raven on Pikes Peak will be black
The first (valid) argument has a very strong connection between premises and conclusion but requires acceptance of a very strong and therefore a very contentious premise. The second (inductively strong) argument, however, in spite of its weaker connection between premises and conclusion, requires acceptance of a less contentious premise. Consequently, the first argument will be of little use as a means of getting someone to believe the conclusion, because justifying its premise will be very difficult (if not impossible), whereas the second would be useful. Certainty is a
virtue if obtainable, but where it is not obtainable a high degree of
probability is better than none at all!

b. Valid arguments go from the general to the particular, whereas inductively strong ones go from the particular to the general.

No. Whilst this is true of some cases — for example:

Valid Argument with general premise and particular conclusion:
All ravens are black
------------------------
That raven is black

Inductively Strong Argument with particular premises and general conclusion:
Raven1 is black
Raven2 is black
:
Ravenn is black
------------------------
All ravens are black

it need not be — for example:

Valid Argument with particular premise and particular conclusion:
John and Phil are extremists
----------------------------------
John is an extremist

Inductively Strong Argument with general premise and particular conclusion:
All ravens we have seen have been black
--------------------------------------------------
The next raven we will see will be black

Varieties

You might imagine that this is going to be a fairly amorphous category of arguments because you might suppose there are many different ways in which the premisses might be so related to the conclusion that the likelihood is increased. To some extent you would be correct, but there are only a few really important styles of inductive argument. We'll look at several types of argument: the argument from analogy, the inference to the best explanation, the sampling argument, and the statistical argument.
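Before moving on to the particular varieties, it may help to see the strength scale from earlier made concrete. The following is a minimal Python sketch, not part of the lecture itself: it reads the strength of an argument as the (perceived) likelihood of the conclusion given the premisses, and its numeric cut-offs are every bit as arbitrary as the five-way division given above.

    def classify_strength(p_given_premisses, p_alone):
        # p_given_premisses: likelihood of the conclusion given that the premisses are true
        # p_alone: likelihood of the conclusion regardless of the premisses
        # The cut-off values below are illustrative only.
        if p_given_premisses == 1.0:
            return "valid"      # the conclusion cannot be false if the premisses are true
        if abs(p_given_premisses - p_alone) < 0.01:
            return "useless"    # the premisses make no difference to the conclusion
        if abs(p_given_premisses - p_alone) < 0.10:
            return "weak"       # the likelihood is not much affected by the premisses
        if p_given_premisses >= 0.95:
            return "strong"     # very unlikely that the conclusion is false
        if p_given_premisses >= 0.80:
            return "moderate"   # quite unlikely that the conclusion is false
        return "weak"

    # The sunrise argument: very likely, but not certain.
    print(classify_strength(0.999, 0.5))   # -> strong

Nothing hangs on the particular numbers; the point is only that 'inductively strong' occupies the region between 'useless' and 'valid'.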
The Argument from Analogy
You
may recall that I talked about fallacies of analogy before, but
arguments from analogy can’t really be dismissed so easily, because
very many of the common arguments that we use – especially when
we’re looking for counterarguments to someone else’s fallacious
argument – take this form, and they don’t all appear to be ‘bad’
arguments. Here's a famous example.

The Argument from Design for the Existence of God (aka 'The Teleological Argument')

Consider
a watch. A watch exhibits (a) complexity of parts; (b) suitability to
fulfil a certain function (i.e. telling the time); and (c) its
complexity is what enables it to fulfil this function. These three
features are extremely unlikely to have come about by accident. No one
on seeing a watch would think it the product of chance. Even seeing it
for the first time, one would conclude that it is the product of design
by some intelligent being. But
many things in nature we observe (e.g. the eye) are similarly complex,
fulfil a function (e.g. seeing) and their complexity enables them to
fulfil this function.
So it is reasonable to suppose that they too are made by an intelligent
being. This argument is a paradigmatic argument from analogy; its important parts can be summarized thus:

A watch has (a), (b), (c).
The world has (a), (b), (c).
Watches require a watch-maker
-----------------------------------------
The world requires a world-maker

But that's just one argument: the more general definition of an argument from analogy looks like this:

1. It is claimed that the Object (an argument, or a natural phenomenon, or an idea, or what you will) has properties P1, P2, …, Pn.
2. The Analogue also has properties P1, P2, …, Pn.
3. The Analogue has property P.
---------------------------------------------------------
4. Therefore the Object has property P.

We can see from this that an argument of this form could be treated as a type of enthymematic valid argument with the hidden premiss that:

[5. If two objects share properties P1, P2, …, Pn, they will also share property P.]

As
was said before, arguments like this are fallacious if the hidden
premiss is not true or if it is not obviously true and yet is not argued
for. For
example:
Bob is a blue-eyed, blonde male
Bob is a criminal
---------------------------------------
So is Henry

The hidden premiss here is that anyone who is a blue-eyed, blonde male is also a criminal. There
would need to be some reason given to believe this because the
connection isn’t obvious. But, as I say, this isn’t yet an inductive
argument. On our definition it only becomes a type of inductive argument
if the conclusion is said to be made more
likely by the truth of the premisses. How about:
Alan, Bob, Carl, David, …, and Xavier are blue-eyed, blonde
males
Alan, Bob, Carl, David, …, and Xavier are criminals
----------------------------------------------------------------------------------
So is Zach

Evaluating Arguments from Analogy
The philosopher David Hume, in his Dialogues Concerning Natural Religion (1779), thought the analogy between the world and a machine was rather weak. In support of this we could point to relevant disanalogies, like the world's containing objects with features apparently not well suited to fulfil their functions (cf. Stephen Jay Gould's The Panda's Thumb).

Hume also pointed out that, even supposing the analogy were a strong one, it would only strongly support the very limited, much weaker conclusion that there are Gods (not necessarily one, as required by advocates of the argument) that are very powerful (not necessarily all-powerful, all-good, all-knowing).
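As a rough aid to this sort of evaluation, here is a toy Python sketch of the bookkeeping involved; the property lists are invented for illustration and are not drawn from the lecture. The idea is simply to tally the relevant shared properties against the relevant disanalogies before trusting the hidden premiss that sharing P1, …, Pn guarantees sharing P.

    # Toy bookkeeping for the design argument; the property lists are invented.
    watch = {"complexity of parts",
             "suitability to a function",
             "complexity enables the function"}
    world = {"complexity of parts",
             "suitability to a function",
             "complexity enables the function",
             "contains features ill-suited to their function"}  # cf. the panda's thumb

    shared = watch & world          # the respects in which the analogy holds
    disanalogies = watch ^ world    # the respects in which it breaks down

    print("Shared properties:", sorted(shared))
    print("Disanalogies:", sorted(disanalogies))
    # The more weight the relevant disanalogies carry, the less credible the
    # hidden premiss, and so the weaker the argument from analogy.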
The Inference to Best Explanation
This is sometimes called Abductive Inference and is
treated as an entirely different form of inference only marginally
related to inductive inference. I have much sympathy for that point of
view, but I’ll let you be the judge. First you’ll want to see an
example of the sort of thing that’s meant by this. You
return home to find your door broken and some valuable items missing.
This cries out for explanation. Possible explanations include:
1. A meteorite struck your door and vaporised your valuables.
2. Friends are playing a joke on you.
3. A police Tactical Response Group entered your house mistakenly.
4. You were robbed.

Explanation 4 seems the best, so you conclude
-------------------------------------------------------------------------------------------
you were robbed.

More generally, inferences to best explanation take
the following form:
Phenomenon C is observed
Explanation A explains C and does so better than any rival explanation
--------------------------------------------------------------------------------------
A

The underlying assumption is that the best
explanation of a phenomenon is likely to be true.
My
car stops after a complete service by a reliable mechanic. Best
explanation (in this context) might be that it is out of petrol. It is
possible, of course, that someone broke into my car overnight and
meddled with the engine but this is less plausible. In this situation
you might reasonably conclude it has run out of petrol.
This conclusion can be seen as following from inference to best
explanation and, if it is the
best explanation, then it seems a rational thing to believe since
it is probably true.

Evaluating Inferences to Best Explanation

Under what conditions can we be confident in the truth of the second premise, and thus be confident that the explanation is probably true? Good candidate explanations should meet the following criteria:

i. They should actually explain the event in question, as opposed to merely shifting the burden of explanation onto something else itself needing explaining.
ii. They should be powerful (i.e. widely applicable).
iii. The simpler the better.
iv. They should be conservative with respect to prior beliefs.

(Compare creationism and evolutionary theory as rival explanations of the diversity of the biological world.)
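To make the criteria slightly more concrete, here is a toy Python sketch applied to the burglary example above. The numerical scores and the equal weighting are invented for illustration and are not part of the lecture; the point is just that choosing the best explanation amounts to ranking the rivals against criteria like these.

    # Toy ranking of rival explanations for the broken door and missing valuables.
    # Each candidate gets invented scores (0-1) against the four criteria:
    # (explains the event, power, simplicity, conservatism).
    candidates = {
        "meteorite strike":       (0.2, 0.1, 0.3, 0.0),
        "friends playing a joke": (0.7, 0.3, 0.6, 0.4),
        "mistaken police raid":   (0.8, 0.2, 0.5, 0.2),
        "you were robbed":        (0.9, 0.8, 0.8, 0.9),
    }

    def score(values):
        # Equal weights here; a real judgment would weigh the criteria case by case.
        return sum(values) / len(values)

    best = max(candidates, key=lambda name: score(candidates[name]))
    print("Best explanation:", best)   # -> you were robbed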
Consider the case of Nile floods discussed on p. 269
of the text. The problem here was that the conclusion that floods were
due to the Gods was perhaps the best of the rivals considered but one
strong rival — the best, in fact — was not considered. The
conclusion was flawed because premise 2 was flawed. Thinking up strong rival explanations is a
challenging task. A lot of effort is expended in scientific research
trying to develop strong rival explanations of a phenomenon for
consideration. Phenomena are often
riddles in the sense of their being puzzling facts.
Example 1. A man, when alone, always rides the lift to the 10th
floor and then walks the last two floors to his 12th floor
flat. He never does this when someone else is in the lift. WHY? The best
explanation can reasonably be said to be probably true by inference to
best explanation. But what are the candidates?
Example 2. The buried ruins of a town, believed to be that of the builders of the pyramids, were unearthed in Egypt in the late twentieth century, and archaeological research — including the newly available method of DNA analysis — established:

(i) the workers were well-fed (bone fragments in floors suggested they ate fish and high-quality beef)
(ii) they lived in family groups (with 50% male, 50% female and ~24% children)
(iii) their bones showed clear signs of having been subject to medical treatment for breaks and fractures.

Yet:

(iv) The ancient Greek historian Herodotus had claimed that the pyramid builders were slaves.

How
can we explain (i)-(iii) given (iv)? Archaeologists concluded that we cannot and so
rejected Herodotus's claim. The
best explanation of (i)-(iii) is that the pyramid builders were not
slaves!
People often believe things since they are thought
necessary to explain some illusory phenomenon. They draw an inference to
the best explanation of what they take to be a real phenomenon requiring
explanation. In fact, they believe a falsehood inferred from the false
claim that some phenomenon obtains (and is best explained by that which
they come to believe).
Example. I think that Phil always acts strangely in my
presence. I take this to be a puzzling phenomenon requiring explanation.
In the circumstances I reasonably suppose, say, that the best
explanation of this is the idea that Phil dislikes me ... and so I go on
to infer that Phil doesn't like me.
Inductive Generalisation
In
spite of the fact, noted in earlier lectures, that inductive reasoning
is not characterized by its
moving from the particular to the general, common examples of inductive
reasoning are like this — namely, inductive generalisations. So let us
turn to look at this form of inductive reasoning.

An argument is an inductive generalisation if and only if a generalised conclusion about the character of some class as a whole is drawn from characteristics of a sample of the class. Such arguments are obviously generalisations, and they are inductive because not all members of the class in question need necessarily have the characteristics of the sample (as the conclusion suggests they do). It is, at best, highly likely that they do.

a. Strong Inductive Generalisation

Consider
the inductive generalisation:
A Canadian quarter did not work in the American telephone on
occasion 1.
A Canadian quarter did not work in the American telephone on
occasion 2.
:
A Canadian quarter did not work in the American telephone on
occasion 20.
---------------------------------------------------------------------------
Canadian quarters do not (ever) work in American telephones.

This generalization claims that all of the members of the target class possess a certain property. Arguments like this are called Strong Inductive Generalisations.

b. Weak Inductive Generalisation

Other
generalizations claim no more than that some
of the members of the target class possess a certain property. Arguments
like this are called Weak Inductive Generalisations. For
example:
Many of the people I know who didn’t graduate from
college have gone into real estate.
---------------------------------------------------------------------------------------------------------
People who don't graduate from college tend to go into real estate.

c. Statistical Generalisation

Our
final class of generalizations claims that some
specific proportion of the members of the target class possess a
certain property. Arguments like this are called Statistical
Generalisations. For
example:
10% of people in this sample of the general
population indicated they would eat Zowie Bars.
---------------------------------------------------------------------------------------------------------------
10% of the general population would eat Zowie Bars.

Evaluating Inductive Generalisation

1. Are the premises true?

This question is always important to address. Did the
sample really substantiate claims made about it in the premises? Does
the premise, perhaps, merely report hearsay or popular opinion and not
fact?

2. Is the sample large enough?

A small sample may be unrepresentative and so suggest a general characteristic which does not apply. This leads to the Fallacy of Hasty Generalisation. For example: to try one coin and then generalise would be overly hasty and fallacious, because we cannot assume the one coin is truly representative of all coins. Cognitive psychologists have noted our unreasonable tendency to see a small sample as truly representative (thus explaining our tendency to hastily generalise): "We
submit that people view a sample randomly drawn from a population as
highly representative, that is, similar to the population in all
essential characteristics. Consequently, they expect any two samples
drawn from a particular population to be more similar to one another and
to the population than sampling theory predicts, at least for small
samples."
A. Tversky & D. Kahneman, 'Belief in the Law of Small Numbers', Psychological
Bulletin 76 (1971): 105.
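A quick simulation, offered here as a sketch rather than anything from the text, makes the same point: draw repeated random samples from a population in which exactly 10% have some property, and the small samples scatter far more widely around the true figure than the large ones do.

    import random

    random.seed(0)
    TRUE_PROPORTION = 0.10   # assume 10% of the population has the property

    def sample_proportion(n):
        # Proportion of 'yes' answers in a random sample of size n.
        return sum(random.random() < TRUE_PROPORTION for _ in range(n)) / n

    for n in (10, 100, 10_000):
        estimates = [round(sample_proportion(n), 3) for _ in range(5)]
        print(n, estimates)
    # The n=10 samples bounce around (0%, 20%, 30%, ...); the n=10,000 samples
    # all sit very close to the true 10%.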
On the other hand, it is not generally the case that the substantive results of a survey will be affected by the size of the sample; what will change is the margin of error.

3. Is the sample biased in some other way?

The representativeness of the sample is what makes it
possible (by definition) for the result of testing the sample to be true
of the population. If there is a bias in the sample then the sample will
not be representative. The best way to try to make sure there are no
biases is to select the sample randomly from the population. (If there
is no systematicity in the selection there can be no systematic bias.)
It is not easy to do this. A sample may be unrepresentative due to the method of
sampling having been such as to select for particular characteristics
which go unnoticed. This leads to the Fallacy
of Biased Sampling. Bias can arise due to:

— Insufficient variation in the sample

A large sample might still be unrepresentative of the whole population because it is badly chosen.

1. Using the same Canadian quarter in all the many trials.
   Using the same American telephone in all the many trials.
2. Using a phone book to get a large sample of the population.
3. Birth-order as a behavioural predictor. Suppose (as was actually done) you sample scientists
to see what factors might pre-dispose someone to accept radical new
theories. You discover that, of the people sampled,
socio-economic factors do not vary with such a disposition yet
birth-order does (i.e. the first born of the sample-set tend to be so
disposed and the later born do not).

Conclusion:
First-borns are more likely to accept radical new theories in
general — a biological or "nature" factor, contra an account
which might suggest the determining factor is socio-economic —
"nurture". Problem:
Cannot draw conclusion that birth-order is a more important
determinant of willingness to accept radical new theories per
se since sample of scientists might not be representative of
socio-economic variation.

4. Basing
inferences concerning the general behaviour of native pigeons, say, on
their behaviour when you've observed them. (They always seem very
nervous when I observe them and so I am tempted to conclude that they
are quite nervous birds, but of course my initial data may simply
reflect the fact that they're very nervous when being watched by a
potential predator (namely, me)!)

— Ignoring or emphasising characteristics of a sample due to Prejudice and Stereotypes

— Eliciting a particular characteristic from the sample by Slanted Questions

4. Is the inference justified?

There are three factors that need to be considered.
i. The sample size: the proportion of the population that is tested.

ii. The level of confidence: the accuracy level of the extrapolation of the sample's result to the population. At a 95% level, the result for the population will be within the margin of error 95% of the time. (i.e. if you did 100 samplings you would expect only 5 whose results would indicate a value for the population that was outside the margin of error.)

iii. The margin of error: the precision of the result. Generally expressed as ± y%. (e.g. voters are projected to vote Whig at 45% ± 3%, which means that we have a 95% level of confidence, say, that the result would have been between 42% and 48%.)

These three factors are interdependent; changing one
affects both the others. For a result to justify an inference the margin of
error has to be narrow enough to make the result non-trivial, and the
level of confidence has to be high enough to make the result
significant.
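To see how the three factors trade off, here is a short worked sketch using the standard normal-approximation formula for the margin of error of a sample proportion; the formula itself is not given in the lecture, but it recovers roughly the figures in the Whig example.

    import math

    def margin_of_error(p, n, z=1.96):
        # Margin of error for a sample proportion p with sample size n;
        # z = 1.96 corresponds to a 95% level of confidence.
        return z * math.sqrt(p * (1 - p) / n)

    def required_sample_size(p, moe, z=1.96):
        # Sample size needed to achieve a given margin of error.
        return math.ceil((z ** 2) * p * (1 - p) / moe ** 2)

    # The Whig example: 45% support, within +/- 3% at a 95% level of confidence.
    n = required_sample_size(0.45, 0.03)
    print(n)                                   # about 1057 respondents
    print(round(margin_of_error(0.45, n), 3))  # roughly 0.03

Shrinking the margin of error or raising the level of confidence drives the required sample size up, which is just the interdependence described above.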