CAUSATION
CARL BROCK SIDES
Copyright © 1997 Carl Brock Sides. Permission
granted to distribute in any medium, commercial
or non-commercial, provided all copyright
notices remain intact.
We might be tempted to say that causes are
necessary and sufficient conditions for their
effects. As J. L. Mackie points out in "Causes
and Conditions," however, this will
not do. Consider the sentence "The short
circuit caused the fire" (and suppose
it true). The occurrence of a short circuit
was in no way necessary for the occurrence
of a fire, for a fire could have been caused
by Mrs. O'Leary's cow knocking over a lamp.
And the short circuit was not sufficient
for the fire, for without the presence of
oxygen and flammable material, the short
circuit could have occurred without the ensuing
fire.
Mackie proposes, however, that this naive
judgment is not completely off the mark.
This is what is true of the short circuit,
according to Mackie: it is an indispensable
(i.e., necessary) member of a set of conditions
that are jointly sufficient for the fire,
although these conditions are not themselves
necessary for the occurrence of the fire.
Mackie refers to such a condition as an INUS
condition: an Insufficient but Necessary
part of a condition which is itself Unnecessary
but Sufficient for the result.
Formally, Mackie's definition of an INUS
condition is as follows: A is an INUS condition
of P iff for some X and some Y, (AX or Y)
is necessary and sufficient for P, although
neither A nor X is sufficient for P. [AX
is the conjunction of A and X.]
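Mackie's clauses can be checked mechanically over a small propositional model. The sketch below (all variable names are illustrative assumptions, not Mackie's own notation) encodes the fire example with A = the short circuit, X = the presence of oxygen and flammable material, and Y = the cow knocking over a lamp, and verifies by brute force that A satisfies the INUS conditions:

```python
from itertools import product

# Hedged sketch: A = short circuit, X = oxygen and flammable
# material present, Y = cow knocks over a lamp, P = fire.
def P(a, x, y):
    return (a and x) or y      # stipulated: (AX or Y) iff P

worlds = list(product([False, True], repeat=3))

# AX is sufficient for P ...
assert all(P(a, x, y) for a, x, y in worlds if a and x)
# ... but A alone is not sufficient (short circuit, no fire) ...
assert any(a and not P(a, x, y) for a, x, y in worlds)
# ... and X alone is not sufficient either.
assert any(x and not P(a, x, y) for a, x, y in worlds)
# A itself is unnecessary for P: a fire can occur without it, via Y.
assert any(P(a, x, y) and not a for a, x, y in worlds)
```

Since neither A nor X is sufficient on its own while their conjunction is, A is an indispensable part of the sufficient condition AX; and since P can hold without A, that condition is unnecessary, which is exactly the INUS pattern.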
Mackie's official analysis of causation actually
makes use of the notion of A's being at least
an INUS condition for P. A is at least an
INUS condition just in case there is a necessary
and sufficient condition for P of one of
the following forms: (AX or Y), (A or Y),
AX, A. According to Mackie, we may then analyze
"A caused P" as follows:
(i) A is at least an INUS condition for P,
(ii) A was present on the occasion in question,
(iii) X [if there is an X in the NS conditions]
was present on the occasion in question,
and
(iv) Every disjunct in Y not containing A
was absent.
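Assuming clause (i) has been established separately, clauses (ii)-(iv) reduce to a check on the particular occasion. A minimal sketch (the function name and argument names are hypothetical):

```python
def caused_on_occasion(a_present, x_present, rival_disjuncts_present):
    """Sketch of Mackie's clauses (ii)-(iv), taking clause (i) -- that
    A is at least an INUS condition for P -- as already established:
    A caused P only if A was present, X was present, and every
    disjunct in Y not containing A was absent."""
    return a_present and x_present and not any(rival_disjuncts_present)

# Short circuit and oxygen/flammables present, no cow-and-lamp:
assert caused_on_occasion(True, True, [False])
# If the cow also knocked over the lamp, clause (iv) fails.
assert not caused_on_occasion(True, True, [True])
```

Clause (iv) is what rules out cases of overdetermination: if some rival disjunct of Y was also present, we can no longer single out A as the cause.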
In "Causation," David Lewis notes
that Hume gave two different definitions
of "cause": first, Hume said that
"we may define a cause to be an object
followed by another, and where all objects
similar to the first are followed by objects
similar to the second." We may call
analyses of causation that follow this general
strategy "regularity analyses."
Following Lewis, we may give a general schema
for a regularity analysis. Let C be the proposition
that event c occurs, and let E be the proposition
that event e occurs. Then, according to a
regularity analysis, c causes e iff (1) C
and E are true, (2) for some non-empty set
L of true law-propositions and some set F
of true propositions of particular fact,
L and F jointly imply the material conditional
C ⊃ E, although L and F jointly do not imply
E, and F alone does not imply C ⊃ E.
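The schema's entailment clauses can be checked by brute force over a toy propositional model. In the sketch below (all names are illustrative), the single law is "short circuit and oxygen imply fire," the particular fact F is the presence of oxygen, C is the short circuit, and E is the fire; "C ⊃ E" abbreviates the material conditional "if C then E":

```python
from itertools import product

VARS = ("short", "oxygen", "fire")

def entails(premises, conclusion):
    """True iff every truth-value assignment satisfying all the
    premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(VARS)):
        w = dict(zip(VARS, values))
        if all(p(w) for p in premises) and not conclusion(w):
            return False
    return True

law  = lambda w: (not (w["short"] and w["oxygen"])) or w["fire"]  # L
fact = lambda w: w["oxygen"]                                      # F
E    = lambda w: w["fire"]
C_implies_E = lambda w: (not w["short"]) or w["fire"]             # C > E

assert entails([law, fact], C_implies_E)   # L and F imply C > E
assert not entails([law, fact], E)         # ...but they do not imply E
assert not entails([fact], C_implies_E)    # F alone does not imply C > E
```

The last two checks matter: without them, any true E (or any F that settled the matter by itself) would trivially count as caused by anything.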
Lewis thinks that the prospects for a workable
regularity analysis of causation are not
good. In particular, it does not seem that
a regularity analysis will be able to distinguish
between genuine causation and other causal
relations. Rather than c causing e, c might
be an effect of e: one which, given the laws
and the circumstances, could not have occurred
otherwise than by being caused by e. Or c might be
an epiphenomenon of some genuine cause of
e, an event caused by some genuine cause
of e. Or c might be a preempted potential
cause of e, an event that did not cause e,
but which would have, in the absence of whatever
actually did cause e.
Hume's second definition (which he apparently
thought synonymous with the first) was this:
"an object followed by another, . .
. where, if the first had not existed, the
second had never existed." We may call
analyses of causation that follow this general
strategy "counterfactual analyses."
This is the strategy that Lewis proposes
to follow. The classic objection to counterfactual
analyses, that counterfactuals are ill-understood,
no longer holds: thanks to the work of Stalnaker
and Lewis, we now have an adequate semantics
for counterfactual discourse. We may now
put this understanding to use in an analysis
of causation.
First, we need to define causal dependence
among actual events. Let O(c) be the proposition
that event c occurred, and let O(e) be the
proposition that event e occurred.
(Note that O(c) is not the proposition that
some event occurs satisfying some description
that c actually satisfies, but the proposition
that c itself occurs. If Socrates had fled
Athens and died of old age, there still would
have been an event satisfying the description
"the death of Socrates," but this
would not have been the same event that was
actually the death of Socrates.) Then we
may define "e depends counterfactually
on c" as follows: ~O(c) □→ ~O(e), i.e., if
c had not occurred, e would not have occurred.
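On the Stalnaker-Lewis semantics, the counterfactual "if c had not occurred, e would not have occurred" is true just in case the closest worlds where c fails to occur are worlds where e fails to occur too. A toy evaluation over a hypothetical finite set of worlds, each paired with a similarity rank (0 = the actual world, larger = less similar; all names here are illustrative assumptions):

```python
def counterfactual(antecedent, consequent, ranked_worlds):
    """True iff the closest antecedent-worlds all satisfy the
    consequent (vacuously true if no world satisfies the antecedent)."""
    ant = [(rank, w) for rank, w in ranked_worlds if antecedent(w)]
    if not ant:
        return True
    best = min(rank for rank, _ in ant)
    return all(consequent(w) for rank, w in ant if rank == best)

ranked_worlds = [
    (0, {"c": True,  "e": True }),   # actuality: both events occur
    (1, {"c": False, "e": False}),   # closest world lacking c
    (2, {"c": False, "e": True }),   # a more remote c-less world
]
not_c = lambda w: not w["c"]
not_e = lambda w: not w["e"]

# e depends counterfactually on c: at the closest world without c,
# e fails as well, even though some remoter c-less world contains e.
assert counterfactual(not_c, not_e, ranked_worlds)
```

The more remote world in which e occurs without c does not defeat the dependence; only the closest antecedent-worlds matter.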
Counterfactual dependence implies causation,
according to Lewis, but causation does not
imply counterfactual dependence, for causation
must be transitive, although counterfactual
dependence may not be.
(Counterfactual conditionals, unlike material
conditionals, do not obey the law of hypothetical
syllogism.) Perhaps c is a cause of e, even
though e would have occurred even in c's
absence: for if c had not occurred, something
else would have caused e. Instead, we may
define causation as the ancestral of counterfactual
dependence. An event c causes an event e
just in case there is some finite sequence
of events d1, ..., dn, such that d1 depends
counterfactually on c, e depends counterfactually
on dn, and each member (after the first) of
the sequence d1, ..., dn depends counterfactually
on the preceding member.
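Taking the ancestral of a relation amounts to searching for a finite chain of steps, so Lewis's definition can be sketched as a reachability check. In the toy below (names and the precomputed relation are hypothetical), `depends_on[y]` lists the events on which y directly depends counterfactually:

```python
from collections import deque

def causes(c, e, depends_on):
    """Lewis-style causation as the ancestral of counterfactual
    dependence: c causes e iff some finite chain c, d1, ..., dn, e
    links them, each step a direct counterfactual dependence."""
    # Breadth-first search backwards from e for a chain reaching c.
    seen, frontier = {e}, deque([e])
    while frontier:
        y = frontier.popleft()
        for x in depends_on.get(y, []):
            if x == c:
                return True
            if x not in seen:
                seen.add(x)
                frontier.append(x)
    return False

# e depends on d, and d depends on c, but e does not depend directly
# on c (had c not occurred, a backup would have produced e anyway).
depends_on = {"e": ["d"], "d": ["c"]}
assert causes("c", "e", depends_on)        # via the chain c, d, e
assert not causes("x", "e", depends_on)    # no chain from x to e
```

This is why the ancestral is needed: in the example, c counts as a cause of e through the chain even though e does not depend counterfactually on c itself.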
References
Lewis, David. "Causation," Journal of Philosophy
70 (1973), 556-67. Reprinted in Philosophical
Papers, Vol. II. Oxford, 1986.
Mackie, J. L. "Causes and Conditions,"
American Philosophical Quarterly 2 (1965),
245-64.