# May 24, 2002

Time:
5:00pm, May 24, 2002
Place:
3rd floor conference room, College of Arts and Sciences Building 10, University of Tokyo Komaba campus
Speaker:
Stefan Kaufmann (Kyoto University)
Title:
Probabilities of Conditionals
Abstract:
```
The study of denotations is usually kept separate from the study of
beliefs.  Truth and entailment belong to the former, probability and
reasoning under uncertainty to the latter.  Truth serves as the link
between beliefs and the facts: The probability of (1a) is the
probability that (1a) is true, which in turn is the probability that
the match will light.

(1) a. The match will light.
b. If you strike the match, it will light.

Things are more complicated when it comes to conditionals like (1b).
Intuitions as well as empirical facts about the validity of
inference patterns suggest that (T) should hold:

(T) The probability of "if A then C" is the
conditional probability of C, given A.

But this leaves many questions unanswered.  Lewis (1976) showed that a
conditional probability cannot be interpreted as the probability that
a proposition is true.  Does it follow that conditionals do not have
objective truth values?  Empirically, too, (T) is problematic: It is
not clear whether and how it applies to counterfactuals, and there are
counterexamples with indicative conditionals as well.

In this talk I will first briefly present some of the theoretical
background of this work, including facts about inference patterns, the
basic assumptions behind probabilistic inference (Adams, 1965, 1975,
1998), and the framework in which probabilities are encoded.  I will
then discuss in some detail one approach which, when implemented
properly, addresses many of the open problems.

The approach extends a proposal by Jeffrey (1991; cf. also Stalnaker
and Jeffrey, 1994) to assign "truth values" to conditionals that are
allowed to fall between 0 (False) and 1 (True) at worlds at which the
antecedent is false.  It is known that this avoids the problem Lewis
pointed out, but beyond that it is mostly of technical interest and
makes false predictions about the probabilities of some conditionals.

The topic I will mostly focus on concerns the relationship between
predictive conditionals like (2a) and their counterfactual
counterparts (2b).

(2) a. If you strike the match, it will light.
b. If you had struck the match, it would have lit.

It has often been suggested that the probability of (2b) is the
probability (2a) had at an earlier time.  But this cannot be generally
right.  For instance, the probability of (3b) depends on the outcome
of the coin toss, whereas that of (3a) does not.

(3) a. If you bet on heads, you will lose.
b. If you had bet on heads, you would have lost.

Based on examples like these, I will explore the possibility of
interpreting Jeffrey's intermediate values of indicative conditionals
as the values of the corresponding counterfactuals.  It turns out that
a simple unified account of these classes is provided by taking
(qualitative) information about causal independence into account, much
as in the "causal networks" recently used in Artificial Intelligence
(Pearl, 2000).

Finally, I will discuss another consequence of this approach: It
predicts that the probabilities of conditionals are not always the
corresponding conditional probabilities.  In particular, the two come
apart whenever the consequent is causally but not stochastically
independent of its antecedent.  I will argue that this is a feature,
not a bug, for it accounts for a number of counterexamples found in
the literature.

I will conclude with some remarks on the generality of this approach
and its computational relevance in Knowledge Representation.
```
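The Stalnaker-Jeffrey idea mentioned in the abstract can be sketched on a toy probability space (the worlds and numbers below are illustrative assumptions, not from the talk): assign "if A then C" the value 1 at worlds where A and C both hold, 0 where A holds but C fails, and the conditional probability P(C|A) at worlds where A is false. Its expected value then equals P(C|A), as (T) requires.

```python
from fractions import Fraction as F

# Toy probability space over worlds (struck?, lit?).
# The probabilities are made up for illustration.
worlds = {
    ("A", "C"): F(3, 10),   # struck and lit
    ("A", "~C"): F(1, 10),  # struck, did not light
    ("~A", "C"): F(1, 10),  # not struck, lit anyway
    ("~A", "~C"): F(1, 2),  # not struck, did not light
}

p_A = sum(p for (a, _), p in worlds.items() if a == "A")
cond_prob = worlds[("A", "C")] / p_A  # P(C | A)

def value(world):
    """Stalnaker-Jeffrey 'truth value' of 'if A then C' at a world:
    1 on A&C worlds, 0 on A&~C worlds, P(C|A) on ~A worlds."""
    a, c = world
    if a == "A":
        return F(1) if c == "C" else F(0)
    return cond_prob

expectation = sum(p * value(w) for w, p in worlds.items())
assert expectation == cond_prob  # expected value recovers P(C|A)
print(cond_prob)  # 3/4
```

With these numbers, P(C|A) = 3/4, and the expectation works out to the same value because the mass on the antecedent-false worlds contributes exactly P(C|A) times their total probability.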
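The predicted divergence between the probability of a conditional and the conditional probability can likewise be illustrated with a hypothetical common-cause model (the variable names and numbers are my assumptions, purely for illustration): a hidden state K influences both the antecedent A and the consequent C, while C is causally independent of A given K. Conditioning on A shifts the probability of K, but supposing A causally does not, so the two quantities come apart.

```python
from fractions import Fraction as F

# Hypothetical common-cause model: hidden state K drives both
# A and C; C is causally independent of A given K.
P_K = {"k1": F(1, 2), "k2": F(1, 2)}
P_A_given_K = {"k1": F(4, 5), "k2": F(1, 5)}
P_C_given_K = {"k1": F(9, 10), "k2": F(1, 10)}

# Ordinary conditional probability P(C | A): learning A is
# evidence about K, so K's probabilities get updated first.
p_A = sum(P_K[k] * P_A_given_K[k] for k in P_K)
P_K_given_A = {k: P_K[k] * P_A_given_K[k] / p_A for k in P_K}
cond_prob = sum(P_K_given_A[k] * P_C_given_K[k] for k in P_K)

# "Causal" probability of the conditional: since C is causally
# independent of A given K, supposing A leaves K's prior intact.
causal_prob = sum(P_K[k] * P_C_given_K[k] for k in P_K)

print(cond_prob)    # 37/50
print(causal_prob)  # 1/2
```

Here P(C|A) = 37/50 but the causal value is 1/2: C is stochastically dependent on A (via K) yet causally independent of it, which is exactly the configuration where the abstract says (T) fails.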