Saturday, September 06, 2008

the nature of apriori contingencies

Here is a familiar story. "Once upon a time, it was thought that if a truth is knowable a priori, then it is a necessary truth (i.e. the truths of pure reason are the necessary ones). Then along came Saul Kripke in the early 1970s and shook everything up. He provided putative examples of truths that are knowable a priori yet contingent (e.g. the meter-stick truth). Thus, the traditional conception of the relationship between the truths of reason and the truths that couldn't have been otherwise was shattered. And the philosophers have been trying to pick up the pieces ever since."

This story seems to be fairly accurate. What I am interested in are the following two questions: (i) why did pre-Kripkean theorists think that apriority entails necessity, and (ii) which implicit premise of the pre-Kripkean reasoning does the post-Kripkean theorist deny?

Before introducing his counterexamples, Kripke provides the following intuitive argument on behalf of the traditional theorist.

I guess it is thought that...if something is known a priori it must be necessary, because it was known without looking at the world. If it depended on some contingent feature of the actual world, how could you know it without looking? Maybe the actual world is one of the possible worlds in which it would have been false. This depends on the thesis that there can't be a way of knowing about the actual world without looking that wouldn't be a way of knowing the same thing about every possible world. - Saul Kripke, Naming and Necessity (1980), p. 38.


Interestingly, Kripke never tells us what is wrong with this reasoning. He just provides (alleged) counterexamples to the conclusion. It would be nice to formulate this quote as a valid (non-circular) argument and see which premise Kripke and the post-Kripkean theorists are denying. Unfortunately, I have found it difficult to whip it into shape by brute force. But a more delicate approach will provide some insight. Here is an analogous argument.

I guess it is thought that if it is known that the output of a function is 1 without knowing the input, then the function must give 1 on every input. If some inputs gave 0, how could you know what the output was without knowing what the input was? Maybe the input is one of the ones on which it gives 0. This depends on the thesis that there can't be a way of knowing the output of a function without knowing the input that wouldn't be a way of knowing the same thing about every input.


This argument has a sort of intuitive appeal, but if you think about it for a second it is obviously fallacious. For example, say the function we are dealing with is division. In general, you needn't know the input in order to know the output; if you know that the input is of the form (x,x) (with x nonzero), you thereby know the output is 1. So as long as you have some minimal information about the input you can deduce the output (e.g. knowing that the input is in the domain of a constant function and knowing that this constant function is a subset of the function in question).
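
To make the point concrete, here is a minimal Python sketch of the division analogy (the particular pairs and the nonzero restriction are just illustrative choices):

```python
from fractions import Fraction

def divide(pair):
    """The two-place division function: (x, y) -> x / y."""
    x, y = pair
    return Fraction(x, y)

# Knowing only that the input lies on the diagonal, i.e. has the form
# (x, x) with x nonzero, already fixes the output at 1:
diagonal_inputs = [(3, 3), (7, 7), (42, 42)]
assert all(divide(p) == 1 for p in diagonal_inputs)

# Off the diagonal the output varies, so divide is not a constant function:
assert divide((3, 4)) != 1
```

The restriction of divide to the diagonal is a constant function, and that restriction is a subset of divide itself; that is all the "minimal information" one needs.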

I think this is a perfect analogy for understanding the nature of a priori contingencies. There is a level of abstraction at which all the various (and conflicting) accounts of the contingent a priori have something important in common. Here is a scheme that all accounts of the contingent a priori trivially fall into.

Ψ is both a priori and contingent iff f(Ψ) = (h1, h2) and A(h1) & C(h2)


Every account will trivially conform to this scheme if we let h1 = Ψ, h2 = Ψ, and let A and C be the properties of apriority and contingency, respectively. But there is a more interesting level of similarity that can be captured by the scheme. Let's start with a broadly two-dimensionalist account (what I will call "bifurcationalism") and then see how the other accounts (i.e. disquotationalism and exportationalism) have a similar structure.
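
For readers who like things spelled out, here is a rough Python rendering of the bare scheme. The type names are placeholders of my own, and A and C are deliberately left abstract, since each account fills them in differently:

```python
from typing import Callable, Tuple

# Placeholder types, just to display the scheme's shape.
Proposition = str
Profile = Callable[..., bool]                                  # a function into truth-values
Analysis = Callable[[Proposition], Tuple[Profile, Profile]]    # f: Psi -> (h1, h2)

def contingent_apriori(psi: Proposition,
                       f: Analysis,
                       A: Callable[[Profile], bool],
                       C: Callable[[Profile], bool]) -> bool:
    """Psi is both a priori and contingent iff A(h1) and C(h2),
    where (h1, h2) = f(psi)."""
    h1, h2 = f(psi)
    return A(h1) and C(h2)
```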

The bifurcationalist (Tichy, Stalnaker, Evans, Davies and Humberstone, Kaplan, Chalmers, Jackson, Fragments-of-Kripke) holds that Ψ is both a priori and contingent if and only if two functions h1 and h2 associated with Ψ have certain properties, namely if h1 is a constant function to TRUE and h2 is not constant but gives TRUE at w@ (the actual world). There are different ways to understand h1 and h2, but the basic idea is that they are functions from world pairs (w,v) to truth-values, where h1 tracks the epistemic profile of Ψ [it takes inputs of the form (x,x)] and h2 tracks the metaphysical profile of Ψ [it takes inputs of the form (x,y)].

Understood this way, the analogy is clear. If we know that the input is in the domain of h1, that h1 is a constant function, and that h1 is a subset of h2, then we can know the output of h2 even though its output varies across worlds. For example, consider the sentence (or proposition) `The inventor of bifocals is the actual inventor of bifocals'. The associated function h1, which is a function from world-pairs to truth-values, is such that for every pair of worlds (x,x), h1 delivers as output TRUE [since in every world considered as actual, the inventor of bifocals in that world is the actual inventor of bifocals]. Thus, the sentence (or proposition, if you like) is a priori. But since the other associated function h2 gives FALSE on some world-pairs, the sentence is contingent. In other words, one can know that the contingent sentence is true without knowing which world is actual, simply by knowing that the 2D matrix is constant down the diagonal. This is very much like knowing that the output of the division function is 1 without knowing the exact input.
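
Here is a toy Python model of that 2D matrix for the bifocals sentence; the three worlds and their inventors are made up purely for illustration:

```python
# Toy worlds, differing over who invented bifocals (made up for illustration).
worlds = ["w_franklin", "w_newton", "w_hooke"]
actual = "w_franklin"
inventor_in = {"w_franklin": "Franklin", "w_newton": "Newton", "w_hooke": "Hooke"}

# `The inventor of bifocals is the actual inventor of bifocals', evaluated at a
# pair (v, w): v is the world considered as actual, w the world considered as
# counterfactual. 'Actual' rigidly picks out the inventor in v.
def h2(v, w):
    return inventor_in[w] == inventor_in[v]

def h1(v, w):
    """h2 restricted to the diagonal (x, x): the epistemic profile."""
    assert v == w
    return h2(v, v)

# A priori: the diagonal is constant-TRUE...
assert all(h1(w, w) for w in worlds)
# ...yet contingent: with the actual world held fixed, h2 is FALSE at some worlds.
assert not all(h2(actual, w) for w in worlds)
```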

The disquotationalist (Kripke-of-Soames) and the exportationalist (Soames, Salmon) also hold that Ψ is both a priori and contingent if and only if two functions h1 and h2 associated with Ψ have certain properties, namely if h1 is a constant function to TRUE and h2 is not constant but gives TRUE at w@. Of course they don't express their views this way, but they could.

For the disquotationalist, a contingent proposition Ψ is knowable a priori in virtue of the fact that the truth of a sentence S that expresses Ψ is knowable a priori. If it is knowable a priori that sentence S is true, then (via understanding S) it is knowable a priori that Ψ. For the disquotationalist, f is a function from a proposition Ψ to an ordered pair of functions (h1, h2). h1 is a function from contexts-of-utterance to truth values; it takes a context as argument and assesses whether the metalinguistic proposition expressed in that context by a sentence of the form `S is true' (where `S' expresses Ψ in the actual context) is true. If h1 is a constant function, Ψ is a priori (the contingency bit is straightforward).
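
A toy model of the disquotationalist's h1, reusing the bifocals example; the contexts and inventors are stipulated for illustration, and I treat each context as also fixing a circumstance of evaluation:

```python
# Toy contexts of utterance, each fixing who invented bifocals (made up).
contexts = ["c1", "c2", "c3"]
inventor_in = {"c1": "Franklin", "c2": "Newton", "c3": "Hooke"}

# S = `The inventor of bifocals is the actual inventor of bifocals'.
def proposition_expressed_by_S(c):
    """What S expresses as uttered in c: 'actual' rigidly picks out the inventor in c."""
    rigid = inventor_in[c]
    return lambda world: inventor_in[world] == rigid

def h1(c):
    """`S is true' assessed at c: does what S expresses in c hold in c?"""
    return proposition_expressed_by_S(c)(c)

# h1 is constant-TRUE across contexts, which is the disquotationalist's route
# to the apriority of the contingent proposition S actually expresses.
assert all(h1(c) for c in contexts)
```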

For the exportationalist, a contingent proposition Ψ is knowable a priori in virtue of the fact that a necessary proposition φ (knowledge of which exports to knowledge of Ψ) is knowable a priori. If it is knowable a priori that φ, then (via exportation) it is knowable a priori that Ψ. For the exportationalist, f is a function from a proposition Ψ to an ordered pair of functions (h1, h2). h1 is a function from worlds to truth values; it takes a world as argument and assesses whether φ, the de-actualized cousin of Ψ, is true at that world (i.e. if Ψ looks like this [R(t,@t)], then φ looks like this [R(t,t)]). If h1 is a constant function, Ψ is a priori (the contingency bit is again straightforward).
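
And a parallel toy sketch of the exportationalist's h1, with the same made-up worlds; here Ψ has the shape R(t,@t) and its de-actualized cousin φ the shape R(t,t):

```python
# Toy worlds and inventors (made up for illustration).
worlds = ["w1", "w2", "w3"]
actual = "w1"
inventor_in = {"w1": "Franklin", "w2": "Newton", "w3": "Hooke"}

# Psi, shaped R(t, @t): the inventor of bifocals = the ACTUAL inventor of bifocals.
def psi(w):
    return inventor_in[w] == inventor_in[actual]

# phi, the de-actualized cousin, shaped R(t, t): the inventor = the inventor.
def phi(w):
    return inventor_in[w] == inventor_in[w]

def h1(w):
    """Assess phi at w; Psi's apriority rides on h1 being constant-TRUE."""
    return phi(w)

assert all(h1(w) for w in worlds)                          # phi is necessary
assert psi(actual) and not all(psi(w) for w in worlds)     # Psi is true but contingent
```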

If any of these accounts is right, then there is an a priori way of knowing about the actual world that is not a way of knowing the same thing about every possible world. And that is just what is at issue.

2 comments:

Anonymous said...

I found your blog as a result of an ongoing google search for contingency theory, which just seems to lead me to a pile of books on business management and leadership, which wasn't what I intended. You were, of course, the happy exception.

Not that you got me to exactly what I was looking for: the two-sentence philosophical definition of contingency theory, which some hapless customer introduced me to once upon a time as a 16-year-old retail servant, and which changed my whole perspective. Somewhere in the intervening 14 years, I've lost the complete definition, the originating concept, and my personal redefinition. That whole story is the most recent post on my blog, if you are interested. (www.entropy.wordpress.com)

Otherwise, nice blog for reminding me how much I *should* have read and have not yet gotten to.

wo said...

I find the argument against contingent a priori knowledge pretty convincing: if things could be either A or B, then I can't know for certain without looking which of these ways they are. The function analogy tells us that I can know whether A is the case without knowing absolutely everything about the world (without knowing the exact input to the function that is the proposition A). But that doesn't show that there is anything wrong with the claim that I need *some* empirical information.

I think I don't quite understand what is meant by "an a priori way of knowing about the actual world that is not a way of knowing the same thing about every possible world". Suppose I know that a sentence S has a constant diagonal, and therefore that it is true, no matter what the world is like. This gives me a priori knowledge that S is true only if my knowledge that S has a constant diagonal is itself a priori. But if it is, then arguably 'S is true' is also necessary (even though 'S' may be contingent).

The only uncontroversial way I can see for defending a priori knowledge of contingencies is by redefining it so that the argument doesn't apply any more, for instance, by turning it into the claim that there are true instances of the schema "x knows a priori that S and S is contingent".