Tuesday, March 20, 2012

Propositions and multiple indexing

Forthcoming in Thought: A Journal of Philosophy:



Abstract. It is argued that propositions cannot be the compositional semantic values of sentences (in context) simply due to issues stemming from the compositional semantics of modal operators (or modal quantifiers). In particular, the fact that the arguments for double indexing generalize to multiple indexing exposes a fundamental tension in the default philosophical conception of semantic theory. This provides further motivation for making a distinction between two sentential semantic contents--what Dummett (1973) called "ingredient sense" and "assertoric content".

Wednesday, January 04, 2012

Monsters in Kaplan's Logic of Demonstratives

This paper started its life as a post a while back and is now officially forthcoming in Philosophical Studies.

Abstract

Kaplan (1989a) insists that natural languages do not contain displacing devices that operate on character—such displacing devices are called monsters. This thesis has recently faced various empirical challenges (e.g., Schlenker 2003; Anand and Nevins 2004). In this note, the thesis is challenged on grounds of a more theoretical nature. It is argued that the standard compositional semantics of variable binding employs monstrous operations. As a dramatic first example, Kaplan’s formal language, the Logic of Demonstratives, is shown to contain monsters. For similar reasons, the orthodox lambda-calculus-based semantics for variable binding is argued to be monstrous. This technical point promises to have far-reaching implications for our understanding of semantic theory and content. The theoretical upshot of the discussion is at least threefold: (i) the Kaplanian thesis that “directly referential” terms are not shiftable/bindable is unmotivated, (ii) since monsters operate on something distinct from the assertoric content of their operands, we must distinguish ingredient sense from assertoric content (cf. Dummett 1973; Evans 1979; Stanley 1997), and (iii) since the case of variable binding provides a paradigm of semantic shift that differs from the other types, it is plausible to think that indexicals—which are standardly treated by means of the assignment function—might undergo the same kind of shift.

Monday, December 12, 2011

variables are not directly referential

Here is an argument that variables are not directly referential:

(1) A term t is directly referential iff the semantic content of t is the designatum of t.

(2) The semantic content of a term t is X iff for every linguistic environment E, the semantic content that t contributes to E(t) is X [i.e. Semantic Innocence].

(3) Thus, a term t is directly referential iff for every linguistic environment E, the semantic content that t contributes to E(t) is the designatum of t.

(4) In a bound environment (e.g. ∀xFx) a variable does not contribute its designatum (since 'Fx' must contribute a set of assignments).

(5) Thus, variables are not directly referential.
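Premise (4) can be made vivid with a toy Tarskian semantics, sketched here in Python with a made-up one-place predicate F over a two-element domain. The point is just the standard one: relative to binding, the compositional value of an open formula like 'Fx' is (in effect) its satisfaction set—a set of assignments—not anything built from a designatum of 'x'.

```python
# Toy Tarskian semantics illustrating premise (4): under binding, the value
# 'Fx' contributes to '∀xFx' is a set of assignments, not x's designatum.
from itertools import product

domain = [1, 2]
F = {1}  # hypothetical extension of the predicate 'F'

def assignments(variables):
    """All assignment functions from the given variables into the domain."""
    return [dict(zip(variables, values))
            for values in product(domain, repeat=len(variables))]

def sat_Fx(g):
    """An assignment g satisfies 'Fx' iff g('x') is in the extension of F."""
    return g['x'] in F

# The compositional value of 'Fx' is its satisfaction set:
value_of_Fx = [g for g in assignments(['x']) if sat_Fx(g)]

# '∀xFx' is true iff every assignment satisfies 'Fx' -- no single
# designatum of 'x' figures in the computation.
forall_x_Fx = all(sat_Fx(g) for g in assignments(['x']))

print(value_of_Fx)   # [{'x': 1}]
print(forall_x_Fx)   # False
```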

I'm interested in how people who advocate direct reference theory maneuver out of this argument. Denying premise (1) or premise (2) is really the only option for resisting the conclusion.

Premise (1) is an attempted definition of "direct reference". So perhaps I got the rough definition wrong. One alternative idea that would avoid the conclusion is this: a term t is "directly referential" iff the designatum of t is "directly" determined by the interpretation or assignment function, in the sense that its designatum is assigned independently of the world and time parameters. But to my ear this is a definition of rigidity de jure---it's a fact about a term's designatum across worlds/times, and about its being guaranteed by law to be independent of worlds/times. Direct reference, however, is supposed to make a further claim about "semantic content". Kaplan defines it thusly: "When what is said in using an indexical in a context c is to be evaluated with respect to an arbitrary circumstance, the relevant object is always the referent of the indexical with respect to the context c." It's a familiar point that "direct reference" simply amounts to rigidity de jure, unless we construe it in terms of Russellian structured contents. But I'm not sure how one might want to tinker with the definition to avoid the conclusion.

In any case, I think the real action is with premise (2). And it seems that many will somehow want to deny it, and thus deny semantic innocence (e.g. see Salmon's "A Theory of Bondage", which develops a Fregean referential shift semantics for variable binding, although he doesn't exactly endorse the occurrence-based semantics he outlines).

But if a term can be "directly referential" even though the value it contributes across all environments (i.e. its compositional semantic value) isn't its designatum, what is the significance of calling the term "directly referential"? For example, the direct reference theory of names is controversial precisely because of its commitments on embedded occurrences, e.g. names embedded in belief reports. (Perhaps this is where the distinction between Millianism and Direct Reference is important.) Can one accept that 'Hesperus' and 'Phosphorus' are directly referential (and thus have the same "semantic content" in some important sense) even though the semantic content of their occurrences in belief contexts are different? It's hard for me to get a grip on "semantic content", if it is detached from the semantic contribution an expression makes to the complex expressions of which it is a constituent.


I'm left wondering what "direct reference" is exactly and what important (and plausible) thesis about language it is committed to. Any ideas?

Saturday, June 04, 2011

I could have been Barack Obama

(1) My parents could have named me "Barack Obama".
(2) If my parents had named me "Barack Obama", I would have been a Barack Obama.
(3) Necessarily, every Barack Obama is identical to someone who is Barack Obama.
(4) Therefore, I could have been Barack Obama.


I imagine people will question premise (3) or question whether (4) follows from (3). But (3) just says that it is necessary that for any x such that Barack Obama(x) there exists a y such that x=y and Barack Obama(y). So...
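Rendering that gloss in the language of quantified modal logic (with 'BarackObama' as a hypothetical predicate for bearing the name), premise (3) reads:

```latex
% Premise (3): necessarily, every Barack Obama is identical to someone
% who is Barack Obama.
\Box\,\forall x\,\bigl(\mathrm{BarackObama}(x) \rightarrow
  \exists y\,(x = y \wedge \mathrm{BarackObama}(y))\bigr)
```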

(3.1) If I had been a Barack Obama, I would have been identical to someone who is Barack Obama.
(3.2) If I had been identical to someone who is Barack Obama, I would have been Barack Obama.
(3.3) So, if I had been a Barack Obama, I would have been Barack Obama.

...I could have been Barack Obama.


I am not, of course, saying that I could have been the current and actual president of the US---I couldn't have been that Barack Obama. But I could have been a different one, if I had been named appropriately. The Barack Obama I could have been would have been Barack Obama the philosopher.

Friday, April 29, 2011

formulating a donkey sentence in first-order logic

Here is a standard donkey sentence:

(1) Every farmer that owns a donkey beats it.

It is often said that such sentences cannot be formulated in the language of first-order logic. There is a formulation in first-order logic that gets the correct truth conditions, but it differs radically from the surface structure of (1)---most significantly, it interprets the indefinite article as a universal quantifier.

(1') ∀x∀y[(farmer(x) & donkey(y) & owns(x,y)) --> beat(x,y)]
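For concreteness, here is a brute-force check of (1') in a hypothetical toy model (the individuals, extensions, and relation names are all made up): two farmers, two donkeys, and an ownership relation, with 'beats' chosen so that every farmer beats every donkey he owns.

```python
# Brute-force evaluation of (1') in a toy model.
from itertools import product

domain = ['f1', 'f2', 'd1', 'd2']
farmer = {'f1', 'f2'}
donkey = {'d1', 'd2'}
owns = {('f1', 'd1'), ('f2', 'd1'), ('f2', 'd2')}
beats = {('f1', 'd1'), ('f2', 'd1'), ('f2', 'd2')}

def formula_1_prime():
    """∀x∀y[(farmer(x) & donkey(y) & owns(x,y)) --> beat(x,y)]"""
    return all(not (x in farmer and y in donkey and (x, y) in owns)
               or (x, y) in beats
               for x, y in product(domain, repeat=2))

result_all_beaten = formula_1_prime()   # every owned donkey is beaten

# If f2 fails to beat d2, (1') comes out false, as it should:
beats.discard(('f2', 'd2'))
result_one_spared = formula_1_prime()

print(result_all_beaten)   # True
print(result_one_spared)   # False
```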

There are numerous novel semantic frameworks that attempt to deal with the problem, e.g. various dynamic semantics. But the following seems to be a reasonable formulation of the donkey sentence in first-order logic.

(1'') ∀x[(farmer(x) & ∃y(y=z & donkey(z) & owns(x,z))) --> beat(x,z)]



What's wrong with this?

Thursday, April 14, 2011

syntax for definitions

How should one write a definition? The mathematician Douglas West (The Grammar According to West) provides the following advice:

Definitions. Words being defined should be distinguished by italics (or perhaps boldface in a textbook context). When italics are used to indicate a word being defined, it is unnecessary to use "called" or "said to be"; the use of italics announces that this is the term being defined and replaces these words.

Many definitions are phrased as "An object has property italicized term if condition holds." We use just "if" even though subsequently it is understood that an object has the property if and only if the defining condition holds. The italicization alerts the reader to this situation. The convention can be justified by saying that the property or object does not actually exist until the definition is complete, so one does not yet in the definition say that the named property implies the condition.

This seems confused to me for a number of reasons---especially the justification for the convention: "the property or object does not actually exist until the definition is complete". What does that mean?
Definition. An object x is a firath if and only if x is a female giraffe.
Obviously, firaths existed long before I defined "firath"! They just were not so-called.

But what I mostly don't like is that the "definition" schema appears to merely give a sufficient condition for falling under the definiendum. The following is not a good definition of "firath".
Definition. An object x is a firath if x is a giraffe.

Does his advice seem reasonable or not? Presumably, a lot of mathematicians adhere to his grammar suggestions.

Monday, March 28, 2011

Negation as an interpretation shifting operator

In propositional logic a formula φ of language L is true (1) or false (0) only relative to an interpretation M. The semantics of the negation symbol ~ is usually given as follows:

  • [[~φ]]^M = 1 iff [[φ]]^M = 0.

But we could do it differently. We could conceive of negation as analogous to a modal operator (or quantifier). On this approach it doesn't shift the world parameter or the assignment of values to individual variables---instead it shifts the interpretation, i.e. the assignment of truth-values to propositional letters.

  • [[~φ]]^M = 1 iff [[φ]]^M* = 1, where M* is just like M except it assigns 1 - M(φ) to φ.

As far as I can tell, that is a perfectly fine semantics for negation in propositional logic.
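Here is a quick Python sanity check of the two clauses. It is only a sketch: I restrict φ to a single propositional letter p, so that "assigns 1 - M(φ) to φ" is straightforwardly well defined (an interpretation is modeled as a dict from letters to truth-values).

```python
# Two semantic clauses for negating a propositional letter p, relative to
# an interpretation M (a dict assigning 0 or 1 to propositional letters).

def neg_standard(p, M):
    """[[~p]]^M = 1 iff [[p]]^M = 0 (the usual truth-functional clause)."""
    return 1 if M[p] == 0 else 0

def shift(p, M):
    """M*: just like M except that it assigns 1 - M(p) to p."""
    M_star = dict(M)
    M_star[p] = 1 - M[p]
    return M_star

def neg_shifting(p, M):
    """[[~p]]^M = 1 iff [[p]]^M* = 1, where M* is the shifted interpretation."""
    return shift(p, M)[p]

# The two clauses agree on every interpretation of p:
agree = all(neg_standard('p', {'p': v}) == neg_shifting('p', {'p': v})
            for v in (0, 1))
print(agree)  # True
```

At least for sentence letters, then, the shifting clause delivers exactly the standard truth conditions.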