wo's weblog

Musings in analytical philosophy

Wednesday, 21 July 2004

Is Selecting By Salience Rational?

philosophy

Suppose you and I both face a choice between several different options. Say, we both have to pick a ball out of a bag of 100 balls. We win a prize if we make the same choice. But we have no means to communicate. Moreover, our only relevant interest is to win the prize, otherwise we are completely indifferent about the options.

If one of the options is somehow salient, say one ball is red and all the others white, most people will choose that one. And wisely so, as many people following this strategy win the prize, whereas hardly anyone picking a white ball does. However, is this a rational decision among perfectly rational agents who know of each other's rationality and preferences? (I also assume that the agents know that they make exactly the same judgements about salience.)

On the one hand, as a perfectly rational agent, you should make your choice depend only on what you expect me to choose. Since by assumption you have no interest in red as opposed to white balls, or in salient as opposed to non-salient options, these features should not affect your choice at all. So it can only be rational for you to choose the salient option if you have reason to expect me to choose that option. Without such a reason, you should be completely indifferent. But you know that I am just as rational as you are, and that I have just the same preferences. So you know that I will choose the salient option only if I have reason to believe that you will choose it, which I have only if I have reason to believe that you have reason to believe that I will choose it, and so on. Nowhere in this chain of considerations will any of us find a reason to believe that the other has reason to believe that (etc.) any of us will choose the salient option. So we should be completely indifferent.

On the other hand, we both know that if a) we both choose the most salient option, we'll surely win; whereas if b) at least one of us chooses at random, our chance of winning is quite small (.01 in the 100 balls case), no matter what the other does. Assume for the moment (a) and (b) are the only possibilities. Then I should rationally choose the salient option unless I'm dead certain that you choose at random, in which case I should be indifferent. But what could make it absolutely certain for me that you choose at random? Being as rational as I am, you will choose at random only if you have reason to believe -- in fact, are dead certain -- that I myself will choose at random. And you have reason to believe this only if you have reason to believe that I have reason to believe that you will choose at random. And so on. Nowhere in this chain of considerations will any of us find a reason, let alone a decisive reason, to believe that the other has reason to believe that (etc.) any one of us will (certainly) choose at random. So we should both go for the salient option.
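The two payoff claims can be checked with a quick simulation (a sketch in Python; the strategy names and the setup are mine, purely for illustration):

```python
import random

def win_rate(you, me, n_balls=100, trials=100_000):
    """Estimate the probability that the two strategies pick the same ball.

    A strategy maps the number of balls to a chosen index; by convention
    index 0 is the uniquely salient (red) ball.
    """
    wins = sum(you(n_balls) == me(n_balls) for _ in range(trials))
    return wins / trials

salient = lambda n: 0                    # always pick the red ball
uniform = lambda n: random.randrange(n)  # pick a ball at random

print(win_rate(salient, salient))  # 1.0: case (a), a certain win
print(win_rate(salient, uniform))  # roughly 0.01: case (b)
print(win_rate(uniform, uniform))  # also roughly 0.01
```

As the last line illustrates, random choice gives a 1/100 chance of winning no matter what the other player does, which is the .01 figure in the text.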

Unfortunately, (a) and (b) aren't the only possibilities. Instead of choosing at random or choosing the most salient option, I could also choose the least salient option (if it exists -- it may share the fate of the least uninteresting number), or the option that I think would most remind Tony Blair of Paris. What favours selecting by salience over selecting by such gerrymandered properties? Should we choose selecting by salience because salience is, er, the most salient property to select by?

Sunday, 18 July 2004

TPG2a

umsu

Last week the RSI got worse, and this week I've spent some more time on the tree prover. Here's the current version. It works in Mozilla and (more slowly) in Opera on Linux, and doesn't work in Konqueror. I don't have any other browsers here, so feedback on how it behaves elsewhere, especially in Safari and Internet Explorer, is welcome.

The prover is generally faster and more stable than the old one. But it still does badly on some formulas, like ¬(¬(∀x(Px→Rx) ↔ ∀y(Qy→Sy)) ∧ (∃zPz ↔ ∃zQz) ∧ ∀x∀y((Px∧Qy) → (Rx↔Sy))). There are some improvements (e.g. merging) under the hood that would improve the performance, but are currently turned off because they make it very hard to translate the resulting free-variable tableau into a sentence tableau. My plan is to turn these features on automatically when a proof search takes too long, and not to display a tree in that case. I'm also thinking about trying to find simpler proofs after a first proof has been found: the tableau for the above formula doesn't look like it's the smallest possible proof.

To improve the detection of invalid formulas, I've added a very simple countermodel finder. What it does is simply check all possible interpretations of the root formula on the sets { 0 }, { 0,1 }, etc. This works surprisingly well since many interesting invalid formulas have a countermodel with a very small domain. The countermodels are currently not displayed, but that will change soon.
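In outline, the brute-force search might look like this (a toy Python reconstruction covering unary predicates only, not the prover's actual code; all names are made up):

```python
from itertools import product

def subsets(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(x for x, keep in zip(xs, mask) if keep)
            for mask in product((False, True), repeat=len(xs))]

def countermodel(formula, predicates, max_size=4):
    """Look for a countermodel by checking every interpretation of the
    given unary predicates on the domains {0}, {0,1}, ... up to max_size.

    `formula` is a function from (domain, interpretation) to a truth
    value; an interpretation maps each predicate name to its extension.
    """
    for n in range(1, max_size + 1):
        domain = list(range(n))
        for exts in product(subsets(domain), repeat=len(predicates)):
            interp = dict(zip(predicates, exts))
            if not formula(domain, interp):
                return domain, interp
    return None  # no countermodel with at most max_size elements

# Example: ∀x(Px→Qx) → ∀x(Qx→Px) is invalid, and a one-element
# domain already suffices to show it.
def f(domain, i):
    prem = all(x not in i['P'] or x in i['Q'] for x in domain)
    conc = all(x not in i['Q'] or x in i['P'] for x in domain)
    return (not prem) or conc

print(countermodel(f, ['P', 'Q']))  # domain [0], P empty, Q = {0}
```

The example shows why the method works so well in practice: the search space explodes with domain size, but many interesting invalid formulas are refuted already on one- or two-element domains.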

Tuesday, 06 July 2004

Knowing the Meaning

philosophy

On one of our many conceptions of meaning, the meaning of an expression is what you know when you know the meaning of the expression. I don't think this is a particularly useful conception. Besides, it violates some commonplace truths about meaning, like that expressions of different languages can have the same meaning. For suppose the meaning of the German "schwarz" is identical to the meaning of the English "black". Then by the above rule anyone who knows the meaning of "black" should know the meaning of "schwarz", which isn't so.

Now for why I don't find the proposed conception very useful. What do people know when they know the meaning of an expression? First of all, they typically have certain abilities. If you know the meaning of "black", you are able to use that word appropriately. But these abilities probably require propositional knowledge, and anyway they are neither necessary nor sufficient for knowing the meaning.

Perhaps the relevant knowledge is acquaintance knowledge, like the knowledge you have when you know Paris. Knowing the meaning of "bachelor" then is being acquainted with a certain entity, viz. with the meaning of "bachelor". That doesn't seem right. If somebody has the right propositional knowledge, and the right abilities, then she knows the meaning of "bachelor", whether or not she is acquainted with whatever the meaning might be. Knowing the meaning of "bachelor" is more like knowing the Peano Axioms. That doesn't require being acquainted with the axioms, but only knowledge that such-and-such are the Peano Axioms.

What propositional knowledge do you need to have in order to know the meaning of "black"? Note that it doesn't suffice to have knowledge of propositions whose expression contains "black", e.g. that black things are not white. Every German knows that, but many Germans don't know the meaning of "black". The relevant propositions must be propositions involving "'black'", not merely "black": Perhaps you need to know that the things of which "black" is true are not white, and so on -- more simply, you need to know that "black" is true of something iff that something is black. Applying this idea to sentences, we get the familiar idea that knowing the meaning is knowing truth-conditions: You know the meaning of "it's raining" if you know under what conditions "it's raining" is true; i.e. if you know that "it's raining" is true iff it's raining.

So suppose that what you know when you know the meaning of "it's raining" is the proposition that "it's raining" is true iff it's raining. (I think that's too simple in some ways, but it's on the right track.) Then on the initial proposal on which the meaning of "it's raining" is what you know when you know the meaning of "it's raining", it follows that the meaning of "it's raining" is the proposition that "it's raining" is true iff it's raining. That doesn't look like a very useful notion of meaning to me.

I think it's better at this stage to identify sentence meaning with the relevant truth conditions, and similarly predicate meaning with the satisfaction conditions. What is a condition? Something that divides possibilities into those that meet the condition and those that don't. It might be a linguistic description (under a fixed interpretation), or a function from possibilities to truth values, or simply a set of possibilities.
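The last of these options can be made concrete in a toy model (possibilities crudely represented as assignments of truth values to basic facts; every detail here is illustrative, not a theory):

```python
# A possibility, crudely, is an assignment of truth values to basic facts.
possibilities = [{'raining': r, 'cold': c}
                 for r in (True, False) for c in (True, False)]

# A condition divides the possibilities into those that meet it and those
# that don't -- here, simply the set of those that meet it.
raining = frozenset(i for i, w in enumerate(possibilities) if w['raining'])

def met_at(condition, i):
    """A possibility meets a condition iff it is a member of it."""
    return i in condition

print([met_at(raining, i) for i in range(len(possibilities))])
```

On this picture the function-from-possibilities-to-truth-values version and the set version are interchangeable: the function is just the set's characteristic function.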

Some people complain that you don't know a set of possibilities when you know a meaning. True, but that only shows that sets of possibilities are not meanings on the not very useful conception of meaning with which I began. If meanings are sets of possibilities then knowing the meaning of "it's raining" isn't knowing the relevant set, but rather knowing something like that the sentence is true iff one of the elements of the set is actual, that is, iff it's actually raining.

What does it take to know that? Perhaps it takes having certain mental items ('concepts') arranged in a certain way and connected to some mental representation of "it's raining". Or maybe, and similarly, it takes knowing a purely qualitative description in some basic vocabulary that picks out just the situations in which it's raining, and somehow connecting that description with "it's raining". Maybe this is so, but I don't think we make any such assumptions when we say of somebody that she knows that "it's raining" is true (in English) iff it's raining. Rather, what we mean is that she (truly and justifiedly) believes things like that a sincere speaker of English will utter "it's raining" only when it's raining, or at any rate only when she believes that it's raining. This is an ordinary belief with non-semantic content, and people have that belief in the same way in which they have other beliefs, perhaps by means of Mentalese tokens, or perhaps by some other means.

Wednesday, 30 June 2004

Truth at a Fictional World

philosophy

A sentence is true in a fiction iff it is true at certain worlds, say, at the closest worlds where the pretense which the narrator and the audience engage in is not only pretense. But to evaluate whether the sentence is true at a world, do we treat the world as actual or as counterfactual?

It seems that there could easily be stories in which water isn't H2O, and Hesperus isn't Phosphorus. This suggests that the worlds must be treated as actual. However, it isn't clear that these terms ("water" etc.) are sufficiently rigid, and if they aren't, there are also worlds considered as counterfactual where the identities fail. Could there be a story in which the stuff that actually is water isn't the stuff that actually is H2O? I'm not sure.

Another (sort of wacky) argument for evaluating fictional sentences by their A-intension is that engaging in a fiction seems to work like hypothetically conditionalizing on the fictional truths. That is, when we engage in a fiction F, the subjective probability we pretend to assign to a statement S more or less equals our conditional subjective probability of S given the relevant F-facts established by the pretense. Which equals the intuitive subjective probability of the indicative "if [F-facts] then S". But indicative conditionals are often put forward as heuristics to determine A-intensions, rather than C-intensions.

(If one analyses truth in fiction in terms of indicative conditionals and holds that indicative conditionals are truth-functional one gets the funny result that for almost all F and S, "in fiction F, S" is true, though usually unassertable.)

On the other hand, consider a story in which Oswald doesn't kill Kennedy, and in which Kennedy is never even mentioned. Does someone else kill Kennedy in that story? Presumably not.

If fictional worlds were to be considered as actual, the impossibility solution to imaginative resistance would be in trouble. For I'm inclined to believe that false normative statements have a contingent A-intension.

Saturday, 26 June 2004

Implementation of Psychology in Aplysia

philosophy

"Dynamical basis of intentions and expectations in a simple neuronal network" (PNAS subscription required, there's a free abstract):

[R]ecent indirect evidence suggests that intentions and expectations may arise in behavior-generating networks themselves even in primates [...]. In that case, interestingly, the intentions and expectations inferred from behavioral observations are not always identical to the intentions and expectations that are consciously accessible [...]. In this study we have demonstrated how such intentions and expectations arise automatically in the feeding network of Aplysia.

The "intentions and expectations" found are basically this: if you repeatedly present an A-stimulus to one of Aplysia's central pattern generators, and then switch to a B-stimulus, the pattern generator will respond as if it received another A-stimulus. Only after several B-stimuli will it switch to responses adequate for B. In this sense the animal expects to receive further A-stimuli, and intends to produce further A-behaviour. In a similar, slightly stretched, sense one could say that the animal believes that it is in an A-environment (which is an environment containing seaweed). This belief is a certain state of the synapse linking Aplysia's neurons B20 and B8.

Generalized Scrutability

philosophy

I'm back. Here's a question that occurred to me while I was listening to Dave Chalmers's talk on scrutability.

First some background. One might think that for every world w there is a complete description D true at w such that all and only the sentences true at w follow a priori from D: simply let D contain all sentences true at w. Then all sentences true at w will be a priori entailed by D. However, if "true at" is read counterfactually, sometimes sentences false at w will also be so entailed. Consider Twin World where XYZ occupies the water role. "Water doesn't occupy the water role" is true at Twin World. But "water occupies the water role" is a priori, and hence a priori entailed by everything[1]. Thus every complete description of Twin World a priori entails a contradiction (and every sentence whatever).

It's better to read "true at" counteractually: S is counteractually true at w if w is in the primary intension of S; which it is roughly iff S is true given that it turns out that w is the actual world. "Water doesn't occupy the water role" is not counteractually true at Twin World. Moreover, on the counteractual reading, "water" never needs to occur in the vocabulary of D. For the primary extension of "water" at every world is whatever plays the water role at that world. If that is knowable a priori, the true "water" sentences are therefore a priori entailed by the sentences describing the stuff that occupies the water role.

Suppose at some world w nothing occupies the water role: there are no seas or rivers or taps in our surroundings at w, nor is there any stuff suitably causally linked to our use of the term "water". Then the primary extension of "water" at w is empty, and "there is no water" is counteractually true at w.

Dave Chalmers and Frank Jackson have argued that all macrophysical truths are a priori entailed by microphysical and phenomenal truths. The phenomenal truths are crucial since, as I understand the argument, the primary extension of macrophysical terms is largely determined by phenomenal facts.

Now consider a world w without consciousness, where there are no (relevant) phenomenal truths. w might be Zombie World, or a world containing nothing but yellow rubber balls. If the primary extension of macrophysical terms at w is whatever occupies a certain largely phenomenal role at w, all our macrophysical terms will end up empty at w. So "there are no planets, no tables, no trees, etc." will be counteractually true at Zombie World, and "there are no rubber balls" at Rubber Ball World.

My question is: is that so?

I don't have a good argument why it shouldn't be, but it seems odd to me. For one, it doesn't pass the usual heuristics: if it turns out that there are no phenomenal states, will it turn out that there are no planets? I don't think so. More importantly, I would have thought that at least for some macrophysical terms, primary and secondary intension coincide. But if the primary extension of all macrophysical terms is empty at Zombie World this can't be true. For surely there are planets and tables and trees at Zombie World, so the secondary extension of those terms isn't empty there.

(A similar problem arises for terms whose primary intension is largely deferential, as it is on some 'theories' of direct reference: If "Neptune" denotes whatever Leverrier called "Neptune", then "Neptune doesn't exist" will be counteractually true at every world where Leverrier doesn't exist. Similarly, if "elm" denotes whatever the experts call "elm", "elms don't exist" will be counteractually true at every world where there are no experts. However, this problem can easily be avoided by conceding that those terms are not in fact largely deferential.)


[1] Update: Actually, "water occupies the water role" isn't a priori. But "if anything occupies the water role then water occupies the water role" is. So still "water occupies the water role" is a priori entailed by "something occupies the water role" which is true at w.

Tuesday, 15 June 2004

Another Hiatus

philosophy

For the next few days I'll be in Konstanz at the 'Concepts and the A Priori' conference. I'll probably stay a bit longer in southern Germany and Switzerland to visit some friends and relatives and mountains before I return to Berlin in a week or two.

Sunday, 13 June 2004

Causal Roles and Laws of Nature

philosophy

If the individuation of mental states depends at least partly on their causal roles, then it depends on the laws of nature (including possibly psychophysical laws). For if the laws differ between world 1 and world 2, a state with a given intrinsic nature can have causal role R in world 1 but lack R in world 2.

Assume world 1 is our world and world 2 is a world that contains a perfect spatiotemporal duplicate of our galaxy but lots of weird things elsewhere that contradict our laws. So the laws of world 2 are not the laws of our world. Then our duplicates in world 2 could have quite different mental states than we do.

But that sounds strange. I would have thought that my mental states do not depend upon what goes on outside the Milky Way. We might also get the externalist problem about self-knowledge: if whether I believe P or Q depends on far away events, how can I know that I believe P rather than Q if I don't know about these far away events?

Monday, 07 June 2004

RATs, PETs, Missed Clues, and Closure

philosophy

Jonathan Schaffer argues (in Analysis 2001) that Relevant Alternatives Theories of knowledge (RATs) such as Lewis's fail because of Missed Clues cases:

Professor A is testing a student, S, on ornithology. Professor A shows S a goldfinch and asks, 'Goldfinch or canary?' Professor A thought this would be an easy first question: goldfinches have black wings while canaries have yellow wings. S sees that the wings are black (this is the clue) but S does not appreciate that black wings indicate a goldfinch (S misses the clue). So S answers, 'I don't know'.

We want to say that S doesn't know that the bird is a goldfinch. Yet it seems that S's evidence rules out all relevant alternatives. For situations with goldfinch-perceptions but no goldfinches are skeptical scenarios and usually regarded as irrelevant.

Anthony Brueckner (in Analysis 2003) argues that Lewis's theory falls prey to Missed Clues not because it is a RAT but because it is a PET, a Purely Evidentialist Theory on which knowledge is a matter of available evidence rather than justified belief. What the student lacks isn't evidence, but justified belief (based on her evidence).

However, Lewis's theory isn't a PET. On Lewis's theory, S knows that P iff S's evidence is incompatible with all relevant non-P possibilities. Justification and belief enter via the Rule of Belief, which says that if S assigns, or ought to assign, non-negligible credence to a possibility, then that possibility is relevant. (In stating the rule, Lewis speaks of belief rather than non-negligible credence, but his applications of the rule only make sense on the wider reading.)

Call an epistemic situation in which a subject has experiences E normal if the subject believes that she has E and is justified in that belief.

Now at least on a certain, not too far-fetched understanding of justification, the Rule of Belief entails that in any normal situation, if the subject knows that P on Lewis's analysis, then the subject also has a justified, true belief in P.

Proof. Suppose S knows that P on Lewis's analysis but lacks justified true belief. P must be true by the Rule of Actuality, so the remaining possibilities are that either a) S fails to believe P or b) S fails to be justified in believing P.

Case a. S fails to believe P. Then S assigns non-negligible credence to some non-P possibilities. So by the Rule of Belief, these are relevant. Since S's evidence is incompatible with all relevant non-P possibilities (for S knows P), her evidence is therefore incompatible with some possibilities to which she assigns non-negligible credence. Hence S assigns non-negligible credence to possibilities where she has other evidence, i.e. other experiences, than she actually has. The situation is not normal.

Case b. S isn't justified in believing P. Then S ought to assign non-negligible credence to some non-P possibilities. So by the Rule of Belief, these are relevant. Since S's evidence is incompatible with all relevant non-P possibilities (for S knows P), her evidence is therefore incompatible with some possibilities to which she ought to assign non-negligible credence. But if S ought to assign non-negligible credence to possibilities in which she doesn't have the evidence she actually has, she isn't justified in believing that she has that evidence. Again, the situation is not normal.

Now we have a puzzle: Missed Clues cases appear to be very common, so it is hard to believe that they always take place in non-normal situations. Indeed, can't we just stipulate that the student S in Schaffer's example is perfectly aware of her experiences, and justifiedly so? Certainly that wouldn't help her in identifying the bird! But if she doesn't believe that the bird is a goldfinch, it follows by what I've just proven that she also doesn't know that it is a goldfinch. So something must have gone wrong. But what?

Tim Black (in the Australasian Journal of Philosophy 2003) suggests that Schaffer has overlooked a non-skeptical alternative: S does assign non-negligible credence to possibilities where she has the same (goldfinchy) experiences without seeing a goldfinch. But these are not skeptical possibilities in which, say, somebody has painted a canary to look just like a goldfinch. Rather, they are possibilities where ordinary canaries look like actual goldfinches.

That sounds plausible to me. How lucky that it is possible for a canary to look just like a goldfinch! Unfortunately, we can't rely on such luck in all Missed Clues cases.

Consider this variation: S watches a DNA sample being taken from the bird and analysed. She sees the result of the analysis: the bird's entire DNA sequence. Obviously this won't help her to find out whether the bird is a goldfinch. What are the relevant alternatives she gives sufficiently high credence to? They can't be possibilities where ordinary canaries have the DNA that actual goldfinches have, for there are no such possibilities. (I assume that a bird's DNA decides whether it is a goldfinch or not. If you disagree, just replace the DNA evidence by evidence for whatever you think decides that the bird is a goldfinch.) This time, the only possibilities left open by S's evidence really are irrelevant skeptical possibilities where, say, the DNA sequencing went completely wrong.

So this time it seems we can't escape the conclusion that S knows that the bird is a goldfinch. If we still assume the situation is normal it follows that S believes that the bird is a goldfinch. More precisely, it follows that S assigns relatively high credence to that proposition. But didn't we assume the contrary?

No. Consider the worlds that might, for all S believes, be her world. In all of them, the bird she is looking at has such-and-such a DNA sequence. But all birds with such DNA in all possible worlds are goldfinches. So in all the worlds that might, for all S believes, be her world, the bird is a goldfinch.

Still, S doesn't believe that the bird is a goldfinch. The reason is that "S believes that P" is not true iff S assigns relatively high credence to the proposition ordinarily expressed by "P". On a coarse-grained conception of propositions this is obvious: otherwise "S believes that P" would be true whenever "P" ordinarily expresses a necessary proposition, which it clearly need not be. More generally, on a coarse-grained conception of propositions, belief is closed under strict implication, whereas belief ascriptions -- at least on their straight-forward interpretation -- are not.

The same is true for knowledge. On Lewis's account, the content of our knowledge is a set of (centered) worlds. It follows that knowledge is closed under strict implication. But this goes against (the straight-forward interpretation of) our knowledge ascriptions. The student knows that she sees a bird with such-and-such a DNA sequence. That strictly implies that she sees a goldfinch. But she doesn't know that she sees a goldfinch. In the same manner, not everybody knows that Hesperus is Phosphorus, that all ophthalmologists are eye doctors, and that x^n + y^n = z^n has no positive integer solution for n > 2.
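With contents as sets of worlds, the closure point can be made vivid in a few lines (a schematic sketch; the Lewisian clause and the particular world-sets are simplified stand-ins, not Lewis's full theory):

```python
# Contents are sets of (centered) worlds.  P strictly implies Q iff
# every P-world is a Q-world, i.e. P is a subset of Q.
def strictly_implies(p, q):
    return p <= q

# Simplified Lewisian clause: S knows P iff every world left
# uneliminated by S's evidence is a P-world.
def knows(uneliminated, p):
    return uneliminated <= p

# Toy worlds 0..7; the DNA-content is a subset of the goldfinch-content,
# so the former strictly implies the latter.
sees_that_dna = {0, 1}      # worlds where the bird has that DNA sequence
goldfinch = {0, 1, 2, 3}    # worlds where the bird is a goldfinch
evidence = {0, 1}           # worlds uneliminated by S's evidence

print(knows(evidence, sees_that_dna))              # True
print(strictly_implies(sees_that_dna, goldfinch))  # True
print(knows(evidence, goldfinch))                  # True, by closure
```

Closure here is just transitivity of the subset relation: if the uneliminated worlds are all P-worlds and the P-worlds are all Q-worlds, the uneliminated worlds are all Q-worlds. That is why, on this conception, the student's DNA-knowledge carries goldfinch-knowledge with it, contrary to our ascriptions.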

It turns out that what genuine Missed Clues cases -- those that can't be solved by Black's strategy -- really show is that knowledge ascriptions aren't closed under strict implication (on their straight-forward interpretation). Everyone who puts forward a coarse-grained analysis like a RAT is probably well aware of that. What is needed to answer Missed Clues cases is a plausible semantics for knowledge ascriptions.

What could that look like? The simplest idea (roughly Stalnaker's) is that when we attribute knowledge that P, we do not attribute knowledge of the proposition that is ordinarily expressed by "P". Rather, the relevant proposition is some other proposition somehow associated with "P" in the context at hand, namely something like the proposition that would ordinarily be expressed by "'P' is true" (in the given context). The student doesn't know that the bird is a goldfinch because she assigns non-negligible credence to possibilities in which "the bird is a goldfinch" is false. Not because in these possibilities, the bird isn't a goldfinch, but because the word "goldfinch" there denotes some other kind of bird. On this simple account (too simple I think), the evidence S lacks is linguistic evidence about the meaning of "goldfinch".

Not Broken

philosophy

The computer is working again. Now I have to catch up with 200 non-junk mails.