Wednesday, April 18, 2012

Countable additivity

One of the Kolmogorov axioms of probability says that if A1,A2,... is a countable sequence of disjoint sets, then P(A1∪A2∪...)=P(A1)+P(A2)+.... I once (when writing my Philosophia Christi paper on fine and coarse tuning) thought that while we had intuitive reason to accept unrestricted additivity (where we do not restrict to countably many sets) and we had intuitive reason to accept finite additivity, there was no in-between reason for accepting countable additivity. Since unrestricted additivity is unsupportable (if you pick a random number between 0 and 1, the probability of picking any particular number is zero, and the sum of the uncountably many probabilities of particular numbers will be zero, but the probability of the union of these singleton sets is one), I thought we should go for finite additivity.
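Spelled out in symbols (my own rendering of the familiar counterexample), with P the uniform distribution on [0,1]:

  1 = P([0,1]) = P\Big(\bigcup_{x \in [0,1]} \{x\}\Big) \neq \sum_{x \in [0,1]} P(\{x\}) = \sum_{x \in [0,1]} 0 = 0,

so additivity fails for this uncountable family of disjoint singletons, even though it can hold for every countable family.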

When I thought this, I was wrong, because at the time I didn't know about the phenomenon of non-conglomerability, which countable additivity seems to be needed to rule out. Non-conglomerability is where you have a measure P of probabilities (maybe not technically a probability measure), a set E of events where each event in E has non-zero probability and it is certain that exactly one of the events in E will happen, and an event A such that P(A|B)>x for all B in E but P(A)<x. In such a case, your probability of A is less than x even though you know that, no matter which event in E happens, once you learn which one it is your probability of A will be greater than x. This is pathological.

It is well-known that countable additivity entails conglomerability. I like proving this with a two-step argument. The first step is an easy argument that if the set E of events is countable, then, letting B1,B2,... enumerate E, countable additivity gives P(A)=P((A∩B1)∪(A∩B2)∪...)=P(A∩B1)+P(A∩B2)+..., and so if we have P(A|B)>x for all B in E, then P(A)>x as well.
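In more detail (my own spelling out of the step):

  P(A) = \sum_i P(A \cap B_i) = \sum_i P(A \mid B_i)\, P(B_i) > \sum_i x\, P(B_i) = x,

where the first equality uses countable additivity, \sum_i P(B_i) = 1 because it is certain that exactly one member of E happens, and the inequality is strict because at least one P(B_i) is positive.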

The second step in the proof is that if the members of a set E of disjoint events each have non-zero probabilities, then E has only countably many events in it. This step allows us to rule out non-conglomerability using only countable additivity. This step follows from the following fact about real numbers:

  1. If E is a set and f is a function that assigns to each member of E a non-negative number such that for any finite sequence x1,...,xn of distinct members of E we have f(x1)+...+f(xn)≤1, then f(x) is zero for all but countably many members x of E,
by letting f=P and using the finite additivity of P.
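Here is a sketch of why (1) holds (a standard argument, in my own words). For each natural number n, let E_n be the set of members x of E with f(x) ≥ 1/n. If E_n had more than n members, then summing f over n+1 of them would already exceed 1, contrary to the hypothesis; so each E_n is finite. But

  \{x \in E : f(x) > 0\} = \bigcup_{n=1}^{\infty} E_n, \quad \text{where } E_n = \{x \in E : f(x) \geq 1/n\},

is then a countable union of finite sets, and hence countable.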

If we need countable additivity precisely to rule out non-conglomerability, then we have an explanation of why it only has to be countable additivity. The reason has to do with the property (1) of real numbers, which property in turn follows from the fact that the real numbers are Archimedean—for every pair of positive real numbers x and y, there is a finite natural number n such that nx>y.

In other words, we have countable additivity in the probability calculus precisely because the probability values have a countable-like, i.e., Archimedean, structure. (Another way of seeing the countable structure of the reals: they are the completion of the rationals.)

And if we generalize the values of the probability function to a larger, non-Archimedean field, we will need to require something stronger than countable additivity in order to avoid non-conglomerability.

Tuesday, April 17, 2012

Multilocation

A sufficient condition for multilocation is being wholly present, at the same time, in two or more disjoint locations. This condition is not necessary, however. Two bosons, unlike two fermions, can share exactly the same location. Suppose that tomorrow I will travel back in time, and then very gently touch the shoulder of my blog-posting self, in such a way that a boson from my time-traveling self is co-located with a boson from my blog-posting self. In that case, surely, I am multilocated, but I am not wholly present at two disjoint locations; rather, I am wholly present at two slightly overlapping locations.

Fortunately, by making use of the notion of being present at a location (where I count as present wherever any of me is present) in addition to the notion of being wholly present at a location we can easily account for multilocation:

  • An object x is multilocated at time t if and only if there are two disjoint locations, L1 and L2, such that at t, x is wholly present in L1 and present in L2.

Monday, April 16, 2012

Hope and afterlife

The following argument is valid, and is sound if we take the conditional in (2) to be material.

  1. (Premise) In despairing, one engages in a vice.
  2. (Premise) If there is no afterlife, it is sometimes appropriate to despair.
  3. (Premise) It is never appropriate to engage in a vice.
  4. So, there is an afterlife.

Let me say a little about (2). Despair is appropriate in situations of objective hopelessness. And if there is no afterlife, then when one has misspent one's life in wickedness, and is now facing death with no opportunity to make things up to those whom one has mistreated, despair is appropriate.

If there is an afterlife, then one can hope for mercy or justice.

Saturday, April 14, 2012

Internal space

As David Lewis taught us, time travel calls for a notion of internal time. If I am about to travel to the time of the dinosaurs, then maybe in an hour I will meet a dinosaur. But that's an internal time hour. If I am going to spend the rest of my life in the Mesozoic, then—assuming nothing kills me—I will grow old before I am born, but this "before" is tied to external time, since of course in internal time, I grow old after being born.

Perhaps ordinary travel calls for a notion of internal space. Let's say today I am in room 304 of the hospital, and yesterday I was in room 200. The doctor comes and asks: "Does it still hurt in the same place as it did yesterday?" I tell her: "No, because yesterday it hurt in room 200, and today it hurts in room 304." But that's external place, and the doctor was asking about internal place.

Internal place is moved relative to external place while the body as a whole is locomoting. But it can also be moved when only parts of the body are moving. If my hands are hurting, and I clasp my hands together, I thereby make the internal places where it hurts very close externally, but they are still as distant internally as they would be were I to hold my arms wide. If, on the other hand, my two hands grew together into a new super-hand, the two places would come to be close together internally as well.

I wonder: If I grow, does my head come to be internally further from my feet? I think so: There are more cells in between, for instance.

Rob Koons has suggested to me that the notion of internal place can help with Brentano's notion of "coincident boundaries": Suppose we have two perfect cubes, with the red one on top of the green one. Then it seems that the red cube's bottom boundary is in the same place as the green cube's top boundary. (Sextus Empiricus used basically this as an argument against rigid objects.) Question: But how can there be two boundaries in the same place? Answer: There are two internal places in one external place here.

Friday, April 13, 2012

My unkillable Treo 700P

I bought my Treo 700P phone second-hand on ebay in 2008, and it's served me faithfully.  I've written enough apps for it so it works very much like I want it to work.  I only really wish it had a better web browser, but it's good enough to check my email on.

At one point, maybe a year or two into its service to me, I had to use an app to turn on the microphone and speakers to make phone calls, but this was fixed when I blew out the headphone jack--I think it was stuck in thinking that there were headphones attached. Two or three times, I've had keys become less reliable, but that's an easy fix--I just disassemble the phone, peel back the keyboard, and clean the contacts (acetone works well).

Yesterday, I thought it had finally kicked the bucket.  We had an on-campus Fiesta event, and there was a small pool of bubble solution, and my son and I were making bubbles, and the Treo slid from my shirt pocket into the bubble solution.  I checked that it didn't work, removed the battery, disassembled and dried it at home, and it still didn't work.

I then spent several hours looking at what Android options Sprint had for me.  I wanted a large screen (4.3" is really the smallest I'd want, at least in wide-screen format) and a hardware keyboard (on-screen keyboards aren't very good for typing serious technical emails, especially if you need to use braces and the like--apparently a lot of people don't use them much).  Alas, nothing met my desiderata.  The Galaxy S II had an OK sized screen (4.5") but no keyboard, and the Galaxy S had a keyboard but the screen is a little too small (4").  Granted, my Treo's screen is much smaller, and its keyboard symbol support isn't great (but I wrote an app that helps with that), but if I am going to upgrade, I'd like to upgrade to something that will satisfy me, rather than make me wish for something else.

I was planning to drive to the Sprint store and get a Galaxy S this morning, when I did the last check of my Treo and found that after drying out more fully overnight, it's now back to good working order.  I wonder how many more months or years it'll last me.

Update: It's finally dead--see comments.

Thursday, April 12, 2012

Voyeurism and lustful fantasies

Consider the following three activities, all done for a sexual end and without the consent of the other parties:

  1. Wearing special "x-ray" goggles that show one what other people look like under their clothes
  2. Wearing special computerized goggles that quite accurately extrapolate from the visible features of other people and from visual data about how their clothes lie on them, using a large database of body types, and show what other people very likely look like under their clothes
  3. Walking around and using the visible features of other people and visual data about how their clothes lie on them to imagine what other people look like under their clothes.
Now, (1) is a clear case of voyeurism, a violation of sexual privacy, and hence wrong. But is (2) really significantly morally different from (1)? We can imagine a continuum of more and more accurate portrayals between them. And (3) is basically (2) done with an inferior instrument. Hence, it is wrong as well.

The argument doesn't apply to every case of lustful fantasy—I think there are other arguments, like this one—but I think it captures some of why many cases of sexual fantasies are wrong and creepy, indeed are a kind of non-consensual sexual relation.

Wednesday, April 11, 2012

Zeno's arrow, Newtonian mechanics and velocity

Start with Zeno's paradox of the arrow. Zeno notes that over every instant of time t0, an arrow occupies one and the same spatial location. But an object that occupies one and the same spatial location over a time is not moving at that time. (One might want to refine this to handle a spinning sphere, but that's an exercise for the reader.) So the arrow is not moving at t0. But the same argument applies to every time, so the arrow is not moving, indeed cannot move.

Here's a way to, ahem, sharpen The Arrow. Suppose in our world we have an arrow moving at t0. Imagine a world w* where the arrow comes into existence at time t0, in exactly the same state as it actually has at t0, and ceases to exist right after t0. At w* the arrow only ever occupies one position—the one it has at t0. Something that only ever occupies one position never moves (subject to refinements about spinning spheres and the like). So at w* the arrow never moves, and in particular doesn't move at t0. But in the actual world, the arrow is in the same state at t0 as it is at w* at that time. So in the actual world, the arrow doesn't move at t0.

A pretty standard response to The Arrow is that movement is not a function of how an object is at any particular time; it is a function of how, and more precisely where, an object is at multiple times. The velocity of an object at t0 is the limit of (x(t0+h)−x(t0))/h as h goes to zero, where x(t) is the position at t, and hence the velocity at t0 depends both on x(t0) and on x(t0+h) for small h.

Now consider a problem involving Newtonian mechanics. Suppose, contrary to fact, that Newtonian physics is correct.

Then how an object will behave at times t>t0 depends both on the object's position at t0 and on the object's velocity at t0. This is basically because of inertia. The forces give rise to a change in velocity, i.e., to the acceleration, rather than directly to a change in position: F(t)=m·dv(t)/dt.

Now here is the puzzle. Start with this plausible thought about how the past affects the future: it does so by means of the present as an intermediary. The Cold War continues to affect geopolitics tomorrow. How? Not by reaching out from the past across a temporal gap, but simply by means of our present memories of the Cold War and the present effects of it. This is a version of the Markov property: how a process will behave in the future depends solely on how it is now. Thus, it seems:

  1. What happens at times after t0 depends on what happens at time t0, and only depends on what happens at times prior to t0 by the mediation of what happens at time t0.
But on Newtonian mechanics, how an object will move after time t0 depends on its velocity at t0. This velocity is defined in terms of where the object is at t0 and where it is at times close to t0. An initial problem is that it also depends on where the object is at times later than t0. This problem can be removed. We can define the velocity here solely in terms of times no later than t0, as limh→0−(x(t0+h)−x(t0))/h, i.e., where we take the limit only over negative values of h.[note 1] But it still remains the case that the velocity at t0 is defined in terms of where the object is at times prior to t0, and so how the object will behave at times after t0 depends on what happens at times prior to t0 and not just on what happens at t0, contrary to (1).
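Here is a minimal numerical sketch of the point (my own illustration; the trajectory x(t)=t² is just a stand-in for the arrow's position): the left-sided velocity at t0 is computable from positions at times no later than t0.

  # A toy trajectory standing in for the arrow's position; any function of t would do.
  def x(t):
      return t ** 2

  # Backward-difference estimate of the velocity at t0, using only times t <= t0.
  def left_velocity(t0, h=1e-6):
      # h > 0, so the two sample times, t0 and t0 - h, are both no later than t0.
      return (x(t0) - x(t0 - h)) / h

  print(left_velocity(3.0))  # approximately 6.0, the left-sided derivative of t**2 at t0 = 3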

Here's another way to put the puzzle. Imagine that God creates a Newtonian world that starts at t0. Then in order that the mechanics of the world get off the ground, the objects in the world must have a velocity at t0. But any velocity they have at t0 could only depend on how the world is after t0, and that just won't do.

Here is a potential move. Take both position and velocity to be fundamental quantities. Then how an object behaves after time t0 depends on the object's fundamental properties at t0, including its velocity then. The fact that v(t0)=limh→0(x(t0+h)−x(t0))/h, at least at times t0 not on the boundary of the time sequence, now becomes a law of nature rather than definitional.

But this reneges on our solution to The Arrow. The point of that solution was that velocity is not just a matter of how an object is at one time. Here's one way to make the problematic nature of the present suggestion vivid, along the lines of my Sharpened Arrow. Suppose that the arrow is moving at t0 with non-zero velocity. Imagine a world w* that is just like ours at t0 but has no times other than t0.[note 2] Then the arrow has a non-zero velocity at t0 at w*, even though it is always at exactly the same position. And that sure seems absurd.

The more physically informed reader may have been tempted to scoff a bit as I talked of velocity as fundamental. Of course, there is a standard move in the close vicinity of the one I made, and that is not to take velocity as fundamental, but to take momentum as fundamental. If we make that move, then we can take it to be a matter of physical law that m·limh→0(x(t0+h)−x(t0))/h=p(t0), where p(t) is the momentum at t.

We still need to embrace the conclusion that an object could fail to ever move and yet have a momentum (the conclusion comes from arguments like the Sharpened Arrow). But perhaps this conclusion only seems absurd to us non-physicists because we were early on in our education told that momentum is mass times velocity as if that were a definition. But that is definitely not a definition in quantum mechanics. On the suggestion that in Newtonian mechanics we take momentum as fundamental, a suggestion that some formalisms accept, we really should take the fact that momentum is the product of mass and velocity (where velocity is defined in terms of position) to be a law of nature, or a consequence of a law of nature, rather than a definitional truth.
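For comparison (a textbook point, not something the argument above depends on), in the Hamiltonian formalism position and momentum are treated as independent fundamental variables, and for H(x,p) = p²/2m + V(x) Hamilton's equations

  \dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad \dot{p} = -\frac{\partial H}{\partial x} = -V'(x)

deliver p = m(dx/dt) as a consequence of the dynamical law rather than by definition.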

Still, the down-side of this way of proceeding is that we had to multiply fundamental quantities—instead of just position being fundamental, now position and momentum are—and add a new law of nature, namely that momentum is the product of mass and velocity (i.e., of mass and the rate of change of position).

I think something is to be said for a different solution, and that is to reject (1). Then momentum can be a defined quantity—the product of mass and velocity. Granted, the dynamics now has non-Markovian cross-time dependencies. But that's fine. (I have a feeling that this move is a little more friendly to eternalism than to presentism.) If we take this route, then we have another reason to embrace Norton's conclusion that Newtonian mechanics is not always deterministic. For if a Newtonian world had a beginning time t0, as in the example involving God creating a Newtonian world, then how the world is at and prior to t0 will not determine how the world will behave at later times. God would have to bring about the initial movements of the objects, and not just the initial state as such.

Of course, this may all kind of seem to be a silly exercise, since Newtonian physics is false. But it is interesting to think what it would be like if Newtonian physics were true. Moreover, if there are possible worlds where Newtonian physics is true, the above line of thought might be thought to give one some reason to think that (1) is not a necessary truth, and hence give one some reason to think that there could be causation across temporal gaps, which is an interesting and substantive conclusion. Furthermore, the above line of thought also shows how even without thinking about formalisms like Hamiltonian mechanics one might be motivated to take momentum to be a fundamental quantity.

And so Zeno's Arrow continues to be interesting.

Tuesday, April 10, 2012

Top-down and bottom-up syntax

There are two fundamentally different approaches to syntax. One way starts at the bottom, with fundamental building blocks like names, variables and predicates, and thinks of a sentence as built up out of these by applying various operators. Thus, we get "The cat is on the mat and the dog is beside the mat" from elements like "the cat", "is on", "the mat", "the dog" and "is beside", by using operators like conjunction and binary-predication:

  1. "The cat is on the mat and the dog is beside the mat" = conjunction(binary-predication("is on", "the cat", "the mat"), binary-predication("is beside", "the cat", "the mat")).
We can then parse the sentence back down into the elements it came from by inverting the operators (and if the operators are many-to-one there will be parsing ambiguity).

The other approach starts at the top with a sentence (or, more generally, well-formed formula) and then parses it by using parsing relations like conjoins (e.g., "p and q" conjoins "p" and "q") or binarily-applies (e.g., "the cat is on the mat" binarily-applies "is on" to "the cat" and "the mat").
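Here is a toy sketch of the contrast (my own illustration; the operator and relation implementations are just placeholders):

  # Bottom-up: sentences are built by applying operators to smaller pieces.
  def conjunction(left, right):
      return left + " and " + right

  def binary_predication(pred, subj, obj):
      return subj + " " + pred + " " + obj

  s = conjunction(
      binary_predication("is on", "the cat", "the mat"),
      binary_predication("is beside", "the dog", "the mat"),
  )

  # Top-down: we start with a finished sentence and ask which parsing relations hold
  # of it.  "conjoins" is a relation between a sentence and its conjuncts, not the
  # inverse of any single constructor: "and" and "but" both conjoin.
  def conjoins(sentence, left, right):
      return sentence in (left + " and " + right, left + " but " + right)

  print(s)                              # The cat is on the mat and the dog is beside the mat
  print(conjoins("p and q", "p", "q"))  # True
  print(conjoins("p but q", "p", "q"))  # True: different surface form, same relation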

There are four reasons I know of for preferring the top-down approach.

A. The possibility of multiple ways of expressing the same structure. For instance, "p and q" conjoins "p" and "q", but it's not the only way of conjoining these: "p but q" also conjoins "p" and "q". The bottom-up approach can handle this by having multiple conjunction operators like conjoin-with-and, conjoin-with-but and conjoin-with-and-also, but then we need to introduce a higher order property of these operators that says that they are conjunctions. Moreover, we should not suppose separate operators in cases where the meaning is the same, and sometimes the meaning will be exactly the same.

B. Partial sense. There is no way of forming the sentence

  2. 2+2=5 and the borogove is mimsy
in the bottom-up approach, because "borogove" is not a noun of English and "is mimsy" is not a predicate of English: there is nothing to plug into a unary-predication operator to form the second conjunct. But on the top-down approach, we can do a first step of parsing the sentence: (2) conjoins "2+2=5" and "the borogove is mimsy". And we know that one conjunct is false, so we conclude that (2) isn't true before we even start asking whether the second conjunct makes sense.

C. Ungrammatical sentences. The bottom-up approach has no way of making sense of ungrammatical sentences like a non-native speaker's

  3. Jane love Bob.
For there is no predicate F such that the sentence is equal to binary-predication(F, "Jane", "Bob"), so there is no way of parsing it. The top-down approach, by contrast, is not committed to all sentences coming from the application of specified predicates: it can say that (3) binarily-applies "loves" to "Jane" and "Bob", school-marmish opinions to the contrary notwithstanding. The bottom-up approach can handle ungrammatical sentences in two different ways. One way is to suppose that any particular ungrammatical sentence is in fact a mistaken version of a grammatically correct sentence. Maybe that's true for (3), but I doubt that this is tenable for the full range of understandable but grammatically incorrect sentences. The second is to include a range of ungrammatical operators, such as binary-predication-dropping-suffix-s. This is not satisfactory—there are too many such.

D. Extensibility. It's an oversimplification to think that a sentence that applies a predicate is formed simply out of the predicate and its arguments by means of a predication operator. There are other elements that need to be packaged up into the sentence, such as emphasis, degree of confidence, connotation, etc. These may be conveyed by tone of voice, context or choice of "synonym". One could handle this in two ways on the bottom-up view. One way is to add additional argument slots to the predication operators, slots for emphasis type, confidence, connotation, etc. This is messy, because as we discover new features of our language, we will have to keep on revising the arity of these operators. The second approach is to suppose that a sentence is formed by applying additional operators, such as an emphasis operator or a confidence operator, after applying, say, the last predication operator. Thus, a particular instance of "Socrates is wise" might be the result of:

  4. confidence(emphasis(predication("Socrates", "is wise"), 3.4), .98).
But now we can't take the resulting sentence and directly parse it into subject and predicate by simply inverting the predication operator. We first have to invert the confidence operator, and then we have to invert the emphasis operator. In other words, parsing requires a large number of other operators to invert. But on the top-down approach, this is easy. For if S is our confidenced and emphasized token of "Socrates is wise", then applies(S, "is wise", "Socrates"). No need to invert several additional operators to say that. If we are interested in the other features of S, however, then we can see what other parsing predicates, such as has-confidence, can be applied to S. But that's optional. Because we are not parsing in principle by inverting compositional operators, we don't need to worry about the other operators when we don't care about that aspect of the communicative content.


There is also a down-side to the top-down approach. Because of point C, we have no way of codifying its parsing predicates like binarily-applies for natural languages. That, I think, is exactly how it should be.

Monday, April 9, 2012

Explanation-despite

It is a standard idea in stochastic explanation that factors that affect the probability of the explanandum enter into the explanation even when they lower that probability. While there are indeed cases where probability-lowering factors form part of an explanation (a standard case in the literature: hitman A's shooting at the victim can decrease the probability of the victim's death, even though the victim dies from the shot, because had A not shot at the victim, a more accurate hitman, B, would have shot at the victim, and the victim would have been even more likely to die), those cases are the exception rather than the norm.

If a building survives an earthquake, we might cite in our explanation of the building's survival the fact that the building had a resilient steel frame, but the fact that the earthquake was exceptionally strong would not be a part of the explanation. At most we would cite the exceptional strength of the earthquake in a "despite" clause:

The building did not collapse because of the innovative steel frame, despite the exceptional strength of the earthquake.
But the "despite" clause is not part of the explanans. Perhaps it qualifies the explanandum, the question being why the building did not collapse despite an exceptionally strong earthquake. (This paragraph adapted from my "Plans and their accomplishment: A new version of the Principle of Double Effect", Philosophical Studies, forthcoming.)

Sunday, April 8, 2012

Happy Easter

I wish a very happy Easter to all my readers.

Christ is risen! Indeed he is risen!

Saturday, April 7, 2012

The improbable and the impossible

This discussion from Douglas Adams' The Long Dark Tea-Time of the Soul (pp. 165-166) struck me as quite interesting:

[Kate:] "What was the Sherlock Holmes principle? 'Once you have discounted the impossible, then whatever remains, however improbable, must be the truth.'"
"I reject that entirely," said Dirk sharply. "The impossible often has a kind of integrity to it which the merely impossible lacks. How often have you been presented with an apparently rational explanation of something that works in all respects other than one, which is just that it is hopelessly improbable? Your instinct is to say, 'Yes, but he or she simply wouldn't do that.'"
"Well, it happened to me today, in fact," replies Kate.
"Ah, yes," said Dirk, slapping the table and making the glasses jump, "your girl in the wheelchair [the girl was constantly mumbling exact stock prices, with a 24-hour delay]--a perfect example. The idea that she is somehow receiving yesterday's stock market prices out of thin air is merely impossible, and therefore must be the case, because the idea that she is maintaining an immensely complex and laborious hoax of no benefit to herself is hopelessly improbable. The first idea merely supposes that there is something we don't know about, and God knows there are enough of those. The second, however, runs contrary to something fundamental and human which we do know about. ..."

This reminds me very much of the Professor's speech in The Lion, the Witch and the Wardrobe:

Either your sister is telling lies, or she is mad, or she is telling the truth. You know she doesn't tell lies and it is obvious that she is not mad. For the moment and unless any further evidence turns up, we must assume that she is telling the truth.

Both Dirk Gently and the Professor think that we need to have significantly greater confidence in what we know about other people's character than in our scientific knowledge of how the non-human world works. This seems to me to be just right. Our scientific knowledge of the world almost entirely depends on trusting others.

So, both C. S. Lewis and Douglas Adams are defending faith in Christ, though of course Adams presumably unintentionally. :-)

Thursday, April 5, 2012

"John and John"

I just sent out an email to two philosophers whose first name was "John" and the email's first line said "Dear John and John". After I sent the email, I wondered to myself: Is there a fact of the matter as to which token of "John" referred to whom?

Normally, if I write an email to two people, I think about the issue of which order to list their names in, and I typically proceed alphabetically. But in this case, I didn't think about the order of names I was writing down. It is possible that I thought about the one while typing the first "John" and then about the other while typing the second "John". Would that be enough to determine which token refers to whom? Maybe. But I don't know if I did anything like that, and we may suppose I didn't.

Now:

  1. John and John are philosophers.
This is true. But I didn't think of a particular one of the two while typing a particular "John" token. It seems unlikely that there is a fact of the matter as to which "John" refers to whom. But the sentence is, nonetheless, true, and hence meaningful.

Is the sentence ambiguous in its speaker meaning? If so, that's a hyperintensional ambiguity, because necessarily "x and y are Fs" and "y and x are Fs" have the same truth value. I am hesitant to say that (1) is ambiguous in its speaker meaning. (I will leave its lexical meaning alone, not to complicate things.)

Suppose that there is no ambiguity in speaker meaning, or at least none arising from the issue of which token refers to whom (maybe "philosopher" is ambiguous). Then this rather complicates compositional semantics on which the content of a whole arises from the content of the parts. For if either token of "John" in (1) has a content, the other token has the same content, since they are on a par. But if the content is the same, we're not going to get out of this a sentence that means the same thing as (1) does. Suppose, for instance, the content of each token of "John" is the same as that of "x or y", where "x" and "y" are unambiguous names for the two philosophers. Then we would have to say that (1) is equivalent to:

  2. (x or y) and (x or y) are philosophers,
but in fact (1) and (2) are not equivalent—all that (2) needs for its truth is that one of x and y be a philosopher.

Maybe the solution is this. Neither "John" in (1) refers. But "John and John" is the name of a plurality. I think not, though. Here's why. Suppose instead I said: "John and the most productive member of my Department and John are all philosophers." Well, "John and the most productive member of my Department and John" is not a name, as it does not refer rigidly.

I am just a dilettante on semantics, and it would not surprise me if this was exhaustively discussed in the literature.

Wednesday, April 4, 2012

"Wholly present"

Here's something I've been thinking about. I want to start with the technical notion of being located at a region. This notion allows for partial location. If I have one leg in Arkansas and one in Texas, then I count as located in Arkansas and located in Texas. If regions have points, then I am located at a region if and only if I am located at some point in the region (maybe that's a more primitive notion).

I'd like to move from the notion of being located at a region to the notion of being wholly present in a region. I am now wholly located in Texas, but if I had a leg in Arkansas and a leg in Texas, I would be wholly located in neither.

I could take the notion of being wholly present as primitive. But I don't want to do that, because it's a three-dimensional notion, while I think I am a four-dimensional entity, so to me it's a derivative notion.

One obvious thing to say is:

  1. x is wholly present in A iff x is located at A and not located outside A.
This rules out multilocation—being wholly present at two distinct regions—by fiat. In so doing, it rules out both my view of the doctrine of transsubstantiation (since on my view, it is literally true that Christ is wholly spatially present in different places[note 1]) and the possibility of backwards time travel (since if you can travel back in time, you could be wholly present in two places, and shake hands with yourself).

A plausible move is to introduce parts or, more generally, features (the blueness of my eyes is a feature but not a part) and their locations (I stipulate that every part, proper or not, is a feature). Maybe, then:

  2. x is wholly present in A iff every feature of x is located at A.
While this works for transsubstantiation, it doesn't work for time travel. For suppose that tomorrow I lose a leg, and I travel back to today, so that I am in another room in addition to this one. Then it is false that every part of me is located in that other room, since the lost leg isn't there.

There is another interesting problem with (2) and time travel. Suppose that in ten seconds I travel back to the present, so that I am wholly present in two disconnected rooms, and suppose that in the ten seconds I have neither lost nor gained any features. Let AL be the left half of the space occupied by me in one of the rooms and BR be the right half (including the cut line—don't put the cut line in AL) of the space occupied by me in the other room. Let C be the disconnected region that is the union of AL and BR. Then by (2) I am wholly present at C as every one of my features is in either AL or in BR or in both. But surely I am not wholly located in the messy region C.

At this point things get difficult. My best solution today is moderately complex (but not as complex as my best solution yesterday). It requires the introduction of two sets of times for a persisting substance. There are internal times, which correspond to the internal development of the substance, and there are external times, which correspond to what goes on in the external world. Normally, the two are nicely correlated. But time travel discombobulates things. If in one minute I travel 24 hours into the future, then in one internal minute I will be 24 external hours forward. And if in a minute I travel 24 hours into the past, then in one internal minute it will be externally yesterday.

Now, take the case where I am right now in room A in the normal way, but in room B due to having time-traveled back to that room after losing a leg. Let T be the present external time. There are two internal times associated with T. At internal time t1, I am in room A, and at internal time t2, I am in room B. Moreover, at t1, I have two legs, though at t2 I only have one. I guess at the external time T, I have two legs. My being wholly present in B does not require that I have both of my legs in B. It only requires that I have in B all the legs that I have at the internal time t2.

This yields the following pair of definitions:

  3. x is wholly present in A at its internal time t iff every feature that x has at t is located at A at t.
  4. x is wholly present in A at external time T iff there is an internal time t of x such that (a) x's internal time t is externally at T and (b) x is wholly present in A at t.
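In a quasi-formal notation (my own restatement of these two definitions; Has, Loc, and At are placeholder predicates for "has feature F at internal time t", "is located at A at internal time t", and "internal time t falls externally at T"):

  \mathrm{WP}(x, A, t) \leftrightarrow \forall F\, [\mathrm{Has}(x, F, t) \rightarrow \mathrm{Loc}(F, A, t)]
  \mathrm{WP_{ext}}(x, A, T) \leftrightarrow \exists t\, [\mathrm{At}(t, T) \wedge \mathrm{WP}(x, A, t)]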
This gives the right answers with respect to (a) transsubstantiation, (b) time-travel and loss/gain of parts, and (c) time-travel and the union of the AL and BR regions.

The account does, however, have the consequence that if x is an extended simple with all features spread over all of x (so, x is the same color all over, etc.), then it counts as wholly present at every point at which it is located. This consequence is perhaps not so plausible, but I can live with it.

Tuesday, April 3, 2012

An asymmetry between good and evil, and an argument against utilitarianism

Here is an asymmetry between good and evil actions. It is very easy to generate cases of infinitely morally bad actions. You just need to imagine an agent who has the belief that if she raises her right hand, she will cause torment to an infinite number of people. And she raises her right hand in order to do so. But there doesn't seem to be a corresponding easy way to generate infinitely morally good actions. Take the case of an agent who thinks that if she raises her right hand, she will save infinitely many people from misery. Her raising her right hand will be a good action, but it will not be an infinitely morally good action. In fact, it will not be morally better than raising her right hand in a case where she believes that doing so will relieve finitely many from misery.

To make the point clearer, observe that it is a morally great thing to sacrifice one's life to save ten people. But it is a morally more impressive thing to sacrifice one's life to save one person. Compare St Paul's sentiment in Romans 5:7, that it is more impressive to die for an unrighteous than a righteous person.

Chris Tweedt, when I mentioned some of these ideas, noted that they provide an argument against utilitarianism: utilitarianism cannot explain why it would be better to save one life than to save ten lives.

Now of course if the choice is between saving one life and saving ten lives with the sacrifice, then saving ten lives is normally the better action. In fact, if the one life is that of a person among the ten, to save only that one life would normally[note 1] be irrational, and we morally ought not be irrational. But that's because choices should be considered contrastively. Previously, when I said that giving one's life for one is better than giving one's life for ten, I meant that

  1. choosing to save one other's life over saving one's own life
was a better choice than
  2. choosing to save ten others' lives over saving one's own life.
But the present judgment was, instead, that:
  3. choosing to save one other's life over saving one's own life or saving ten others' lives
is normally rationally and morally inferior to
  4. choosing to save ten others' lives over saving one's own life or saving one other's life.
Cases (1) and (2) were comparisons between choices made in different choice situations, while cases (3) and (4) were comparisons between choices made in the same choice situation. The moral value of a choice depends not just on what one is choosing but on what one is choosing over (this is obvious).

But even after taking this into account, it's hard to see how a utilitarian can make sense of the judgment that (1) is morally superior to (2). In fact, from the utilitarian's point of view, if everything relevant is equal, (1) is morally neutral—it makes no net difference—while (2) is morally positive.

Perhaps, though, we need a distinction between moral impressiveness and moral goodness? So maybe (1) is more morally impressive than (2), but (2) is still morally better. This distinction would be analogous to that between moral repugnance and moral badness. Pulling wings off flies for fun is perhaps more morally repugnant than killing someone in a fair fight to defend one's reputation, but the latter is a morally worse act.

But I do not think the difference between (1) and (2) is just that of moral impressiveness. Here's one way to see this. It is plausible that as one increases the number of people whose lives are saved, or starts to include among them people one has special duties of care towards, one will reach the point where the sacrifice of one's life is morally obligatory. But to sacrifice one's life to save one stranger is morally supererogatory. And while I don't want to say that every supererogatory action is better than every obligatory action, this seems to be a case where the supererogatory action is better than the obligatory one.

On reflection, however, it is quite possible to increase the moral value of a good act. Just imagine, for instance, that you believe that you will suffer forever unless you murder an innocent person. Then refraining from the immoral action will be infinitely good (or just infinitely impressive?). So we can increase the badness of an action apparently without bound by making the intended result worse, and we can increase the goodness of an action by making the expected cost to self worse (as long as one does not by doing so render the action irrational--cases need to be chosen carefully).

Monday, April 2, 2012

Goedelian ontological argument in negative form

Assume:

  1. Necessarily, if a property B is limiting, so is any property A that entails B.
  2. Necessarily, if a property B is limiting, its negation is not limiting.
  3. Possibly lacking existence is limiting.
  4. Possibly lacking omniscience is limiting.
  5. Possibly lacking omnipotence is limiting.
  6. Possibly lacking perfect goodness is limiting.
  7. Possibly not being creator of everything else is limiting.
  8. It is not possible that x is a creator of y while y is a creator of x.

In a forthcoming paper, I prove using S5 that (1)-(8) entail:

  9. There exists a necessary being that is essentially omniscient, omnipotent, perfectly good and creator of everything else. This being has every property that it would be limiting to possibly-lack.