Showing posts with label utilitarianism.

Monday, December 16, 2024

Two more counterexamples to utilitarianism

It’s an innocent and pleasant pastime to multiply counterexamples to utilitarianism even if they don’t add much to what others have said. Thus, if utilitarianism is true, I have to do so. :-)

Suppose you capture Hitler. Torturing him to death would appal many but, given fallen human nature, likely significantly please hundreds of millions more. This pleasure to hundreds of millions could far outweigh the pain to one. Moreover, even of those appalled by the torture, primarily only Nazis and a handful of moral saints would actually feel significant displeasure at the torture. For being appalled by an immoral action is not always unpleasant except to someone with saintly compassion—indeed there is a kind of pleasure one takes in being appalled. Normally in the case of counterexamples to utilitarianism one worries about making people more callous, the breakdown of law and order, giving a bad example to others, and so on. But the case of Hitler is so exceptional that likely the negative effects from a utilitarian point of view would be minimal if any.

One might think that an even better thing to do from the utilitarian point of view would be to kill Hitler painlessly, and then mark up his body so it looks like he was tortured to death, and publicly lie about it.

Yet it is wrong to torture even Hitler, and it is wrong to lie that one has done so (especially if only for public pleasure).

Monday, October 21, 2024

Actual result utilitarianism implies a version of total depravity

Assume actual result utilitarianism on which there are facts of the matter about what would transpire given any possible action of mine, and an action is right just in case it has the best consequences.

Here is an interesting conclusion. Do something specific, anything. Maybe wiggle your right thumb a certain way. There are many—perhaps even infinitely many—other things you could have done (e.g., you could have wiggled the thumb slightly differently) instead of that action whose known consequences are no different from the known consequences of what you did. We live in a chaotic world where the butterfly principle very likely holds: even minor events have significant consequences down the road. It is very unlikely that of all the minor variants of what you did, all of which have the same known consequences, the variant you chose has the best overall consequences down the road. Quite likely, the variant action you chose is middle of the road among the variants.

So, typically, whatever we do, we do wrong on actual result utilitarianism.

Thursday, December 8, 2022

Utilitarianism and communication

Alice and Bob are both perfect Bayesian epistemic agents and subjectively perfect utilitarians (i.e., they always do what by their lights maximizes expected utility). Bob is going to Megara. He comes to a crossroads, from which two different paths lead to Megara. On exactly one of these paths there is a man-eating lion and on the other there is nothing special. Alice knows which path has the lion. The above is all shared knowledge for Alice and Bob.

Suppose the lion is on the left path. What should Alice do? Well, if she can, she should bring it about that Bob takes the right path, because doing so would clearly maximize utility. How can she do that? An obvious suggestion: Engage in a conventional behavior indicating where the lion is, such as pointing left and roaring, or saying “Hail well-met traveler, lest you be eaten, I advise you to avoid the leftward leonine path.”

But I’ve been trying really hard to figure out how it is that such a conventional behavior would indicate to Bob that the lion is on the left path.

If Alice were a typical human being, she would have a habit of using established social conventions to tell the truth about things, except perhaps in exceptional cases (such as the murderer at the door), and so her use of the conventional lion-indicating behavior would correlate with the presence of lions, and would provide Bob with evidence of the presence of lions. But Alice is not a typical human being. She is a subjectively perfect utilitarian. Social convention has no normative force for Alice (or Bob, for that matter). Only utility does.

Similarly, if Bob were a typical human being, he would have a habit of forming his beliefs on the basis of testimony interpreted via established social conventions absent reason to think one is being misinformed, and so Alice’s engaging in conventional left-path lion-indicating behavior would lead Bob to think there is a lion on the left, and hence to go on the right. And while it would still be true that social convention has no normative force for Alice, Alice would have reason to think that Bob follows convention, and for the sake of maximizing utility would suit her behavior to his. But Bob is a perfect Bayesian. He doesn’t form beliefs out of habit. He updates on evidence. And given that Alice is not a typical human being, but a subjectively perfect utilitarian, it is unclear to me why her engaging in the conventional left-path lion-indicating behavior is more evidence for the lion being on the left than for the lion being on the right. For Bob knows that convention carries no normative force for Alice.

Here is a brief way to put it. For Alice and Bob, convention carries no weight except as a predictor of the behavior of convention-bound people, i.e., people who are not subjectively perfect utilitarians. It is shared knowledge between Alice and Bob that neither is convention-bound. So convention is irrelevant to the problem at hand, the problem of getting Bob to avoid the lion. But there is no solution to the problem absent convention or some other tool unavailable to the utilitarian (a natural law theorist might claim that mimicry and pointing are natural indicators).

If the above argument is correct—and I am far from confident of that, since it makes my head spin—then we have an argument that in order for communication to be possible, at least one of the agents must be convention-bound. One way to be convention-bound is to think, in a way utilitarians don’t, that convention provides non-consequentialist reasons. Another way is to be an akratic utilitarian, addicted to following convention. Now, the possibility of communication is essential for the utility of the kinds of social animals that we are. Thus we have an argument that at least some subjective utilitarians will have to become convention-bound, either by getting themselves to believe that convention has normative force or by being akratic.

This is not a refutation of utilitarianism. Utilitarians, following Parfit, are willing to admit that there could be utility maximization reasons to cease to be utilitarian. But it is, nonetheless, really interesting if something as fundamental as communication provides such a reason.

I put this as an issue about communication. But maybe it’s really an issue not about communication but about coordination. Maybe the literature on repeated games might help in some way.

Thursday, October 27, 2022

Probabilistic trolleys

Suppose a trolley is heading towards five people, and you can redirect it towards one. But the trolley needs to go up a hill before it can roll down it to hit the five people, and your best estimate of its probability of making it up the hill is 1/4. On the other hand, if you redirect it, it’s a straight path to the one person, who is certain to be killed. Do you redirect? Expected utilities: −1.25 lives for not redirecting and −1 life for redirecting.

Or suppose you are driving a fire truck to a place where five people are about to die in a fire, and you know that you have a 1/4 chance of putting out the fire and saving them if you get there in time. Moreover, there is a person sleeping on the only road to the fire, and if you stop to remove the person from the road, it will be too late for the five. Do you brake? Expected utilities: −5 lives for braking and −1 − 3.75 = −4.75 lives for continuing to the fire and running over the person on the road.
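
A minimal sketch of the arithmetic behind both comparisons (Python; the scale of one unit of disutility per death is just the convention the post is already using):

    # Trolley: it only makes it up the hill (and kills the five) with probability 1/4.
    p_up = 1 / 4
    eu_not_redirect = p_up * (-5)        # -1.25 expected lives
    eu_redirect = -1                     # the one person is certainly killed

    # Fire truck: braking certainly loses the five; continuing certainly kills the
    # sleeper and still loses the five with probability 3/4 (the fire is put out
    # only with probability 1/4).
    eu_brake = -5
    eu_continue = -1 + (3 / 4) * (-5)    # -1 - 3.75 = -4.75

    print(eu_not_redirect, eu_redirect)  # -1.25 -1
    print(eu_brake, eu_continue)         # -5 -4.75

By the bare expected values, redirecting and continuing win; the post's point is that the right verdicts are nonetheless not to redirect and to brake.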

I think you shouldn’t redirect and you should brake. There is something morally obnoxious about certainly causing death for a highly uncertain benefit when the expected values are close. This complicates the proportionality condition in the Principle of Double Effect even more, and provides further evidence against expected-value utilitarianism.

Thursday, July 21, 2022

Mill on injustice

Mill thinks that:

  1. An action is unjust if society has a utility-based reason to punish actions of that type.

  2. An action is wrong if there is a utility-based reason not to perform that action.

Mill writes as if the unjust were a subset of the wrong. But it need not be. Suppose that powerful aliens have a weird religious view on which dyeing one’s hair green ought to be punished with a week in jail, and they announce that any country that refuses to enforce such a punishment as part of the criminal code will be completely annihilated. In that case, according to (1), dyeing one’s hair green is unjust. But it is not guaranteed to be wrong according to (2). The pleasure of having green hair could be greater than the unpleasantness of a week in jail, depending on details about the prison system and one’s aesthetic preferences.

The problem with (1), I think, is that utility-based reasons to punish actions of some type need have little to do with moral reasons, utilitarian or not, against actions of that type.

Thursday, June 9, 2022

The variety of virtue ethical systems

One thinks of virtue ethics as a unified family of ethical systems. But it is interesting to note just how different virtue ethical systems can be depending on how one answers the question of what it is that makes a stable character trait T a virtue. Consider, after all, these very varied possible answers to that question, any one of which could be plugged into a virtue ethical account of rightness as what accords with virtue.

  • having T is partly constitutive of eudaimonia (Aristotelian virtue ethics)

  • having T is required by one’s nature or by the nature of one’s will (natural law virtue ethics)

  • a typical human being is expected to gain utility by having T (egoist virtue ethics)

  • a typical human being is expected to contribute to total utility by having T (utilitarian virtue ethics)

  • it is pleasant to think of oneself as having T (hedonistic virtue ethics)

  • it is pleasant to think of another as having T (Humean sentimentalist virtue ethics)

  • God requires one to have T (divine command virtue ethics).

The resulting ethical systems are all interesting, but fundamentally very different.

Wednesday, September 9, 2020

Minor inconveniences and numerical asymmetries

As a teacher, I have many opportunities to cause minor inconveniences in the lives of my students. And subjectively it often feels like when it’s a choice between a moderate inconvenience to me and a minor inconvenience to my students, there is nothing morally wrong with the minor inconvenience to the students. Think, for example, of making online information easily accessible to students. But this neglects the asymmetry in numbers: there is one of me and many of them. The inconvenience to them needs to be multiplied by the number of students, and that can make a big difference.

I suspect that we didn’t evolve to be sensitive to such numerical asymmetries. Rather, I expect we evolved to be sensitive to more numerically balanced relationships, which may have led to a tendency to just compare the degree of inconvenience, in ways that are quite unfortunate when the asymmetry in numbers becomes very large. If I make an app that is used just once by each of 100,000 people, and my app takes a second longer than it could, then I have wasted about 28 hours of other people’s time, and it would be worth spending several working days to eliminate that delay. (Or imagine—horrors!—that I deliberately put in that delay, say in the form of a splashscreen!) If I give a talk to a hundred people and I spend a minute on an unnecessary digression, it’s rather like the case of a bore talking my ears off for an hour and a half. In fact, I rather like the idea that at the back of rooms where compulsory meetings are held there should be an electronic display calculating for each speaker the total dollar-time-value of the listeners’ time, counting up continuously. (That said, some pleasantries are necessary, in order to show respect, to relax, etc.)

Sadly, I rarely think this way except when I am the victim of the inconvenience. But it seems to me that in an era where more and more of us have numerically asymmetric relationships, sometimes with massive asymmetries introduced by large-scale electronic content distribution, we should think a lot more about this. We should write and talk in ways that don’t waste others’ time in numerically asymmetric situations. We should make our websites easier to navigate and our apps less frustrating. And so on. The strength of the moral reasons may be fairly small when our contributions are uncompensated and others’ participation is voluntary, but rises quite a bit when we are being paid and/or others are in some way compelled to participate.

One of my happy moments when I actually did think somewhat in this way was some years back when, after multiple speeches, I was asked to say a few words of welcome to our prospective graduate students. I stood up, said “Welcome!”, and sat down. I am not criticizing the other speeches. But as for me, I had nothing to add to them but just a welcome from me, so I added nothing but a welcome from me. I should do this sort of thing more often.

Friday, August 28, 2020

The inhumanity problem for morality

When a state legislates, it often carves out very specific exceptions to the legislation. Sometimes, of course, one is worried that the exceptions are a sign that the legislators are pursuing special interests rather than the common good, but sometimes the exceptions are quite reasonable. For instance, you shouldn’t possess child pornography… except, say, if you are involved in the law enforcement process and need it as evidence to get the child pornographers. There is something ugly about carving out exceptions, but the point is to make society work well rather than make the laws elegant. Special-case clauses seem to be unavoidable in practice, given the messiness and complexity of human life. Elegant exceptionless legislation—with some important exceptions!—is apt to be inhuman.

I kind of wonder if an analogous thing might not be true in the case of morality, and for the same reason, the messiness and complexity of human life. Could it be that elegant exceptionless moral laws would necessarily have to be inhuman?

What solutions are available to this problem?

Well, we might just dig in our heels, either optimistically or pessimistically.

The optimistic version says: yes, we have elegant exceptionless moral laws, and they do work well for us. One way of running the optimistic variant is to make the moral laws leave a lot to human positive law. Thus, there are going to be exceptions to any prohibition of theft, but perhaps morality leaves the specification of this to the state. Or perhaps one could be really optimistic and have moral laws that do not leave a lot to positive law, but nonetheless they work. Act utilitarianism could be thought to provide this kind of solution, having a simple rule “Maximize utility!”, but its problem is that this rule is just wrong. Rule utilitarianism provides a nicer solution by having the elegant meta-rule “Do those things that fall under a utility-maximizing rule”, but I think the technical details here are insuperable.

The pessimistic variant says: yes, we have elegant exceptionless moral laws, and we’re stuck with that, even though it doesn’t work that great for us. That might be a better way to take act utilitarianism, but such pessimism is not a very attractive approach.

But what if we don’t want to dig in our heels? One could think that there are just brute (perhaps metaphysically necessary) facts about the moral rules, and many of these brute facts have specific exceptions: “Don’t lie, except to save a life or to prevent torture.” I think bruteness, and especially inelegant bruteness, is a last resort.

One might think that moral particularism is a solution: there are general elegant moral laws, but they all have unspecified exceptions. They say things like: “Don’t torture, other things being equal.” There is still a fact of the matter as to what to do in a particular situation, a fact that a virtuous agent may be able to discern, but these facts cannot be formulated in a general way, because any finite description of the particular situation will leave out factors that could in some other case trump the described considerations. There are exceptionless moral rules on such a view, but they are infinite in length. Unless some story is given as to where these infinite rules come from, this seems like it might be just an even worse version of the brute fact story.

Divine command theory, on the other hand, could provide a very nice solution to the problem, exactly analogous to the legislative solution. If God is the author of moral laws, he can legislate: “Thou shalt not kill, except in cases of types A, B and C.”

Natural law could also provide such a solution, at least given theism: God could select for instantiation a nature that has a complex teleology with various specific exceptions.

Where do I fall? I think I want to hold out for a two-level theistic natural law story. On one level, there is a simple, single and elegant moral rule embedded in our nature: “Love everything!” However, the content of that love is specified in a very complex way by our nature and by the circumstances (love needs to be appropriate to the specifics of the relationships). This specification is embedded in our nature by much more complex rules. And God chose this nature for instantiation because it works so well.

Thursday, September 19, 2019

Cupcakes and trolleys

A trolley is heading towards a person lying on the tracks. Also lying on the tracks is a delicious cupcake. You could redirect the trolley to a second track where there is a different person lying on the tracks, but no cupcake.

Utilitarianism suggests that, as long as you are able to enjoy the cupcake under the circumstances and not feel bad about the whole affair, you have a moral duty to redirect the trolley in order to save the cupcake for yourself. This is morally perverse.

Besides showing that utilitarianism is false, this example shows that the proportionality condition in the Principle of Double Effect cannot simply consist in a simple calculation comparing the goods and bads resulting from the action. For there is something morally disproportionate in choosing who lives and dies for the sake of a cupcake.

Tuesday, September 10, 2019

Ethics and complexity

Here is a picture of ethics. We are designed to operate with a specific algorithm A for generating imperatives from circumstances. Unfortunately, we are broken in two ways: we don’t always follow the generated imperatives and we don’t always operate by means of A. We thus need to reverse engineer algorithm A on the basis of our broken functioning.

In general, reverse-engineering has to be based on a presumption of relative simplicity of the algorithm. However, Kantian, utilitarian, and divine command ethics go beyond that and hold that A is at base very simple. But should we think that the algorithm describing the normative operation of a human being is very simple? The official USA Fencing rule book is over 200 pages long. Human life is more complex than a fencing competition. Why should we think that there are fundamental rules for human life that can be encompassed briefly, from which all other rules can be derived without further normative input? It would be nice to find such brief rules. Many have a hope of finding analogous brief rules in physics.

We haven’t done well in ethics in our attempts to find such brief rules: the Kantian and utilitarian projects make (I would argue) incorrect normative claims, while the divine command project seems to give the wrong grounds for moral obligations.

It seems not unlikely to me that the correct full set of norms for human behavior will actually be very complex.

But there is still a hope for a unification. While I am dubious whether one can find a simple and elegant set of rules such that all ethical truths can be derived from them with no further normative input, there may be elegant unifying ethical principles that nonetheless require further normative input to generate the complex rules governing human life. Here are two such options:

  • Natural Law: Live in accordance with your nature! But to generate the rules governing human life requires the further information as to what your nature requires, and that is normative information.

  • Agapic ethics: Love everyone! But one of the things that are a part of love is adapting the form of one’s love to fit the persons and circumstances (fraternal love for siblings, collegial love for colleagues, etc.), and the rules of “fit” are extremely complex and require further normative input.

Friday, August 23, 2019

Utility monster meat farming

Suppose that:

  1. Intense pleasure is very good in itself.

  2. Consequentialism applies to non-rational animals.

Then here is a modest proposal: Have all feedlot animals outfitted with electrical stimulators of brain pleasure centers. Sufficient stimulation of pleasure centers can outweigh the pains that the animals suffer in the feedlot, and indeed can hedonically (and also with regard to desire-satisfaction) beat the pleasures of a happy life on the range. The animals may not live very long lives in that setting, but this shorter length of life could well be outweighed by the intense pleasure that they will enjoy. It seems like a win-win: there are more happy non-rational animals and we have more yummy meat for rational omnivores. It seems to me that utilitarian vegetarians whose vegetarianism is based in concern for the welfare of the animals—rather than, say, ecological worries—should support this proposal.

Perhaps, though, the repugnance some people may feel at the modest proposal gives evidence that the proposal is a reductio of the conjunction of (1) and (2). I myself deny (1): I do not think empty pleasures have any intrinsic value, even in non-rational animals. (That said, even if (1) is false, intense empty pleasure may still be very instrumentally valuable as a pain-killer, which might yet provide some consideration in favor of the proposal.) I am also somewhat dubious about (2).

Monday, April 16, 2018

The Repugnant Conclusion and Strong AI

Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives of some minimal level of flourishing will be better than any fixed-size society of individuals who greatly flourish.
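
In symbols (a standard total-utility gloss on the conclusion, not Parfit's own formulation): if the flourishing society has m members each at welfare level M, and the large society has n members each at some minimal positive level ε, then

    \[
      n\,\varepsilon \;>\; m\,M
      \qquad\text{whenever}\qquad
      n \;>\; \frac{m\,M}{\varepsilon},
    \]

so on a total view a large enough n always beats any fixed m and M.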

I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.

Assume utilitarianism first.

Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible, and I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software in such a way as both to make lots of instances of the hardware and to run as many flourishing lives per day as possible. Once the program is written, since an exact copy is being run for each instance with the same inputs, we can assure equal happiness for all.

If strong AI is possible, generating such minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware to allow us to build more computers running the happy-ish software and to run each life in as short an amount of external time as possible, and to work to increase the amount of flourishing in the software.

Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just the minimal levels of human-like flourishing, but high levels of human-like flourishing, even of forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply enjoys the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely that it is true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then just run vast numbers of copies of that life per day.

It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will not only be able to more efficiently increase the sum good by running lots of copies of a happy life than by improving human life, but we will be able to more efficiently improve on the average good.

But one thing is unchanged. The conclusion is still repugnant. A picture of our highest moral imperative being the servicing of a single computer program run on as many machines as possible, repeatedly and as quickly as possible, is repugnant.

A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.

Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.

We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.

I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:

  1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).

  2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.

  3. Strong AI is impossible.

I am inclined to think all three are true. :-)

Monday, April 25, 2016

Another dice game for infinitely many people

Consider:

  • Case 1: There are two countably infinite sets, A and B, of strangers. The people are all alike in morally relevant ways. I get to choose which set of people gets a lifetime supply of healthy and tasty food.

Clearly, it doesn't matter how I choose. And if someone offers me a cookie to choose set A, no harm in taking it and choosing set A, it seems.

Next:

  • Case 2: Countably infinitely many strangers have each rolled a die, whose outcome I do not see. Set S6 is the set of people who rolled a six and set S12345 is the set of people who rolled something other than a six. The people are all alike in morally relevant ways. I get to choose which set of people gets a lifetime supply of healthy and tasty food.

Almost surely, S6 and S12345 are two countably infinite sets. So it seems like this is just like Case 1. It makes no difference. And if you offer me a cookie to choose S6 to be the winners, no harm done if I take it.

But now suppose I focus in on one particular person, say you. If I choose S6, you have a 1/6 chance of getting a significant good. If I choose S12345, you have a 5/6 chance. Clearly, just thinking about you alone, I should disregard any cookie offered to me and go for S12345. But the same goes when I focus on anybody else. So it seems that Case 2 differs from Case 1. If Case 1 is the whole story--i.e., if there is no backstory about how the two sets are chosen--then it really doesn't matter what I choose. But in Case 2, it does. The backstory matters, because when I focus in on one individual, I am choosing what that individual's chance of a good is.

But now finally:

  • Case 3: Just like Case 2, except that you get to see who rolled what number, and hence you know which people are in which set.

In this case, I can't mentally focus in on one individual and figure out what is to be done. For if I focus in on someone who rolled six, I am inclined to choose S6 and if I focus in on someone who rolled non-six, I am inclined to choose S12345, and the numbers of people in both sets are equal. So I don't know what to do in this case.

Maybe, though, even in Case 3, I should go for S12345. For maybe instead of deciding on the basis of the particular situation, I should decide on the basis of the right rule. And a rule of favoring the non-six rollers in circumstances like this is better for everyone as a rule, because every individual will have had a better chance at the good outcome then?

Or maybe we just shouldn't worry about the case where you see all the dice, because that's an impossible case according to causal finitism? Interestingly, though, Cases 1 and 2 only require an infinite future, something that's clearly possible.

Tuesday, November 10, 2015

Parameters in ethics

In physical laws, there are a number of numerical parameters. Some of these parameters are famously part of the fine-tuning problem, but all of them are puzzling. It would be really cool if we could derive the parameters from elegant laws that lack arbitrary-seeming parameters, but as far as I can tell most physicists doubt this will happen. The parameters look deeply contingent: other values for them seem very much possible. Thus people try to come up either with plenitude-based explanations where all values of parameters are exemplified in some universe or other, or with causal explanations, say in terms of universes budding off other universes or a God who causes universes.

Ethics also has parameters. To further spell out an example from Aquinas' discussion of the order of charity, fix a set of specific circumstances involving yourself, your father and a stranger, where both your father and the stranger are in average financial circumstances, but are in danger of a financial loss, and you can save one, but not both, of them from the loss. If it's a choice between saving your father from a ten dollar loss or the stranger from an eleven dollar loss, you should save your father from the loss. But if it's a choice between saving your father from a ten dollar loss or the stranger from a ten thousand dollar loss, you should save the stranger from the larger loss. As the loss to the stranger increases, at some point the wise and virtuous agent will switch from benefiting the father to benefiting the stranger. The location of the switch-over is a parameter.

Or consider questions of imposition of risk. To save one stranger's life, it is permissible to impose a small risk of death on another stranger, say a risk of one in a million. For instance, an ambulance driver can drive fast to save someone's life, even though this endangers other people along the way. But to save a stranger's life, it is not permissible to impose a 99% risk of death on another stranger. Somewhere there is a switch-over.

There are epistemic problems with such switch-overs. Aquinas says that there is no rule we can give for when we benefit our father and when we benefit a stranger, but we must judge as the prudent person would. However I am not interested right now in the epistemic problem, but in the explanatory problem. Why do the parameters have the values they do? Now, granted, the particular switchover points in my examples are probably not fundamental parameters. The amount of money that a stranger needs to face in order that you should help the stranger rather than saving your father from a loss of $10 is surely not a fundamental parameter, especially since it depends on many of the background conditions (just how well off is your father and the stranger; what exactly is your relationship with your father; etc.) Likewise, the saving-risking switchover may well not be fundamental. But just as physicists doubt that one can derive the value of, say, the fine-structure constant (which measures the strength of electromagnetic interactions between charged particles) from laws of nature that contain no parameters other than elegant ones like 2 and π, even though it is surely a very serious possibility that the fine-structure constant isn't truly fundamental, so too it is doubtful that the switchover points in these examples can be derived from fundamental laws of ethics that contain no parameters other than elegant ones. If utilitarianism were correct, it would be an example of a parameter-free theory providing such a derivation. But utilitarianism predicts the incorrect values for the parameters. For instance, it incorrectly predicts that the risk value at which you need to stop risking a stranger's life to certainly save another stranger is 1, so that you should put one stranger in a position of 99.9999% chance of death if that has a certainty of saving another stranger.
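
To spell out that last claim (this is my gloss on why a parameter-free expected-lives calculation puts the switchover at 1): if imposing a risk p of death on one stranger is certain to save another, then

    \[
      E[\text{lives lost} \mid \text{act}] = p
      \;<\;
      1 = E[\text{lives lost} \mid \text{don't act}]
      \qquad\text{for every } p < 1,
    \]

so the calculation licenses the imposition of risk all the way up to p = 1, which is the implausible prediction in the text.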

So we have good reason to think that the fundamental laws of ethics contain parameters that suffer from the same sort of apparent contingency that the physical ones do. These parameters, thus, appear to call for an explanation, just as the physical ones do.

But let's pause for a second in regard to the contingency. For there is one prominent proposal on which the laws of physics end up being necessary: the Aristotelian account of laws as grounded in the essences of things. On such an account, for instance, the value of the fine-structure constant may be grounded in the natures of charged particles, or maybe in the nature of charge tropes. However, such an account really does not remove contingency. For on this theory, while it is not contingent that electromagnetic interactions between, say, electrons have the magnitude they do, it is contingent that the universe contains electrons rather than shmelectrons, which are just like electrons, but they engage in shmelectromagnetic interactions that are just like electromagnetic interactions but with a different quantity playing the role analogous to the fine-structure constant. In a case like this, while technically the laws of physics are necessary, there is still a contingency in the constants, in that it is contingent that we have particles which behave according to this value rather than other particles that would behave differently. Similarly, one might say that it is a necessary truth that such-and-such preferences are to be had between a father and a stranger, and that this necessary truth is grounded in the essence of humanity or in the nature of a paternity trope. But there is still a contingency that our world contains humans and fathers rather than something functionally very similar to humans and fathers but with different normative parameters.

So in any case we have a contingency. We need a meta-ethics with a serious dose of contingency, contingency not just derivable from the sorts of functional behavior the agents exhibit, but contingency at the normative level--for instance, contingency as to appropriate endangering-saving risk tradeoffs. This contingency undercuts the intuitions behind the thesis that the moral supervenes on the non-moral. Here, both Natural Law and Divine Command rise to the challenge. Just as the natures of contingently existing charged objects can ground the fine-structure constants governing their behavior, the natures of contingently existing agents can ground the saving-risking switchover values governing their behavior. And just as occasionalism can have God's causation ground the arbitrary-seeming parameters in the laws of physics, so God's commands can ground the arbitrary-seeming parameters in ethics (the illuminating analogy between occasionalism and Divine Command is due to Mark Murphy). Can other theories rise to the challenge? Maybe. But in any case, it is a genuine challenge.

It would be particularly interesting if there were an analogue to the fine-tuning argument in this case. The fine-tuning argument arises because in some sense "most" of the possible combinations of values of parameters in the laws of physics do not allow for life, or at least for robust, long-lasting and interesting life. I wonder if there isn't a similar argument on the ethics side, say that for "most" of the possible combinations of parameters, we aren't going to have the good moral communities (the good could be prior to the moral, so there may be no circularity in the evaluation)? I don't know. But this would be an interesting research project for a graduate student to think about.

Objection: The switchover points are vague.

Response: I didn't say they weren't. The puzzle is present either way. Vagueness doesn't remove arbitrariness. With a sharp switchover point, just the value of it is arbitrary. But with a vague switchover point, we have a vagueness profile: here something is definitely vaguely obligatory, here it is definitely vaguely vaguely obligatory, here it is vaguely vaguely vaguely obligatory, etc. In fact, vagueness may even multiply arbitrariness, in that there are a lot more degrees of freedom in a vagueness profile than in a single sharp value.

Monday, January 26, 2015

Act and rule utilitarianism

Rule utilitarianism holds that one should act according to those rules, or those usable rules, that if adopted universally would produce the highest utility. Act utilitarianism holds that one should do that act which produces the highest utility. There is an obvious worry that rule utilitarianism collapses into act utilitarianism. After all, wouldn't utility be maximized if everyone adopted the rule of performing that act which produces the highest utility? If so, then the rule utilitarian will have one rule, that of maximizing the utility in each act, and the two theories will be the same.

A standard answer to the collapse worry is either to focus on the fact that some rules are not humanly usable or to distinguish between adopting and following a rule. The rule of maximizing utility is so difficult to follow (both for epistemic reasons and because it's onerous) that even if everyone adopted it, it still wouldn't be universally followed.

Interestingly, though, in cases with infinitely many agents the two theories can differ even if we assume the agents would follow whatever rule they adopted.

Here's such a case. You are one of countably infinitely many agents, numbered 1,2,3,..., and one special subject, Jane. (Jane may or may not be among the infinitely many agents—it doesn't matter.) Each of the infinitely many agents has the opportunity to independently decide whether to costlessly press a button. What happens to Jane depends on who, if anyone, pressed the button:

  • If a finite number n of people press the button, then Jane gets n+1 units of utility.
  • If an infinite number of people press the button, then Jane gets a little bit of utility from each button press: specifically, she gets (1/2^k)/10 units of utility from person number k, if that person presses the button.

So, if infinitely many people press the button, Jane gets at most (1/2+1/4+1/8+...)/10=1/10 units of utility. If finitely many people press the button, Jane gets at least 1 unit of utility (if that finite number is zero), and possibly quite a lot more. So she's much better off if finitely many people press.
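
Here is a small sketch of the payoff structure (Python; the function and the particular rules tried are mine, and the infinite case is only approximated by the first 60 agents, which is enough for the geometric sum):

    def jane_utility(pressers, infinitely_many=False):
        # Payoff rule from the post: n+1 units if finitely many (n) press;
        # (1/2**k)/10 units from each presser k if infinitely many press.
        if infinitely_many:
            return sum((1 / 2**k) / 10 for k in pressers)
        return len(pressers) + 1

    # Act-utilitarian outcome: everyone presses (approximate the infinite sum).
    print(jane_utility(range(1, 61), infinitely_many=True))   # ~0.1

    # Rule: press iff your number is less than ten.
    print(jane_utility(range(1, 10)))      # 9 pressers -> 10 units

    # Rule: press iff your number is less than a million (better still).
    print(jane_utility(range(1, 10**6)))   # 999,999 pressers -> 10**6 units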

Now suppose all of the agents are act utilitarians. Then each reasons:

My decision is independent of all the other decisions. If infinitely many other people press the button, then my pressing the button contributes (1/2^k)/10 units of utility to Jane and costs nothing, so I should press. If only finitely many other people press the button, then my pressing the button contributes a full unit of utility to Jane and costs nothing, so I should press. In any case, I should press.

And so if everyone follows the rule of doing that individual act that maximizes utility, Jane ends up with one tenth of a unit of utility, an unsatisfactory result.

So from the point of view of act utilitarianism, in this scenario there is a clear answer as to what each person should do, and it's a rather unfortunate answer—it leads to a poor result for Jane.

Now assume rule utilitarianism, and let's suppose that we are dealing with perfect agents who can adopt any rule, no matter how complex, and who would follow any rule, no matter how difficult it is. Despite these stipulations, rule utilitarianism does not recommend that everyone maximize utility in this scenario. For if everyone maximizes utility, only a tenth of a unit is produced, and there are much better rules than that. For instance, the rule that one should press the button if and only if one's number is less than ten will produce ten units of utility (nine people press, and Jane gets 9+1 units) if universally adopted and followed. And the rule that one should press the button if and only if one's number is less than 10^100 will produce even more utility.

In fact, it's easy to see that in our idealized case, rule utilitarianism fails to yield a verdict as to what we should do, as there is no optimal rule. We want to ensure that only finitely many people press the button, but as long as we keep to that, the more the better. So far from collapsing into the act utilitarian verdict, rule utilitarianism fails to yield a verdict.

A reasonable modification of rule utilitarianism, however, may allow for satisficing in cases where there is no optimal rule. Such a version of rule utilitarianism will presumably tell us that it's permissible to adopt the rule of pressing the button if and only if one's number is less than 10^100. This version of rule utilitarianism also does not collapse into act utilitarianism, since the act utilitarian verdict, namely that one should unconditionally press the button, fails to satisfice, as it yields only 1/10 units of utility.

What about less idealized versions of rule utilitarianism, ones with more realistic assumptions about agents? Interestingly, those versions may collapse into act utilitarianism. Here's why. Given realistic assumptions about agents, we can expect that no matter what rule is given, there is some small independent chance that any given agent will press the button even if the rule says not to, just because the agent has made a mistake or is feeling malicious or has forgotten the rule. No matter how small that chance is, the result is that in any realistic version of the scenario we can expect that infinitely many people will press the button. And given that infinitely many other people will press the button, if only by mistake, the act utilitarian advice to press the button oneself is exactly right.
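
For completeness, the probabilistic fact doing the work in that last step is the second Borel–Cantelli lemma (my addition, not spelled out in the post): if each agent independently presses by mistake with some fixed probability ε > 0, then since the error probabilities sum to infinity,

    \[
      \sum_{k=1}^{\infty} \varepsilon = \infty
      \;\Longrightarrow\;
      P(\text{infinitely many agents press}) = 1,
    \]

so in any realistic version of the case the low-payoff infinite regime occurs almost surely, and pressing is then indeed what act utilitarianism recommends.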

So, interestingly, in our infinitary case the more realistic versions of rule utilitarianism end up giving the same advice as act utilitarianism, while an idealized version ends up failing to yield a verdict, unless supplemented with a permission to satisfice.

But in any case, no version of rule utilitarianism generally collapses into act utilitarianism if such infinitary cases are possible. For there are standard finitary cases where realistic versions of rule utilitarianism fail to collapse, and now we see that there are infinitary ones where idealized versions fail to collapse. And so no version generally collapses, if cases like this are possible.

Of course, the big question here is whether such cases are possible. My Causal Finitism (the view that nothing can have infinitely many items in its causal history) says they're not, and I think oddities such as above give further evidence for Causal Finitism.

Friday, January 31, 2014

Consequentialism and doing what is very likely wrong

Consider a version of consequentialism on which the right thing to do is the one that has the best consequences. Now suppose you're captured by an eccentric evil dictator who always tells the truth. She informs you there are ten innocent prisoners and there is a game you can play.

  • If you refuse to play, the prisoners will all be released.
  • If you play, the number of hairs on your head will be quickly counted by a machine, and if that number is divisible by 50, all the prisoners will be tortured to death. If that number is not divisible by 50, they will be released and one of them will be given a tasty and nutritious muffin as well, which muffin will otherwise go to waste.

Now it is very probable that the number of hairs on your head is not divisible by 50. And if it's not divisible by 50, then by the above consequentialism, you should play the game—saving ten lives and providing one with a muffin is a better consequence than saving ten lives. So if you subscribe to the above consequentialism, you will think that very likely playing is right and refusing to play is wrong. But still you clearly shouldn't play—the risk is too high (and you can just put that in expected utility terms: a 1/50 probability of 10 being tortured to death is much worse than a 49/50 probability of an extra muffin for somebody). So it seems that you should do what is very likely wrong.
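
Here is the expected-utility version of that comparison as a quick sketch (Python; the utilities T for a torture-death and m for the muffin are placeholder values I am introducing, with T vastly larger than m):

    T = 1000.0   # disutility of one person tortured to death (placeholder)
    m = 1.0      # utility of the muffin (placeholder)

    # Expected utility of playing, relative to refusing (all ten released, no muffin):
    eu_play_minus_refuse = (1 / 50) * (-10 * T) + (49 / 50) * m

    print(eu_play_minus_refuse)   # -199.02: playing is far worse in expectation

So expected-utility reasoning agrees that you should refuse, even though, on the best-actual-consequences view, refusing is very probably the wrong act.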

So the consequentialist had better not say that the right thing to do is the one that has the best consequences. She would do better to say that the right thing to do is the one that has the best expected consequences. But I think that is a significant concession to make. The claim that you should act so as to produce the best consequences has a very pleasing simplicity to it. In its simplicity, it is a lovely philosophical theory (even though it leads to morally abhorrent conclusions). But once we say that you should maximize expected utility, we lose that elegant simplicity. We wonder why maximize expected utility instead of doing something more risk averse.

But even putting risk to one side, we should wonder why expected utility matters so much morally speaking. The best story about why expected utility matters has to do with long-run consequences and the law of large numbers. But that story, first, tells us nothing about intrinsically one-shot situations. And, second, that justification of expected utility maximization is essentially a rule utilitarian style of argument—it is the policy, not the particular act, that is being evaluated. Thus, anyone impressed by this line of thought should rather be a rule than an act consequentialist. And rule consequentialism has really serious theoretical problems.

Monday, December 16, 2013

Pascal's Wager in a social context

One of our graduate students, Matt Wilson, suggested an analogy between Pascal's Wager and the question about whether to promote or fight theistic beliefs in a social context (and he let me cite this here).

This made me think. (I don't know what of the following would be endorsed by Wilson.) The main objections to Pascal's Wager are:

  1. Difficulties in dealing with infinite utilities. That's merely technical (I say).
  2. Many gods.
  3. Practical difficulties in convincing oneself to sincerely believe what one has no evidence for.
  4. The lack of epistemic integrity in believing without evidence.
  5. Would God reward someone who believes on such mercenary grounds?
  6. The argument just seems too mercenary!

Do these hold in the social context, where I am trying to decide whether to promote theism among others? If theistic belief non-infinitesimally increases the chance of other people getting infinite benefits, without any corresponding increase in the probability of infinite harms, then that should yield very good moral reason to promote theistic belief. Indeed, given utilitarianism, it seems to yield a duty to promote theism.

But suppose that instead of asking what I should do to get myself to believe the question is what I should try to get others to believe. Then there are straightforward answers to the analogue of (3): I can offer arguments for and refute arguments against theism, and help promote a culture in which theistic belief is normative. How far I can do this is, of course, dependent on my particular skills and social position, but most of us can do at least a little, either to help others to come to believe or at least to maintain their belief.

Moreover, objection (4) works differently. For the Wager now isn't an argument for believing theism, but an argument for increasing the number of people who believe. Still, there is force to an analogue to (4). It seems that there is a lack of integrity in promoting a belief that one does not hold. One is withholding evidence from others and presenting what one takes to be a slanted position (for if one thought that the balance of the evidence favored theism, then one wouldn't need any such Wager). So (4) has significant force, maybe even more force than in the individual case. Though of course if utilitarianism is true, that force disappears.

Objections (5) and (6) disappear completely, though. For there need be nothing mercenary about the believers any more, and the promoter of theistic beliefs is being unselfish rather than mercenary. The social Pascal's Wager is very much a morally-based argument.

Objections (1) and (2) may not be changed very much. Though note that in the social context there is a hedging-of-the-bets strategy available for (2). Instead of promoting a particular brand of theism, one might instead fight atheism, leaving it to others to figure out which kind of theist they want to be. Hopefully at least some theists get right the brand of theism—while surely no atheist does.

I think the integrity objection is the most serious one. But that one largely disappears when instead of considering the argument for promoting theism, one considers the argument against promoting atheism. For while it could well be a lack of moral integrity to promote one-sided arguments, there is no lack of integrity in refraining from promoting one's beliefs when one thinks the promotion of these beliefs is too risky. For instance, suppose I am 99.9999% sure that my new nuclear reactor design is safe. But 99.9999% is just not good enough for a nuclear reactor design! I therefore might choose not to promote my belief about the safety of the design, even with the 99.9999% qualifier, because politicians and reporters who aren't good at reasoning about expected utilities might erroneously conclude not just that it's probably safe (which it probably is), but that it should be implemented. And the harms of that would be too great. Prudence might well require me to be silent about evidence in cases where the risks are asymmetrical, as in the nuclear reactor case where the harm of people coming to believe that it's safe when it's unsafe so greatly outweighs the harm of people coming to believe that it's unsafe when it's safe. But the case of theism is quite parallel.

Thus, consistent utilitarian atheists will promote theism. (Yes, I think that's a reductio of utilitarianism!) But even apart from utilitarianism, no atheist should promote atheism.

Monday, October 21, 2013

Utilitarianism and trivializing the value of life

Consider these scenarios:

  • Jim killed ten people to save ten people and a squirrel.
  • Sam killed ten people to save ten people and receive a yummy and healthy cookie that would have otherwise gone to waste.
  • Frederica killed ten people to save ten people and to have some sadistic fun.

If utilitarianism is true, then in an appropriate setting where all other things are equal and no option produces greater utility, the actions of Jim, Sam and Frederica are not only permissible but are duties in their circumstances. But clearly these actions are all wrong.

I find these counterexamples against utilitarianism particularly compelling. But I also think they tell us something about deontological theories. I think a deontological theory, in order not to paralyze us, will have to include some version of the Principle of Double Effect. But consider these cases (I am not sure I can come up with a good parallel to the Frederica case):

  • John saved ten people and a squirrel by a method that had the death of ten other people as a side-effect.
  • Sally saved ten people and received a yummy and healthy cookie that would have otherwise gone to waste by a method that had the death of ten other people as a side-effect.

These seem wrong. Not quite as wrong as Jim's, Sam's and Frederica's actions, but still wrong. These actions trivialize the non-fungible loss of human life. The Principle of Double Effect typically has a proportionality constraint: the bad effects must not be out of proportion to the good. It is widely accepted among Double Effect theorists that this constraint should not be read in a utilitarian way, and the above cases show this. Ten people dying is out of proportion to saving ten people and a squirrel. (What about a hundred to save a hundred and one? Tough question!)

Saturday, March 24, 2012

The vertical harmony of nature?

One kind of harmony of nature is widely noted: laws of nature that hold at one place and time tend to hold everywhere else. This is a kind of horizontal harmony of nature.

But maybe there is also a vertical harmony of nature. Nature has multiple levels. There is the fundamental physics, the chemistry, the biology, the psychology and the sociology of the world; and also, along a parallel hierarchy starting with the chemistry of the world, the geology and astronomy of the world. The unity I am interested in between these levels is subtler: it is that essentially the same scientific methods yield truth at all these levels. Granted, there are modifications. But at all the levels, the same inductive techniques are used, and relatively simple mathematical models are made to fit reality.

Suppose that all the higher levels reduce to the fundamental physics. I think it is still surprising that the methods that work for the reducing level continue to work for the reduced level. And if there is no reduction, then the vertical unity is even more surprising.

There may be a teleological argument here. But I am worried about three flies in the ointment. The first is that perhaps I am exaggerating the unity of methods of investigation between the different levels. In school, we learn about "the scientific method". But in fact the methods of investigation in the different sciences are perhaps rather less similar than talk of "the scientific method" suggests.

The second is that the unity between the levels may simply be an artifact of the method. In other words, we have a certain method of mathematically and inductively modeling reality. And the levels that I am talking about are nothing but areas where the method works fairly well. And there is nothing that surprising that given an orderly fundamental level, among the infinitely many other "levels" (not all in a single hierarchy; just as above we had two separate hierarchies, one going up to sociology and another to astronomy) of description of reality, there will be some that can be modeled using the same methods, and those are the levels we give names like "chemistry" and "geology". In other words, we have a selection bias when we set out the case for vertical order.

If this worry is right, then we should only be surprised by the order we find at the fundamental level. But, my, how surprised we should be by that!

And, further, if we see things in this way, we will see no reason to privilege scientific approaches epistemologically. For there is nothing that special about the sciences. There may be infinitely many levels of description of reality which can be better known using other methods.

The final fly in the ointment is that while there are a number of levels that can be known using the same methods, there seem to be areas where we have genuine knowledge, but the scientific methods do not work: ethics is a particularly important case.

So in the end, I do not know really what to make of the vertical harmony thesis. It bears more thought, I guess.

Monday, November 28, 2011

Parenthood, adoption and sperm banks

Al, a single father of young Beth, found himself destitute. To give Beth hope for a future life, he agreed to have Charlie adopt Beth. Charlie was much better off than Al, and as far as Al could tell, was an excellent prospect for fatherhood. Unfortunately, soon after the adoption, Al and Charlie's fortunes reversed. Now Charlie was destitute while Al was well off. Charlie approached Al, suggesting that perhaps Al could re-adopt Beth. But Al said: "She is your daughter and no longer mine, and hence the responsibility is yours." Charlie further asked for financial help for Beth, indicating that Beth's health was poor and that he (Charlie) could not afford the treatment she needed. Al responded: "Beth is not my daughter. Thus, while her misery has a call on me, it no more has a call on me than the misery of other people I come in contact with. And I am already sufficiently contributing to the alleviation of the misery of other people, by giving most of my income and available time to various organizations that work with the needy in the city. Moreover, my doing so is financially more efficient. Beth's medical needs are particularly expensive. For the cost of alleviating her misery, I can alleviate the misery of two other poor children. Of course, if Beth were my daughter, her needs would take priority. But she isn't—she's your daughter."

Unless you're a utilitarian, and perhaps even if you are, I think you will share my strong moral intuition that Al is doing something seriously wrong. There are two aspects of this wrong. First, we assume that Charlie has done something good to Al when Al was in need—he took on Beth—and Al is being ungrateful. But we can tweak the story to make Al owe no gratitude to Charlie. Perhaps Al had already done as great a good to Charlie, or perhaps Charlie took on Beth solely for the sake of a tax break and Al was initially mistaken about Charlie's motives.

Second, Al owes more to Beth than he owes to other needy children. Adoption does not, then, completely negate parental duties. In fact, many onerous duties remain with Al, conditionally on Charlie being unable to fulfill them. Beth is not a stranger to Al. I do not know whether we should say that Beth is Al's daughter, but even if she is not Al's daughter, the relationship that Al has to her is sufficient to ensure that he is morally responsible for her needs in a way in which he is not morally responsible for a stranger's needs.

But now Al's relationship to Beth is that of merely biological father. This means that the relationship of merely biological father is sufficient to trigger serious duties.

And this, in turn, makes giving sperm to a sperm bank seriously morally problematic. For by so doing, the man is consenting to being the biological father to many children. Given the numbers, it is not unlikely that some of these children will not have their basic needs—whether emotional, intellectual, spiritual or physical—met. In those cases, the donor would have a serious responsibility for meeting these needs. But this is a responsibility he cannot fulfill since he does not even know who these biological children of his are. Therefore, by donating sperm, the donor has consented to a situation where it is likely that he would be failing to meet his serious responsibilities, and where he cannot even seriously try to meet his responsibilities due to confidentiality rules. And that is, surely, morally problematic, even if we bracket all the other problematic aspects of sperm donation.

Notice that any statistics to the effect that adopted children have their needs as well met as biological children will not help here. For what generates the problem I am now discussing are two things. The first is that the man is apt to gain many biological children whom he does not know about, adopted into many families, and it is quite probable that at least one of these families is not going to meet the children's basic emotional, intellectual, spiritual and/or physical needs. Thus it is rather more probable that he will have responsibilities he is not fulfilling than if he just conceived several children with a woman he was married to, since in the latter case there would only be one family to worry about. Second, in the sperm donation case, the man has responsibilities he cannot even seriously try to fulfill, and that seems a very unfortunate situation.

This improves on an argument I posted a couple of years ago.