
Tuesday, May 7, 2024

Socrates' harm thesis

Socrates famously held that a wrongdoer harms themselves more than they harm their victim.

This is a correct rule of thumb, but I doubt that it is true in general.

First, Socrates was probably thinking of the harm to self resulting from becoming a vicious person. But one can imagine cases where a wrongdoer does not become any more vicious, because they have already maxed out on the vice. I don’t know if such cases are real, though.

But here is a more realistic kind of case. It is said that often abusers were themselves abused. Thus it seems that by abusing another one may cause them to become an abuser. Suppose Alice physically abuses Bob and thereby causes Bob to become an abuser. Then Alice has produced three primary harms:

  1. Bob’s physical suffering

  2. Bob’s being an abuser, and

  3. Alice’s being an abuser.

It seems, then, that Alice has harmed Bob worse than she has harmed herself. For she has harmed herself by turning herself into an abuser. But she has harmed Bob by both turning Bob into an abuser and making him suffer physically.

Objection 1: If Bob becomes an abuser because he was abused, then his responsibility for being an abuser is somewhat mitigated, and hence the moral harm to Bob is less than the moral harm to Alice.

Response: Maybe. But this objection fails if we further suppose that Alice herself was the victim of similar abuse, which mitigated her responsibility to exactly the same degree as Alice’s abuse of Bob mitigates Bob’s responsibility.

Objection 2: One does not cause another to become vicious: one at worst provides an occasion for them to choose to become vicious.

Response: Whether one causes another to become vicious or not is beside the point. One harms the other by putting them in circumstances where they are likely to be vicious. This is why corrupting the youth is so wicked, and why Jesus talks of millstones in connection with those who make others trip up.

Monday, March 11, 2024

Promising punishment

I have long found promises to punish puzzling. The problem with such promises is that normally a promisee can release the promisor from a promise. But what’s the point of my promising you a punishment should you do something, if you can just release me from the promise when the time for the punishment comes?

Scanlon’s account of promising also faces another problem with promises to punish: Scanlon requires that the promisee wants to be assured of the promised action. But of course in many cases of promising a punishment, the promisee does not want any such assurance! (There are some cases when they do, say when they recognize the benefit of being held to account for something.)

Additionally, it seems that breaking a promise is wrong because of the harm to the promisee. But it is commonly thought that escaping punishment is not a harm. Here I am inclined to follow Boethius, however, who insisted that a just punishment is intrinsically good for one. But suppose we follow common sense rather than Boethius, or perhaps we are dealing with a case where the norm whose violation incurs the punishment is not a moral norm.

Then there is still something interesting we can say. Let’s say that I promise you a punishment for some action, and you perform that action, but I omit the punishment. Even if the omission of the punishment is not a harm, you might feel resentment that in your choice of activity you had to take my prospective punishment into account while I wasn’t going to follow through on the punishment. There is something unfair about this. Perhaps the point is clearest in a case like this: I promise you a punishment each time you do something. Several times you hold yourself back due to fear of punishment, and then finally you do it, and out of laziness I don’t punish. You then feel: “Why did I even bother to keep to the rule earlier?”

But note that even in a case like this, it seems better to locate the harm in my making of the promise if I wasn’t going to keep it than in the non-keeping of it. So, let’s suppose that the Boethius line of thought doesn’t apply, and suppose that I am now deciding whether to perform the onerous task of punishing you as per promise. What moral reason do I have to punish you now in light of the promise? Well, there are considerations having to do with future cases: if I don’t do it now, you won’t trust me in the future, etc. But we can suppose all such future considerations are irrelevant—maybe this is the last hour of my life. So why is it that I should punish you?

I think there are two mutually-compatible stories one can tell. One story is an Aristotelian one: it’s simply bad for my will that I not keep my promise. The other story is a trust-based one: I solicited your trust, and even if you want me to break trust with you, I have no right to betray your trust. Having one’s trust betrayed is in itself a harm, regardless of whether one is trusting someone to do something that is otherwise good or bad for one.

Thursday, February 15, 2024

Technology and dignitary harms

In contemporary ethics, paternalism is seen as really bad. On the other hand, in contemporary technology practice, paternalism is extremely widely practiced, especially in the name of security: all sorts of things are made very difficult to unlock, with the main official justification being that if users unlock the things, they open themselves to malware. As someone who always wants to tweak technology to work better for him, I keep on running up against this: I spend a lot of time fighting against software that wants to protect me from my own stupidity. (The latest was Microsoft’s lockdown on direct access to HID data from mice and keyboards when I wanted to remap how my laptop’s touchpad works. Before this, because Chromecasts do not make root access available, to get my TV’s remote control fully working with my Chromecast, I had to make a hardware dongle sitting between the TV and the Chromecast, instead of simply reading the CEC system device on the Chromecast and injecting appropriate keystrokes.)

One might draw one of two conclusions:

  1. Paternalism is not bad.

  2. Contemporary technology practice is ethically really bad in respect of locking things down.

I think both conclusions would be exaggerated. I suspect the truth is that paternalism is not quite as difficult to justify as contemporary ethics makes it out, and that contemporary technology practice is not really bad, but just a little bad in the respect in question, even if that “a little bad” is very annoying to hacker types like me.

Here is another thought. While the official line on a lot of the locking down of hardware and software is that it is for the good of the user, in the name of security, it is likely that often another reason is that walled gardens are seen as profitable in a variety of ways. We think of a profit motive as crass. But at least it’s not paternalistic. Is crass better than paternalistic? On first thought, surely not: paternalism seeks the good of the customer, while profit-seeking does not. On second thought, it shows more respect for the customer to have a wall around the garden in order to be able to charge admission rather than in order to control the details of the customer’s aesthetic experience for the customer’s own good (you will have a better experience if you start by these oak trees, so we put the gate there and erect a wall preventing you from starting anywhere else). One does have a right to seek reasonable compensation for one’s labor.

The considerations of the last paragraph suggest that the special harm of paternalistic behavior is a dignitary harm. There is no greater non-dignitary harm to me when I am prevented from rooting my device for paternalistic reasons than when I am prevented from doing so for profit reasons, but the dignitary harm is greater in the paternalistic case.

There is, however, an interesting species of dignitary harm that sometimes occurs in profit-motivated technological lockdowns. Some of these lockdowns are motivated by protecting content-creator profits from user piracy. This, too, is annoying. (For instance, when having trouble with one of our TV’s HDMI ports, I tried to solve the difficulty by using an EDID buffer device, but then I could no longer use our Blu-Ray player with that port because of digital-rights management issues.) And here there is a dignitary harm, too. For while paternalistic lockdowns are based on the presumption that lots of users are stupid, copyright lockdowns are based on the presumption that lots of users are immoral.

Objectively, it is worse to be treated as immoral than as stupid: the objective dignitary harm is greater. (But oddly I tend to find myself more annoyed when I am thought stupid than when I am thought immoral. I suppose that is a vice in me.) This suggests that in terms of difficulty of justification of technological lockdowns with respect to dignitary harms, the ordering of motives would be:

  1. Copyright-protection (hardest to justify, with biggest dignitary harm to the user).

  2. Paternalism (somewhat smaller dignitary harm to the user).

  3. Other profit motives (easiest to justify, with no dignitary harm to the user).

Wednesday, August 17, 2022

Murder without an intention of harm

I used to think that every murder is an intentional killing. But this is incorrect: beheading John the Baptist was murder even if the intention was solely to put his head on a platter rather than to kill him. Cases like that once made me think something like this: murder is an intentional injury that one expects to be lethal. (Cf. Ramraj 2000.)

But now I think there can be cases of murder where there is no intent to injure at all. Suppose that amoral Alice wants to learn what an exploding aircraft looks like. To that end, she launches an anti-aircraft missile at a civilian jetliner. She has the ordinary knowledge that the explosion will kill everyone on board, but in her total amorality she no more intends this than the ordinary person intends to contribute to wearing out shoes when going for a walk. Alice has committed murder, but without any intention to kill.

In terms of the Principle of Double Effect, Alice’s wrongdoing lies in the lack of proportionality between the foreseen gravely bad effect (mass slaughter) and the foreseen trivially good effect (satisfaction of desire for unimportant knowledge), rather than in a wrongful intention, at least if we bracket questions of positive law.

It is tempting to conclude that every immoral killing is a murder. But that’s not right, either. If Bob is engaged in a defensive just war and has been legitimately ordered not to kill any of the enemy before 7 pm no matter what (so as not to alert the enemy, say), and at 6 pm he kills an enemy invader in self-defense, then he does not commit murder, but he acts wrongly in disobeying an order.

It seems that for an immoral act to be a murder it needs to be wrong because of the lethality of the harm as such, rather than due to some incidental reason, such as the lethality of the harm as contrary to a valid order.

Friday, March 5, 2021

Harm Principle

Consider this Harm Principle:

  1. Without a relevant connection to actual, intended or risked harm, there is no wrongdoing.

Now suppose Carl tortures Bob because he has justified practical certainty that this torture will lead Bob to abandon beliefs that Carl takes to be heretical and thereby cause him to avoid the pains of hell. (How could Carl be justified in such practical certainty? Easy: we can imagine a ton of hallucinations that evidentially support the claim.) Suppose, further, that Bob’s being tortured in fact transforms Bob’s life in ways quite different from those Carl envisioned. Bob’s own wholesome beliefs are deepened. He abandons his meaningless corporate job and becomes an advocate for the vulnerable, leading a deeply meaningful life. Moreover, were everything about Bob’s character known at the time of the torture, this transformation would have been predictable with a very high probability.

It seems Bob is not actually harmed: his life becomes better. And Carl does not intend Bob to be actually harmed. Given Carl’s justified practical certainty that the torture will benefit Bob, Carl does not subjectively risk harm. And given that Bob’s transformation was quite predictable given full knowledge of his character, Carl does not objectively risk harm. So, it seems (1) is false.

There is, however, a natural response on behalf of (1): Carl does actually and intentionally harm Bob, just not on balance. The torture is a real harm, even if it results in an overall benefit.

This natural response seems right. Thus, in (1) we should not understand harm as on-balance or all-things-considered harm. The problem with this interpretation of (1) is that (1) becomes trivial in light of this plausible observation:

  2. Every significant human action has a relevant connection to some actual or risked harm (perhaps a very minor one).

Wednesday, November 4, 2020

Quinn, Double Effect and closeness

In a famous paper, Warren Quinn suggests replacing the distinction between intending evil and foreseeing evil in the Principle of Double Effect (PDE) with a distinction between directly and indirectly harmful action. For concreteness, let’s talk about the death of innocents. Classical PDE reasoning says that it’s wrong to intend the death of an innocent, but it is permissible to accept it as a side-effect for a proportionate reason. Quinn thinks that this has the implausible consequence that craniotomy is permissible: that it is permissible to crush the skull of a fetus to get it through the birth canal, because one is not intending the fetus’s death, but only the reduction in head size. This is a special case of the closeness problem: intending to crush the skull is too close to death for a moral distinction, yet technically one can intend the crushing without intending the death, and so Double Effect makes a moral distinction where there is none.

Quinn suggests that what is instead wrong is to intentionally cause an effect on an innocent that has the following two properties:

  1. the effect is a harm, and

  2. this harm is foreseen to result in death.

The doctor is intending to crush the fetus’s skull: that is an intended effect on the fetus. This effect is a harm, and it is foreseen to result in death. So craniotomy is ruled out. Similarly, blowing up the fat man blocking the entrance of the cave in which other spelunkers are trapped is ruled out, because even though it is possible to blow someone up without intending that they die, being blown up is a clear case of harm, and it is foreseen to lead to death.

This is clever, but I think it fails. For we can imagine that a callous doctor does not intend any effect on the fetus. All he intends is the change in arrangement of a certain set of molecules in order to facilitate their removal from the uterus. These molecules happen to be the ones that the fetus is made of. But that they make up the body of the fetus need not be relevant to the doctor’s intention. If instead there were something other than a fetus present that for health reasons needed to be removed (not at all a remote possibility: consider the body of an already deceased fetus), and the molecules there were similarly arranged, our callous doctor would take exactly the same course of action. Similarly, the spelunkers need not be intending to break up the fat man’s body, but simply to disperse a cloud of molecules.

Now, we could say that the molecules constitute or even are the body of the fetus or of the fat man, and we could say that if you intend A and you know that A is or constitutes B, then you intend B. But if you say that, then you don’t need the Quinn view to get out of craniotomy. For you can then take Fitzpatrick’s solution to the problem of closeness, that crushing the skull constitutes death, and hence that the doctor intends death. In fact, though, the constitution principle is false: intention is hyperintensional, and not only does it fail to transfer along lines of constitution, but one can intend the identical object under one description and not under another. Anyway, the point here is that the molecule problem shows that we need some other solution to the problem of closeness to make Quinn’s story work: the Quinn solution might help with some cases, but it cannot be taken to be the solution.

Saturday, November 10, 2018

Medical conscience exemptions

After listening to a talk by Christopher Kaczor, and the ensuing discussion, I want to offer a defense of a moderate position on the state not compelling healthcare professionals to violate their conscience, even when their conscience is unreasonably mistaken. I think a stronger position than the moderate position may be true, but I won’t be defending that.

This is the central insight:

  1. It is a significant harm to an individual to violate their conscience, even when the conscience is irrationally mistaken.

One reason that (1) is true is the Socratic insight that it is much better to suffer wrong than to do wrong, together with the Conscience Principle that to act against conscience is always wrong.

My argument will need something a bit more precise than (1). For convenience, I will stipulate that I use “grave” for normative considerations, goods, bads and harms whose importance is at least of the order of magnitude of the value of a human life. The coincidence that in English “grave” not only means very serious but also means a place of burial—even though the etymologies are quite different—should remind us of this. When you read the following, whenever you read “grave” and cognates, don’t just read “serious”, but also imagine a grave.

Then what I need is this:

  2. It is a grave harm to a conscientious individual to gravely violate their conscience, even when that conscience is unreasonably mistaken.

(I suspect this is true even if one drops the “conscientious” and “gravely”, but I am only defending a moderate position.) The reasons for (2) are moral and psychological. The moral reasons are based on the aforementioned Socratic insight about the importance of avoiding wrongdoing. But there are also psychological reasons. A conscientious person identifies with their conscience in such a way that gravely violating this conscience is shattering to the individual’s identity. It is a kind of death. It is no coincidence that the Catholic tradition talks of some sins as “mortal”.

Next, here is another reasonable principle:

  3. Normally, the state should not require a healthcare professional to provide care when the care is likely to come at a grave cost to the professional.

For instance, the state should not require a healthcare professional to donate her own kidney to save a patient. For a less extreme case that I will consider some variations of, neither should the state require a professional who has a severe bee allergy to pass through a cloud of bees to help a patient when allergy reaction drugs are unavailable and when other professionals lacking such an allergy are available.

In order for (3) to be useful in practice, we need some way of getting rid of the “Normally” in it.

Notice that (3) is true even when the grave cost to the professional results from the professional’s irrationality. For instance, normally a healthcare professional who has a grave phobia of bees should not be required to pass through the cloud of bees, even if it is known that the professional would not be seriously physically harmed. In other words, that the cost results from irrationality does not count as an abnormality in (3).

Under what abnormal conditions, then, may the state require the professional to offer care that comes at grave cost to the professional? This is clearly a necessary condition:

  4. The need is grave.

But even if the need is grave, if someone else can offer the care for whom offering the care does not come at a grave cost, they should offer it instead. If the way to save a patient’s life is for one doctor to pass through a cloud of bees, and there is a doctor available who is not allergic to bee stings, then a doctor who is allergic should not be made to do it. Thus, we have this condition:

  5. There is no way of meeting the need without someone being required to take on a likely grave cost.

We can combine these two conditions into a neater condition (which may also be a bit weaker than the conjunction of (4) and (5)):

  6. If the care is not provided by this professional, a grave harm will likely result to someone.

This suggests some principle like this:

  7. Unless failure of this professional to provide this instance of care will likely result in a grave harm, the state should not require a healthcare professional to provide care when the care is likely to come at a grave cost to the professional.

Now we go back to (2), the claim about the grave cost of violating conscience. Let us charitably assume that most medical professionals are conscientious, so that any given medical professional is likely to be conscientious. Then we get something like this:

  8. Unless failure of this professional to provide this instance of care will likely result in a grave harm, the state should not require a healthcare professional to provide care that gravely violates their conscience, even when that conscience is unreasonably mistaken.

But this cannot be the whole story. For there are also conditions that render one incapable of doing central parts of one’s job. For instance, someone with a grave phobia of fires should not be allowed to be a fire fighter. And while a fire fighter with that grave phobia should not be made to fight a fire when someone else is available, if they had the phobia at the time of hiring, they should not have been hired in the first place. And if they hid this phobia at the time of hiring, they should be fired.

We have, however, a well-developed societal model for dealing with such conditions: the reasonable accommodations model of disability legislation like the Americans with Disabilities Act. It is reasonable to require an office building to put in a ramp for an employee in a wheelchair who is unable to walk; it would be unreasonable for a bank to have to hire a guard specially to watch a kleptomaniac teller. What is and is not a reasonable accommodation depends on the centrality of an aspect of a job, the costs to the employer, and so on.

So my moderate proposal says that we handle the worry that a particular conscientious objection renders a professional incapable of doing their job by analogy to the reasonable and unreasonable accommodations model, and qualify (8) by allowing, in hiring or licensure, the requirement that the accommodations for a conscientious restriction on practice would have to be reasonable in ways analogous to reasonable disability accommodations. A healthcare professional who has only one hand could, I assume, be reasonably accommodated in a number of specialties, but likely not as a surgeon.

The disability case should also push us towards a less judgmental attitude towards a healthcare professional whose conscientious objections are unreasonably mistaken. That an employee became a paraplegic from unreasonable daredevil recreational activity does not render the employee ineligible for otherwise reasonable accommodations.

What about the worry about the rare cases where a healthcare professional has morally repugnant conscientious views that would require discriminatory care, such as refusing to care for patients of a particular race? Could one argue that if patients of that race are rare in a given area, then allowing a restriction of practice on the basis of race could be a reasonable accommodation? We might imagine an employee who has panic attacks triggered by a particular rare configuration of a client’s personal appearance, and that does seem like a case for reasonable accommodations, after all.

Here I think there is a different thing to be said. We want our healthcare professionals to have certain relevant moral virtues to a reasonable degree. Moral virtues go beyond obedience to conscience. Someone with a mistaken conscience may not be to blame for the wrongs they do, but they may nonetheless lack certain virtues. The case of the conscientious racist is one of those. So it is not so much because the conscientious racist would refuse to care for patients of a particular race that they should not be a healthcare professional, but because they fail to have the right kind of respect for the dignity of all human beings.

One may think that this consideration makes the account not very useful. After all, a pro-life individual is apt to be accused of not caring enough for women. Here I just think we need to be honest and reasonably charitable. Holding that the embryo and fetus have human dignity does not render it less likely that one cares about women. Compare this case: A vegan physician believes that all higher animal life is sacred, and hence refuses to prescribe medication whose production essentially involves serious suffering of higher animals. Even if such a physician’s actions might cause harm to patients who need such (hypothetical?) medication, the belief that all higher animal life is sacred is not evidence that the physician does not care about such patients; indeed, it seems to render it more likely that the physician thinks the patients’ lives to be sacred as well, and hence to be cared for. There may be specialties where accommodation is unreasonable, but the mere fact of the belief is not evidence of lack of relevant virtues.

Wednesday, February 15, 2017

Dignitary harms and wickedness

Torturing someone is gravely wrong because it causes grave harm to the victim, and the wickedness evinced in the act is typically proportional to the harm (as well as depending on many other factors).

But there are some wrongdoings which are wicked to a degree disproportionate to the harm. In fact, torture can be such a case. Suppose that Alice is caught by an evildoer who in a week will torture Alice by one second for every person who requests this by email. About a hundred thousand people make requests, and Alice gets over a day of torture. Each requester’s harm to Alice is real but may be quite small. But each requester’s deed is very wicked, disproportionately to the harm. The case is similar to a conspiracy where each conspirator contributes only a small amount of torment but collectively the conspirators cause great torture—the law would be just in holding all the conspirators guilty of the whole torture.

Here’s another way to see the disproportion. Suppose that someone is deciding whether to request torture for Alice or to steal $100 from her. Alice might actually self-interestedly prefer an extra second of torture to having $100 stolen. Nonetheless, requesting the torture seems much more wicked than stealing $100 from Alice (unless Alice is destitute).

Similarly, the evildoer could kill Alice with probability 1 − (1/2)^n, where n is the number of requesters. Given sad facts about humanity, everyone might know that Alice is nearly certain to die, and no one requester makes any significant difference to that probability. So the harm to Alice from any one requester is pretty small, but the wickedness of making the request is great.
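
To make the numbers concrete (a quick worked example, assuming the requests are independent as described): going from n − 1 to n requesters raises the probability of death by (1/2)^(n−1) − (1/2)^n = (1/2)^n. So with twenty requesters the probability of death is already 1 − (1/2)^20 ≈ 0.999999, while the twentieth requester raised it by only about one in a million.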

Another case. It is wicked to fantasize about torturing someone. And to be thought of badly is indeed a kind of harm. But if one can be sure that the fantasy stays in the mind—think, maybe, of the sad case of a dying woman who spends her last twenty minutes fantasizing about torturing Bob—one might self-interestedly prefer the fantasy to, say, a theft of $100. Hence, the harm is relatively small. Yet the wickedness in fantasizing about torture is great, in disproportion to the harm.

Yet another case. Suppose that with science-fictional technology, someone destroys my heart, while at the same time beaming into my chest a pump of titanium that is in every respect better functioning than my natural heart. I think I have been harmed in one respect: a bodily function, that of pumping blood by my heart, is no longer being fulfilled. But blood is still being pumped, and better. So overall, I may not be harmed. (I may even be benefited.) Yet it seems that to destroy someone’s heart is to do them a grave harm. I am least confident about this case. (I am confident that the deed is wrong, but not of how wrong it is.)

In all these cases, there is a dignitary harm to the victim. And even if it is self-interestedly rational for the victim to prefer this dignitary harm to a modest monetary harm, imposing the dignitary harm is much more wicked. This is puzzling.

Solution 1: Imposing the dignitary harm causes much greater harm to the wrongdoer, and that’s what makes it so much more wicked.

But that seems to get wrong who the victim is.

Solution 2: Alice and Bob are mistaken in preferring not to be robbed of $100. The dignitary harm in fact is much, much worse.

Maybe. But I am not sure. Is it really much, much worse to have ten thousand people request one’s death rather than five thousand? It seems that dignitary harm drops off with the numbers, too, and each individual harmer’s anti-dignitary contribution is small.

Solution 3: Wrongdoings are not a function of harm, but of irrationality (Kant).

I fear, though, that this has the same problem of dislocating the victim from the center of the wrong, just as Solution 1 did.

Solution 4: Dignitary harms to people additionally harm God’s extended well-being, by imposing an indignity on the imago Dei that each human being constitutes. Dignitary harms to people are dignitary harms to God, and they are either much greater when they are done to God (because God’s dignity is so much greater?) or else much more unjust when they are done to God (because God deserves our love so much more?).

Like Solution 1, this may seem to get wrong who the victim is. But if we see the imago Dei as something intrinsic to the person (as it will be in the case of a Thomistic theology on which all our positive properties are participations in God) rather than as an external feature, this worry is, I think, alleviated.

I am not extremely happy with Solution 4, either, but it seems like it might be the best on offer.

Tuesday, December 20, 2016

Bestowing harms and benefits

A virtuous person happily confers justified benefits and unhappily bestows even justified harms. Moreover, it is not just that the virtuous person is happy about someone being benefitted and unhappy about someone being harmed, though she does have those attitudes. Rather, the virtuous person is happy to be the conferrer of justified benefits and unhappy to be the bestower even of justified harms. These attitudes on the part of the virtuous person are evidence that it is non-instrumentally good for one to confer justified benefits and non-instrumentally bad for one to bestow even justified harms. Of course, the bestowal of justified harms can be virtuous, and virtuous action is non-instrumentally good for one. But an action can be good for one qua virtuous and bad for one in another way—cases of self-sacrifice are like that. Virtuously bestowing justified harms is a case of self-sacrifice on the part of the virtuous agent.

When multiple agents are necessary and voluntary causes of a single harm, the total bad of being a bestower of harm is not significantly diluted between the agents. Each agent non-instrumentally suffers from the total bad of bestowing harm, though the contingent psychological effects may—but need not—be diluted. (A thought experiment: One person hits a criminal in an instance of morally justified and legally sentenced corporal punishment while the other holds down the punishee. Both agents are equally responsible. It makes no difference to the badness of being the imposer of corporal punishment if instead of the other holding down the punishee, the punishee is simply tied down. Interestingly, one may have a different intuition on the other side—it might seem worse to hold down the punishee to be hit by a robot than by a person. But that’s a mistake.)

If this is right, then we have a non-instrumental reason to reduce the number of people involved in the justified imposition of a harm, though in particular cases there may also be reasons, instrumental and otherwise, to increase the number of people involved (e.g., a larger number of people involved in punishing may better convey societal disapproval).

This in turn gives a non-instrumental reason to develop autonomous fighting robots for the military, since the use of such robots decreases the number of people who are non-instrumentally (as well as psychologically) harmed by killing. Of course, there are obvious serious practical problems there.

Friday, November 18, 2011

Punishing what harms no one else

The following is a plausible Liberal principle:

  1. It is only appropriate to punish that which harms someone or something else, or is intended or sufficiently likely to do so.
(If one thinks that exposing someone to a sufficient probability of harm is itself a harm, one can simplify this. Note also that what counts as sufficiently likely is relative to the degree of punishment and degree of harm. Doing something that has a one in a million chance of causing me a hangnail is probably not deserving of punishment, except maybe of the most trivial sort, but doing something that has a one in a million chance of blowing up New York may well deserve serious punishment—cf. Parfit on small chances.)

I shall argue against this principle. Recall Mill's very plausible insistence that:

  2. Being subject to social opprobrium is a kind of punishment.
(And one often would rather pay a hefty fine than be subject to social opprobrium, so it can be a heavy punishment.) Now observe:
  3. Some irrational beliefs are appropriately subject to social opprobrium even though they harm no one else, are not intended to harm anyone else and are not sufficiently likely to do so.
For instance, consider someone's really crazy conspiracy theoretic beliefs which were formed irrationally, out of a desire to be different, rather than out of an honest investigation of the truth, and suppose that this is someone whom no one is likely to believe, and hence someone harmless in these beliefs. Or consider the racist beliefs of someone who is too prudent to ever act on them because she does not wish to risk social disapproval.

Therefore:

  4. It can be appropriate to punish something that harms no one else, and is neither intended nor sufficiently likely to do so.

Now, one can get out of this consequence if one makes some sort of a communitarian assumption that no man is an island, that one person's irrationality is a constitutive part of the community's being thus far irrational, and is eo ipso harmful to other members of the community even if they do not themselves follow this irrationality, since now they are made to be participants in a community that exhibits this irrationality. But if one allows such "extended harms", then the principle (1) becomes uninteresting. Likewise, if one brings in "extended harms" to God, where God is said to be harmed in an extended sense provided that one acts against his will.

Could one turn this around and make it an argument for tolerance of irrationality? This would involve insisting on (1) and concluding that harmless irrationality should not be the subject of opprobrium. Yet such opprobrium seems to be an important part of what keeps us rational, and it seems obviously appropriate, especially when the irrationality is a result of the agent's moral failings.

Wednesday, December 15, 2010

Risk reduction policies

The following policy pattern is common.  There is a risky behavior which a portion of a target population engages in.  There is no consensus on the benefits of the behavior to the agent, but there is a consensus on one or more risks to the agent.  Two examples:
  • Teen sex: Non-marital teen sex, where the risks are non-marital teen pregnancy and STIs.
  • Driving: Transportation in motor vehicles that are not mass transit, where the risks are death and serious injury.
In both cases, some of us think that the activity is beneficial when one brackets the risks, while others think the activity is harmful.  But we all agree about the harmfulness of non-marital teen pregnancy, STIs, death and serious injury.

In such cases, it is common for a "risk-reduction" policy to be promoted.  What I shall (stipulatively) mean by that is a policy whose primary aim is to decrease the risk of the behavior to the agent rather than to decrease the incidence of the behavior.  For instance: condoms and sexual education not centered on the promotion of abstinence in the case of teen sex; seat-belts and anti-lock brakes in the case of driving.  I shall assume that it is uncontroversial that the policy does render the behavior less risky.  

One might initially think--and some people indeed do think this--that it is obvious, a no-brainer, that decreasing the risks of the behavior brings benefits.  There are risk-reduction policies that nobody opposes.  For instance, nobody opposes the development of safer brakes for cars.  But other risk-reduction policies, such as the promotion of condoms to teens, are opposed.  And sometimes the opponents argue that the risk-reduction policy will promote the behavior in question, and hence that it is not clear that the total social risk will decrease.  It is not uncommon for the supporters of the risk-reduction policy to think the policy's opponents "just don't care", are stupid, and/or are motivated by something other than concerns about the uncontroversial social risk (and indeed the last point is often the case).  For instance, when conservatives worry that the availability of contraception might increase teen pregnancy rates, they are thought to be crazy or dishonest.

I will show, however, that sometimes it makes perfect sense to oppose a risk-reduction policy on uncontroversial social-risk principles.  There are, in fact, cases where decreasing the risk involved in the behavior increases total social risk by increasing the incidence.  But there are also cases where decreasing the risk involved in the behavior decreases total social risk.  

On some rough but plausible assumptions, together with the assumption that the target population is decision-theoretic rational and knows the risks, there is a fairly simple rule.  In cases where a majority of the target population is currently engaging in the behavior, risk reduction policies do reduce total social risk.  But in cases where only a minority of the target population is currently engaging in the behavior, moderate reductions in the individual risk of the behavior increase total social risk, though of course great reductions in the individual risk of the behavior decrease total social risk (the limiting case is where one reduces the risk to zero).

Here is how we can see this.  Let r be the individual uncontroversial risk of the behavior.  Basically, r=ph, where p is the probability of the harm and h is the disutility of the harm (or a sum over several harms).  Then the total social risk, where one calculates only the harms to the agents themselves, is T(r)=Nr, where N is the number of agents engaging in the harmful behavior.  A risk reduction policy then decreases r, either by decreasing the probability p or by decreasing the harm h or both.  One might initially think that decreasing r will obviously decrease T(r), since T(r) is proportional to r.  But the problem is that N is also dependent on r: N=N(r).  Moreover, assuming the target population is decision-theoretic rational and assuming that the riskiness is not itself counted as a benefit (both assumptions are in general approximations), N(r) decreases as r increases, since fewer people will judge the behavior worthwhile the more risky it is.  Thus, T(r) is the product of two factors, N(r) and r, where the first factor decreases as r increases and the second factor increases as r increases.  

We can also say something about two boundary cases.  If r=0, then T(r)=0.  So reducing individual risk to zero is always a benefit with respect to total social risk.  Of course any given risk-reduction policy may also have some moral repercussions--but I am bracketing such considerations for the purposes of this analysis.  But here is another point.  Since presumably the perceived benefits of the risky behavior are finite, if we increase r to infinity, eventually the behavior will be so risky that it won't be worth it for anybody, and so N(r) will be zero for large r and hence T(r) will be zero for large r.  So, the total social risk is a function that is always non-negative (r and N(r) are always non-negative), and is zero at both ends.  Since for some values of r, T(r)>0, it follows that there must be ranges of values of r where T(r) decreases as r decreases and risk-reduction policies work, and other ranges of values of r where T(r) increases as r decreases and risk-reduction policies are counterproductive.

To say anything more precise, we need a model of the target population.  Here is my model.  The members of the population targeted by the proposed policy agree on the risks, but assign different expected benefits to the behavior, and these expected benefits do not depend on the risk.  Let b be the expected benefit that a particular member of the target population assigns to the activity.  We may suppose that b has a normal distribution with standard deviation s around some mean B.  Then a particular agent engages in the behavior if and only if her value of b exceeds r (I am neglecting the boundary case where b=r, since given a normal distribution of b, this has zero probability).  Thus, N(r) equals the number of agents in the population whose values of b exceed r.  Since the values of b are normally distributed with pre-set mean and standard deviation, we can actually calculate N(r).  It equals (N/2)erfc((r-B)/s), where erfc is the complementary error function, and N is the population size.  Thus, T(r)=(rN/2)erfc((r-B)/s).

Let's plug in some numbers and do a graph.  Suppose that the individual expected benefit assigned to the behavior has a mean of 1 and a standard deviation of 1.  In this case, 84% of the target population thinks that when one brackets the uncontroversial risk, the behavior has a benefit, while 16% think that even apart from the risk, the behavior is not worthwhile.  I expect this is not such a bad model of teen attitudes towards sex in a fairly secular society.  Then let's graph T(r) (on the y-axis it's normalized by dividing by the total population count N--so it's the per capita risk in the target population) versus r (on the x-axis).

We can see some things from the graph.  Recall that the average benefit assigned to the activity is 1.  Thus, when the individual risk is 1, half of the target population thinks the benefit exceeds the risk and hence engages in the activity.  The graph peaks at r=0.95.  At that point one can check from the formula for N(r) that 53% of the target population will be engaging in the risky activity.
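
For readers who want to check these numbers, here is a minimal numerical sketch (assuming NumPy and SciPy are available, and using the post's formulas exactly as written above):

    import numpy as np
    from scipy.special import erfc

    B, s = 1.0, 1.0  # mean and spread of the benefit distribution, as above

    def engagement_share(r):
        # N(r)/N = (1/2) erfc((r - B)/s): fraction whose benefit b exceeds r
        return 0.5 * erfc((r - B) / s)

    def per_capita_risk(r):
        # T(r)/N = r * N(r)/N
        return r * engagement_share(r)

    r = np.linspace(0.0, 4.0, 40001)
    t = per_capita_risk(r)
    r_peak = r[np.argmax(t)]
    print(round(r_peak, 2), round(engagement_share(r_peak), 2))
    # prints approximately 0.94 and 0.53: the peak near r = 0.95,
    # with about 53% of the population engaging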

We can see from the graph that when the individual risk is between 0 and 0.95, then decreasing the risk r always decreases the total social risk T(r).  In other words we get the heuristic that when a majority (53% or more for my above numbers) of the members of the population are engaging in the risky behavior, we do not have to worry about increased social risk from a risk-reduction policy, assuming that the target population does not overestimate the effectiveness of the risk-reduction policy (remember that I assumed that the actual risk rate is known).

In particular, in the general American adult population, where most people drive, risk-reduction policies like seat-belts and anti-lock brakes are good.  This fits with common sense.

On the other hand, when the individual risk is between 0.95 and infinity, so that fewer than 53% of the target population is engaging in the risky behavior, a small decrease in the individual risk will increase T(r) by moving one closer to the peak, and hence will be counterproductive.

However, a large enough decrease in the individual risk will still put one on the left side of the peak, and hence could be productive.  But the decrease may have to be quite large.  For instance, suppose that the current individual risk is r=2.  In that case, 16% of the target population is engaging in the behavior (since r=2 is one standard-deviation away from the mean benefit assignment).  The per-capita social risk is then 0.16.  For a risk-reduction policy to be effective, it would then have to reduce the individual risk so that it is far enough to the left of the peak that the per-capita social risk is below 0.16.  Looking at the graph, we can see that this would require moving r from 2 to 0.18 or below.  In other words, we would need a policy that decreases individual risks by a factor of 11.
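
Again as a sketch (same assumptions and formulas as above), the factor-of-11 claim can be verified directly:

    import numpy as np
    from scipy.special import erfc

    def per_capita_risk(r, B=1.0, s=1.0):
        # T(r)/N = r * (1/2) erfc((r - B)/s), the post's formula as written
        return r * 0.5 * erfc((r - B) / s)

    t_now = per_capita_risk(2.0)       # per-capita risk at r = 2: about 0.16
    r = np.linspace(0.0, 1.0, 10001)   # scan the left side of the peak
    t = per_capita_risk(r)
    r_target = r[t <= t_now].max()     # largest r that is no worse than now
    print(round(t_now, 2), round(r_target, 2), round(2.0 / r_target, 1))
    # prints approximately 0.16, 0.18, and 11.2: an 11-fold reduction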

Thus, we get a heuristic.  For risky behavior that no more than half of the target population engages in, incremental risk-reduction (i.e., a small decrease in risk) increases the total social risk.  For risky behavior that no more than about 16% of the target population engages in, only a risk-reduction method that reduces individual risk by an order of magnitude will be worthwhile.

For comparison, condoms do not offer an 11-fold decrease in pregnancy rates.  The typical condom pregnancy rate in the first year of use is about 15%;  the typical no-contraceptive pregnancy rate is about 85%.  So condoms reduce the individual pregnancy risks only by a factor of about 6.

This has some practical consequences in the teen sex case.  Of unmarried 15-year-old teens, only 13% have had sex.  This means that risk-reduction policies aimed at 15-year-olds are almost certainly going to be counterproductive in respect of reducing risks, unless we have some way of decreasing the risks by a factor of more than 10, which we probably do not.  In that population, the effective thing to do is to focus on decreasing the incidence of the risky behavior rather than decreasing the risks of the behavior.

In higher age groups, the results may be different.  But even there, a one-size-fits-all policy is not optimal.  The sexual activity rates differ from subpopulation to subpopulation.  The effectiveness with regard to the reduction of social risk depends on details about the target population.  This suggests that the implementation of risk-reduction measures might be best assigned to those who know the individuals in question best, such as parents.

In summary, given my model:
  • When a majority of the target population engages in the risky behavior, both incremental and significant risk-reduction policies reduce total social risk.
  • When a minority of the target population engages in the risky behavior, incremental risk-reduction policies are counterproductive, but sufficiently effective non-incremental risk-reduction policies can be effective.
  • When a small minority--less than about 16%--engages in the risky behavior, only a risk-reduction policy that reduces the individual risk by an order of magnitude is going to be effective;  more moderately successful risk-reduction policies are counterproductive.

Friday, September 17, 2010

What is the essential harm in murder?

Murder is wrong because it harms the victim in a particularly serious way. But what sort of harm does it impose on the victim? Some will say: it takes away consciousness, severs connections with loved ones and interrupts projects. However, that on balance there is such a harm is far from obvious, while it is obvious that murder is wrong. For most people in our culture believe that the dead are conscious, and that many of the dead enjoy a life of bliss that includes contact with many loved ones, and the continuation of at least the central project of one's life, namely the relationship with God. The wrongness of killing had better not be based on the controversial—and false!—thesis that there is no afterlife.

Now, one might say: Even if there is an afterlife, death interrupts many projects that involve other living people. Maybe. Yet on some views of the afterlife, the dead contribute at least as significantly to the lives of the living as they did when they were alive, for instance by praying for them. And even if death does interrupt many projects that involve other living people, that can't be central to what makes murder wrong. For consider Joe. He is a nice guy and has below average intelligence. Joe has no close friends, but he does have acquaintances. He lives a decent day-to-day life, but has no significant earthly projects that would be interrupted by death. He longs for heaven, but enjoys his daily life. By nobody's standards is he a candidate for euthanasia. Killing him would be a clear case of murder. But one cannot ground the wrongness of killing Joe in terms of projects involving other living people, because Joe just does not have enough such projects to yield the kind of moral weight that the wrongness of killing him has.

If this is right, then we should not look at the central harm in murder as involving a loss of the goods distinctive of the good human life. Rather the central harm in murder is the loss to a human being of the good of life itself—it is the destruction of the human's living body.  And hence to kill a permanently unconscious human being is wrong for the same central reason as it is wrong to kill a conscious human being.

Objection: But then the central good lost in killing the human is apparently of the same sort as the central good lost in killing a mouse, and hence it should be just as wrong to kill a mouse as to kill a human.

Response 1: Who loses a good can be morally relevant, over and beyond the question of what the lost good is.

Response 2: While in some sense for the mouse to breathe and for a human to breathe are the same thing, even the non-instrumental value of the mouse's breathing is not the same as the non-instrumental value of the human's breathing. For the mouse's breathing does not have as its telos the support of distinctively human activity, while the human's breathing does have as its telos the support of distinctively human activity. This value in the human's breathing is present even when, in fact, the human is unable to engage in any distinctively human activity. For there is a value in a striving for an end even when the end is not expected to be achieved, and that value derives from the value of the end (this is related to issues in sexual ethics), and the human's breathing strives for the end of distinctively human activity.

Thursday, April 23, 2009

Torture

I'll take for granted three things:

  1. Long-term incarceration for serious crimes is permissible
  2. Income tax, at roughly the level of taxation in the U.S., is permissible (though there may be particular features of the present U.S. tax code that are unjustifiable)
  3. Torture is wrong.
The last claim is a rough-and-ready claim. I do not mean it to exclude the possibility that there are extremely rare circumstances (such as where there is literally a time bomb placed in a populous area that cannot be evacuated) where torture is permissible, but I mean the claim in the same sense in which people say to their kids "You need to keep your promises"—they do not intend to exclude the possibility of rare occurrences where promises should not be kept. (On the basis of divine revelation—as expressed in the documents of Vatican II—I take it that torture is literally always wrong, but I am not assuming this strong claim.)

Let's add some further, plausible claims. Some things may be wrong to do due to some complex moral reasoning which shows that even though the action does not prima facie seem to harm anybody, nonetheless the action is wrong (contraception is like that). But some things are wrong for a very straightforward reason: they are wrong because of the clear and obvious harm they impose on the victim. Torture seems to be one of those things:

  4. Torture is wrong because of the harm imposed on the victim.
The following claim is also plausible:
  5. If an action is wrong because of the harm imposed on the victim, then an action which imposes a greater harm under the same circumstances on the same victim will also be wrong.
Finally, add the following two claims:
  6. If a self-interestedly rational and well-informed person would prefer B to A, where B is harmful, then it would be more harmful for her to receive A instead of B.
  7. Some self-interestedly rational and well-informed persons would prefer some (perhaps moderate) instances of torture to a lifetime of taxation (at the level of U.S. income taxes) or to long-term incarceration.
But now we have a paradox. By (6) and (7), together with the fact from (4) that torture is harmful, for some people life-long taxation or long-term incarceration would be worse than some kinds of torture. But then by (5), life-long taxation and long-term incarceration are wrong in the case of these people. And this is in tension with (1) and (2) (I am assuming it is possible for one of these people to be guilty of a serious crime).

I think (1)-(3) are correct. I also think (7) is true. We would not think that someone who endured severe pain for, say, 15 minutes in the course of escaping from a twenty-year jail sentence was self-interestedly irrational. According to some stuff I found online, the average American in 2004 paid $9377 in income taxes. If this amount were annually invested at 8% (which I think is fairly conservative for such a long-term investment), in 40 years it would yield $2.4 million. We would not think that someone who ran through non-life-threatening but very painful flames in order to get to a treasure chest containing $2.4 million, even if the chest could only be opened in 40 years, would be irrational in so doing.
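
To check the arithmetic (a quick worked calculation, assuming end-of-year contributions): the future value of investing an amount P annually for N years at rate i is P((1+i)^N − 1)/i, so $9377 × ((1.08)^40 − 1)/0.08 ≈ $9377 × 259 ≈ $2.4 million, as claimed.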

So, we need to reject (4), (5) or (6) to get out of the difficulty. Of these claims, I find (5) the most plausible. So that leaves (4) and (6) as candidates for rejection. I think (6) is a bit more plausible than (4), though I am suspicious of the whole concept of self-interested rationality. If so, then (4) should be rejected.

But if torture is not wrong because of the harm inflicted on the victim, what makes it wrong? I am inclined to say the following: It is wrong because to torture someone is unloving, and the duties of love are the whole of the moral law. And it is unloving not just because of the harm inflicted on the victim, for there is more to being loving than providing benefits and more to being unloving than inflicting harms. Love is a unitive relationship, and acts that are innately counter-unitive, such as torture, marital contraception, or lying (I am not putting them all on an equal moral footing—equally, they are wrong, but they are not equally wrong, if you get my drift), are also wrong.

A different way of rejecting (4) might be given by a Kantian.