Wednesday, July 27, 2011
Infinite promises
It looks like, simply by neglecting to bring ice cream to the party, I have violated three promises in infinitely many ways.
But this action doesn't seem to be infinitely wrong, or if it is infinitely wrong, it is such because of the offense against God implicit in the promise-breaking, and not because of the infinite sequence of violations.
But why isn't it infinitely wrong (at least bracketing the theological significance)?
Is it because it's just one action? No: for a single action can be infinitely wrong, as when someone utters a spell to make infinitely many people miserable while believing that the spell will be efficacious (it doesn't matter whether the spell really is efficacious or whether there really are infinitely many people).
Is it because only a finite number of promises are broken? No: for a single promise can be broken infinitely often (given an infinite future, or a future dense interval of events if that's possible) with the demerit adding up. (Imagine that I promise never to do something, and then I do it daily for eternity.)
Maybe one will bite the bullet and say that the action is infinitely wrong. What's the harm in saying that? Answer: incorrect moral priorities. Keeping oneself from infinitely wrong actions is a much higher priority than keeping oneself from finitely wrong actions. But it doesn't seem that one should greatly, if at all, prioritize being the sort of person who brings ice cream to parties in the above circumstances over, say, refraining from finitely but seriously hurting people's feelings.
Puzzling, isn't it?
The above generated a puzzle by infinite reflection. But one can generate puzzling cases without such reflection. Suppose x loves y, and I harm y. I therefore also harm x, since as we learn from Aristotle, Aquinas and Nozick, the interests of the beloved are interests of the lover. Now suppose infinitely many people love y. (If a simultaneous infinity is impossible, assume eternalism and imagine an infinite future sequence of people who love y. Or just suppose I falsely believe that infinitely many people love y.) It seems that by imposing a minor harm on y, I impose a minor (perhaps very minor) harm on each of infinitely many people, and thereby an infinite harm. Now, suppose that I have a choice whether to impose a minor harm on y, who is loved by infinitely many persons, or a major harm on z, who is loved by only finitely many. As long as the major harm is only finitely greater than the minor harm, it seems that it is infinitely worse to impose the minor harm on y than the major harm on z. But that surely is mistaken (and isn't it particularly bad to harm those who have fewer friends?).
One might try to bring God in. Everyone is loved by God, and God is infinite, and so the major harm to z goes against the interests of God, and God's interests count infinitely (not that God is worse off "internally"), so the major harm to z multiplied by the importance of God's interests will outweigh the minor harm to y, even if one takes into account the infinitely many people who love y, since divine infinity trumps all other infinities. But this neglects the fact that God also loves all the infinitely many people who love y, and hence the harm to the infinitely many lovers of y also gets multiplied by a divine infinity.
Nor is infinity needed to generate the puzzle. Suppose that N people love y and only ten people love z, and my choice is whether to impose one hour of pain on y or fifty years of pain on z. No matter how small the badness of y's suffering to each of y's lovers, it seems that if you make N large enough, it will overshadow the disvalue of the fifty years of pain to z.
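The aggregation behind this can be made explicit (a sketch with invented symbols: let h_y and h_z be the disvalues of y's hour and z's fifty years of pain, let e > 0 be the sympathetic harm to each of y's N lovers, and let d be the sympathetic harm to each of z's ten). The total disvalue of harming y is then Ne + h_y, which exceeds the total disvalue 10d + h_z of harming z as soon as N > (10d + h_z - h_y)/e; and since e is positive, however tiny, some finite N crosses that threshold.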
I think the right answer to all this is that wrongs, benefits and harms can't be arithmetized in a very general way. There is, perhaps, pervasive incommensurability, so that the harms to y's lovers are incommensurable with the harms to y.
But I don't know that incommensurability is the whole story. It is a benefit to one if a non-evil project one identifies with is successful. Now imagine two sports teams, one with a million fans and one with a thousand. Is it really the case that the members of the less popular team have a strong moral reason to bring it about that the other team wins, because of the benefit to the greater number of fans, even if it is a moral reason overridden by their duties of integrity and special duties to their fans? (Likewise, is it really the case that the interests of Americans qua Americans morally count for about ten times as much as the interests of Canadians qua Canadians?)
Yet some harms and benefits do arithmetize fairly well. It does seem about equally bad to impose two hours of suffering on ten people as one hour of suffering on twenty.
So whatever function combines values and disvalues is very complicated, and depends on the kind of values and disvalues being combined. The only way a simple additivity can be assured is if we close our eyes to the vast universe of types of values, say restricting ourselves to pleasure and suffering as hedonistic utilitarians do.
Monday, June 20, 2011
Siblings and moral theory
1. We have special duties, independent of communal enactments, to take care of our siblings, precisely because they are siblings.
Notice that (1) creates problems for a number of moral theories. Utilitarians will deny (1). The mere fact that someone is our brother or sister only makes it easier for us to know how to help him or her, which does not make for a special duty to take care of him or her precisely because he or she is our brother or sister. It is difficult to see how Kantians could justify (1) without relying on something like a dollop of Natural Law (which Kant himself, though not so much contemporary Kantians, is happy with).
Contractarians can accept (1) minus the "independent of communal enactments" proviso. For perhaps it would be irrational for us to reject a communal tradition of a network of duties of special care that is below a certain level of onerousness, and our community's network includes siblinghood. But the contractarian probably could not object if the community instead had a network of duties of special care that put similar emphasis on, say, first-cousins-of-the-same-eye-color instead of siblings: what one gets from the contractarian structure is at most the irrationality of rejecting whatever one's community network of duties of special care is.
Divine command theorists can perhaps accept (1), though one might worry whether the "precisely" in (1) remains correct if the duty holds also because of a divine command.
Of the well-developed moral theories, natural law is probably the one that best passes the test of fitting with (1). So, (1) provides a kind of argument for natural law theory (and to a lesser degree for divine command theory).
Does (1) matter? I think so. It is an important moral insight that somehow all human beings are brothers and sisters, but this moral insight is unhelpful, and maybe harmful, apart from (1).
Monday, October 11, 2010
Threats of self-torture
This post is inspired by the (public domain) story "Warrior Race" by Robert Sheckley (of whom I am a big fan).
Suppose I want a hundred dollars from you, but have no claim on it. So I resolve to torture myself in a way that would have significantly more disvalue than whatever good you can do with a hundred dollars on the condition that you don't give me the money, and convince you of my resolve. I also ensure you have no way of stopping me except by paying up. Or perhaps, if you're not sure of my resolve, I set up a machine that will torture me until you pay up. I also convince you that (a) I won't do this again, and (b) I will ensure nobody will ever find out about it. (If there are worries about the epistemic appropriateness of your trusting me, suppose that I have a little device implanted in my brain which will kill me if I am about to violate these rules.)
If you're a consistent utilitarian, you will pay up. Utilitarians, thus, are open to this particularly odd sort of blackmail.[note 1] Intuitively, I think, there is no duty for you to pay up. You could just say: "You made your bed, now lie in it." And so this is an argument against utilitarianism.
But why is it that non-utilitarians don't have to pay up? After all, it seems plausible independently of utilitarianism that if a moderate expenditure can prevent an immense amount of suffering, one has a duty to do that.
Or if that's not right, other forms of threat might work. You wanted to vote against Smith's getting tenure. But Smith informs you that if you vote against his tenure, he'll literally torture himself for the rest of his life to an intensity far disproportionate to the values involved in a fair tenure process. It is plausible that something like the proportionality condition from the Principle of Double Effect is a necessary condition on the permissibility of an action with a foreseen bad effect: the bad effect cannot be disproportionate to the good effect. But here the bad effect seems to be disproportionate to the good effect. (If causation doesn't filter through others' decisions, then suppose Smith set up a machine to torture him if you vote against him.) If this is right, then we don't have an argument against utilitarianism. We just have the observation that threats of self-harm will be effective against virtuous people.
One might think that anybody who would issue such threats of self-harm is insane, and maybe it is not so implausible to suppose that an insane person could get you to do whatever (within very broad limits) she wants by means of threats of self-harm. But if you're known to consistently act by a moral theory, like utilitarianism, that requires you to give in to the demand, then it is not insane to threaten self-harm in this way, as the threatener knows that she won't have to carry out the threat. It can, indeed, be narrowly self-interestedly rational.
I think there may be a move available to the non-utilitarian. She could insist that the threatener's suffering the torture involves goods of justice. There are (at least) two kinds of punishment: imposed and natural. And justice is involved with both. It does seem plausible that if two people are drowning, and only one can be rescued, and one is there because she murderously pushed the other in and in the process toppled in with her victim, the innocent has a call on us that the other does not.
Notice, though, that in these sorts of cases as individuals we have no right to impose a punishment on the person other than public disapproval. As individuals certainly we have no right to impose torture on someone who threatens self-harm and no right to impose death on the drowning attempted-murderer. So if the right story involves natural punishment, and "You made your bed..." suggests that, then we will still need a doing/non-doing or foreseeing/intending distinction. Actually, doing/non-doing won't work in the tenure case, since there Smith threatens you with self-harm if you vote against him, and voting is a doing. So it seems one needs a foreseeing/intending distinction to make this work out: you foresee that Smith will suffer, but because the suffering would be a good of justice, that shouldn't sway you from your vote against him.[note 2]
Furthermore, the concept of punishment without a punisher appears incoherent. So to make the "natural punishment" line go through, one may need a God behind nature. Maybe one could try for "natural consequences" that aren't punishment. But if they aren't punishment, it's not clear how the threatener's suffering the torments is a good of justice. If they aren't punishment, all we can get is that it's not unjust that the threatener should suffer. But that could leave intact the argument that you shouldn't vote in such a way that will cause this disproportionate suffering as, plausibly, that wasn't an argument from justice but from non-maleficence.
So it could well be that supporting the "You made your bed..." line in these cases requires a fair amount of philosophical doctrine: justice and natural punishment, foreseeing/intending, and maybe even theism.
Of course, it could be that the hard-nosed "You made your bed..." intuitions are wrong.
Tuesday, September 23, 2008
Hedonistic utilitarianism
George is 20 years old and Jake is 50. Neither has friends, or is very likely to make a significant contribution to the pleasure of others, in George's case because of anti-social tendencies, and in Jake's because of severe disability. George hates Jake and pushes him overboard. As Jake flies overboard, George loses his balance and falls in, too. Both call to us for help. We can only pull out one. What should be done?
Here is a hedonist utilitarian answer (it makes a lot of assumptions, but the assumptions are not crazy). We should pull out George, and have him tried and convicted of murder. Then we should publicly sentence him to a lifetime of pain. We then need to hook him to electrodes in a cell, for the rest of his life. But unbeknownst to the public, the electrodes will deliver intense pleasure for the rest of his life. George will never tell anyone this, because he will be enjoying the pleasure too much. We need to tell George about this before he is sentenced to the lifetime of pain, so that he doesn't get too scared of the sentence.
On hedonist utilitarian grounds we shouldn't pull out Jake, because (a) Jake is probably not going to agree to being hooked up to the pleasure-machine, and (b) even if he did, he wouldn't have as long left to live, and hence as much pleasure to experience, as George would, since George is younger. To satisfy the public, we might need to lie that we couldn't pull out Jake. George will support us in that lie. Since contortions of pleasure don't look too different from contortions of pain, we can exhibit George to the public, and this will have a significant deterrent effect on murder.
Yes, I know that hedonist utilitarians will cavil at this and at that in the story. But of course the real reason the story is all wrong is that it is surely wrong to save the life of the murderer while letting his victim drown (unless maybe the victim requests that we save the life of the murderer instead—for instance, if the victim is the murderer's parent).
Tuesday, August 19, 2008
Spock
Spock's "logic" has a theoretical and practical component. The practical component appears to be a utilitarianism to some extent constrained by deontological rules, in particular the duty not to kill the innocent and the duty to be faithful to commitments expressly undertaken, such as to the Federation. The other characters criticize him for lack of "emotion". In the theoretical context, this largely refers to the inability to predict the behavior of others (and occasionally maybe of self) due to a lack of emotional imagination (I am sceptical whether emotional imagination is needed to predict the behavior of others, and I think a psychopath could be very effective at predicting others' behavior). In the practical context, this seems to refer to a failure (and not a total one, since he is part human) to be moved by certain kinds of reasons, in particular reasons of friendship that go beyond commitments expressly undertaken.
Monday, August 18, 2008
Utilitarianism's deceptive simplicity
What I have always found most attractive about utilitarianism is its elegant simplicity. What according to the utilitarian is the obligatory thing to do? That which maximizes the good. What is the good? The total welfare of all beings capable of having a welfare. Thus, facts about duty can either be fully characterized in terms of welfare (normative utilitarianism) or will reduce to facts about welfare (metaethical utilitarianism). Moreover, we might further give a full characterization of welfare as pleasure and the absence of pain or as the fulfillment of desire, thereby either fully characterizing facts about welfare in terms of prima facie non-normative facts, or maybe even reducing facts about welfare to these apparently non-normative facts. Thus, utilitarianism gives a characterization (necessary and sufficient conditions) for duty in terms of apparently non-normative facts, and maybe even reduces moral normativity to non-normative facts. This is a lovely theory, though false.
But this appearance of having given a description of all of obligation in non-normative terms is deceptive. There are two ways of putting the problem. One is to invoke uncertainty and the other is to invoke ubiquitous indeterminism (UI) and anti-Molinism (AM). I'll start with the second. According to anti-Molinism, there is no fact of the matter about what would result from a non-actual action when the action is connected to its consequences through an indeterministic chain of causes. Thus, if Frank doesn't take an aspirin, and if aspirin takings are connected indeterministically to headache reliefs, there is no fact of the matter about whether Frank's headache would be relieved by an aspirin. And according to ubiquitous indeterminism, all physical chains of causes are indeterministic. The most common interpretations of quantum mechanics give us reason to believe ubiquitous indeterminism, while libertarianism gives us reason to believe in practically ubiquitous indeterminism (because human beings might intervene in just about any chain of causes).
Of course, this means that given UI and AM, duty cannot simply be equated with the maximization of the good. A more complex formula is needed, and this, I think, introduces a significant degree of freedom into the theory—namely, how we handle the objective probabilities. This, in turn, makes the resulting theory significantly more complex and less elegant.
But, perhaps, it will be retorted that there is a canonical formula, namely maximizing the expected value of each action. This, however, is only one of many formulae that could be chosen. Another is maximizing the worst possible outcome (maximin). Yet another is maximizing the best possible outcome (maximax). And there are lots of other formulae available. For instance, for any positive number p, we might say that we should maximize E[|U|^p sgn U] (where sgn x = 1 if x>0 and = -1 if x<0), or maybe E[pi/2 + arctan(U)], where U is utility.
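To see how much freedom there is here, consider a toy sketch (the lottery, the numbers, and the function names below are all invented for illustration) that scores one and the same indeterministic action under each of the rules just mentioned:

```python
import math

# A toy indeterministic action: (objective probability, utility) outcomes.
lottery = [(0.5, -10.0), (0.3, 1.0), (0.2, 100.0)]

def expected_value(lottery):
    # The "canonical" rule: probability-weighted average utility.
    return sum(p * u for p, u in lottery)

def maximin_score(lottery):
    # Rank an action by its worst possible outcome.
    return min(u for _, u in lottery)

def maximax_score(lottery):
    # Rank an action by its best possible outcome.
    return max(u for _, u in lottery)

def power_score(lottery, p_exp):
    # E[|U|^p sgn U]: p > 1 inflates extreme outcomes, p < 1 dampens them.
    return sum(p * abs(u) ** p_exp * math.copysign(1.0, u) for p, u in lottery)

def arctan_score(lottery):
    # E[pi/2 + arctan U]: a bounded transform, so no one outcome dominates.
    return sum(p * (math.pi / 2 + math.atan(u)) for p, u in lottery)

print(expected_value(lottery))    # 15.3
print(maximin_score(lottery))     # -10.0
print(maximax_score(lottery))     # 100.0
print(power_score(lottery, 2.0))  # 1950.3
print(arctan_score(lottery))      # about 1.38
```

Each rule induces a different ranking of actions, so the choice among them is a substantive commitment of the theory, not a mere detail.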
But perhaps maximizing the expected value is the simplest of all plausible formulae (maximax is implausible, and maximin is trivialized by the kind of ubiquitous indeterminism we have, which ensures that each action has basically the same set of possible utility outcomes, but with different probabilities). However, maximizing expected value leads to implausibilities even greater than in standard deterministic utilitarianism. It is implausible enough that one should kill one innocent person to save two or three innocent lives. But that one should kill one innocent person for a 51 percent chance of saving two innocent lives or for a 34 percent chance of saving three (which the expected value rule will imply in the case where the future happinesses of all the persons are equal) is quite implausible. Or suppose that there are a hundred people, each of whom is facing an independent 50 percent chance of death. By killing one innocent person, you can reduce the danger of death for each of these hundred people to 48.5 percent. Then, you should do that, according to expected value maximization utilitarianism.
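To make the arithmetic explicit (assuming, with the text, equal future happiness for every life at stake): 0.51 x 2 = 1.02 and 0.34 x 3 = 1.02 expected lives saved, each more than the one life taken; and in the hundred-person case, 100 x (0.50 - 0.485) = 1.5 expected deaths averted, again more than the one killing.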
Or let's try a different sort of example. Suppose action A has a 51 percent chance of doubling the total future happiness of the human race (assume this happiness is positive), and a 49 percent chance of painlessly destroying the whole of the human race. Then (at least on the hedonistic version—desire satisfaction would require some more careful considerations), according to expected value maximization utilitarianism, you should do A. But clearly A is an irresponsible action.
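In expected value terms (writing H for the total future happiness of the human race, and counting painless extinction as 0): E[U(A)] = 0.51(2H) + 0.49(0) = 1.02H > H, so expected value maximization is committed to A.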
There may be ways of avoiding such paradoxes. But any way of avoiding such paradoxes will be far from the elegant simplicity of utilitarianism.
Exactly the same problems come up in a deterministic or Molinist case in situations of uncertainty (and we are always in situations of uncertainty). We need an action-guiding concept of obligation that works in such situations. Whether we call this "subjective obligation" or "obligation" simpliciter, it is needed. And to handle this, we will lose the elegant simplicity of utilitarianism. Consider for instance the following case. Suppose action A is 99% likely in light of the evidence to increase the happiness of the human race by two percent, and has a one percent chance of destroying the human race. Then, you might actually justifiedly believe, maybe even know, that A will increase the happiness of the human race, since 99% likelihood may be enough for belief. But plainly you shouldn't do A in this case. Hence a justified belief that an action would maximize utility, and maybe even knowledge, is not enough.
Thursday, April 17, 2008
Could a perfect act-utilitarian make assertions?
I suspect in the end the answer to my title question may be positive, but I want to run through an argument that makes it problematic that a perfect act-utilitarian, i.e., someone who always figures out the utility-maximizing action and acts accordingly, could make assertions. I am making no claims of soundness or validity for the argument.
Making assertions is a norm-governed practice. An essential part of what it is to engage in a norm-governed practice is to accept the norms as applicable to oneself. What exactly the norm of assertion is—what conditions are such that making an assertion that p is appropriate—is controversial. Proposals made have included truth, belief, justified belief and knowledge. All of these have something to be said for them. But the following does not: "The norm of assertion is the maximization of utility." The practice of uttering that the uttering of which maximizes utility is not the practice of assertion. The perfect act-utilitarian is governed by the norm of utility-maximization in all actions. Therefore, she does not accept the norm of assertion. Therefore, she does not engage in the practice of assertion. Therefore, she does not make assertions.
Monday, April 14, 2008
Yet another counterexample to utilitarianism and reason why personal identity matters
Suppose that the world contains an infinite row of people, whom we can (if we don't mind doing such a thing at least in a thought experiment) number in order ...,-4,-3,-2,-1,0,1,2,3,4,.... All of these people are the same in all morally relevant respects, with one exception. The folks with negative numbers are all very miserable, with an equal amount of misery, and the folks with non-negative numbers are all blissfully happy, with an equal amount of happiness. A reliable genie offers you a choice: if you raise your left hand, the person with number -1 will be made blissfully happy, like the people with numbers 0,1,2,3,4,...; if you don't raise your left hand, person number 0 will be made as miserable as the people with negative numbers.
What should you do? It's clear: lift your left hand. You clearly have decisive reason to do this. But notice that total utility need not be changed by your action (assume for simplicity that your own utility and the genie's are not changed). In fact, the situation where persons numbered ...,-4,-3,-2 are miserable and those numbered -1,0,1,2,3,4,... are blissfully happy is isomorphic to the situation where those numbered ...,-4,-3,-2,-1,0 are miserable and those numbered 1,2,3,4,... are blissfully happy. So on utilitarian grounds, there is nothing to choose between these two options.
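To make the isomorphism explicit (a sketch, assuming the utilitarian total is sensitive only to the pattern of welfare levels, not to who has them): the relabeling n -> n+2 maps the miserable people of the first situation, ...,-4,-3,-2, exactly onto the miserable people of the second, ...,-2,-1,0, and the happy people -1,0,1,2,... exactly onto 1,2,3,..., preserving each welfare level, so any utility total computed for the one situation equals that computed for the other.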
Someone whose ethics is not centered on the maximization of utility will notice that even though the total utility in both cases is the same (whatever it is: it seems to be infinity minus infinity!), there is a difference for two specific people, namely those numbered -1 and 0. This is yet another way in which personal identity matters. Unless persons have an identity over time or between worlds (or both), we have a hard time making sense of the difference between the two cases. Utilitarianism does not particularly care about the identities of persons, and that's why it has trouble with this case.
Utilitarianism can perhaps be fixed to account for this. One might supplement it with the idea that when comparing utilities between possible outcomes, we only compute differences in utility. When choosing between options A and B, we let u(x,F) be the utility that possible person x has if F is chosen, and then sum up u(x,A)-u(x,B) over the union of the possible persons in the relevant A-world and the relevant B-world. Notice, though, that looking at it this way emphasizes the importance of personal identity between worlds—it matters which goods and bads befall whom. Once we agree that it matters which goods and bads befall whom, utilitarianism should seem significantly less plausible. And we may still be able to manufacture counterexamples. Suppose the genie adds that however you choose, an infinite number of equally blissful genies, causally isolated from everybody else, will pop into existence, but these genies will be numerically different in the scenario where you lift your left hand from the ones who pop into existence in the scenario where you don't lift your left hand. Then, the above utility difference method will generate infinity minus infinity as the difference between the two scenarios, which doesn't allow for a decision.
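Here is a minimal sketch of the difference method just described (the function and the welfare numbers are my own illustration; a person who doesn't exist under an option is counted as having utility 0 there, and the infinite cases are truncated to the finitely many people whose welfare differs):

```python
def utility_difference(u_if_A, u_if_B):
    # Sum u(x, A) - u(x, B) over the union of possible persons;
    # a person absent from an option's world contributes 0 for that option.
    persons = set(u_if_A) | set(u_if_B)
    return sum(u_if_A.get(x, 0.0) - u_if_B.get(x, 0.0) for x in persons)

# The genie case: only persons -1 and 0 differ between the options,
# so everyone else drops out of the sum.
u_raise = {-1: 10.0, 0: 10.0}    # raise your left hand: -1 is made happy
u_dont = {-1: -10.0, 0: -10.0}   # don't: 0 is made miserable
print(utility_difference(u_raise, u_dont))  # 40.0 > 0: raise your hand
```

The genie counterexample breaks exactly this: with numerically different blissful genies in each scenario, infinitely many persons contribute a positive term and infinitely many a negative one, and the sum is again infinity minus infinity.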
Monday, January 21, 2008
Consequentialism and counterfactuals
According to standard act consequentialist theories, an action is right if and only if there is no alternative action within one's power that would in fact have better consequences. Focus on the words "would in fact". Here we have a counterfactual. Moreover, it is a counterfactual where the consequent depends indeterministically on the antecedent. But suppose that one denies Molinism, and more generally denies that there can be any non-trivial counterfactuals where the consequent depends indeterministically (either via libertarian-free actions or through quantum randomness) on the antecedent. Then the act consequentialist theory cannot work.
One might say that our actions concern a subset of the world that we may assume is deterministic. But remember that standard consequentialist theories involve also the weighing of distant consequences. It is highly likely that between the present and a distant future, indeterministic events will have some quite significant consequences. It seems pretty likely that over a long enough period of time, for instance, there will be some car crashes from indeterministic causes (e.g., indeterministic effects in the brains of drivers, or quantum effects in defective engine-control electronics, or the like). Moreover, we surely shouldn't assume something false. If we accept quantum indeterminism, then strictly speaking all the stuff around us is indeterministic, though particular outcomes may have extremely high probability. But extremely high probability won't help those worried about whether there are non-trivial counterfactuals involving stochastic dependence.
Suppose one bites the bullet. One denies that there are any true counterfactuals about future results, but one accepts the analysis of rightness. Then one gets the result that every action is right. For no action is such that there is an action that would have better consequences, since there are not enough facts to make such a "would" true.
If we take this criticism seriously, we will either abandon consequentialism, or define rightness not in terms of what it is true to say "would happen", but in terms of expected values of actual and counterfactual outcomes. There is still a problem, though, about whether it makes sense to talk of the expected values of counterfactual outcomes when, unlike the typical Molinist, one believes that there is no such thing as a "counterfactual outcome". One might be able to define the expected values in terms of present tendencies, but now the theory is sounding less and less like consequentialism.[note 1]
Friday, December 7, 2007
Epistemic norms are a species of moral norms
"Don't accept the testimony of unreliable witnesses." "Avoid having contradictory beliefs." "Discard beliefs all of the justification for which has been undercut." "Accept the best available explanation that is not absurd." "If you assigned a probability to a hypothesis H, and then you received evidence E, you should now assign probability P(H|E)=P(E|H)P(H)/P(E) to the hypothesis."
But why? Well, if I don't follow these injunctions, then I am less likely to achieve knowledge, comprehensiveness, understanding, true belief, etc., and more likely to be ignorant and to believe falsely. Moreover, following these injunctions will develop habits in me that are more likely in the future to lead me to gain knowledge, comprehensiveness, understanding, true belief, etc., and to avoid ignorance and false belief.
But if that is all there is to it, then epistemic injunctions run the danger of not being norms at all. Rather, they seem to be disguised conditionals like:
- If you accept the testimony of unreliable witnesses, you are likely to gain false beliefs.
- If you don't accept the best available explanation that is not absurd, you're unlikely to gain comprehensiveness in your beliefs.
- If you don't raise your right arm, at most one of your arms will be raised.
This is so unless knowledge, comprehensiveness, understanding, true belief, etc. are worth having, unless they are good. If they are good, then they are to be pursued, and their opposites are to be avoided. But now we see that the force of epistemic norms just comes down to the fact that, as Aquinas put it, "good is to be done and pursued, and evil is to be avoided". But the pursuit of the good and the avoidance of the bad is what morality is. Hence, the imperative force of epistemic norms--that which makes them genuinely normative--is the same as the imperative force of moral norms. Epistemic norms just are moral norms, but moral norms concerning a particular, and non-arbitrary, subset of the goods and bads, namely the epistemic goods and bads. Likewise, there is a subset of moral norms dealing with goods and bads that come up in medical practice, and we call that "bioethics", and there is a subset of moral norms dealing with goods and bads to the agent, and we call these "norms of prudence", and so on. Non-communal epistemological norms are, in fact, a subset of the norms of prudence. Any subset of the goods and bads defines a subset of morality.
One might object that only some goods and bads fall in the purview of morality. Thus, while good is to be pursued and evil avoided, only in the case of the moral goods is this a moral injunction. But I find quite implausible the idea of identifying specifically "moral" goods. I will argue against the distinction between epistemic and moral goods in two parts. The first part of the argument will establish, among other things, that epistemic norms are a species of prudential norms. The second part will argue that prudential norms are a species of moral norms.
To help someone learn something--i.e., to help her gain certain instances of epistemic goods--for the sake of her learning is to benefit her, and can be just as much an instance of kindness or charity as relieving her pain. (Of course, not every instance of teaching is kind or charitable, just as not every relieving of pain is kind or charitable--the parallel continues to hold.) To distinguish helping others attain epistemic goods from helping others attain non-epistemic goods and to say that only the latter is moral is to take an unacceptably narrow view of morality--indeed, I think the only major moral view that makes such a claim is hedonistic utilitarianism, and its making this claim is a count against it. But if there is no difference in regard to whether we are acting in accordance with morality whether we help others achieve epistemic or non-epistemic goods, why should there be a difference in our own case? The epistemic goods in our own case are not different in kind from the epistemic goods in the case of others. If pursuit of the human good of others involves helping them achieve epistemic goods, so too the pursuit of the human good of ourselves involves helping ourselves achieve epistemic goods. But pursuit of our own human good is what prudence calls us to. Hence, epistemic norms are a species of moral norms. It is no less a part of prudence to strive for true belief than it is to surround oneself with beauty or to keep one's body healthy; it is just as much a duty of prudence to keep from false belief as it is to avoid promoting ugliness in one's environment and disease in one's body.
Now, one might say that there is a defensible distinction between the agent's goods and the goods of others, and that it is only the pursuit of the goods of others that morality is concerned with. But this is mistaken. It is an essential part of learning to be moral to realize that I am (in relevant respects) no different from anybody else, that I shouldn't make an exception for myself, that I am one of many, that if others are cut, they bleed just as I do. Utilitarianism and Kantianism recognize this. Aquinas recognizes this in respect of charity (he thinks we owe more charity to ourselves, because we owe more to those who are closer to us, but there is no difference in the kind of duty; in charity we love people because the God whom we love loves them, and so we love ourselves in charity for the same reason that we love others in charity). And a theistic ethics that grounds our duties to people in their being in the image of God, or in God's loving them, will just as much yield duties in regard to one's own goods as duties in regard to the goods of others, since the agent is in the image of God and loved by God just as others are. And if we have duties to our friends, and our friends are in morally relevant respects "other selves", then we likewise have duties to ourselves (Aristotle would certainly endorse this). It is true that some social-contract accounts of morality do not recognize this, but so much the worse for them.
Prudential norms and prudential virtues, then, are a species of moral norms and moral virtues. And epistemic norms and epistemic virtues are a species of prudential norms and prudential virtues.