Friday, June 17, 2011

Enlightened Self-Interest

A member of the studio audience wrote:

[I]n support of the role of prescriptivity in moral language one might point out that the facts you're pigeonholing as "moral" look like they fall more naturally under "enlightened self-interest" or somesuch.

Actually, no.

"Enlightened self-interest" is an ambiguous term - having two possible meanings.

One of these meanings makes claims about enlightened self-interest trivially true - true in a very uninteresting and unimportant way. The other makes claims of enlightened self-interest false, though not so trivially false.

The distinction here is between interests OF the self versus interests IN the self.

Those who suggest that claims about enlightened self-interest are both true and important usually equivocate between these two meanings. They start off speaking as if all the interests we have are interests IN the self - they all aim at the benefit of the agent. When they are backed into a corner by arguments that show this to be false, they switch to the second meaning of self-interest - interests OF the self. This version of the theory is true, but does not have any of the implications of the version they began with. When their opponent gives up attacking this second (trivially true) claim, the advocate of self-interest theory declares victory and switches back to the first (false) definition of self-interest.

The trivially true version of self-interest (interests OF the self) states that an agent's actions are motivated entirely by the agent’s own desires. The desires of others may affect his actions - but only insofar as he has a desire to fulfill (or to thwart) the other person's desires.

This is true in a biological sense - only my brain is hooked up to my muscles in the right way. My choices have to come from my brain - meaning my brain states (my beliefs and my desires). They do not come from outside my brain.

More importantly, though, this is logically true. Let us say that you were to hook up a remote control device such that you could control my body remotely. Now it is your beliefs and your desires that control this body. If that is the case, then the actions that this body now performs are no longer my actions. They would be your actions. Actions belong to (are the responsibility of) the brain whose brain states (beliefs and desires) are the proximate cause of the choices controlling those actions.

That is to say, all interests that motivate an agent's action are interests of the self - the agent's own beliefs and desires. If they do not come from an agent's beliefs and desires, then they are not his actions.

When people talk about enlightened self-interest, they tend to mean something more robust than, "The desires that cause my intentional actions are the desires in my brain."

Now, the claim that all actions are motivated by desires OF the self is quite different from the claim that all actions are motivated by desires IN the self.

A desire IN the self is a desire that P where the self ("I") is the object of the desire - "I desire that I...". I desire that I experience pleasure. I desire that I have more money. I desire that I am admired by all.

While the claim that the desires that motivate an agent's action are desires OF the self is trivially true, the claim that the desires that motivate an agent's actions are desires IN the self is sometimes false. The range of possible desires that an agent can have is as broad as the range of possible beliefs that an agent can have.

An agent can have a desire that no child suffers, or a desire that a particular piece of wilderness remain untouched by humans. He can desire that humans (or their descendants) exist far into the indefinite future and desire that the SOB that kidnapped and raped that child be made to suffer for his crimes. Just as he can believe that a God exists, he can desire that a God exist. And just as he can believe that the claims made in the Bible are true, he can desire that the claims made in the Bible are true.
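To make the OF/IN distinction concrete, here is a minimal sketch in Python. It is an illustration only - the Desire class, the set-of-objects representation, and the function name are invented for this purpose, not part of desirism's apparatus. A desire is an attitude toward a proposition; it counts as a desire IN the self only when the agent is among the objects of that proposition.

```python
from dataclasses import dataclass

@dataclass
class Desire:
    agent: str        # whose brain the desire sits in - always a desire OF this self
    proposition: str  # the P in "desire that P"
    objects: set      # the entities the proposition is about

def is_desire_in_self(d: Desire) -> bool:
    # A desire IN the self: the agent is the object of his own desire,
    # as in "I desire that I experience pleasure."
    return d.agent in d.objects

desires = [
    Desire("Jim", "Jim experiences pleasure", {"Jim"}),
    Desire("Jim", "no child suffers", {"children"}),
    Desire("Jim", "this wilderness remains untouched", {"wilderness"}),
]

for d in desires:
    kind = "IN the self" if is_desire_in_self(d) else "merely OF the self"
    print(f"desire that {d.proposition}: {kind}")
```

All three desires are desires OF Jim; only the first is a desire IN Jim.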

The desires that Jim might have that would give him reason to condemn bank robbers almost certainly include some self-interested desires (he desires that his money be safe), but they could also include desires in things other than the self. He may desire the well-being of others for its own sake - not for any benefit it may provide to him. He sees a society where bank robbing is rampant as one of widespread suffering and condemns bank robbery as a way of reducing that potential for suffering.

Desirism says that Jim's reasons for action that exist for condemning bank robbers must necessarily be his desires - desires OF Jim. But they need not be desires IN the self - desires of which Jim is the object.

Perhaps more importantly, the desires that agents have reason to create in others are often not desires IN the self. In fact, praise and condemnation are more reasonably used to inhibit or reduce desires in the self (selfishness) and to promote desires OF the self that are desires IN the well-being of others, or desires in things that tend to lead to the well-being of others, or aversions to things that tend to thwart the desires of others.

The Categorical Issue and the Elements of Morality

A member of the studio audience wrote:

[Y]ou seem to share a lot of ground (indeed, all the ground except the "ineliminability" of certain things from moral language) with what I would regard as one of the standard moral anti-realist positions.

There is another thing I do not share.

I do not share the idea that "categorical" was ever a part of morality as practiced. There is nothing to eliminate. Morality was adopted and embraced as a technique for fulfilling desires. At some point some theorists came along and asserted that its principles are categorical, but that never made it into the meaning.

Desirism accurately describes morality as practiced. That makes it a realist theory.

Look at the elements that it can account for:

The central role that rewards (such as praise) and punishments (such as condemnation) play in the institution of morality. This is something that very few competing theories even address. Let us say that moral claims are categorical - how does this account for the practice of responding to virtue with praise and vice with condemnation?

It accounts for 'ought' implies 'can' because it only makes sense to apply praise and condemnation where it can cause the reward-learning system to effect a change in desires.

It accounts for the fact that moral claims act like truth-bearing propositions. They are truth-bearing propositions in that they make claims as to whether an act is indicative of desires that people have many and strong reasons to promote or inhibit through praise or condemnation.

It also accounts for the emotive component of moral utterances - because they often contain the praise or condemnation that the truth-bearing component says that people generally have reason to present.

It accounts for the types of evidence that people bring to moral debates - evidence supporting or refuting the truth of the proposition that people generally have reason to apply rewards such as praise or punishments such as condemnation in particular ways.

It accounts for the three categories of moral claims - obligation, non-obligatory permission, and prohibition.

By the way, it also accounts for a fourth moral category - supererogatory action, or acts above and beyond the call of duty. Some actions exhibit desires that people generally have many and strong reasons to promote. However, praise and condemnation cannot be expected to bring about a desire of such strength in the public at large. We have reason to praise these people and call them heroes. But we recognize that most people can never acquire such a virtue. By virtue of 'ought' implies 'can', we do not hold them to be obligated to do so.

It fits moral claims into a general theory that can be applied to all value-laden terms. It says that all value-laden terms relate states of affairs to desires. They differ in the objects of evaluation, the desires that are relevant, the nature of the relationship (direct or indirect), and whether the relevant desires are fulfilled or thwarted.

Beauty: This term is applied to things seen and heard based on whether the experience of seeing or hearing directly fulfills the desires of the seer or hearer.

Illness and Injury: These terms evaluate changes or deviations in physical or mental functioning according to whether they tend to thwart (or give others reason to thwart) - directly or indirectly - the desires of the agent whose functioning is being examined. Furthermore, if the cause of the change is a macro cause that primitive people can see (getting trampled by a horse) it is an injury. If it is a micro cause (such as cancer or poisoning) it is an illness.

Useful: This term can refer to just about anything, but never in terms of its ability to fulfill desires directly. It is always used to identify the object of evaluation as something with the capacity to fulfill desires indirectly - in virtue of its ability to bring about something else that can fulfill desires directly.

Dangerous: This term is also used to evaluate just about anything according to its potential to thwart desires indirectly.

Virtue: This term is applied to malleable desires - desires that can be learned through triggering the reward-learning system. A virtue tends to fulfill other desires - giving others reason to use rewards (such as praise) or punishments (such as condemnation) to facilitate the learning of that desire.
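The claim that every value-laden term varies along the same four dimensions can be put in tabular form. The sketch below is a schematic rendering only - the ValueTerm fields and entries paraphrase the paragraphs above; none of the names are part of the theory itself:

```python
from dataclasses import dataclass

@dataclass
class ValueTerm:
    term: str
    object_of_evaluation: str
    relevant_desires: str
    relation: str   # "direct" or "indirect"
    polarity: str   # relevant desires "fulfilled" or "thwarted"

TAXONOMY = [
    ValueTerm("beautiful",   "things seen and heard",   "the seer's or hearer's", "direct",             "fulfilled"),
    ValueTerm("ill/injured", "changes in functioning",  "the examined agent's",   "direct or indirect", "thwarted"),
    ValueTerm("useful",      "almost anything",         "anyone's",               "indirect",           "fulfilled"),
    ValueTerm("dangerous",   "almost anything",         "anyone's",               "indirect",           "thwarted"),
    ValueTerm("virtuous",    "malleable desires",       "other people's",         "indirect",           "fulfilled"),
]
```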

There is no categorical theory that can come close to accounting for so much of the actual use of value-laden terms in general, and moral terms in particular, such that it makes sense to claim that a "categorical" component is built into the meaning of these terms. Not only can this element be eliminated, it never existed to start with.

"Anti-realism," to most people, means the loss of moral restraint. It means passion unconstrained by the effects of praise and condemnation so that everybody does what they please whenever they please no matter what they please. The fact that this is what "anti-realism" with respect to morality means to most people tells us something about what "morality" means to most people. It tells us what they take to be "real" by telling us what they think anti-realism says is not real.

What is real is the institution of using rewards (such as praise) and punishment (such as condemnation) to promote desires that tend to fulfill other desires, while inhibiting desires that tend to thwart other desires. The "categorical" nature of morality is a mistake. "Categorical" was never a part of the meaning of moral terms. This is just a theory that some people adopted - a mistake, to be discarded.

If a non-categorical theory does a better job of accounting for the elements of morality (and can fit it into a broader theory that also handles a wide variety of non-moral value-laden terms - and can fit the theory in with what is known about biology; specifically, the reward-learning system and the effects of desires on choosing actions), then I am more than comfortable with saying that this claim of "categorical" values was never there to be eliminated.

Having said this, I do not think that the meanings of moral terms are worth debating. I have little interest in convincing somebody who holds that moral terms contain some claim about categorical value that cannot be eliminated that they are wrong. If they are right, this means that all of their moral claims are false and irrelevant anyway. The debate over whether desirism is the best account of morality as practiced, or the next-best alternative to an account that renders all moral claims false and irrelevant, is only of passing interest.

Thursday, June 16, 2011

Obligation, Prohibition, and Non-Obligatory Permission

Before travelling much further, let me make sure that we keep this discussion in its context.

Assume that you are an intentional agent with desires, surrounded by a community of intentional agents with desires.

Desires - expressed in the form "desire that P" where P is a proposition - motivate agents to realize states of affairs in which the propositions that are the objects of their desires are true.

As I have argued, you have four ways to cause other agents to choose actions that would realize the propositions that are the objects of your desires.

If one of those agents has a desire that Q, you could:

(1) Bargain: "If you help me to realize P, then I will help you to realize Q".

(2) Threaten: "If you do not help me to realize P, then I will act so as to realize not-Q".

(3) Alter beliefs: "You can most efficiently realize Q through actions that will have, as a side effect, the realization of P."

(4) Alter desires: Give the person with a desire that Q a desire that R that will, given his beliefs, motivate him to act in ways that will realize P.
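To keep the four options straight, here is a minimal sketch in Python. It is an illustration only - the Agent class and function names are invented. Options (1) and (2) are modeled as offers made to the other agent, while (3) and (4) actually modify the agent's mental states:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)  # proposition -> believed true?
    desires: dict = field(default_factory=dict)  # "desire that ..." -> strength

# You have a desire that P; the other agent has a desire that Q.

def bargain(p: str, q: str) -> str:
    # (1) Make realizing P instrumental to the other agent's desire that Q.
    return f"If you help me realize {p}, I will help you realize {q}."

def threaten(p: str, q: str) -> str:
    # (2) Make failing to realize P a threat to the other agent's desire that Q.
    return f"If you do not help me realize {p}, I will act to realize not-{q}."

def alter_beliefs(other: Agent, p: str, q: str) -> None:
    # (3) Leave the desire that Q alone; change beliefs about how to serve it.
    other.beliefs[f"the most efficient way to realize {q} also realizes {p}"] = True

def alter_desires(other: Agent, r: str, strength: float = 1.0) -> None:
    # (4) Give the agent a new desire that R which, given his beliefs, will
    #     motivate actions that realize P. On this account, morality works
    #     here - through rewards such as praise and punishments such as
    #     condemnation acting on the reward-learning system.
    other.desires[r] = other.desires.get(r, 0.0) + strength
```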

I have asserted that morality has to do with method (4). Specifically, morality is concerned with the use of rewards in the biological sense (such as praise) and punishments in the biological sense (such as condemnation) to trigger the reward-learning system to adjust desires.

A good desire - or a "virtue", in this sense - is a desire that people can and generally have many and strong reasons to create or promote using these tools. A "vice", on the other hand, is a desire that people generally have many and strong reasons to inhibit or extinguish using these tools.

I should also extend this to cover the concepts of moral obligation, non-obligatory permission, and moral prohibition.

A "moral obligation" is an act that a person with good desires (a virtuous person) would perform.

If an agent performs this action, we have at least prima facie evidence that the agent has those desires that people generally have many and strong reasons to create or promote, and lacks those desires that people generally have reasons to inhibit or extinguish.

The tools for creating and promoting this virtue in others involve using rewards (such as praise). The virtuous person - the person who does what he ought - gets praise and reward, as a way of encouraging like desires in him and in others who witness these rewards and praise.

On the other hand, the person who does not do what he ought - who shirks an obligation - is to be subject to what are punishments in the biological sense, which includes moral condemnation.

But, remember, an obligation is not what people actually praise others for doing and condemn them for not doing. It is what people have the most and strongest reasons to praise people for doing and condemn them for not doing.

People - even whole cultures - might be wrong about what they have reason to praise or condemn, as with a group who thinks that eliminating a certain disposition in others (homosexuality) is necessary to prevent widespread suffering at the hands of an evil and malicious deity.

A moral prohibition, on these same terms, is an act that a person with good desires would not perform. If an agent does perform such an act, then it follows that the agent either lacks certain virtues, or has certain vices. People generally have many and strong reasons to bring the social tool of punishment (in the biological sense - which includes condemnation) to bear against such individuals. This serves to trigger the reward-learning system to inhibit the relevant desires or promote the virtuous aversions that would then cause people to choose not to perform similar actions.

Between these, we have the realm of non-obligatory permissions.

I cannot simply use the term "permission" here because "permitted" means "not prohibited" - and even obligatory actions are permitted. However, obligatory actions do not exhaust the realm of that which is permitted.

My decision to write this post, for example, is permissible, but not obligatory. I have a non-obligatory permission to eat oatmeal for breakfast, or to have some of the leftover pizza instead.
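The three categories can be stated schematically. In the sketch below (Python; the VirtuousChoice enum is an invented stand-in for the "person with good desires" test, not part of the theory), the moral status of an act falls out of what such a person would do:

```python
from enum import Enum

class VirtuousChoice(Enum):
    WOULD_PERFORM = 1   # a person with good desires would perform the act
    WOULD_NOT = 2       # a person with good desires would not perform it
    EITHER = 3          # good desires leave the choice open (oatmeal or pizza)

def moral_status(choice: VirtuousChoice) -> str:
    if choice is VirtuousChoice.WOULD_PERFORM:
        return "obligation"
    if choice is VirtuousChoice.WOULD_NOT:
        return "prohibition"
    return "non-obligatory permission"

print(moral_status(VirtuousChoice.EITHER))  # e.g., eating oatmeal for breakfast
```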

The fact is, there are some desires that we have reason to want some people to have, but not all people. A variety of desires, in some areas, reduces conflict and produces a mutually beneficial harmony.

One clear example of this is desires related to the choice of a job. Rather than having everybody want to be an engineer and trying to get some to live the disappointed life of a teacher, we get better social harmony and desire fulfillment if some people like engineering and others like teaching. So, the professions of engineering and teaching fall into the realm of non-obligatory permissions.

We reduce competition and conflict as well if we like different foods. What to eat tends to fall into the realm of non-obligatory permissions. What we do for entertainment fits that category.

In all of these cases, we can certainly find a subset that is morally prohibited. "Burglar" is not a morally permissible profession because people generally have many and strong reasons to use rewards and punishments to promote an aversion to taking the property of others without consent. I think we have many and strong reasons to promote an aversion to eating human flesh, and to entertaining oneself with child pornography.

However, the existence of some prohibitions in this realm does not disprove the claim that there is also, within this realm, a vast area of non-obligatory permissions. And the reason for non-obligatory permissions is that there are some desires we have no particular reason to make universal or to extinguish using those social tools that touch on the reward-learning system.

Tuesday, June 14, 2011

Categorical Prescriptivity and the Meaning of Moral Terms

Another comment from the studio audience.

How is this not simply a metaethical error theory (there are no moral facts, strongly construed), combined with an attempt to reconstruct something "almost as good" as morality? . . . For example, someone who held this position might say that there are no moral facts as people generally construe them, since there are no categorical prescriptions, and that is an ineliminable part of standard moral discourse. However, insofar as we form a community of people who care about other people's welfare, there are certain "moral-like" imperatives that apply to us because of that fact.

Well, I would accept that if it were the case that categorical prescriptions were an ineliminable part of standard moral discourse, then it would follow that desirism is an error theory combined with a proposal for something "almost as good" (though I would argue that it is, in fact, significantly better than the fiction and myth of categorical prescriptions).

As it turns out, I reject the antecedent. I hold that people invented and embraced morality because they saw in it a significant potential for realizing those states of affairs in which the propositions that are the objects of their desire-states are true.

They did not fully grasp what it is exactly that had this great potential. Some suggested that they must be categorical imperatives. However, we have to reject this option because categorical imperatives, in virtue of the fact that they do not exist, do not have any potential to help people in realizing their desires.

So, categorical prescriptivity - far from being an ineliminable part of moral discourse - is a theory about the nature of what has this great potential, a theory that can easily be eliminated in virtue of the fact that categorical prescriptions do not exist.

Furthermore, I don't think that there is any such thing as an ineliminable part of discourse. Language is an invention, and we can do with it what we choose. If chemists can eliminate "having no parts" from the definition of an atom, and biologists can eliminate "bad air" from the definition of "malaria", then ethicists can eliminate "categorical prescriptivity" from moral terms.

Still, as a final point, I do not think that this question is worth a great amount of debate. If somebody wants to insist that moral terms, to them, refer to categorical prescriptivity, I do not need to argue that this fails to correspond to the public use of the term. It is enough to argue that 'morality' understood this way does not exist and, as such, it has no relevance in real-world decision making and is not worth bringing up as if it is relevant to any choice being made.

Whereas desires that people generally have many and strong reasons to promote using rewards such as praise and punishments such as condemnation are very real and are very much worth bringing up when discussing real decisions that are to be made in the real world. The facts about these desires are particularly relevant to decisions governing the use of rewards such as praise and punishments such as condemnation.

I have "real" and "of great importance in real-world decision making" on my side. You can keep "categorical prescriptivity" and, in keeping it, render all of your moral claims false and irrelevant in the real world.

Moral "Ought" and Prescriptivity

I would like to address this comment:

I am not sure I understand. I think I agree with you that people act from desires and influence others to implement those desires. To me this describes the situation as it is and does not imply a moral "should" or "ought".

Could you explain what elements of moral "ought" might be missing from this description.

It is commonly understood that description and prescription are mutually exclusive categories. I disagree with this.

Ultimately, I hold that it is a very strange view that seems to assume that the universe is made up of two different types of things that can somehow interact with each other - things that can be described and not prescribed, and things that can be prescribed but not described.

Ultimately, I hold that "prescriptions" are a subset of "descriptions". All squares are rectangles, but not all rectangles are squares. All prescriptions are descriptions, but not all descriptions are prescriptions.

So, what does a prescription describe?

It describes a relationship between some possible state of affairs and desires.

There are two types of ought - practical ought and moral ought.

If agent A has a desire that P, and action X can improve the probability of P, then A ought to do X (unless A has more and stronger reasons - desires that Q - that are incompatible with P).

For moral ought, I propose that it is a description of the case in which people generally have many and strong reasons (desires) to act so as to apply rewards (such as praise) and punishments (such as condemnation) in order to promote or inhibit particular desires or aversions.
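The practical-ought formula above can be given a toy rendering. In the sketch below (Python; the numeric desire strengths and probability shifts are invented inputs, not part of the theory), the action an agent practically ought to perform is the one that best serves the agent's desires, all of them weighed together:

```python
def practical_ought(desires: dict, effects: dict) -> str:
    """Return the action the agent (practically) ought to perform.

    desires: proposition P -> strength of the agent's desire that P
    effects: action X -> {P: change X makes to the probability of P}
    """
    def expected_fulfillment(action: str) -> float:
        return sum(desires.get(p, 0.0) * dp for p, dp in effects[action].items())
    return max(effects, key=expected_fulfillment)

# X advances P but sets back the more strongly desired Q; Y advances Q only.
print(practical_ought(
    {"P": 1.0, "Q": 2.0},
    {"X": {"P": +0.8, "Q": -0.5},
     "Y": {"Q": +0.6}},
))  # -> Y: the agent's "more and stronger reasons" win out
```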

My question would then be - what aspects of conventional "ought", as used in practice, are not captured by this claim?

"It's wrong to lie."

People generally have many and strong reasons to apply rewards (such as praise) and punishments (such as condemnation) so as to promote an aversion to lying.

What is there that is found in the actual use of moral "ought" and "should" that is missing from this account?

Ultimately . . . let's say you don't want to use moral "ought" in this case. You want to insist that moral "ought" requires some kind of categorical imperative or a command from God.

I answer . . . Fine. Then moral "ought" does not exist. We quit using "ought" statements in all real-world decision making. Desires that people generally have many and strong reasons to promote or inhibit using rewards and condemnation still exist.

Or, let's say that you want to apply moral "ought" to . . . say, the greatest good for the greatest number.

Again, are you limiting yourself to that which is objectively true of "the greatest good for the greatest number?" If you are, then I am going to agree with everything you say. But, if you are assigning qualities to "the greatest good for the greatest number" (e.g., that it has some sort of intrinsic prescriptivity or that people are always justified in condemning those who do not promote the greatest good for the greatest number), then I am going to accuse you of making things up.

When I apply moral "ought" to "that which people generally have many and strong reasons to apply rewards (such as praise) and punishments (such as condemnation) so as to promote those desires that would motivate such an action," am I saying anything about this subject that is not objectively true?

If I am, then I would also be guilty of making stuff up. So, I try to avoid that. But if that is your accusation, I need you to specify, exactly, what I am saying about these desires that people have many and strong reasons to promote that is not true. What is it, exactly, that I am making up or leaving out?

A Desire for Justice

Austin Nedved is kind enough to be providing me with a useful foil with which I can work. His comments are reasonably well informed, well presented, and represent common forms of objections that I have encountered. I hope that Austin does not mind my use of these conveniences.

AUSTIN: [W]hat we desire the most is justice.

ALONZO: Regardless of what your personal concept of justice may be, we can falsify this claim just by pointing out the massive differences in what different communities call "justice". This alone should disprove any claim that there is a thing called "justice" that we desire the most.

AUSTIN: The conventionalist argument you are making here refutes itself. A great number of cultures subordinate experimental science to divine revelation. Some have even rejected the validity of experimental science outright. Others still have rejected anything that conflicted with what Aristotle had said. But surely this does not entail that there is no truth, or that there are no legitimate sources of knowledge. The Conventionalist's claim that the multiplicity of understandings of what constitutes truth prevents us from having an objectively true understanding of truth, is self-defeating.

I was not making a conventionalist claim. My argument was not, "Everybody disagrees, so there is no truth". Instead, my argument took Austin's claim as a claim that has implications in the observable world, and showed that the observations of the world falsify the hypothesis.

Let us assume that somebody were to make the claim that what we desire most is broccoli. If true, this would have implications for what we would expect to observe in the eating habits of different cultures. That is to say, we would expect to find people throughout the world eating a lot of broccoli if it were available, and putting a great deal of effort into making sure it is available.

Let's assume that we discover in places where broccoli is available that one group mostly eats potatoes, another mostly eats beef, and yet another mostly eats pasta, while a fourth mostly eats broccoli. In the light of these observations, it would be hard to maintain the thesis that what we desire the most is broccoli.

One way out of this would be to note that the first culture's word for potatoes is 'broccoli'. The second culture calls beef 'broccoli', while the third calls pasta 'broccoli'. If this is what we find, then this too would refute the thesis that there is a single thing called 'broccoli' that we desire.

Neither horn of the dilemma makes use of "the conventionalist argument". That is to say, if one were to make these objections to the claim, "What we desire the most is broccoli," we would not expect the broccoli theorist to answer, "Your conventionalist argument refutes itself." The conventionalist argument is not in play.

There are those who argue that the fact of moral disagreement among individuals or cultures implies that there is no fact of the matter. One leading proponent of this argument was J.L. Mackie. He had two main arguments against 'objective value' - one of which is the Argument from Disagreement. People have different opinions on what has value, so objective value does not exist.

However, this is as problematic as saying that people have different opinions on the age of the earth, so there is no fact of the matter. Or, even more problematically, people have different opinions on objective value, so there is no objective value.

Now, we could interpret Mackie as saying that we have no objective way to resolve these disputes. However, this is a mere assertion - not an argument. And it is a question-begging assertion at that.

Anyway, I am a moral realist. I hold that there are moral facts independent of the sentiments of the speaker. There is moral disagreement, but that simply implies that some people are wrong. It does not imply that there is no fact of the matter.

The types of things that people can be wrong about include beliefs in a god, or intrinsic values, or making inferences from false premises such as a social contract, impartial observers, or decisions made behind a veil of ignorance.

Certainly, one of the things we can know in this world of facts is that it is not the case that what we desire the most is broccoli. We know this by looking at the world and seeing people showing great interest in a number of things, many of which are not broccoli. This appeal to the fact that people have a number of different likes and dislikes is not a "conventionalist argument". It is an observation that falsifies the hypothesis, "What we desire the most is broccoli." This same set of observations also falsifies the thesis, "What we desire the most is stamp collecting," and, as it turns out, "What we desire the most is justice."

Instead, we have a range of desires - for sex, for pleasure, to eat, to drink, for companionship, to avoid pain. We have the capacity to learn desires based on our experience - cultural preferences, learned fears, and other likes and dislikes. In this, there is no evidence that what we desire most is broccoli or stamp collecting or justice.

Friday, June 10, 2011

Implications of a Moral Should

In my last post I suggested that moral statements have two main components.

(1) A truth-bearing component that says, "People generally have many and strong reasons to apply the tools of reward (such as praise) and punishment (such as condemnation) to the reward-learning system of others so as to promote those desires and aversions that would cause people to choose that which is called good, and refrain from choosing that which is called evil."

(2) An emotive component - the very act of praise or condemnation that the truth-bearing component says that people generally have many and strong reasons to employ.

Also, desires are the only reasons for action that exist. The many and strong reasons I mentioned in (1) turn out to be many and strong desires. Desires, in turn, are propositional attitudes. A desire that P is a mental state that motivates an agent to choose those actions that would realize P in a universe in which the agent's beliefs are true and complete.

People, when they make moral claims, actually make all sorts of references to reasons for action other than desires. Divine commands, categorical imperatives, intrinsic values, social contracts, impartial observers - all are offered up as reasons to offer rewards such as praise and punishments such as condemnation. Yet, all of these claims are false. None of these reasons exist.

Others offer the suggestion that the truth-bearing component only refers to the desires of the speaker. It merely states, "I have a pro or con attitude towards you doing X." Yet, they consistently deny that merely having a pro or con attitude justifies the implications that moral claims have. Is it morally permissible to kill somebody based on the fact, "I have a pro-attitude towards killing you"?

It is true that people never sincerely assert, "P is true" unless they believe that P is true. However, this does not imply that "I believe that P" is a part of the meaning of "P is true." Similarly, a person does not generally assert "P is good" (or, what amounts to the same thing, "P is good" is true) unless they have a pro attitude towards P. But this does not imply that it is a part of the meaning.

But if you want to use the term "moral should" to refer to something else - a red flower, for example - you are free to do so - so long as you limit yourself to making objectively true claims about those red flowers. Everything else I would put in the category of "make-believe".

So, what is objectively true about these components - when combined with the fact that desires are the only reasons for action that exist?

(3) The truth of the truth-bearing component is independent of the sentiments of the person making the claim.

It does not matter what you believe, or what your opinion is, or what you feel - there is a fact of the matter as to what sentiments people generally have the most and strongest reasons to promote or inhibit. Any assertions you make about this fact could be completely wrong.

(4) Whole societies can be mistaken about what is right and wrong.

The beliefs and sentiments, even those that dominate a society, are not necessarily the right ones for that society. People may think that they have reason to promote or inhibit certain desires, only to be totally wrong.

A clear example would be a society under the grips of a primitive superstition. Such people might think, for example, that some busy-body deity with nothing more useful to do with its time will visit suffering on a community that tolerates homosexual activity. Even if everybody agrees with this, they would all be wrong. No such reason for action exists. It would be a mistake to appeal to the sentiments of the majority to decide right and wrong.

(5) A person can know that something is wrong and not care.

There is nothing about the fact that people generally have reason to employ punishment events to the reward-learning systems of others that would inhibit their dispositions to perform certain acts that implies that a particular agent has a reason not to perform those acts.

The purpose of morality is not to keep people from doing what they already have reasons to refrain from doing. It is to give them reasons they might not already have to refrain from those actions.

Some of those reasons take the form of incentives and deterrents. These incentives and deterrents act on the desires the agent already has - desires to be fulfilled by the incentives or thwarted by the deterrents. But these are not the reasons that morality speaks of.

The reasons of morality involve the creation or strengthening of some desires, and the weakening and extinguishing of others. It does not appeal to the reasons the agent has, but those that reward and punishment have the capacity to cause.

Some will continue to protest that this is at odds with the fundamental definition of morality. However, against those protests, I remind the reader that you cannot define things into existence. However you decide to define the word 'Pegasus', defining it as a winged horse will not allow winged horses to come into existence.

You can define morality as what appeals to the sentiments of the speaker. Even under desirism, "the sentiments of the speaker" are real, and we can make objectively true claims about them. Any objectively true statement about the sentiments of the speaker has to be one that desirism agrees with – otherwise, desirism is in error. Otherwise, the implication is that desirism contradicts a fact about the sentiments of the speaker.

However, when you go outside of what is objectively true of the sentiments of the speaker, or draw implications that do not follow from these facts, you have left reality behind and entered the realm of make-believe. It is said that you cannot derive 'ought' from 'is', and that there is a gap between 'fact' and 'value'. I have a better term for what stands outside of the realm of 'fact', and it is not 'value'. It is 'fiction'.

Complain, if you want, that this does not capture your perfect super-dictionary definition of moral 'should'. But take care - your quest for the best definition may well define morality right out of existence.

Thursday, June 09, 2011

Moral "Should" - A Comment from the Audience

A member of the studio audience writes:

There seems to be a problem here. If I personally have no reasons not to lie, and doing so would overall benefit me, there can be no possible reason why I should not lie. (Suppose I am unbothered by the negative consequences that others would inflict on me for lying.) This results in an absurd situation in which it is "reasonable" for me to lie, while it is also reasonable for others to try to prevent me from lying.

I do not see this as an absurd situation. In fact, I think it is quite common.

Lying would still be counted as immoral in this case. It is still true that people generally have many and strong reasons to promote an aversion to lying. The person who lies can be condemned as evil for not having aversions that people generally have many and strong reasons to create through acts of condemnation. Yet, it may still be the case that he has no reason not to lie. People generally have failed to give him such a reason.

Fortunately, I think there is a solution to this problem, and this solution involves distinguishing between two different sorts of "oughts": "ought" in the non-moral sense, and "ought" in the ethical sense. We are using the term non-morally when we say something like "If you want your car to have a long life, you ought to change the oil frequently." "Ought" is being used in the ethical sense when we say something along the lines of "I understand that, while murdering that person might benefit you, you ought not to kill him."

Desirism allows for something very similar to what you write here. "I understand that, while murdering that person may benefit you, people generally have many strong reasons to apply forms of punishment (such as condemnation) to the reward-learning system of others as a way of inhibiting the desires that would motivate such an action."

However - I suspect you are wanting to assert some sort of Kantian categorical imperative - an "ought" that does not have a goal. It's "just wrong" and that is all there is to it.

"Ought" in this categorical sense does not exist. There is no such thing as "just wrong". All 'just wrong' claims, no matter how popular, are false.

Desires are the only reasons for action that exist. They are the only kinds we find any actual evidence for - found in their ability to explain and predict intentional actions. Desires are propositional attitudes that can be expressed in the form "desire that P". The goal of a desire that P is a state of affairs in which P is true. All of our behavior is goal directed - including praise and condemnation. All of our motivation comes from our own desires.

Your definition captures the categorical nature of moral statements, but at the cost of making them mythical entities of no relevance or importance in the real world. My use sacrifices the categorical element of moral ought, but allows moral claims to remain true and important. They are all about malleable desires that people have many, strong, and real interests in promoting.

There is a precedent for this in chemistry. It was proposed that atoms were made up of parts. It could have been argued that this claim violated the essential meaning of the word 'atom'. The word comes from ancient Greek and means, literally, "without parts". Chemists faced a choice. They could have kept the essential meaning and insisted that a huge number of claims made in chemistry before that point were false. Or they could drop this essential meaning and allow chemistry to progress much as it had.

Please note that this choice in no way threatened the objectivity of chemistry.

Ethics faces the same choice. It can preserve the categorical element of moral terms and render all moral claims false. Or it can abandon that element and allow moral claims to remain potentially true and important.

I opt for the second option.

It should go without saying that what we desire the most is justice.

Actually, this is false.

We evolved dispositions towards those desires that brought our ancestors biological success. Desires for sex, desires for food and drink, desires for the protection of our offspring, aversions to that which increases the possibility of injury and illness (e.g., the view down a steep cliff or the smell of rotting flesh).

Plus we have some malleable desires - modified by experience (particularly the social norms we pick up as children in cultures that have widely varying amounts of justice and injustice).

Regardless of what your personal concept of justice may be, we can falsify this claim just by pointing out the massive differences in what different communities call "justice". This alone should disprove any claim that there is a thing called "justice" that we desire the most.

Thursday, June 02, 2011

Moral "Should" Statements

A couple of weeks ago, I began what one member of the studio audience has called a reboot of desirism by talking about 'should' statements.

(1) The only sensible answer to a "should" question (e.g., Why should I do X?) is to present the agent with some reason for action that exists, or some fact that ties the action or its consequence with some reason for action that exists.

That was the last time I talked about the word "should" in its prescriptive sense.

Instead, I went into a series of posts imagining that you are an intentional agent motivated to realize that which you desire - surrounded by other intentional agents motivated to realize what they desire.

Under these assumptions, I asked what you could do as an agent with a desire that P to get another agent with a desire that Q to realize P - or at least refrain from realizing not-P.

I discussed four options:

(1) Bargaining: If you help me to realize P, I will help you to realize Q.

(2) Threatening: Unless you act to realize P, I will act to realize not-Q.

(3) Belief modification: Give the agent with a desire that Q those beliefs that will motivate him to try to realize Q with actions that would realize P.

(4) Desire modification: Instead of taking his "desire that Q" as a given, modify those desires so that the agent has desires that he will tend to realize through actions that realize P.

For example, I argued that in a simple bargain, if you should realize your side of the bargain before your counterpart realizes his, your counterpart will lose all motivation to complete his part. Realizing P will cease to be instrumental to realizing Q. Realizing P will only be completed if your counterpart has some additional motivation for realizing P after you have done your part to realize Q.

I discussed the options of reputation and an aversion to breaking promises - of which the second would be the more reliable motivation. Your "desire that P" implies a motivating reason to seek out bargains with others who you have reliably determined have an aversion to breaking promises.

And what is true of you is true of those other agents.

These are facts about the world - implications of an agent with a desire that P bargaining with an agent with a desire that Q.
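A toy model of the bargaining point: once you have performed first, realizing P is no longer instrumental to your counterpart's desire that Q, so he completes the exchange only if some further motivation - reputation, or an aversion to breaking promises - outweighs the cost of keeping his end. The function and parameter names below are invented for illustration:

```python
def counterpart_will_complete(you_performed_first: bool,
                              value_of_reputation: float,
                              aversion_to_breaking_promises: float,
                              cost_of_performing: float) -> bool:
    # Before you perform, realizing P is still instrumental to the
    # counterpart's desire that Q - the bargain itself motivates him.
    if not you_performed_first:
        return True
    # After you perform, the instrumental motivation is gone. Only some
    # further motivation can outweigh the cost of keeping his end.
    return (value_of_reputation + aversion_to_breaking_promises) > cost_of_performing
```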

Yet, in this discussion and others like it, I did not draw any conclusions expressed in the form of what you "should" do. I did not prescribe any action - I simply described the actions that were compatible with your desire that P.

Now, I want to bring back the claim that "should" has to do with "reasons for action that exist," and desires are the only reasons that exist. Should statements ARE descriptions of actions compatible with given desires that exist.

When I say, "You should do X", a sensible question to ask is, "Why should I do X?"

The sensible answer to this question is for me to describe the relationship that exists between the action that I am recommending and the reasons for action that exist. Reasons that do not exist are not relevant to what you should really do. And desires are the only reasons for action that actually exist. So, my answer to your "should" question is to relate the action to various reasons for action (desires) that exist.

At this point, we can divide these reasons for action that exist (desires) into two groups. There are reasons for action that you have, and reasons for action that exist but that you do not have. This second group of reasons for action consists of the reasons that other intentional agents have. It refers to desires that exist that are not your own. They are real. They exist. However, they are not yours - in the same way that other people's hands and feet are real, but are not yours.

You are only going to be directly motivated by the reasons for action (desires) that you have - not by all of the reasons for action (desires) that exist. Your desires motivate your actions. The desires of other people motivate their actions. A claim that a particular reason for action exists does not motivate you to act directly unless that reason for action that exists is one that you have.

However, this is not all that can be said about reasons for action that exist - but that you do not have.

While those reasons may not motivate you directly, they are reasons for other people to act in particular ways that will affect you. They are reasons that exist for other people to bargain with you or threaten you. They are reasons that exist that determine whether they will keep or break bargains, tell you the truth, or act so as to realize not-P.

For the purposes of this series, one important fact is that they are reasons that exist for them to act so as to modify your desires - to give you different reasons for action. That is to say, they have reasons to use reward (such as praise) and punishment (such as condemnation) to trigger your reward-learning system in a way so as to create and strengthen in you certain desires, and to weaken or eliminate others.

In that sea of reasons for action that exist, there are a great many and strong reasons for promoting (or inhibiting) some desires - such as the desire to keep promises, to tell the truth, to refrain from threatening those who do not make threats, and the like. I can make real-world claims about the desires you have or could have and the sea of reasons for action that exist for offering rewards and condemnations.

When I say, "You should not lie" in this sense - the moral sense - I am not saying that you HAVE reasons not to lie. I am saying that there exist a great many and strong reasons for people to cause you to have a reason not to lie. I am saying that they have many and strong reasons to offer rewards (such as praise) to those who are honest, and to offer punishments (such as condemnation) to those who lie.

But I am not just making these factual claims. I am also, at the same time, giving praise to those who are honest, and condemning those who lie. I am not only stating that reasons exist to trigger the reward-learning system so as to promote honesty and discourage lying, I am trying to trigger the reward-learning system so as to promote honesty and discourage lying.

There are theories that say that moral claims aim to point out some fact that, itself, would motivate an agent to behave differently. No such facts exist. Beliefs only interact with the desires that an agent already has - they do not create new desires or modify existing desires. The reward-learning system modifies desires. But the reward-learning system does not respond to facts. It responds to rewards (such as praise) and punishments (such as condemnation).

You can respond sensibly to my claim that you should not lie by providing evidence that people, in fact, do not generally have reasons to praise those who are honest and condemn those who are dishonest. Or, you can respond to a claim that homosexual acts are wrong by pointing out that people do not really have reasons to praise those who refrain from homosexual acts and condemn those who engage in such acts. Thus, the praise or condemnation you offer is not, in fact, praise or condemnation that people actually have genuine reasons to give. It is unjustified praise and condemnation, grounded, ultimately, on the false beliefs or malicious interests (interests or desires that people generally have reason to condemn) of those who provide it.

You may respond that this is not what you mean by the word "should", or that you do not agree with the claim that this captures how the word is actually used. Neither of these counter-claims is actually worth a great deal of effort. Neither proves that the substantive claims of this theory are false. They are merely disagreements over the language used in expressing those substantive claims, not the substantive claims themselves.

Regardless of the words people actually use, the substantive claim that people generally have many and strong reasons to use rewards (such as praise) and punishments (such as condemnation) to promote a desire to be honest and an aversion to lying remains true. The fact that you - and people generally - often have reason to bargain only with those who have an aversion to breaking promises remains true. They are true no matter what language you decide to speak when making these claims.

Thursday, May 26, 2011

The Next Big Space Project

Apollo astronauts Neil Armstrong, Jim Lovell and Gene Cernan have written an opinion piece in USA Today accusing Obama of killing America's space program and destroying America's leadership in space.

(See: USA Today: Is Obama grounding JFK's space legacy?)

I hold the opposite view. Obama's vision for space development was the best and most hopeful plan I have seen since Apollo – at least until Congress ran over it – severely wounding (though not entirely killing) it.

As I see it, Armstrong, Lovell, and Cernan, as well as a great many space activists, are simply stuck in the past. They want to relive the glory days in which a President steps up to the microphone to announce to the nation another grand space project comparable to Apollo, whereby the nation rallies around the cause and agrees to devote massive amounts of will and resources to reaching this grand and glorious goal.

Because Obama did not do that, he deserves their contempt. On the other hand, they had praise for Bush, who offered a plan that fit this model – the Constellation project.

However, these types of huge programs are substantially worthless.

The Apollo program was not worthless. It served its purpose as a proxy war with the Soviet Union that accomplished something great – much better than destroying half the world with a rain of nuclear weapons. However, we have no proxy war to fight now. That is precisely why we lack the public will to have another program like Apollo.

When Apollo 11 landed on the moon, the proxy war ended. We had won. Immediately after this – well before Apollo 13 even launched – the United States had moved on. The game was over. But, unfortunately, some of the players in that game have not gotten used to the idea that the game has ended. They want to keep playing the same old game.

It’s not difficult to understand this attitude. While the game was being played, they were national heroes. They still live in the glory of that wonderful victory that they pulled off against the Soviet Union. Wanting to relive (at least by proxy) their glory days, they demand more of the same. Because the President does not wish to provide it, they accuse the President of ruining America.

That game is over. It ended 40 years ago. It was a great effort. We won. That's something to be proud of (and something to feel a great deal of relief over). But that game has ended. It is time to move on.

Going forward, we need a space program that makes sense in the 21st century, not an instant replay of cold-war posturing.

The new space program does not consist of proxy-war projects headed by a government bureaucracy. It is to be found in the efforts of companies such as SpaceX, Virgin Galactic, Bigelow, SpaceDev, and Armadillo.

I have intentionally left Lockheed and Boeing off of this list. These companies made a great deal of money supporting the proxy war of the 1960s and its aftermath. They have a great deal of incentive to see the proxy-war form of project continue. Their laziness and lack of innovation is exactly what makes the smaller companies listed in the previous paragraph viable. The biggest threat those other companies face is that these giants might actually decide to become entrepreneurs again.

Having said this, space development is also a public good that deserves some amount of public support. Ultimately, space development is the best hope we have for the long-term survival of the human race. The possibility of setting up a set of diverse and independent cultures will allow for a degree of social experimentation in different political and social models that we have not seen in nearly two centuries.

These benefits argue that governments should invest some money in these projects. However, that investment should be consistent with harvesting the public goods that space development promises to provide. This is NOT done by announcing another Apollo-style proxy war project. This is done by offering support for projects that lie in the same direction as the profit-making opportunities that private companies have identified.

America is in the forefront of the computer industry. However, we do not keep our lead by having the President announce hundred-billion dollar projects to build the largest and fastest computer. It happens because American entrepreneurs realize that there is a profit to be made in making and selling computers that serve real human needs. The effect of a hundred-billion dollar computer project would not be to ensure American leadership, but to divert a hundred billion dollars from productive computer development that serves human needs into a public computer project that exists merely for show.

We might get a few useful spinoffs from another proxy-war type space project - but we would also get spinoffs from a hundred-billion dollar project to dig the biggest hole we can dig in 10 years and fill it in again, or a hundred-billion dollar project to make the largest possible ball of string (or pyramid).

Here, again, I come back to the notion that the Apollo program was a proxy war. Yes, the Apollo program produced a great many technological innovations that ultimately proved useful. But not nearly as many - and not in nearly as short a time - as did World War II. Proxy wars, like real wars, are great at producing these types of benefits. But that does not make them worth the cost.

We do not need, and it does not serve our national interests, to have another huge proxy-war space project. What we need is for the government to provide what help it can to whatever private initiatives people can imagine that aim to serve real needs and interests.

We need to stop living in the 1960s and start living in the 2010s.

Rewards and Punishments

Before I took a brief detour to discuss the rapture, I was posting about the fact that you are an intentional agent in the world with desires motivating you to realize that which you desire. You are surrounded by other intentional agents. However, the fact that you desire that P provides them with no motivation to realize P or to refrain from realizing not-P.

So, what can you do to get these other agents to realize P or, at least, refrain from realizing not-P?

I have looked at three options so far. Given an intentional agent in the community with a desire that Q, you might have an opportunity to realize P with any of the following:

(1) Bargaining: "If you act so as to realize P, then I will act so as to realize Q."

(2) Threats: "Unless you act so as to realize P, I will act so as to realize not-Q."

(3) Belief Modification: "The best way for you to realize Q is via Action A (which will realize P)."

There are versions of each of these for getting other agents to refrain from realizing not-P.

In this post, I would like to discuss a fourth option.

Desire modification.

I want to begin by introducing another fact about those other intentional agents that you find yourself living with. For the most part, they each have a reward-learning system. The way this works is that, when an agent performs an act that produces a state called a "reward", their desire to perform that type of action gets stronger. And if an action produces a state of "punishment", an aversion to performing such an act grows stronger.

Furthermore, those other intentional agents have mirror neurons. This means that if Agent A experiences a reward or punishment, and agents B, C, and D witness it, then they will experience something very similar to the same state that A has experienced, with these same effects.

So, when A performs a type of action that would help to realize P, and you reward him in the presence of B, C, and D, then all four agents will likely acquire a slightly stronger desire to perform that type of act - contributing to the greater realization of P in the future.

The same is true if you punish A for acts that tend to realize not-P in the presence of B, C, and D.

In fact, you don't even need to have a real agent A performing a real action resulting in a real reward or punishment to have this effect on B, C, and D. Agent A might be a fictitious character, enduring fictitious rewards and punishments, while the community identifies him as somebody who would be worthy of rewards or punishments in the real world. The effect will still be to trigger the reward-learning system in the audience so as to promote some desires and inhibit others.
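For readers who want the dynamic spelled out, here is a minimal sketch in Python. The update rule, the learning rate, and the mirror-neuron discount are all my own illustrative assumptions - not claims about how actual brains implement reward learning - and the only point is the direction of the effect described above.

```python
# A toy model of the reward-learning effect described above.
# LEARNING_RATE and MIRROR_FACTOR are invented for illustration.

LEARNING_RATE = 0.1   # how strongly a reward shifts a desire
MIRROR_FACTOR = 0.5   # witnesses experience a weakened version of the state

def apply_reward(desires, actor, act_type, reward, witnesses=()):
    """Strengthen (or, with a negative 'reward', weaken) the actor's
    desire to perform act_type; witnesses shift in the same direction,
    but less strongly, via the mirror-neuron effect."""
    current = desires[actor].get(act_type, 0.0)
    desires[actor][act_type] = current + LEARNING_RATE * reward
    for w in witnesses:
        current = desires[w].get(act_type, 0.0)
        desires[w][act_type] = current + LEARNING_RATE * MIRROR_FACTOR * reward
    return desires

# You reward A for an act that helps realize P while B, C, and D watch:
# all four end up with a somewhat stronger desire to perform such acts.
desires = {name: {} for name in "ABCD"}
apply_reward(desires, "A", "helps_realize_P", reward=1.0,
             witnesses=("B", "C", "D"))
print(desires)
```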

Using these tools, you have the ability to cause that intentional agent with the desire that Q to acquire a desire that R which, in turn, will help to realize P. Perhaps you can cause him to have a desire that P.

In my post on bargaining, I mentioned that bargains in which one person completes their terms before the other are doomed to failure. In these cases, you would be wise to seek out those with an aversion to breaking promises. And you have reason to acquire this property yourself so that others (with useful bargains to present) will have reason to seek you out.

Now, you have a way of promoting this aversion to breaking promises. By publicly rewarding and praising those who keep promises - and punishing and condemning those who do not - one can strengthen the desire to keep promises in the community at large. Stories in which the heroes keep promises even at great cost and villains break promises would also be useful.

The same methods can be used to promote an aversion to making threats and an aversion to lying.

Of course, you are not the only one in the community who has reason to trigger the reward-learning system to promote aversions to promise-breaking, threats, and lies. You should be able to convince a great many others that they have many and strong reasons to join you in this project.

Remember, false beliefs can seriously muck up this project.

If people get it into their heads that eating with the left hand will offend the gods, who will punish the people with floods and famine, they might draw the false conclusion that they have reason to condemn - and to promote an aversion to - left-handed eating.

A fish-vendor might have difficulty convincing people to eat more fish, until he circulates a story about some divine power that will bestow blessings on any community whose people eat fish on Friday - causing people to think that they have reason to praise those actions and punish non-compliance.

People who fall victim to foolish notions - that prayer will bring rain or prevent terrorist attacks, or that God directs the course of hurricanes to punish the acceptance of homosexuality - might think that they have reason to direct the reward-learning system in directions that there is no real-world reason to travel.

Such rewards and punishments are unjustified.

Of course, the same is true of non-religious systems that are grounded on false premises - such as act-utilitarianism, Ayn Rand's Objectivism, intrinsic value theories, and any and all forms of social contract theory.

To avoid these unjustified rewards and punishments and the misdirection of our learned sentiments, we have reason to surround ourselves with people who have some aversion to making these mistakes.

People who have this aversion will be motivated to think twice about who they reward and who they punish. They will seek to double-check their work for possible errors. In the realm of punishment, they will want to presume innocence and will need to have guilt proved beyond a reasonable doubt. The greater the punishment, the greater the strength of these presumptions.

Just as we have reason to promote in others an aversion to unjustified rewards and punishments, they have reason to promote this aversion in us.

Now, we have a way to promote this aversion to unjustified rewards and punishments. We do this by praising those who seek solid ground for their rewards and punishments, while we condemn those who assign praise and condemnation recklessly.

Wednesday, May 25, 2011

Apollo +50: The Space Race Begins

50 years ago today, President John F. Kennedy started the space race.

First, I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth.

No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.

We propose to accelerate the development of the appropriate lunar space craft.

We propose to develop alternate liquid and solid fuel boosters, much larger than any now being developed, until certain which is superior.

We propose additional funds for other engine development and for unmanned explorations--explorations which are particularly important for one purpose which this nation will never overlook: the survival of the man who first makes this daring flight.

But in a very real sense, it will not be one man going to the moon--if we make this judgment affirmatively, it will be an entire nation. For all of us must work to put him there.

Let's be honest. This was a proxy-war with the Soviet Union. In an age of intercontinental missiles and nuclear warheads, people were reluctant to enter into a genuine shooting war. The Bay of Pigs Invasion happened only 5 weeks earlier - on April 17th. Five days before that, the Soviet Union had put a man into space. It seemed as if America was weak - scientifically and militarily.

And these two elements - science and military - were not distinct. It is important to note that the space race used the most modern military technology - missiles and satellites for weather, communication, and observation. One of the fears of the 1960s was that the Soviet Union could control "the high ground" of space.

Space activists tend to ignore this historic context in order to portray space exploration as a peaceful project aiming at the exploration and development of space. They do not understand why the same motivation does not exist today, and they wonder what might inspire a new generation to the same ends.

As a proxy war, Kennedy needed to pick an end that would provide a genuine test of the country's abilities and will. Landing a person on the moon by the end of the decade had a lot in common with the earlier task of defeating Germany and Japan - and doing so by 1945.

Another part of the context that it is necessary to understand is that, by May 25th, 1961, the United States had a total of 15 minutes of flight experience on space missions - and about 5 minutes of experience in space itself. There had been one (1) sub-orbital flight. That was it. NASA was charged with going from launching one astronaut to an altitude of 100 miles and letting him fall back to Earth, to putting people on the moon and bringing them back to Earth, and it had about 8.5 years to do it in.

As for the timeline, I find it interesting to note that, while the decade did not actually end until 1970, the goal of landing on the moon by the end of the decade took 1969 as its objective. This meant that the goal was to put a man on the moon and return him in 8.5 years - not in 10 years.

They would accomplish it in a little over 8 years.

So, on May 25th, Kennedy declared a proxy-war against the Soviet Union. Now, the challenge would be to see if we could win the war. It would take an investment in national effort, development, and technology that would rival many violent conflicts. However, all things considered, it would prove to be far less costly - and far less destructive - than the alternative.

That depended, of course, on whether the American people were up to the challenge.

Tuesday, May 24, 2011

Science, Religion, and the Rapture

Let me identify what I hold to be the most significant difference between science and religion.

Science allows people to make useful predictions of the future. Religion does not.

Science says that the Sun will use up all of its hydrogen in five to seven billion years and swell in size - making life on Earth impossible at best, or consuming the Earth, unless steps are taken to change this future. It also tells us what steps can be taken and how to take them to increase our chances of success.

The vast majority of predictions we get from religion fall into two groups.

One group consists of predictions whose failure cannot be confirmed. The utter failure on the part of religion to make useful predictions on matters that can be confirmed suggests that we should expect the same failure rate with respect to these unverifiable predictions.

The other group consists of riddles - vague claims into which a wide variety of results can be fit after the fact. A willingness to stretch the meanings of words entirely out of shape means that there is no future event that cannot be made to fit the "prediction".

Of course, as any fortune teller will tell you, if you make enough predictions a few of them are going to turn out right simply by blind luck.

All of this is simply worthless nonsense. It has no practical value whatsoever.

Science has made our lives better off because, with science, people can make specific predictions and those predictions actually come true.

Think of the computer on which you are reading this posting. Your reading what I wrote is the product of being able to accurately predict a huge set of events whereby what I put in this post now will show up on your screen at the time that you are reading it. The people who designed and built this computer did so by stringing together a massive set of predictions.

Not one of those predictions came out of scripture. You can't read scripture and come away with a set of reliable predictions. You can try to have faith that the unconfirmable predictions are reliable - but they are far more likely to be wrong than right.

Why is that?

Because there are simply far more options that fit in the category of "wrong".

You pick a card out of a deck. I guess that you picked the King of Hearts. I can have faith that you picked the King of Hearts. However, it is not the case that my guess has a 50-50 chance of being right. Chances are, I am far more likely to be wrong than right. Given a sufficiently large deck (one with a near-infinite number of cards), whatever card I guess without evidence to be the correct card, I am almost certainly wrong.

This is the source of my own confidence that the claims of any religion can be rejected. Religious claims draw a random card out of a nearly infinitely large deck. I don’t need to know what the card actually says to know that the religious person sitting next to me who, without the slightest evidence, claims to “know” what the card is, is almost certainly wrong.
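The arithmetic behind the card argument is simple enough to run. Here is a quick sketch in Python; the deck sizes are arbitrary, and the only point is how fast the odds of an evidence-free guess collapse:

```python
# The probability that an evidence-free guess picks the right card
# is 1/N, so it shrinks toward zero as the "deck" of possible
# claims grows. Deck sizes here are arbitrary.

for deck_size in (52, 1_000, 1_000_000):
    p_right = 1 / deck_size
    print(f"deck of {deck_size:>9,}: P(right) = {p_right:.6f}, "
          f"P(wrong) = {1 - p_right:.6f}")
```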

The predictions we get from science are not perfect - and they almost certainly never will be. However, this does not change the fact that the predictions we get from science are the only truly useful and reliable predictions we have available.

I have often condemned people for making overly broad claims about religion. Yet, this is a claim about religion that I would not classify as overly broad. Religion has no ability to make useful predictions above and beyond those that science can provide.

If somebody wants to predict the end of the world using religion - they can pretty much be ignored. They know nothing, and their guess is almost certainly wrong. However, the predictions for the end of the world that science has given us can be accepted with the degree of precision that science allows. We've got serious problems ahead - a couple hundred million years down the road - and extremely serious problems to worry about in five to seven billion years.

It is possible - though unlikely - that we might meet our end earlier by colliding with another object in space. We could be hit by a passing star, a black hole, or a rogue planet thrown out of some other solar system and sent on a collision course with Earth.

There are outcomes that fall far short of the destruction of the Earth that are still tragic - and still worth avoiding. But the ability to predict them and avoid them gains nothing from scripture. It comes from science, or it does not come at all.

Monday, May 23, 2011

Ethics After the Rapture

Since the Rapture did not happen, I think it is now time to seek answers to a few questions.

I will start with these. . .

(1) How much money did Harold Camping's Family Radio (http://en.wikipedia.org/wiki/Family_Radio) and its executive officers take in on this campaign?

(2) Did its decision makers act in any way as if the Rapture they claimed would happen was certain to take place? Did they disavow Camping's claims?

I am interested here in the possibility of fraud - perhaps not in the legal sense (making one subject to criminal penalties) but at least in the moral sense (making one worthy of the condemnation and contempt of good people).

How much profit did those people realize? And what is the specific nature of their culpability?

Here's a question - why did they not sell off their broadcast license to those who denied the Rapture and use the money to set up the means to protect people they loved who might be "left behind"?

Would a lawsuit be able to compel them to release emails and other records detailing their activities in the months leading up to May 21? Would it be possible to see if they were delivering private assurances to friends that contradicted their public statements - planning vacations to places that should not exist? What is the relationship between what they claimed and what they actually believed?

Fraud identifies one level of culpability. Did these agents display a lack of aversion to making claims they believed to be false? As I mentioned in my previous post, we have reason to condemn such people in order to inhibit these traits - because of the harm that they do (a harm that is very much evident in the effects that this failure will have on the lives of those who actually believed it). It certainly does us no good to construct a society in which people like this profit. If they profit, that will encourage others to pursue the same path, leading to more suffering.

Even if these agents are innocent of moral fraud, that does not clear them of the charge of intellectual recklessness.

The drunk driver may not be guilty of intentionally murdering the child he runs over on the way home from the bar, but that hardly proves that he is a moral saint. He is still guilty - still worthy of condemnation - because of the lack of concern with the risks he creates for others through his actions.

Here, I see no possible defense against this charge of moral recklessness - a type of intellectual laziness and disregard for the potential harm caused to others, traits that others can certainly influence through condemnation.

Another facet of their moral culpability is exposed by asking the question, "What are they going to do for the people made worse off by their actions?"

Family Radio owns over $100 million in assets - not counting the private wealth of its executives. Their actions led people to quit their jobs and empty their retirement accounts (and send the money to Family Radio). They had to. The only way to be saved is to truly believe, and no person who held onto their assets could be thought of as a "true believer". The threat was explicit. Believe, or suffer with the Earth through five months of hell until the world is destroyed. Prove your belief by acting as a person who believes the world will end on May 21st would act. Give away your worldly possessions - you will have no need of them. And, of course, Family Radio was right there asking for donations.

Now that they have been proved wrong, and they have a large audience of loyal listeners made worse off by their actions - do they feel even a slight bit of shame and guilt, of moral responsibility, for the consequences of their ill-conceived actions? A true believer shows his belief through his actions - by acting as a person who believes that the world will end would act. A morally responsible person shows his moral character through his actions - by taking action to restore something to those that they have harmed.

So, is Family Radio going to take any percentage of its $100 million in assets and try to restore something at least to those harmed the most by putting trust in them? Or will they simply take what they were given, and ask for more? This tells us something of their moral character - something of what type of people they are.

Of course, the executives at Family Radio could respond, "It's not our responsibility. We did not force our listeners to drain their accounts and give us their money."

In one sense, this is not true. Threatening them with hell on earth is not much different from threatening them with a gun - in this case, a gun that is not loaded. It still counts as force when the gun is put to the head of somebody who does not already know that the gun is not loaded.

Even without this argument, the point stands. Somebody trusted you, and now they are worse off because of it. You did not warn them of the possibility that you were wrong. You told them that you were certain you were right. You told them that they would suffer severe consequences if they did not listen. And, now, they are worse off than they were before. A decent person would feel some sense of responsibility for those results. A person who says, "Hey, tough luck. That's what you get for trusting me," deserves only our contempt.

Thursday, May 19, 2011

Beliefs and Lies

You are an intentional agent in the world. You have desires that motivate you to act so as to create or preserve states of affairs in which the things that you desire are realized.

You are surrounded by other intentional agents with their own desires.

Furthermore, your “desire that P” does not, in itself, motivate anybody but you to realize states in which P is true.

How can you get others to act so as to realize states in which P is true or, at least, refrain from acting so as to realize states in which P is false?

I have already discussed two options.

(1) Bargaining. “If you act so as to realize P, then I will act so as to realize Q.” You give the other person an instrumental reason to realize P – as a means to realizing Q.

(2) Threats. "Unless you act so as to realize P, I will act so as to realize not-Q."

In this post, I want to introduce another fact about intentional agents and discuss another way in which you might get them to realize P.

This fact is that, while desires motivate agents to realize states in which those desires are objectively fulfilled, their actions are mediated by their beliefs. If an agent is thirsty and believes that a pitcher contains clean, cool water, he will drink the water - perhaps discovering after the fact that his belief was mistaken.

So, as an intentional agent with a desire that P, one way you can get another intentional agent with a desire that Q to act to realize P is by altering his beliefs. If, given his beliefs, an act that realizes P will realize Q, he now has a motivating reason to perform the act that will realize P on his way to realizing Q.

This phrase, "reason to act so as to realize P" is intentionally ambiguous. He may be caused to intentionally realize P as a means to realizing Q. Or he may be caused to act in ways that realize P as an unintended side effect or realizing Q. Your desire that P is only concerned with the realization of P, not with how it is done (unless that is a part of P).

So, suppose that another agent with a desire that Q will act so as to realize P if he believes T (say, that the best way to realize Q is via an action that also realizes P), and will not act so as to realize P if he believes not-T. Given these facts, in this simple model, your only motivation is to convince that agent to believe T. That is the only option that will realize P.

Is T true?

You have no motivation to even ask, let alone answer, that question in this simple model. The other agent's motivation to act depends only on believing T, not on whether T is true.
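To make this simple model concrete, here is a minimal sketch in Python. The action names, the numbers, and the way beliefs are encoded are all my own illustrative assumptions; the one point the sketch makes is that nothing in the agent's choice tests whether his beliefs are true.

```python
# The agent picks whichever act his beliefs say best realizes his
# desire that Q. Nothing in this choice tests whether those beliefs
# are TRUE - motivation tracks belief, not truth.

def choose_act(expected_Q):
    """expected_Q maps each act to how well the agent BELIEVES it
    will realize Q; he picks the act with the highest expectation."""
    return max(expected_Q, key=expected_Q.get)

# If he believes T ("Action A is the best way to realize Q"), he
# performs Action A - which, as it happens, also realizes P.
beliefs_T     = {"action_A": 1.0, "action_B": 0.2}
beliefs_not_T = {"action_A": 0.0, "action_B": 0.8}

print(choose_act(beliefs_T))      # action_A -> P gets realized
print(choose_act(beliefs_not_T))  # action_B -> P does not
```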

What is true of you in this case is true of every other agent out there giving you information. If convincing you to believe V will cause you to act so as to realize Q, then that other agent has a motivating reason to cause you to believe V. He has no reason at all to refrain from convincing you to believe V based on the fact that V is false - not unless this is built into Q or that agent has some other motivating reason to refrain from convincing others of falsehoods.

That other agent may have no motivating reason to consider the truth value of V in getting you to believe V, but you do. Your desire that P gives you a motivating reason to realize P. You use your beliefs to choose those actions (and inactions) most likely to realize P. You will act as if your beliefs are true. When they are not true, this will likely have an adverse effect on your ability to predict accurately. You might choose the action that realizes not-P.

You are thirsty. There is a glass of what you believe is clean, cool water free for the taking on the serving tray. You drink from the glass. You are mistaken; it is not clean water. You end up being violently ill. If you had known that in advance, you would never have drunk from the glass. Your lack of true and relevant beliefs caused you to act in a way that you would not have acted if your relevant beliefs were true and complete. And, of the two actions, the one grounded in false beliefs was the mistaken one.

Your choice of actions that (you predict) will realize P will almost always (though, in important cases, not always) depend on having true beliefs. So, while that hypothetical other agent only has motivating reason to convince you of what will cause you to realize Q - whether true or false, you have a motivating reason to be convinced only of that which is true.

And while you only have a motivating reason to convince others of that which will cause them to realize P, they have motivating reasons to be convinced only of that which is true.

Now, while other people are motivated to tell you that which will help to objectively satisfy their desire that Q, what if their desires include a particularly strong aversion to making false claims?

If such a person existed, that person would refrain from providing you with false information - or, at least, with information he thought to be false - even when he would otherwise benefit. Perhaps, when put up against something like his own aversion to death or the well-being of his child, he may be motivated to lie. However, where his aversion to lying is strong, it would take something like this to get him to lie.

Even here, a particularly strong aversion to lying would serve as a particularly strong motivation to find some other way - any way - to accomplish the same end without the lie. Here, too, the motivation is the same as that a person with a strong aversion to pain would have to find some option that promised not to involve pain before reluctantly settling on an option that does.

The instrumental value of identifying those with an aversion to lying would motivate agents to adopt methods that reliably identify agents as having or lacking this aversion to making false claims. This might include working with others to identify those who lack this aversion - perhaps identifying them as "liars".

And, if methods existed (e.g., by using the reward-learning system) to promote or strengthen this aversion to lying, your motivating reasons to acquire accurate information suggests that you employ these methods - and negotiate with others having the same interest - to promote this aversion to lying. A social institution for encouraging this aversion to lying, identifying those who do not have it, and labeling them publicly, could well be mutually beneficial.

Tuesday, May 17, 2011

An Analysis of Threats

You are an intentional agent in the world. You have desires that motivate you to realize states of affairs in which the things that you desire have been realized.

You are surrounded by other intentional agents with their own desires (i.e., a “desire that Q”).

Furthermore, your “desire that P” does not, in itself, motivate anybody but you to realize states in which P is true.

How can you get others to act so as to realize states in which P is true or, at least, refrain from acting so as to realize states in which P is false?

I have already discussed one option.

Bargaining. “If you act so as to realize P, then I will act so as to realize Q.” You give the other person an instrumental reason to realize P – as a means to realizing Q. One problem with bargaining, however, is that as soon as one participant completes their part of the bargain, there is no more motivation for the other participant to complete their part. It loses its instrumental value. Some other motivation is needed – such as reputation or an aversion to breaking promises. Dealing with a person who lacks further motivation means it will be foolish to be the first to meet the terms of one’s agreement.

Today, I want to look at another option.

Threats. You find somebody with a desire that Q and say, "Unless you help to realize P, I will act so as to realize not-Q."

For example, "If you help me to rob this bank, or I will cause your child a great deal of pain.”

At first glance, threats are taken to be the opposite of bargains. However, a quick second glance shows us that they have a lot in common. Every bargain contains an implicit threat - "If you do not act so as to realize P, then I will not act so as to realize Q." Every threat can be expressed as a bargain - the proverbial "deal you can't refuse."

One of the problems with threats is that other agents also have motivation to make threats. You may find yourself faced with other intentional agents giving you the choice, “Either you act so as to realize Q, or I will act so as to realize not-P.” Such as, “Your money or your life,” or “The penalty for offending God or the King is death.”

The fact of the matter is - like it or not - threats have, and will continue to have, instrumental value. They will always be a means for people with a desire that Q, confronted with another agent with a desire that P, to get those others to realize states in which Q is true. This is not going to change.

So, let us assume that somebody is threatening you. If you do not act to realize Q, then he will act to realize not-P – where P is something that you want. You want your child to be free from pain, so the threatening agent says, “If you do not give me the money in your bank account, I will realize a state in which your child is not free from pain.”

One thing for you to look for is whether the agent has any reason to refrain from realizing not-P other than your decision to help realize Q. Until you realize Q, his restraint from realizing not-P has instrumental value - to motivate you to realize Q. Once you realize Q, he loses any motivation to refrain from realizing not-P, and his other desires will dominate his action. If those other desires motivate him to realize not-P, then your realizing Q was for nothing.

Consider, for example, a bargain with your kidnapper. “If you give us $250,000, then you will get your child back.” You give them $250,000. Q, now, has been realized. They now have no more incentive to keep your child alive. It does not do them any good to do so – unless, somewhere, they have some other desire motivating them not to kill your child.
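The kidnapper's situation can be put as a small backward-induction sketch in Python. The payoff numbers are invented for illustration; the point is only that, once the ransom is in hand, restraint pays the kidnapper nothing unless some independent desire is added to his payoffs.

```python
# After the ransom is paid, restraint has lost its instrumental
# value. Only some OTHER desire - modeled here as a single
# 'independent_aversion_to_harm' term - can tip the choice.
# All numbers are invented.

def kidnapper_choice(independent_aversion_to_harm=0.0):
    payoff_release = 0.0 + independent_aversion_to_harm
    payoff_harm    = 1.0   # e.g., eliminating a witness
    return "release" if payoff_release > payoff_harm else "harm"

print(kidnapper_choice())                                 # harm: the ransom bought nothing
print(kidnapper_choice(independent_aversion_to_harm=5))   # release: the extra desire does the work
```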

Just like with bargains, one possible motivation is reputation. If he wants to make useful threats in the future – to kidnap other people and collect ransom from them - it would be useful (have instrumental value to him) to be known as somebody who does not realize not-P when those he threatens realize Q.

However, the instrumental power of reputation requires that the agent wants to make future threats. If this is “the one big haul” through which the agent will be “set for life”, then it would be foolish to expect the threatening agent to keep their side of the bargain based on reputation. The same is true if the threat is made in secret, so that the threatening agent’s reputation cannot be affected.

Another possible source of motivation is an aversion to breaking promises. Here, threats are just like bargains. Whereas the instrumental value of the threat is what motivates the agent to make the threat, the aversion to breaking promises will motivate the agent to live up to their end of the bargain even after the person threatened has done what is demanded. If you can reliably determine if others have an aversion to breaking promises, you can reliably determine if the person threatening you will keep their end of the bargain after you have kept yours.

I have mentioned that the threat – or the restraint from doing that which was threatened - loses its instrumental value the instant you realize Q. It also loses its instrumental value the instant that you make the realization of Q impossible. If the agent threatens to kill your child unless you turn over the key to the vault, turning the key over to the threatening agent, or destroying the key, both eliminate the instrumental value of not killing your child. In many cases, the best option is neither to realize Q, nor to render Q impossible, but to stall and negotiate.

The last claim I want to make about threats in this post is to point out that, all things being equal, you have reason to surround yourself with people who have an aversion to making threats - or, at least, an aversion to threatening you. A person with an aversion to issuing threats will not come to you and say, “Either you act so as to realize Q, or I will act to realize not-P,” even when it would otherwise benefit him to do so. This is true in the same sense that a person with an aversion to pain will avoid states of affairs in which he is in pain, even where he would otherwise benefit.

But, let's be honest, there will always be some people who deal in threat-making. Like I said earlier in this posting, threats will continue to have instrumental value. There will always be some motivation to generate threats. So, instead of the pipe dream of pursuing a universal aversion to making threats, perhaps an aversion to threatening non-threatening individuals will be more useful. There are a lot of details to work out as to exactly what this would look like. However, the general idea seems to make some sense.

And, as with bargains, people who can reliably detect whether others have this aversion to threatening non-threatening individuals will have reason to welcome those who have this property, and exclude those who do not. To obtain the benefits of belonging to such a community, it would be useful to acquire the property of having an aversion to threatening non-threatening individuals yourself.

Thursday, May 12, 2011

Bargaining and Promise Keeping

You are an intentional agent with desires that motivate you to act so as to realize states of affairs in which the propositions that are the objects of those desires are true.

You are surrounded by other intentional agents.

However, desires only motivate the agents that have them. Therefore, the fact that you have these desires does not automatically give anybody a motivating reason to realize states of affairs that would objectively satisfy your desires. Going up to somebody and pleading, "I have a desire that P" can well lead to a shrug of indifference or even motivate the agent to realize not-P, if (for example) he hates you.

So, what options do you have in interacting with these other agents?

One option is the option to bargain or trade.

You find somebody with a desire that Q - somebody who can realize Q more efficiently with your help - and you say, "If you perform these acts for realizing P, then I will perform these other actions for realizing Q."

With this, you give that person a reason to realize P - by turning it into a means for realizing Q where he already had a motivating reason to realize Q.

Of course, this assumes that P and Q are jointly realizable. If P implies not-Q or Q implies not-P then there is a problem. But there are many cases in which “P and Q” is possible.

Unfortunately, under the assumptions we are making here, a substantial percentage of potential bargains are doomed to failure before they are even made. These are bargains where one agent fulfills his terms before the other one does. As soon as you complete your side of the bargain, and Q has been realized (or, at least, you have completed your steps for realizing Q), his acting to realize P ceases to be a means for realizing Q. Consequently, his motivation for acting so as to realize P disappears.

You would be quite foolish to trust that he will realize P after you have completed your side of the bargain, unless he has some other motivation supporting the realization of P even after it is no longer an effective means for realizing Q.

One potential motivator could concern reputation. If he wants to enter into future bargains with others, he has reason to avoid being known as somebody who does not complete his side of a bargain. That would reduce his ability to enter into future bargains and reduce his ability to realize states that could best be realized through bargains.

However, this only applies if (1) the other agent has a reason to enter into future bargains, (2) you have the ability to threaten his reputation, and (3) you have the will to use that ability (which might be hindered by threats of violence or just a general aversion to causing trouble). Remove any of these elements, and your bargaining partner no longer faces the motivation of reputation.

Another potential motivator is an aversion to breaking promises. A person with a strong aversion to breaking promises will be strongly disposed to choose actions that will keep the proposition, "I have broken a promise" false. This is true in the same way that a person with a strong aversion to pain will be strongly disposed to keep the proposition, "I am in pain" false.

The person concerned solely with reputation will break a promise when he can get away with it. The person with an aversion to breaking promises will not break a promise even when he is the only person who will ever know about it. This is true in the same way that a person with a strong aversion to pain will avoid situations in which he is in great pain even when he will be the only person to know about the pain.

For that person, the instrumental value of the bargain - that realizing P becomes a means for realizing Q - is his motivation for making the promise. The aversion to breaking promises becomes his motivation for completing his side of the agreement even if you finish your part first and realizing P ceases to be useful for realizing Q.
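Here is the second mover's choice put as a small sketch in Python. All the utility numbers are invented for illustration; the point is only that the value of Q is sunk for him once you have realized it, so only reputation (when it is at stake) or an aversion to breaking promises can move him to complete his side.

```python
# Once you (the first mover) have realized Q, its value is sunk for
# the second mover and drops out of his choice. Whether he completes
# his side now turns entirely on the remaining terms. Numbers invented.

def second_mover_keeps_bargain(cost_of_realizing_P,
                               reputation_cost=0.0,
                               promise_breaking_aversion=0.0):
    utility_keep  = -cost_of_realizing_P
    utility_break = -reputation_cost - promise_breaking_aversion
    return utility_keep >= utility_break

print(second_mover_keeps_bargain(3))                               # False: the bargain fails
print(second_mover_keeps_bargain(3, reputation_cost=5))            # True: reputation does the work
print(second_mover_keeps_bargain(3, promise_breaking_aversion=5))  # True: works even in secret
```

Note that the aversion term, unlike the reputation term, does not depend on anybody finding out - which is the difference between the two motivators discussed above.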

To the degree that you can reliably detect this aversion to breaking promises in others, where you will complete your part of the bargain before the other party completes theirs, it makes the most sense to bargain with somebody who has this aversion to breaking promises. Furthermore, it would be in your interest to become a reliable detector of the aversion to breaking promises in others.

Now, you should also realize that, what is true of you in this case is true of others as well. Those other intentional agents that exist in the world around you also have reasons to enter into bargains. They have reason to prefer to bargain with people who have an aversion to breaking promises. And they have reason to work on improving their capacity to reliably detect who has this aversion – just as you do.

This means that, if you can acquire this aversion to breaking promises, then you will probably have a comparative advantage over others as somebody those people would seek to bargain with.

At this point, I will not get into the question of whether it is possible to cultivate a desire or aversion. I will leave this discussion at the point that says that if it were possible to cultivate such an aversion, and others are reliable detectors of those who actually have such an aversion, then you almost certainly have a motivating reason to do so.

Of course, it is also true that if you can discover a way to exploit their detection methods and fool them into thinking you have this quality, you have reason to do that as well. Though it will also pay you to teach others how to reliably detect when others are using this technique, so that they can help in identifying and flagging those others.

This leads to a further implication of this system. Not only do you have reason to bargain with those who have an aversion to breaking promises, and to reliably detect those who have this aversion, you have reason to join with others in a campaign to identify and remove or, at least, flag those who lack this aversion to breaking promises. Of course, in doing so, you risk that they will flag you as such a person if you have not cultivated this aversion to breaking promises yourself.

Such is the nature of bargaining and promise-keeping under the assumptions we are working with here. People have no automatic reason to consider the desires of others, but they have reasons to enter into bargains. Bargains will fail where one person completes his terms of the bargain before the other one does - unless the other person faces some other motivation. Reputation is a good but flawed motivator in that people motivated by reputation will break promises they can get away with breaking. On the other hand, people with a strong aversion to breaking promises will keep promises they could get away with breaking. We have reason to prefer bargaining with those agents, to improve our ability to detect those agents, to cooperate with others in detecting those agents, and to be one of those agents.

Wednesday, May 11, 2011

Specific Claims About Desires

So, here you are, an agent in the world, surrounded by other agents, in which the following are true:

(1) Desires are the only reasons for action that exist.

(3) A desire is only a motivating reason to act for the person who has it.

Which means that nobody around automatically has any motivating reason to consider your desires when performing their actions, whatsoever.

I want to go into the implications of this but, before I do, I need to refine this first statement a bit. I need to specify some more facts about desires.

(1a) Desires are propositional attitudes. That is to say that desires are mental states (attitudes) that take as their object a proposition (a sentence, capable of being true or false, such as "I am helping a sick child").

(1b) Desires are motivating reasons in the sense that they motivate agents to intentionally choose actions that - if their relevant beliefs are true and complete - will realize (make real or make true) the propositions that are the objects of those desires (e.g., to make true the proposition, "I am helping a sick child").

Note that we are talking here about an agent's intentional actions - the actions that an agent chooses to perform or chooses to refrain from performing.

That desires motivate an agent to make a proposition true (realize a state of affairs in which the proposition is true) explains why agents are not motivated by experience machine options. To a person with a desire to help sick children, the option of entering an experience machine that will stimulate her brain so that she merely thinks she is helping sick children is almost entirely uninviting. This is because the experience machine cannot make the proposition, "I am helping a sick child", true, so it does not objectively satisfy the desire.

Luke Muehlhauser and I discuss this subject in Episode 15 of Morality in the Real World.

So, for the first statement, it would be more precise to say:

(1) Desires are the only motivating end-reasons for intentional actions that exist.

Desires identify the ends or goals of intentional action - what the agent is aiming for - and motivate agents to realize (to make real) those ends.
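As a way of fixing ideas, here are statements (1a) and (1b) rendered as a small data structure in Python. The class name, the propositions, and the way beliefs are encoded are all my own illustrative inventions, not part of the theory itself.

```python
# A desire is an attitude toward a proposition (1a), and it motivates
# the acts the agent BELIEVES will make that proposition true (1b).
# Names and values here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Desire:
    proposition: str    # e.g., "I am helping a sick child"
    strength: float

def choose_action(desires, beliefs, actions):
    """Pick the action that, by the agent's (possibly false) beliefs,
    makes true the propositions he desires."""
    def motivating_value(action):
        return sum(d.strength for d in desires
                   if beliefs.get((action, d.proposition), False))
    return max(actions, key=motivating_value)

desires = [Desire("I am helping a sick child", 1.0)]
# On the agent's beliefs, only volunteering makes the proposition
# true; the experience machine merely makes her THINK it is true -
# matching the experience-machine point above.
beliefs = {("volunteer", "I am helping a sick child"): True,
           ("experience_machine", "I am helping a sick child"): False}
print(choose_action(desires, beliefs, ["volunteer", "experience_machine"]))
# -> volunteer
```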

So, let me restate your situation.

Here you are, an agent in the world, surrounded by other agents, in which the following are true:

(1) Desires are the only motivating end-reasons for intentional actions that exist.

(3) A desire is only a motivating end-reason for intentional action for the person who has it.

Which means that you are surrounded by agents choosing intentional actions who do not automatically have any motivating end-reasons for choosing intentional actions that realize the propositions that are your ends. In fact, they may have motivating reasons to choose intentional actions that will prevent the realization of your ends.

Their ends might include propositions such as, "You are happy and healthy," but could also include propositions such as, "You are enduring a great deal of suffering" - or they may seek states of affairs in which your suffering is a byproduct, and they might not care.

What can you do about it?