
Thursday, March 24, 2022

The good of competent achievement

One of the ways we flourish is by achievement: by successfully fulfilling a plan of action and getting the intended end. But it seems that there is a further thing here of some philosophical interest: we can distinguish achievement from competent achievement.

For me, the phenomenon shows up most clearly when I engage in (indoor) rock climbing. In the case of a difficult route, I first have to try multiple times before I can “send” the route, i.e., climb it correctly with no falls. That is an achievement. But often that first send is pretty sketchy in that it includes moves where it was a matter of chance whether I would get the move or fall. I happened to get it, but next time I do it, I might not. There is something unsatisfying about the randomness here, even though technically speaking I have achieved the goal.

There is then a further step in mastery where, with further practice, I not only happen to get the moves right, but do so competently and reliably. And while there is an intense jolt of pleasure at the initial sketchy achievement, there is a kind of less intense but steadier pleasure at competent achievement. Similar things show up in other physical pursuits: there is the first time one can do n pull-ups, and that’s delightful, but there is the time when one can do n pull-ups whenever one wants to, and that has a different kind of pleasure. Video games can afford a similar kind of pleasure.

That said, eventually the joy of competent achievement fades, too, when one’s skill level rises far enough above it. I can with competence and reliability run a 15 minute mile, but there is no joy in that, because it is too easy. It seems that what we enjoy here has a tension to it: competent achievement of something that is still fairly hard for us. There is also a kind of enjoyment of competent achievement of something that is hard for others but easy for us, but that doesn’t feel quite so virtuous.

There is a pleasure for others in watching an athlete doing something effortlessly (which is quite different from “they make it look effortless”, when in fact we may know that there is quite a bit of effort in it), but I think the hedonic sweet spot for the athlete does not lie in the effortless performance, but in a competent but still challenging performance.

And here is a puzzle. God’s omnipotence not only makes God capable of everything, but makes God capable of doing everything easily. Insofar as we are in the image and likeness of God, it would seem that the completely effortless should be the greater good for us than the challenging. Maybe, though, the fact that our achievements are infinitely below God’s activity imposes on our lives a temporal structure of striving for greater achievements that makes the completely effortless a sign that we haven’t pushed ourselves enough.

All this stuff, of course, mirrors familiar debates between Kantians and virtue ethicists about moral worth.

Wednesday, September 15, 2021

An ontological argument from justice

Buras and Cantrell have given a very clever ontological argument for the existence of God based on a desire for happiness. Here is a variant of their argument based on justice.

  1. Ought implies (metaphysical) possibility.

  2. There ought to be justice for humans.

  3. Necessarily, if there is justice for humans, it is possible that there is a human who has happiness.

  4. Necessarily, if there is a human who has happiness, God exists.

  5. So, possibly God exists. (1-4 and S5)

  6. So, God exists. (5, S5, and the fact that God is essentially divine and necessarily existent.)

I want to expand a little on 3 and 4.

In any world where there is justice for humans, there is (a) a practical possibility of a human being innocent, and (b) a system that reliably rewards innocent humans with happiness. Items (a) and (b) taken together plausibly imply a practical, and hence metaphysical, possibility of happiness. That gives us (3).

Buras and Cantrell defend claim (4). My favorite defense of claim (4) is that human happiness, when we think through our deep desire for eternal life as well as the danger of boredom in eternal life, requires some sort of friendship with God.

Thursday, July 9, 2020

Ataraxia

The Stoics, the Academic sceptics and the Epicureans all to various degrees basically agreed—or at least largely lived as if they agreed—that happiness was ataraxia, imperturbable calm and tranquility. This is a useful and important corrective to our busy work and busy “leisure”. But at the same time, it’s really a quite empty and negative picture of life’s fulfillment. It’s more like a picture of how to get done with life without too much misery.

Perhaps they had a part of the truth: perhaps what is truly worth having is imperturbably, calmly and tranquilly doing certain things, such as enjoying the companionship of those we love—God above all. But the ataraxia is just a mode of the worthwhile activity rather than the center of it.

Furthermore, perhaps these ancients were extensionally right: for perhaps the only way to have ataraxia is by being with God, since our hearts are restless apart from him. In that case, ataraxia isn’t happiness, or worth pursuing for its own sake, but is a sign of happiness.

Monday, April 16, 2018

The Repugnant Conclusion and Strong AI

Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives of some minimal level of flourishing will be better than any fixed-size society of individuals that greatly flourish.
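The arithmetic behind this can be sketched in a few lines (all utility figures below are illustrative assumptions, not values from Parfit):

```python
# Illustrative sketch of the Repugnant Conclusion's arithmetic.
# All numbers are assumed purely for illustration.

flourishing_society = 10_000 * 100.0   # 10,000 people, each at a high utility of 100
minimal_utility = 0.01                 # a life "barely worth living"

# For sufficiently large n, the total utility of minimal lives dominates.
n = 200_000_000
minimal_society = n * minimal_utility

print(minimal_society > flourishing_society)  # True: 2,000,000 > 1,000,000
```

Any fixed flourishing total is eventually overtaken, since n can be made as large as we like.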

I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.

Assume utilitarianism first.

Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible, and I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software in such a way as both to make lots of instances of the hardware and to run as many flourishing lives per day as possible. Once the program is written, since an exact copy is being run for each instance with the same inputs, we can assure equal happiness for all.

If strong AI is possible, generating such minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware to allow us to build more computers running the happy-ish software and to run each life in as short an amount of external time as possible, and to work to increase the amount of flourishing in the software.

Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just the minimal levels of human-like flourishing, but high levels of human-like flourishing, even of forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply enjoys the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely that it is true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then just run vast numbers of copies of that life per day.

It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will not only be able to more efficiently increase the sum good by running lots of copies of a happy life than by improving human life, but we will be able to more efficiently improve on the average good.

But one thing is unchanged. The conclusion is still repugnant. A picture of our highest moral imperative being the servicing of a single computer program run on as many machines as possible, repeatedly and as quickly as possible, is repugnant.

A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.

Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.

We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.

I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:

  1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).

  2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.

  3. Strong AI is impossible.

I am inclined to think all three are true. :-)

Thursday, July 13, 2017

Love and happiness

Could perfect happiness consist of perfect love?

Here’s a line of argument that it couldn’t. Constitutively central to love are the desire for the beloved’s good and the desire for union with the beloved. A love is no less perfect when its constitutive desires are unfulfilled. But perfect happiness surely cannot be even partly constituted by unfulfilled desires. If perfect happiness consisted of perfect love, then one could have a perfect happiness constituted at least partly by unfulfilled desires.

When this argument first occurred to me a couple of hours ago, I thought it settled the question. But it doesn’t quite. For there is a special case where a perfect love’s constitutive desires are always fulfilled, namely when the object of the love is necessarily in a perfectly good state, so that the desire for the beloved’s good is necessarily fulfilled, and when the union proper to the love is of such a sort that it exists whenever the love does. Both of these conditions might be thought to be satisfied when the object of love is God. Certainly, a desire for God’s good is always fulfilled. Moreover, although perfect love is compatible with imperfect union in the case of finite objects of love, perfect love of God may itself be a perfect union with God. If so, then our happiness could consist in perfect love for God.

I am not sure the response to the argument works but I am also not sure it doesn’t work. But at least, I think, my initial argument does establish this thesis:

  • If perfect happiness consists of perfect love, it consists of perfect love for God.

Of course none of the above poses any difficulty for someone who thinks that perfect happiness consists of fulfilled perfect love.

Thursday, October 27, 2016

Three strengths of desire

Plausibly, having satisfied desires contributes to my well-being and having unsatisfied desires contributes to my ill-being, at least in the case of rational desires. But there are infinitely many things that I’d like to know and only finitely many that I do know, and my desire here is rational. So my desire and knowledge state contributes infinite misery to me. But it does not. So something’s gone wrong.

That’s too quick. Maybe the things that I know are things that I more strongly desire to know than the things that I don’t know, to such a degree that the contribution to my well-being from the finite number of things I know outweighs the contribution to my ill-being from the infinite number of things I don’t know. In my case, I think this objection holds, since I take myself to know the central truths of the Christian faith, and I take that to make me know things that I most want to know: who I am, what I should do, what the point of my life is, etc. And this may well outweigh the infinitely many things that I don’t know.

Yes, but I can tweak the argument. Consider some area of my knowledge. Perhaps my knowledge of noncommutative geometry. There is way more that I don’t know than that I know, and I can’t say that the things that I do know are ones that I desire so much more strongly to know than the ones I don’t know so as to balance them out. But I don’t think I am made more miserable by my desire and knowledge state with respect to noncommutative geometry. If I neither knew anything nor cared to know anything about noncommutative geometry, I wouldn’t be any better off.

Thinking about this suggests there are three different strengths in a desire:

  1. Sp: preferential strength, determined by which things one is inclined to choose over which.

  2. Sh: happiness strength, determined by how happy having the desire fulfilled makes one.

  3. Sm: misery strength, determined by how miserable having the desire unfulfilled makes one.

It is natural to hypothesize that (a) the contribution to well-being is Sh when the desire is fulfilled and −Sm when it is unfulfilled, and (b) in a rational agent, Sp = Sh + Sm. As a result of (b), one can have the same preferential strength, but differently divided between the happiness and misery strengths. For instance, there may be a degree of pain such that the preferential strength of my desire not to have that pain equals the preferential strength of my desire to know whether the Goldbach Conjecture is true. I would be indifferent whether to avoid the pain or learn whether the Goldbach Conjecture is true. But they are differently divided: in the pain case Sm >> Sh and in the Goldbach case Sm << Sh.
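Hypotheses (a) and (b) can be sketched in code. The numeric strength values below are hypothetical illustrations of the pain and Goldbach cases, not anything from the text:

```python
# Sketch of hypotheses (a) and (b) about the three strengths of desire.
# The strength values are hypothetical illustrations.

def wellbeing_contribution(s_h, s_m, fulfilled):
    """(a): the contribution is S_h if the desire is fulfilled, -S_m if not."""
    return s_h if fulfilled else -s_m

def preferential_strength(s_h, s_m):
    """(b): in a rational agent, S_p = S_h + S_m."""
    return s_h + s_m

# Pain case: S_m >> S_h.  Goldbach case: S_m << S_h.
pain = (1.0, 9.0)       # (S_h, S_m)
goldbach = (9.0, 1.0)

# Equal preferential strength, so one is indifferent between them...
assert preferential_strength(*pain) == preferential_strength(*goldbach)

# ...but unfulfillment hurts far more in the pain case.
print(wellbeing_contribution(*pain, fulfilled=False))      # -9.0
print(wellbeing_contribution(*goldbach, fulfilled=False))  # -1.0
```

The same Sp thus decomposes in very different ways, which is the point of the pain/Goldbach comparison.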

There might be some desires where Sm = 0. In those cases we think “It would be nice…” For instance, I might have a desire that some celebrity be my friend. Here, Sm = 0: I am in no way made miserable by having that desire be unfulfilled, although the desire might have significant preferential strength—there might be significant goods I would be willing to trade for that friendship. On the other hand, when I desire that a colleague be my friend, quite likely Sm >> 0: I would pine if the friendship weren’t there.

(We might think a hedonist has a story about all this: Sh measures how pleasant it is to have the desire fulfilled and Sm measures how painful the unfulfilled desire is. But that story is mistaken. For instance, consider my desire that people not say bad things behind my back in such a way that I never find out. Here, Sm >> 0, but there is no pain in having the desire unfulfilled, since when it’s unfulfilled I don’t know about it.)

Wednesday, November 9, 2011

48 arguments against naturalism

Consider this argument:

  1. A desire to be morally perfect is morally required for humans.
  2. If naturalism is correct, a desire to be morally perfect cannot be fulfilled for humans.
  3. If a desire cannot be fulfilled for humans, it is not morally required for humans.
  4. Therefore, naturalism is not correct.
This argument provides a schema for a family of arguments. One obtains different members of the family by replacing or disambiguating the underlined terms in different ways.

If one disambiguates "naturalism" as physicalism (reductive or not), one gets an argument against physicalism (reductive or not). If one disambiguates "naturalism" in the Plantinga way as the claim that there is no God or anybody like God, one gets an argument for theism or something like it. Below I will assume the first disambiguation, though I think some versions of the schema will have significant plausibility on the Plantingan disambiguation.

One can replace "morally required" by such terms as "normal", "non-abnormal" or "required for moral perfection".

One can replace "to be morally perfect" by "for a perfect friendship", "to be perfectly happy" or "to know with certainty the basic truths about the nature of reality" or "to know with certainty the basic truths about ethics" or "to have virtue that cannot be lost". While (1) as it stands is quite plausible, with some of these replacements the requiredness versions of (1) become less plausible, but the "non-abnormal" version is still plausible.

Probably the hardest decision is how to understand the "cannot". The weaker the sense of "cannot", the easier it is for (2) to hold but the harder it is for (3) to hold. Thus, if we take "cannot" to indicate logical impossibility, (2) becomes fairly implausible, but (3) is very plausible as above.

I would recommend two options. The first is that the "cannot" indicate causal impossibility. In this case, (3) is very plausible. And (2) has some plausibility for "moral perfection" and all its replacements. For instance, it is plausible that if naturalism is true, certain knowledge of the basic truths about the nature of reality or about ethics is just not causally available. If, further, moral perfection requires certainty about the basic truths of ethics (we might read these as at the normative level for this argument), then moral perfection is something we cannot have. And if we cannot have moral perfection, plausibly we cannot have perfect friendship either. Likewise, if naturalism is true, virtue can always be lost due to some quantum blip in the brain, and if moral perfection requires virtue that cannot be lost, then moral perfection is also unattainable. And perfect happiness requires certain knowledge of its not being such as can be lost. Maybe, though, one could try to argue that moral perfection is compatible with the possibility of losing virtue as long as the loss itself is not originated from within one's character. But in fact if naturalism is true, it is always causally possible to have the loss of virtue originate from within one's character, say because misleading evidence could come up that convinces one that torture is beneficial to people, which then leads to one conscientiously striving to become cruel.

The second option is that the "cannot" is a loosey-goosey "not really possible", weaker than causal impossibility by not counting as possible things that are so extraordinarily unlikely that we wouldn't expect them to happen over the history of humankind. Thus, in this sense, I "cannot" sprout wings, though it seems to be causally possible for my wavefunction to collapse into a state that contains wings. Premise (2) is now even more plausible, including for all the substituents, while premise (3) still has some plausibility, especially where we stick to the "morally required" or "required for moral perfection", and make the desire be a desire for moral perfection.

If I am counting correctly, if we keep "naturalism" of the non-Plantingan sort, but allow all the other variations in the argument, we get 48 arguments against naturalism, though not all independent. Or we can disjoin the conjunctions of the premises, and get an argument with one premise that is a disjunction of 48 conjunctions of three premises. :-)

Wednesday, May 19, 2010

The insatiety of the will

I committed myself to read all the texts for our medieval comprehensive exam that I haven't already read so I can be a minimally competent grader. The reading will probably be giving rise to various posts (one already has).

Aquinas argues that our beatitude can only consist in God. The argument is interesting:

the object of the will, which is man's appetite, is the universal good, just as the object of the intellect is the universal truth. From this it is clear that nothing can put man's will to rest except the universal good. But the universal good is found only in God and not in any created good, since every creature has participated goodness. Hence, only God can satisfy man's will....

An initial worry is that the argument rests on an equivocation in "universal". The object of the will is the universal good in the sense that every good that a human can have is an object of the will. And God is the universal good in the sense that all goods are goods by participation in God.

Here is a more charitable take on the argument—whether it's what the angelic doctor means, I don't know. Every created good that a human can have is a good we desire. We cannot have them all, however. For instance, no matter how many friends we have, we could wish for people with a new configuration of characteristics to be friends with. Even if over the course of eternity we were to be friends with all possible kinds of friends, we couldn't simultaneously be friends with an infinite number of people. At any given time, we can only enjoy a finite number of goods. Now, in this life, we sometimes feel ourselves satisfied by a single created good—say, when we are engrossed in a wonderful conversation. But this is due to our lack of sensitivity, due to the fact that the presence of the good blocks out the fact that we lack other goods. True beatitude is not built on lack of sensitivity. Moreover, a component of a purely created happiness will be a commitment, for a length of time, to a form of activity, and such commitments can be incompatible, with there being a kind of sorrow that one is not simultaneously engaging in others.

If we are sensitive, we appreciate every created good we are capable of having, but we cannot have them all. So, if we are limited to created goods, our happiness will always either be blind or have a sorrow. Limited to created goods, our will is insatiable. But if it is possible to possess that by virtue of participation in which all the created goods are good, then by possessing that one being, one would satisfy the universality in our will. If we have that by participation in which friendship with Albert Einstein is valuable, then we do not need friendship with Einstein to satisfy the will. Thus, one kind of universality in our will—the universality of every—is satisfied by the other kind of universality.

Tuesday, August 18, 2009

The liar paradox and desire

The standard desire version of the liar paradox is to consider a person whose only desire is to have no satisfied desires. But that's a weird enough desire that one might wonder if it's possible to have it. Here is a version of the liar paradox using desires that are more imaginable.

Malefa has only one desire: That none of Bonnie's desires be satisfied. Bonnie has only one desire: That all of Malefa's desires be satisfied. Whose desire is satisfied? If Malefa's is, then Bonnie's isn't, and Malefa's isn't. If Malefa's isn't, then Bonnie's is, and Malefa's is, too.
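Under classical logic and the disquotational schema, a brute-force check over the four possible satisfaction assignments confirms that no consistent one exists (a minimal sketch; the encoding of the two desires is mine):

```python
# Brute-force check that no classical assignment of satisfied/unsatisfied
# to the two desires is consistent with the disquotational schema.
# Malefa's sole desire: that none of Bonnie's desires be satisfied.
# Bonnie's sole desire: that all of Malefa's desires be satisfied.

from itertools import product

consistent = []
for malefa_sat, bonnie_sat in product([True, False], repeat=2):
    # Schema applied to Malefa: her desire is satisfied iff Bonnie's is not.
    ok_malefa = (malefa_sat == (not bonnie_sat))
    # Schema applied to Bonnie: her desire is satisfied iff Malefa's is.
    ok_bonnie = (bonnie_sat == malefa_sat)
    if ok_malefa and ok_bonnie:
        consistent.append((malefa_sat, bonnie_sat))

print(consistent)  # [] -- no consistent assignment exists
```

The empty result is just the paradox restated: the two biconditionals jointly entail a contradiction.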

What assumptions does this paradox depend on?

  1. It is possible that Malefa and Bonnie both have the above desires.
  2. The following disquotational schema for desire satisfaction is correct: A desire that p is satisfied iff I(p) (where I(p) is p rewritten with the subjunctive mood replaced by the indicative; thus, I("he eat ice cream") is "he eats ice cream"; to be more precise in the schema, I need to put in quotation marks of the right sort, but I'm not going to bother);
  3. Classical logic.

In regard to (1), one might worry that it's not possible to have only one desire. But that's easily handled by modifying the cases. Maybe Malefa's strongest (or most intense or latest acquired) desire is that Bonnie's strongest (or most intense or latest acquired) desire not be satisfied, and Bonnie's strongest (or ...) desire is that Malefa's strongest (or ...) desire be satisfied.

Moreover, there is nothing absurd about having desires that someone else's desires be or not be satisfied.

One could do what I did in my Monday post and argue that whether Malefa actually manages to have the indicated desires depends on what Bonnie desires (or vice versa or both). Somehow, I find this less plausible in the case of desire—I guess I feel a pull of a certain internalism about desire.

A different move would be based on the Gorgias. In the Gorgias, Socrates argues at length that the tyrant, though he is able to put enemies to death and all that, gets less and less of what he wants the more powerful he is. The reason for that is that he does not really desire to put enemies to death and all that—what he really wants is happiness. There are a couple of ways of taking Socrates' point. One way is to say that there are no instrumental desires. If the tyrant had a desire to have enemies put to death, that would be merely instrumental. Another way (I think Heda Segvic took this view) is to make desire have a normative dimension, such that to desire is to desire appropriately, so that the tyrant does not desire the deaths of his enemies.

Both of these two readings undercut the view that everything that can be put in a (subjunctified) "that clause" can be an object of desire. Moreover, they in particular make questionable the possibility of one person desiring that another's desires not be satisfied: that desire seems too much akin to the tyrant's desire that so-and-so die.

The paradox gives support for the thesis of the Gorgias. But there is something uncomfortable in using a paradox to give support to a substantive philosophical position.

Moreover, one might think that the solution in the case of the desire-satisfaction form of the paradox should be the same as in the case of the truth form. I am not completely sure. (Here is a consideration to back up my uncertainty: the complements of desires are subjunctified that-clauses, while the complements of beliefs and assertions are indicative that-clauses. This observation weakens—ever so slightly—the standard view that the object of a desire is a proposition. But the object of belief and assertion is a proposition. (I say this without committing to a realism about propositions.))

Thursday, May 14, 2009

Is time a continuum?

The following argument is valid:

  1. (Premise) If one compressed all the events of an infinitely long happy life into a minute, by living a year of events in the first half minute, then another year of events in the next quarter minute, and so on, then one would be exactly as well off living the finite life as the infinite one.
  2. (Premise) If supertasks are possible, then the antecedent of (1) is possible for any infinitely long happy life.
  3. (Premise) If time is an actual continuum, supertasks are possible.
  4. (Premise) There is a possible infinitely long happy life that would make for full human well-being.
  5. (Premise) A finitely long life could not make for full human well-being.
  6. (Premise) If a life makes for full human well-being, then so does any life that makes one exactly as well off.
  7. Therefore, if supertasks are possible, there is a finitely long life that would make for full human well-being (1, 2, 4, 6).
  8. Therefore, supertasks are impossible. (5, 7)
  9. Therefore, time is not an actual continuum. (3 and 8)
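The compression in premise (1) trades on the fact that the external durations form a convergent geometric series while the lived content does not:

```latex
\underbrace{\tfrac12 + \tfrac14 + \tfrac18 + \cdots}_{\text{external time (minutes)}} = 1
\qquad\text{while}\qquad
\underbrace{1 + 1 + 1 + \cdots}_{\text{years of life lived}} = \infty .
```

So the supertask packs an infinite lived life into one external minute, which is what lets (5) and (7) collide.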

Thursday, March 5, 2009

Well-being

Consider these statements:

  1. Polluted air is bad for a tree.
  2. Polluted air is bad for a ladybug.
  3. Polluted air is bad for a mouse.
  4. Polluted air is bad for a dog.
  5. Polluted air is bad for a human.
As far as I know, these are all true. Moreover, it does not appear to me that "bad for" is used equivocally across these cases.

Furthermore, the reasons for the truth of the items higher on the list remain in the case of the items lower on the list, but new reasons are added. Thus, pollution harms the growth and survival of a mouse just as it does a tree. But the mouse can get sick and feel pain, while the tree cannot. Thus, there is an additional reason for why pollution is bad for a mouse that does not apply in the case of the tree. And a human being can have various higher level goals be frustrated by pollution. However, the reasons for why polluted air is bad for a tree, a ladybug, a mouse or a dog are all reasons for why it is bad for a human as well.

This has obvious implications for a theory of human well-being. Since the reasons for why (1) is true have nothing to do with actual or counterfactual desires of trees, likewise, at least one of the reasons for why (5) is true obtains at least in part independently of any actual or counterfactual desires of humans. And this shows that desire-fulfillment theories of human well-being are false: some things, such as health, are valuable for humans regardless of how humans feel about them, for the very same reasons for which they are valuable for trees.

The opponent will, I expect, either deny (1) and (2) (and maybe even (3)), saying that nothing is good or bad for trees or ladybugs, or else claim that I am equivocating on "bad for". Neither, though, seems that plausible.

Tuesday, October 28, 2008

Happiness, friendship and eternity

When one is not tired of a friend, the expected approaching loss of union with the friend makes one miserable. To be tired of a friend would not be compatible with full human happiness, and neither would it be compatible with full human happiness to have no friends. Full human happiness is grounded in truth—it is not full happiness when one's delight depends on ignorance. Therefore, an approaching loss of union with a friend one is not tired of is not compatible with full human happiness, whether the loss is expected or not. But neither is it compatible with full human happiness to be tired of friends or to lack them. Thus, in full human happiness, one never approaches the loss of union with a friend. But if one were to cease existing, one would thereby lose all union with one's friends.

It follows that full human happiness requires unending life with at least one friend. Moreover, it requires a well-grounded security in this unending life (this point I learned from Todd Buras).

We can conclude from this that naturalism is false if we add the premises:

  1. People have a natural desire for full human happiness.
  2. What people have a natural desire for is possible.
  3. If naturalism is true, then it is not possible to have well-grounded security in unending life.
One might think this argument can be simplified by arguing that if naturalism is true, unending life is impossible. But if the universe goes on expanding forever and quantum indeterminism holds, unending life is not impossible, just highly improbable (it becomes less and less probable as the universe grows colder and colder). And such an unending life is insecure precisely because of the improbability of its continuation.

Monday, August 25, 2008

Eternal happiness and finitude

Let us think about what full happiness would be like. It isn't just partial happiness: it is a happy state involving nothing unfortunate for one, nothing unhappy. Full happiness need not be maximal happiness. It is prima facie coherent that one might be fully happy at t1 and yet happier still at t2, so long as the lesser happiness at t1 does not involve any unhappiness at the fact that it is not yet t2.

Full happiness has both mental and extra-mental components. Being fully happy requires a certain level of awareness of the events that make one happy: no one in a coma is fully happy. But purely subjective states are insufficient. Being loved by others is surely a part of full happiness, yet thinking and feeling that one is loved by others is not enough. The falsehood in thinking and feeling that one is loved when in fact one is despised is clearly something unfortunate for one. Both the subjective and the objective components are essential to full happiness.

Next, for the sake of the argument, let me assume that there is a finite number N such that at most N subjectively different conscious states are possible for one of us. At this point, I want this assumption to be ambiguous between different senses of "possible" (practical, nomic, physical, causal, metaphysical, logical, etc.). For instance, it seems plausible that there is such a number if mental states supervene on brain states and there is a limit on the possible size of our brains (maybe brains just couldn't function, or at least couldn't function as our brains do, if they were more than a light year across). For although an analog system, as the brain may well be, can have an infinite number of states, states that are too close together would not be subjectively distinguishable.

Now, it seems to me that it is part of the concept of being fully happy that the state of being fully happy forever is desirable. Let us grant that assumption.

I will individuate mental state types in terms of subjective difference (feeling hot and smelling wintergreen are subjectively different, but smelling synthetic wintergreen and smelling natural wintergreen need not be subjectively different).[note 1]

The following seems plausible: Every qualitatively normal human state—i.e., every state of the same qualitative type as our normal, everyday human states—is such that to be in that state forever would be somewhat unfortunate. When we find ourselves feeling really happy, we wish that the moment could go on forever. But in fact, in the case of normal human states, this would be unfortunate. The wish of the lovers to sit on the bench watching the autumn foliage forever might be romantic, but if a fairy froze the lovers in that subjective state for eternity, we would see the spectacle as deeply sad. We might see it as preferable to many other states, but it would not be a fully happy state.

Neither would it be a fully happy state for a person to oscillate, with or without a repeating pattern, between a finite number of normal mental states. Granted, the person in that state may be unaware that she has already experienced the blissful state 10^100 times, and so she may not feel any ennui in having the state for the (10^100+1)st time. But remember that happiness involves not just a subjective state but an objective one. It may or may not be good to be unaware of the infinite repetition of states, but such repetition is itself unfortunate.

But if there is only a finite number of normal mental states (distinguished subjectively) possible to us, then anybody who experiences only normal mental states will either cease having mental states (due to death or coma) or will eternally oscillate (with or without a repeating pattern) between a finite number of states. Since it is unfortunate if happiness is not to last forever, the person who would cease to have mental states was not fully happy (whether or not she was aware of the impending end of consciousness). And the person eternally oscillating between a finite number of states is also undergoing something unfortunate.
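The inference in the last paragraph, from finitely many possible states to eternal oscillation among them, is an instance of the infinite pigeonhole principle. A sketch in standard notation (the symbols S, N and f are mine, not the post's):

```latex
% Infinite pigeonhole principle: an infinite sequence drawn from a
% finite set must visit some element of that set infinitely often.
% Let S be the set of subjectively distinguishable mental states,
% with |S| <= N, and let f(n) be the state occupied at moment n,
% assuming consciousness never ceases.
\[
  |S| \le N \ \text{and}\ f : \mathbb{N} \to S
  \;\Longrightarrow\;
  \exists\, s \in S : \ \{\, n \in \mathbb{N} : f(n) = s \,\}
  \ \text{is infinite.}
\]
% Proof sketch: if each state were occupied at only finitely many
% moments, then N finite sets of moments would exhaust all of
% \mathbb{N}, contradicting the infinitude of \mathbb{N}.
```

So anyone who never ceases having mental states, and has at most N subjectively different ones available, must return to some state infinitely often, whether or not the returns follow a repeating pattern.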

Consequently, assuming what has been assumed above, such as that there is a finite upper bound on the number of mental states possible to us, it follows that full happiness is impossible to us if we are limited to normal human states. The sense of "impossible" here matches the sense of "impossible" in the claim that it is impossible for us to have more than N subjectively different mental states.

From the above, an argument could be constructed that our full happiness would require either a supernatural mental state (such as the vision of God) or our going through an infinite number of different mental states (e.g., due to unbounded growth in knowledge).

In either case, the following seems interestingly true: full happiness is impossible as long as naturalism is true. This might yield a desire-based argument against naturalism if we add the theses that any rational desire is possible to fulfill, that the desire for full happiness is rational, and that if naturalism is true, then it is impossible for naturalism to cease to be true. This requires some kind of physical, causal or nomic sense of "possible".

The above is just a sketch. Working it out would require carefully examining the different modalities and trying to find one in respect of which all of the premises of the argument are plausible. Something like nomic modality might do the trick. But this is all left as an exercise to the reader.