Showing posts with label entropy.

Wednesday, June 19, 2024

Entropy

If p is a discrete probability measure, then the Shannon entropy of p is H(p) = −∑_x p({x}) log p({x}). I’ve never had any intuitive feeling for Shannon entropy until I noticed the well-known fact that H(p) is the expected value of the logarithmic inaccuracy score of p by the lights of p. Since I’ve spent a long time thinking about inaccuracy scores, I now get some intuitions about entropy for free.
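The identity in the last paragraph is easy to check numerically. Here is a minimal Python sketch (the distribution is an arbitrary illustrative example) computing the entropy directly and as the p-expected logarithmic inaccuracy score s(p, x) = −log p({x}):

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_x p(x) log p(x), in nats."""
    return -sum(px * math.log(px) for px in p if px > 0)

def expected_log_inaccuracy(p):
    """Expected value, by the lights of p, of the logarithmic
    inaccuracy score s(p, x) = -log p(x): E_p[s] = sum_x p(x) * (-log p(x))."""
    return sum(px * -math.log(px) for px in p if px > 0)

dist = [0.5, 0.25, 0.25]  # an arbitrary example distribution
# The two quantities coincide term by term, so the sums are equal.
print(shannon_entropy(dist), expected_log_inaccuracy(dist))
```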

Entropy is a measure of the randomness of p. But now I am thinking that there are other measures: For any strictly proper inaccuracy scoring rule s, we can take E_p s(p) to be some sort of a measure of the randomness of p. These won’t have the nice connections with information theory, though.

Thursday, October 31, 2019

The local five minute hypothesis, the Big Bang and creation

The local five minute hypothesis is that the earth, with everything on it, and the environment five light-minutes out from it, came into existence five minutes ago.

Let’s estimate the probability of getting something like a local five minute hypothesis by placing particles at random in the observable universe. Of course, in a continuous spacetime the probability of getting exactly the arrangement we have is zero or infinitesimal. But we only need to get things right to within a margin of error of a Planck distance for all practical purposes.

The volume of the observable universe is about 10^80 cubic meters. The Planck volume is about 10^−105 cubic meters. So, getting a single particle at random within a Planck volume of where it is has a probability of about 10^−185.

But, if we’re doing our back-of-envelope calculation in a non-quantum setting (i.e., with no uncertainty principle), we also need to set the velocity for the particles. Let’s make our margin of error be the equivalent of moving a Planck distance within ten minutes. So our margin of error for velocity in any direction will be about 10^−35 meters in 600 seconds, or about 10^−38 meters per second. Speeds range from 0 to the speed of light, or about 10^8 meters per second, so the probability of getting each of the three components of the velocity right is about 10^−46, and the probability of getting all three right is something like 10^−138. The probability of getting both the position and velocity of a particle right is then 10^−(185 + 138) = 10^−323. Yeah, that’s small. Also, there are about 100 different types of particles, and there are a few other determinables like spin, so let’s multiply that by about 10^−3 to get 10^−326.
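Since everything here is a power of ten, the back-of-envelope arithmetic can be checked by just adding exponents. A small Python sketch (the exponents are the rough figures from the text, not precise physical constants):

```python
# All probabilities as exponents of ten, using the post's rough figures.
position_exp = -105 - 80           # Planck volume / universe volume: 10^-185
velocity_component_exp = -38 - 8   # velocity margin / speed-of-light range: 10^-46
velocity_exp = 3 * velocity_component_exp  # all three components: 10^-138
# Position and velocity together, times ~10^-3 for particle type, spin, etc.
particle_exp = position_exp + velocity_exp - 3

print(position_exp)   # -185
print(velocity_exp)   # -138
print(particle_exp)   # -326
```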

The total mass of planetary stuff within around five light minutes of earth—namely, Earth, Mars and Venus—is around 10^25 kilograms. There are no more than about 10^25 atoms, and hence about 10^27 particles, per kilogram. So, we have 10^52 particles we need to arrange within our volume.

We’re ready to finish the calculation. The probability of arranging these many particles with the right types and within our position and velocity margins of error is:

  • (10^−326)^(10^52) ≈ 10^(−10^2.5 × 10^52) ≈ 10^(−10^55).

Notice, interestingly, that most of the 55 comes from the number of particles we are dealing with. In fact, our calculations show that basically getting 10^N particles in the right configuration has, very roughly, a probability of around 10^(−10^(N + 3)).
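The exponent-of-an-exponent step can be verified the same way: raising 10^−326 to the power 10^52 multiplies the exponents, so the magnitude of the result is governed by log10(326) + 52. A quick check in Python:

```python
import math

per_particle_exp = -326   # probability ~10^-326 per particle (from the text)
n_particles_exp = 52      # ~10^52 particles

# (10^-326)^(10^52) = 10^(-326 * 10^52) = 10^(-10^(log10(326) + 52))
total_exp_of_exp = math.log10(-per_particle_exp) + n_particles_exp
print(total_exp_of_exp)  # ~54.5, i.e. roughly 10^(-10^55)
```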

So what? Well, Roger Penrose has estimated the probability of a universe with an initial entropy like ours at 10^(−10^123). So, now we have two hypotheses:

  • A universe like ours came into existence with a Big Bang

  • The local five minute hypothesis.

If there is no intelligence behind the universe, and if probabilistic calculations are at all appropriate for things coming into existence ex nihilo, the above probability calculations seem about right, and the local five minute hypothesis wins by a vast margin: 10^(−10^55) to 10^(−10^123) or, roughly, 10^(10^123) to 1. And if probabilistic calculations are not appropriate, then we cannot compare the hypotheses probabilistically, and lots of scepticism also follows. Hence, if there is no intelligence behind the universe, scepticism about everything more than five minutes ago and more than five light minutes from us follows.

Wednesday, December 1, 2010

A simple design argument

  1. P(the universe has low entropy | naturalism) is extremely tiny.
  2. P(the universe has low entropy | theism) is not very small.
  3. The universe has low entropy.
  4. Therefore, the low entropy of the universe strongly confirms theism over naturalism.

Low-entropy states have low probability. So, (1) is true. The universe, at the Big Bang, had a very surprisingly low entropy. It still has a low entropy, though the entropy has gone up. So, (3) is true. What about (2)? This follows from the fact that there is significant value in a world that has low entropy, and that, given theism, God is not unlikely to produce what is significantly valuable. At least locally low entropy is needed for the existence of life, and we need uniformity between our local area and the rest of the universe if we are to have scientific knowledge of the universe, and such knowledge is valuable. So (2) is true. The rest is Bayes.
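The "rest is Bayes" step can be made concrete with a likelihood ratio. A minimal Python sketch, with purely illustrative numbers that are my own stand-ins (the argument only needs premise 1's probability to be extremely tiny and premise 2's to be not very small):

```python
# E = "the universe has low entropy". Illustrative, made-up likelihoods:
p_E_given_naturalism = 1e-100   # premise 1: extremely tiny (stand-in value)
p_E_given_theism = 0.1          # premise 2: not very small (stand-in value)

# The Bayes factor by which observing E shifts the odds toward theism:
# posterior odds = prior odds * P(E|theism) / P(E|naturalism)
bayes_factor = p_E_given_theism / p_E_given_naturalism
print(bayes_factor)  # on the order of 10^99: massive confirmation of theism over naturalism
```

Any non-negligible prior odds for theism get multiplied by this factor, which is the Bayesian content of premise (4).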

When I gave him the argument, Dan Johnson made the point to me that this appears to be a species of fine-tuning argument and that a good way to explore the argument is to see how standard objections to standard fine-tuning arguments fare against this one. So let's do that.

I. "There is a multiverse, and because it's so big, it's likely that in one of its universes there is life. That kind of a universe is going to be fine-tuned, and we only observe universes like that, since only universes like that have an observer." This doesn't apply to the entropy argument, however, because globally low entropy isn't needed for the existence of an observer like me. All that's needed is locally low entropy. What we'd expect to see, on the multiverse hypothesis, is a locally low entropy universe with a big mess outside a very small area--like the size of my brain. (This is the Boltzmann brain problem.)

II. "You can't use as evidence anything that is entailed by the existence of observers." While this sort of a principle has been argued for, surely it's false. If we're choosing between two evolutionary theories, both of them fitting the data, both equally simple, but one of them making it likely that observers would evolve and the other making it unlikely, we should choose the one that makes it likely. But I can grant the principle, because my evidence--the low entropy of the universe--is not entailed by the existence of observers. All that the existence of observers implies (and even that isn't perhaps an entailment) is locally low entropy. Notice that my responses to Objections I and II show a way in which the argument differs from typical fine-tuning arguments, because while we expect constants in the laws of nature to stay, well, constant throughout a universe, not so for entropy.

III. "It's a law of nature that the value of the constants--or in this case of the universe's entropy--is exactly as it is." The law of nature suggestion is more plausible in the case of some fundamental constant like the mass of the electron than it is in the case of a continually changing non-fundamental quantity like total entropy, which is a function of more fundamental microphysical properties. Nonetheless, the suggestion that the initial low entropy of the universe is a law of nature has been made in the philosophy of science literature. Suppose the suggestion is true. Now consider this point. There is a large number--indeed, an infinite number--of possible laws about the initial values of non-fundamental quantities, many of which are incompatible with the low initial entropy. The law that the initial entropy is low is only one among many competing incompatible laws. The probability given naturalism of initially low entropy being the law is going to be low, too. (Note that this response can also be given in the case of standard fine-tuning arguments.)

IV. "The values of the constants--or the initially low entropy--do not require an explanation." That suggestion has also been made in the philosophy of science literature in the entropy case. But the suggestion is irrelevant to the argument, since none of the premises in the argument say anything about explanation. The point is purely Bayesian.