
Saturday, February 29, 2020

14 Years BackRe(Action)

[Image: Scott McLeod/Flickr]
14 years ago, I was a postdoc in Santa Barbara, in a tiny corner office where the windows wouldn't open, in a building that slightly swayed each time one of the frequent mini-earthquakes shook up California. I had just published my first blogpost. It happened to be about the possibility that the Large Hadron Collider, which was not yet in operation, would produce tiny black holes and inadvertently kill all of us. The topic would soon attract media attention and thereby mark my entry into the world of science communication. I was well prepared: Black holes at the LHC were the topic of my PhD thesis.

A few months later, I got married.

Later that same year, Lee Smolin's book "The Trouble With Physics" was published, coincidentally at almost the same time I moved to Canada and started my new position at Perimeter Institute. I had read an early version of the manuscript and published one of the first online reviews. Peter Woit's book "Not Even Wrong" appeared at almost the same time and kicked off what later became known as "The String Wars", though I've always found the rather militant term somewhat inappropriate.

Time marched on and I kept writing, through my move to Sweden, my first pregnancy and the following miscarriage, the second pregnancy, the twins' birth, parental leave, my suffering through 5 years of a 3000 km commute while trying to raise two kids, and, in late 2015, my move back to Germany. Then, in 2018, the publication of my first book.

The loyal readers of this blog will have noticed that in the past year I have shifted my focus from Blogger to YouTube. The reason is that, the way search engine algorithms and the blogosphere have evolved, it has become basically impossible to attract new audiences to a blog. Here on Blogger, I feel rather stuck on the topics I have originally written about, mostly quantum gravity and particle physics, while meanwhile my interests have drifted more towards astrophysics, quantum foundations, and the philosophy of physics. YouTube's algorithm is certainly not perfect, but it serves content to users who may be interested in the topic of a video, regardless of whether they've previously heard of me.

I have to admit that personally I still prefer writing over videos. Not only because it's less time-consuming, but also because I don't particularly like either my voice or my face. But then, the average number of people who watch my videos has quickly surpassed the number of those who typically read my blog, so I guess I am doing okay.

On this occasion I want to thank all of you for spending some time with me, for your feedback and comments and encouragement. I am especially grateful to those of you who have on occasion sent a donation my way. I am not entirely sure where this blog will be going in the future, but stay around and you will find out. I promise it won't be boring.

Friday, February 28, 2020

Quantum Gravity in the Lab? The Hype Is On.


Quanta Magazine has an article by Philip Ball titled “Wormholes Reveal a Way to Manipulate Black Hole Information in the Lab”. It’s about using quantum simulations to study the behavior of black holes in Anti De-Sitter space, that is, a space with a negative cosmological constant. A quantum simulation is a collection of particles with specifically designed interactions that can mimic the behavior of another system. To briefly remind you, we do not live in Anti De-Sitter space. For all we know, the cosmological constant in our universe is positive. And no, the two cases are not remotely similar.

It’s an interesting topic in principle, but unfortunately the article by Ball is full of statements that gloss over this not very subtle fact that we do not live in Anti De-Sitter space. We can read there for example:
“In principle, researchers could construct systems entirely equivalent to wormhole-connected black holes by entangling quantum circuits in the right way and teleporting qubits between them.”
The correct statement would be:
“Researchers could construct systems whose governing equations are in certain limits equivalent to those governing black holes in a universe we do not inhabit.”
Further, needless to say, a collection of ions in the laboratory is not “entirely equivalent” to a black hole. For starters that is because the ions are made of other particles which are yet again made of other particles, none of which has any correspondence in the black hole analogy. Also, in case you’ve forgotten, we do not live in Anti De-Sitter space.

Why do physicists even study black holes in Anti De-Sitter space? To make a long story short: Because they can. They can, both because they have an idea how the math works and because they can get paid for it.

Now, there is nothing wrong with using methods obtained by the AdS/CFT correspondence to calculate the behavior of many-particle systems. Indeed, I think that’s a neat idea. However, it is patently false to create the impression that this tells us anything about quantum gravity, where by “quantum gravity” I mean the theory that resolves the inconsistency between the Standard Model of particle physics and General Relativity in our universe. I.e., a theory that actually describes nature. We have no reason whatsoever to think that the AdS/CFT correspondence tells us something about quantum gravity in our universe.

As I explained in this earlier post, it is highly implausible that the results from AdS carry over to flat space or to space with a positive cosmological constant because the limit is not continuous. You can of course simply take the limit ignoring its convergence properties, but then the theory you get has no obvious relation to General Relativity.

Let us have a look at the paper behind the article. We can read there in the introduction:
“In the quest to understand the quantum nature of spacetime and gravity, a key difficulty is the lack of contact with experiment. Since gravity is so weak, directly probing quantum gravity means going to experimentally infeasible energy scales.”
This is wrong and it demonstrates that the authors are not familiar with the phenomenology of quantum gravity. Large deviations from the semi-classical limit can occur at small energy scales. The reason is, rather trivially, that large masses in quantum superpositions should have gravitational fields in quantum superpositions. No large energies necessary for that.

If you could, for example, put a billiard ball into a superposition of location you should be able to measure what happens to its gravitational field. This is unfeasible, but not because it involves high energies. It’s infeasible because decoherence kicks in too quickly to measure anything.

Here is the rest of the first paragraph of the paper. I have in bold face added corrections that any reviewer should have insisted on:
“However, a consequence of the holographic principle [3, 4] and its concrete realization in the AdS/CFT correspondence [5–7] (see also [8]) is that non-gravitational systems with sufficient entanglement may exhibit phenomena characteristic of quantum gravity in a space with a negative cosmological constant. This suggests that we may be able to use table-top physics experiments to indirectly probe quantum gravity in universes that we do not inhabit. Indeed, the technology for the control of complex quantum many-body systems is advancing rapidly, and we appear to be at the dawn of a new era in physics—the study of quantum gravity in the lab, except that, by the methods described in this paper, we cannot actually test quantum gravity in our universe. For this, other experiments are needed, which we will however not even mention.

The purpose of this paper is to discuss one way in which quantum gravity can make contact with experiment, if you, like us, insist on studying quantum gravity in fictional universes that for all we know do not exist.”

I pointed out that these black holes that string theorists deal with have nothing to do with real black holes in an article I wrote for Quanta Magazine last year. It was also the last article I wrote for them.

Thursday, February 20, 2020

The 10 Most Important Physics Effects

Today I have a countdown of the 10 most important effects in physics that you should all know about.


10. The Doppler Effect

The Doppler effect is the change in frequency of a wave when the source moves relative to the receiver. If the source is approaching, the wavelength appears shorter and the frequency higher. If the source is moving away, the wavelength appears longer and the frequency lower.

The most common example of the Doppler effect is that of an approaching ambulance, where the pitch of the signal is higher when it moves towards you than when it moves away from you.

But the Doppler effect does not only happen for sound waves; it also happens to light which is why it’s enormously important in astrophysics. For light, the frequency is the color, so the color of an approaching object is shifted to the blue and that of an object moving away from you is shifted to the red. Because of this, we can for example calculate our velocity relative to the cosmic microwave background.
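If you want to check that last number yourself, here is a little back-of-the-envelope sketch in Python. It uses the non-relativistic Doppler relation Δf/f ≈ v/c applied to the temperature shift of the CMB dipole; the dipole amplitude below is an approximate value from the literature, so take the result as an estimate only.

```python
# Rough sketch: infer our velocity relative to the cosmic microwave background
# from the CMB dipole, using the non-relativistic Doppler relation dT/T ~ v/c.
# The numbers below are approximate values quoted in the literature.

c = 299_792_458.0        # speed of light in m/s
T_cmb = 2.725            # mean CMB temperature in Kelvin
dT_dipole = 3.36e-3      # amplitude of the CMB dipole in Kelvin (approximate)

v = c * dT_dipole / T_cmb
print(f"velocity relative to the CMB: {v/1000:.0f} km/s")  # roughly 370 km/s
```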

The Doppler effect is named after the Austrian physicist Christian Doppler and has nothing to do with the German word Doppelgänger.

9. The Butterfly Effect

Even a tiny change, like the flap of a butterfly’s wings, can make a big difference for the weather next Sunday. This is the butterfly effect as you have probably heard of it. But Edward Lorenz actually meant something much more radical when he spoke of the butterfly effect. He meant that for some non-linear systems you can only make predictions for a limited amount of time, even if you can measure the tiniest perturbations to arbitrary accuracy. I explained this in more detail in my earlier video.

8. The Meissner-Ochsenfeld Effect

The Meissner-Ochsenfeld effect is the expulsion of magnetic fields from a superconductor: you cannot make a magnetic field enter it. It was discovered by Walther Meissner and his postdoc Robert Ochsenfeld in 1933. Thanks to this effect, if you try to place a superconductor on a magnet, it will hover above the magnet because the magnetic field lines cannot enter the superconductor. I assure you that this has absolutely nothing to do with Yogic flying.

7. The Aharonov–Bohm Effect

Okay, I admit this is not a particularly well-known effect, but it should be. The Aharonov-Bohm effect says that the wave-function of a charged particle in an electromagnetic field obtains a phase shift from the potential of the background field.

I know this sounds abstract, but the relevant point is that it’s the potential that causes the phase, not the field. In electrodynamics, the potential itself is normally not observable. But this phase shift in the Aharonov-Bohm Effect can and has been observed in interference patterns. And this tells us that the potential is not merely a mathematical tool. Before the Aharonov–Bohm effect one could reasonably question the physical reality of the potential because it was not observable.

6. The Tennis Racket Effect

If you throw any three-dimensional object with a spin, then the spin around the shortest and longest axes will be stable, but the spin around the intermediate third axis is not. The typical example for such a spinning object is a tennis racket, hence the name. It’s also known as the intermediate axis theorem or the Dzhanibekov effect. You see a beautiful illustration of the instability in this little clip from the International Space Station.
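If you prefer numbers over orbit footage, here is a small sketch that integrates Euler's equations for a torque-free rigid body. The moments of inertia are just illustrative values; the point is only that a spin about the intermediate axis keeps flipping over, while spins about the other two axes would stay put.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Euler's equations for a torque-free rigid body with principal moments
# of inertia I1 < I2 < I3 (values here are purely illustrative).
I1, I2, I3 = 1.0, 2.0, 3.0

def euler_equations(t, w):
    w1, w2, w3 = w
    return [(I2 - I3) / I1 * w2 * w3,
            (I3 - I1) / I2 * w3 * w1,
            (I1 - I2) / I3 * w1 * w2]

# Spin almost entirely about the intermediate axis, with a tiny perturbation.
w0 = [1e-4, 1.0, 1e-4]
sol = solve_ivp(euler_equations, (0.0, 100.0), w0, max_step=0.05)

# The spin about the intermediate axis periodically flips sign: that is the
# instability. A spin about axis 1 or 3 would stay close to its initial value.
print(f"omega_2 ranges from {sol.y[1].min():.2f} to {sol.y[1].max():.2f}")
```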

5. The Hall Effect

If you bring a current-carrying conducting plate into a magnetic field, then the magnetic field will affect the motion of the electrons in the plate. In particular, if the plate is orthogonal to the magnetic field lines, a voltage builds up between opposing edges of the plate, and this voltage can be measured to determine the strength of the magnetic field. This effect is named after Edwin Hall.

If the plate is very thin, the temperature very low, and the magnetic field very strong, you can also observe that the Hall conductance makes discrete jumps, which is known as the quantum Hall effect.
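For a feeling of the numbers, here is a little sketch of the classical Hall voltage, V = IB/(nqt), and of the conductance plateaus of the quantum Hall effect. The current, field, thickness and carrier density below are merely illustrative (the carrier density is roughly that of copper).

```python
# Classical Hall voltage for a current-carrying plate in a magnetic field,
# V = I*B/(n*q*t), with illustrative numbers for a thin copper strip.
e = 1.602e-19      # elementary charge in Coulomb
n = 8.5e28         # charge carrier density of copper in 1/m^3 (approximate)
I = 1.0            # current through the plate in Ampere
B = 1.0            # magnetic field in Tesla
t = 1e-4           # plate thickness in meter (0.1 mm)

V_hall = I * B / (n * e * t)
print(f"Hall voltage: {V_hall:.2e} V")   # a fraction of a microvolt

# In the quantum Hall regime the Hall conductance jumps between plateaus
# at integer multiples of e^2/h:
h = 6.626e-34      # Planck's constant in J*s
for nu in range(1, 4):
    print(f"plateau {nu}: {nu * e**2 / h:.3e} Siemens")
```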

4. The Hawking Effect

Stephen Hawking showed in the early 1970s that black holes emit thermal radiation with a temperature inversely proportional to the black hole’s mass. This Hawking effect is a consequence of the relativity of the particle number. An observer falling into a black hole would not measure any particles and think the black hole is surrounded by vacuum. But an observer far away from the black hole would think the horizon is surrounded by particles. This can happen because in general relativity, what we mean by a particle depends on the motion of the observer, much like the passage of time does.

A closely related effect is the Unruh effect named after Bill Unruh, which says that an accelerated observer in flat space will measure a thermal distribution of particles with a temperature that depends on the acceleration. Again that can happen because the accelerated observer’s particles are not the same as the particles of an observer at rest.
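To see why neither effect has been measured, you can put numbers into the standard textbook formulas, T = ℏc^3/(8πGMk_B) for the Hawking temperature and T = ℏa/(2πck_B) for the Unruh temperature. Here is a small sketch with approximate constants; the chosen mass and acceleration are just examples.

```python
import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B) and Unruh temperature
# T = hbar*a / (2*pi*c*k_B), evaluated for illustrative inputs.
hbar = 1.055e-34   # reduced Planck constant in J*s
c = 2.998e8        # speed of light in m/s
G = 6.674e-11      # Newton's constant in m^3/(kg*s^2)
kB = 1.381e-23     # Boltzmann constant in J/K

M_sun = 1.989e30   # solar mass in kg
T_hawking = hbar * c**3 / (8 * math.pi * G * M_sun * kB)
print(f"Hawking temperature of a solar-mass black hole: {T_hawking:.1e} K")

a = 9.81           # acceleration in m/s^2 (Earth's surface gravity)
T_unruh = hbar * a / (2 * math.pi * c * kB)
print(f"Unruh temperature at 1 g: {T_unruh:.1e} K")
```

Both come out absurdly small, about 10^-8 Kelvin for a solar-mass black hole and about 10^-20 Kelvin for an acceleration of one g, which is why these effects have not been directly observed.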

3. The Photoelectric Effect

When light falls on a plate of metal, it can kick out electrons from their orbits around atomic nuclei. This is called the “photoelectric effect”. The surprising thing about this is that the frequency of the light needs to be above a certain threshold. Just what the threshold is depends on the material, but if the frequency is below the threshold, it does not matter how intense the light is, it will not kick out electrons.

The photoelectric effect was explained in 1905 by Albert Einstein who correctly concluded that it means the light must be made of quanta whose energy is proportional to the frequency of the light.
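As a small worked example, here is the threshold for a metal with a work function of about 2.1 electron volts, roughly that of cesium; the value is approximate and only serves as an illustration.

```python
# Photoelectric threshold: light only ejects electrons if h*f exceeds the
# work function W of the material. Illustrative value: cesium, W ~ 2.1 eV.
h = 6.626e-34          # Planck's constant in J*s
c = 2.998e8            # speed of light in m/s
eV = 1.602e-19         # one electronvolt in Joule

W = 2.1 * eV           # approximate work function of cesium
f_threshold = W / h
wavelength = c / f_threshold
print(f"threshold frequency: {f_threshold:.2e} Hz")
print(f"threshold wavelength: {wavelength*1e9:.0f} nm")  # roughly 590 nm

# Light with a longer wavelength ejects no electrons, no matter how intense.
# Above threshold, the electron's kinetic energy is h*f - W:
f = c / 400e-9         # violet light, 400 nm
print(f"kinetic energy at 400 nm: {(h*f - W)/eV:.2f} eV")
```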

2. The Casimir Effect

Everybody knows that two metal plates will attract each other if one plate is positively charged and the other one negatively charged. But did you know the plates also attract each other if they are uncharged? Yes, they do!

This is the Casimir effect, named after Hendrik Casimir. It is created by quantum fluctuations that create a pressure even in vacuum. This pressure is lower between the plates than outside of them, so that the two plates are pushed towards each other. However, the force from the Casimir effect is very weak and can be measured only at very short distances.
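The idealized formula for two perfectly conducting parallel plates is P = π²ℏc/(240 d⁴), which falls off with the fourth power of the separation d. Here is a small sketch of the numbers; real experiments need corrections for finite conductivity and temperature, so take this only as an order-of-magnitude estimate.

```python
import math

# Idealized Casimir pressure between two perfectly conducting parallel plates:
# P = pi^2 * hbar * c / (240 * d^4).
hbar = 1.055e-34   # reduced Planck constant in J*s
c = 2.998e8        # speed of light in m/s

for d in (1e-6, 1e-7):   # plate separations of 1 micron and 100 nanometers
    P = math.pi**2 * hbar * c / (240 * d**4)
    print(f"separation {d*1e9:.0f} nm: pressure {P:.2e} Pa")
```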

1. The Tunnel Effect

Definitely my favorite effect. Quantum effects allow a particle that is trapped in a potential to escape. This would not be possible without quantum effects because the particle just does not have enough energy to escape. However, in quantum mechanics the wave-function of the particle can leak out of the potential and this means that there is a small, but nonzero, probability that a quantum particle can do the seemingly impossible.
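To get a sense for how quickly this probability dies off, here is a rough sketch in the WKB approximation, T ≈ exp(−2d√(2m(V−E))/ℏ), for an electron and a barrier that exceeds its energy by one electron volt. The numbers are purely illustrative.

```python
import math

# Rough tunneling probability of an electron through a rectangular barrier,
# in the WKB approximation: T ~ exp(-2*kappa*d), kappa = sqrt(2*m*(V-E))/hbar.
hbar = 1.055e-34     # reduced Planck constant in J*s
m_e = 9.109e-31      # electron mass in kg
eV = 1.602e-19       # one electronvolt in Joule

V_minus_E = 1.0 * eV                             # barrier height above the particle's energy
kappa = math.sqrt(2 * m_e * V_minus_E) / hbar    # decay constant inside the barrier

for d in (0.5e-9, 1e-9, 2e-9):                   # barrier widths in meter
    T = math.exp(-2 * kappa * d)
    print(f"width {d*1e9:.1f} nm: tunneling probability ~ {T:.1e}")
```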

Saturday, February 15, 2020

The Reproducibility Crisis: An Interview with Prof. Dorothy Bishop

On my recent visit to Great Britain (the first one post-Brexit) I had the pleasure of talking to Dorothy Bishop. Bishop is Professor of Psychology at the University of Oxford and has been a leading force in combating the reproducibility crisis in her and other disciplines. You find her on Twitter under the handle @deevybee. The comment for Nature magazine which I mention in the video is here.

Monday, February 10, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 2.” by Tim Palmer

[This is the second part of Tim’s guest contribution. The first part is here.]



In this second part of my guest post, I want to discuss how the concepts of undecidability and uncomputability can lead to a novel interpretation of Bell’s famous theorem. This theorem states that under seemingly reasonable conditions, a deterministic theory of quantum physics – something Einstein believed in passionately – must satisfy a certain inequality which experiment shows is violated.

These reasonable conditions, broadly speaking, describe the concepts of causality and freedom to choose experimental parameters. The issue I want to discuss is whether the way these conditions are formulated mathematically in Bell’s Theorem actually captures the physics that supposedly underpins them.

The discussion here and in the previous post summarises the essay I recently submitted to the FQXi essay competition on undecidability and uncomputability.

For many, the notion that we have some freedom in our actions and decisions seems irrefutable. But how would we explain this to an alien, or indeed a computer, for whom free will is a meaningless concept? Perhaps we might say that we are free because we could have done otherwise. This invokes the notion of a counterfactual world: even though we in fact did X, we could have done Y.

Counterfactuals also play an important role in describing the notion of causality. Imagine throwing a stone at a glass window. Was the smashed glass caused by my throwing the stone? Yes, I might say, because if I hadn’t thrown the stone, the window wouldn’t have broken.

However, there is an alternative way to describe these notions of free will and causality without invoking counterfactual worlds. I can just as well say that free will denotes an absence of constraints that would otherwise prevent me from doing what I want to do. Or I can use Newton’s laws of motion to determine that a stone with a certain mass, projected at a certain velocity, will hit the window with a momentum guaranteed to shatter the glass. These latter descriptions make no reference to counterfactuals at all; instead the descriptions are based on processes occurring in space-time (e.g. associated with the neurons of my brain or projectiles in physical space).

What has all this got to do with Bell’s Theorem? I mentioned above the need for a given theory to satisfy “certain conditions” in order for it to be constrained by Bell’s inequality (and hence be inconsistent with experiment). One of these conditions, the one linked to free will, is called Statistical Independence. Theories which violate this condition are called Superdeterministic.

Superdeterministic theories are typically excoriated by quantum foundations experts, not least because the Statistical Independence condition appears to underpin scientific methodology in general.

For example, consider a source emitting 1000 spin-1/2 particles. Suppose you measure the spin of 500 of them along one direction and 500 of them along a different direction. Statistical Independence guarantees that the measurement statistics (e.g. the frequency of spin-up measurements) will not depend on the particular way in which the experimenter chooses to partition the full ensemble of 1000 particles into the two sub-ensembles of 500 particles.
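To make this concrete, here is a toy sketch (not the model discussed in this post; the measurement angles are arbitrary choices). When Statistical Independence holds, the outcome of each particle is drawn independently of how the ensemble is partitioned, so the spin-up frequencies of the two sub-ensembles come out the same for any partition, up to sampling noise.

```python
import numpy as np

# Toy illustration of Statistical Independence: 1000 identically prepared
# spin-1/2 particles, split into two groups of 500, each group measured along
# its own direction. The outcomes only depend on the measurement direction,
# not on the chosen partition.
rng = np.random.default_rng(1)

n = 1000
theta_a, theta_b = np.pi / 4, np.pi / 3                  # two measurement directions
p_a, p_b = np.cos(theta_a / 2)**2, np.cos(theta_b / 2)**2  # quantum "up" probabilities

for trial in range(3):
    order = rng.permutation(n)               # a different partition each time
    group_b = np.zeros(n, dtype=bool)
    group_b[order[n // 2:]] = True           # these 500 get measured along b
    probs = np.where(group_b, p_b, p_a)
    outcomes = rng.random(n) < probs         # True = "spin up"
    print(f"partition {trial}: up-fraction along a = {outcomes[~group_b].mean():.3f}, "
          f"along b = {outcomes[group_b].mean():.3f}")
```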

If you violate Statistical Independence, the experts say, you are effectively invoking some conspiratorial prescient daemon who could, unknown to the experimenter, preselect particles for the particular measurements the experimenter chooses to make - or even worse perhaps, could subvert the mind of the experimenter when deciding which type of measurement to perform on a given particle. Effectively, violating Statistical Independence turns experimenters into mindless zombies! No wonder experimentalists hate Superdeterministic theories of quantum physics!!

However, the experts miss a subtle but crucial point here: whilst imposing Statistical Independence guarantees that real-world sub-ensembles are statistically equivalent, violating Statistical Independence does not guarantee that real-world sub-ensembles are not statistically equivalent. In particular it is possible to violate Statistical Independence in such a way that it is only sub-ensembles of particles subject to certain counterfactual measurements that may be statistically inequivalent to the corresponding sub-ensembles with real-world measurements.

In the example above, a sub-ensemble of particles subject to a counterfactual measurement would be associated with the first sub-ensemble of 500 particles subject to the measurement direction applied to the second sub-ensemble of 500 particles. It is possible to violate Statistical Independence when comparing this counterfactual sub-ensemble with the real-world equivalent, without violating the statistical equivalence of the two corresponding sub-ensembles measured along their real-world directions.

However, for this idea to make any theoretical sense at all, there has to be some mathematical basis for asserting that sub-ensembles with real-world measurements can be different to sub-ensembles with counterfactual-world measurements. This is where uncomputable fractal attractors play a key role.

It is worth keeping an example of a fractal attractor in mind here. The Lorenz fractal attractor, discussed in my first post, is a geometric representation in state space of fluid motion in Newtonian space-time.

The Lorenz attractor.
[Image Credits: Markus Fritzsch.]

As I explained in my first post, the attractor is uncomputable in the sense that there is no algorithm which can decide whether a given point in state space lies on the attractor (in exactly the same sense that, as Turing discovered, there is no algorithm for deciding whether a given computer program will halt for given input data). However, as I lay out in my essay, the differential equations for the fluid motion in space-time associated with the Lorenz attractor are themselves solvable by algorithm to arbitrary accuracy and hence are computable. This dichotomy (between state space and space-time) is extremely important to bear in mind below.

With this in mind, suppose the universe itself evolves on some uncomputable fractal subset of state space, such that the corresponding evolution equations for physics in space-time are computable. In such a model, Statistical Independence will be violated for sub-ensembles if the corresponding counterfactual measurements take states of the universe off the fractal subset (since such counterfactual states have probability of occurrence equal to zero by definition).

In the model I have developed this always occurs when considering counterfactual measurements such as those in Bell’s Theorem. (This is a nontrivial result and is the consequence of number-theoretic properties of trigonometric functions.) Importantly, in this theory, Statistical Independence is never violated when comparing two sub-ensembles subject to real-world measurements such as occurs in analysing Bell’s Theorem.

This is all a bit mind numbing, I do admit. However, the bottom line is that I believe that the mathematical definitions of free choice and causality used to understand quantum entanglement are much too general – in particular they admit counterfactual worlds as physical in a completely unconstrained way. I have proposed alternative definitions of free choice and causality which strongly constrain counterfactual states (essentially they must lie on the fractal subset in state space), whilst leaving untouched descriptions of free choice and causality based only on space-time processes. (For the experts, in the classical limit of this theory, Statistical Independence is not violated for any counterfactual states.)

With these alternative definitions, it is possible to violate Bell’s inequality in a deterministic theory which respects free choice and local causality, in exactly the way it is violated in quantum mechanics. Einstein may have been right after all!

If we can explain entanglement deterministically and causally, then synthesising quantum and gravitational physics may have become easier. Indeed, it is through such synthesis that experimental tests of my model may eventually come.

In conclusion, I believe that the uncomputable fractal attractors of chaotic systems may provide a key geometric ingredient needed to unify our theories of physics.

My thanks to Sabine for allowing me the space on her blog to express these points of view.

Saturday, February 08, 2020

Philosophers should talk more about climate change. Yes, philosophers.


I never cease to be shocked – shocked! – how many scientists don’t know how science works and, worse, don’t seem to care about it. Most of those I have to deal with still think Popper was right when he claimed falsifiability is both necessary and sufficient to make a theory scientific, even though this position has logical consequences they’d strongly object to.

Trouble is, if falsifiability was all it took, then arbitrary statements about the future would be scientific. I should, for example, be able to publish a paper predicting that tomorrow the sky will be pink and next Wednesday my cat will speak French. That’s totally falsifiable, yet I hope we all agree that if we’d let such nonsense pass as scientific, science would be entirely useless. I don’t even have a cat.

As the contemporary philosopher Larry Laudan politely put it, Popper’s idea of telling science from non-science by falsifiability “has the untoward consequence of countenancing as ‘scientific’ every crank claim which makes ascertainably false assertions.” Which is why the world’s cranks love Popper.

But you are not a crank, oh no, not you. And so you surely know that almost all of today’s philosophers of science agree that falsification is not a sufficient criterion of demarcation (though they disagree on whether it is necessary). Luckily, you don’t need to know anything about these philosophers to understand today’s post because I will not attempt to solve the demarcation problem (which, for the record, I don’t think is a philosophical question). I merely want to clarify just when it is scientifically justified to amend a theory whose predictions ran into tension with new data. And the only thing you need to know to understand this is that science cannot work without Occam’s razor.

Occam’s razor tells you that among two theories that describe nature equally well you should take the simpler one. Roughly speaking it means you must discard superfluous assumptions. Occam’s razor is important because without it we would be allowed to add all kinds of unnecessary clutter to a theory just because we like it. We would be permitted, for example, to add the assumption “all particles were made by god” to the standard model of particle physics. You see right away how this isn’t going well for science.

Now, the phrases “describe nature equally well” and “take the simpler one” are somewhat vague. To make this prescription operationally useful you’d have to quantify just what they mean with suitable statistical measures. We can then quibble about just which statistical measure is the best, but that’s somewhat beside the point here, so let me instead come back to the relevance of Occam’s razor.

We just saw that it’s unscientific to make assumptions which are unnecessary to explain observation and don’t make a theory any simpler. But physicists get this wrong all the time and some have made a business out of getting it wrong. They invent particles which make theories more complicated and are of no help to explain existing data. They claim this is science because these theories are falsifiable. But the new particles were unnecessary in the first place, so their ideas are dead on arrival, killed by Occam’s razor.

If you still have trouble seeing why adding unnecessary details to established theories is unsound scientific methodology, imagine that scientists of other disciplines proceeded the way that particle physicists do. We’d have biologists writing papers about flying pigs and then holding conferences debating how flying pigs poop because, who knows, we might discover flying pigs tomorrow. Sounds ridiculous? Well, it is ridiculous. But that’s the same “scientific methodology” which has become common in the foundations of physics. The only difference between elaborating on flying pigs and supersymmetric particles is the amount of mathematics. And math certainly comes in handy for particle physicists because it prevents mere mortals from understanding just what the physicists are up to.

But I am not telling you this to bitch about supersymmetry; that would be beating a dead horse. I am telling you this because I have recently had to deal with a lot of climate change deniers (thanks so much, Tim). And many of these deniers, believe it or not, think I must be a denier too because, drums please, I am an outspoken critic of inventing superfluous particles.

Huh, you say. I hear you. It took me a while to figure out what’s with these people, but I believe I now understand where they’re coming from.

You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes, it’s exactly the same thing that all these physicists do each time their hypothetical particles are not observed! They just fiddle with the parameters of the theory to evade experimental constraints and to keep their pet theories alive. But Popper already said you shouldn’t do that. Then someone yells “Epicycles!” And so, the deniers conclude, climate scientists are as wrong as particle physicists and clearly one shouldn’t listen to either.

But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do.

The more and the better data you have, the higher the demands on your theory. Sometimes this means you actually need a new theory. Sometimes you have to adjust one or the other parameter. Sometimes you find an actual mistake and have to correct it. But more often than not it just means you neglected something that better measurements are sensitive to and you must add details to your theory. And this is perfectly fine as long as adding details results in a model that explains the data better than before, and does so not just because you now have more parameters. Again, there are statistical measures to quantify in which cases adding parameters actually makes a better fit to data.
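One such measure, to give a concrete example, is the Akaike information criterion, which rewards a better fit but penalizes every additional parameter. Here is a minimal sketch with made-up data that is actually linear, so the extra parameter of a quadratic fit typically does not pay off.

```python
import numpy as np

# Minimal sketch of one such measure, the Akaike information criterion:
# AIC = 2k + n*ln(RSS/n), where k is the number of fit parameters and RSS the
# residual sum of squares. Lower AIC wins. The data here is generated from a
# straight line plus noise, so the parabola's extra parameter should not help.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)

def aic(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x))**2)
    k = degree + 1
    return 2 * k + x.size * np.log(rss / x.size)

print("AIC line:    ", aic(1))
print("AIC parabola:", aic(2))   # typically larger: extra parameter not justified
```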

Indeed, adding epicycles to make the geocentric model of the solar system fit with observations was entirely proper scientific methodology. It was correcting a hypothesis that ran into conflict with increasingly better observations. Astronomers of the time could have proceeded this way until they noticed there is a simpler way to calculate the same curves, which is by using elliptical orbits around the sun rather than cycles around cycles around the Earth. Of course this is not what historically happened, but epicycles in and by themselves are not unscientific, they’re merely parametrically clumsy.

What scientists should not do, however, is to adjust details of a theory that were unnecessary in the first place. Kepler for example also thought that the planets play melodies on their orbits around the sun, an idea that was rightfully abandoned because it explains nothing.

To name another example, adding dark matter and dark energy to the cosmological standard model in order to explain observations is sound scientific practice. These are both simple explanations that vastly improve the fit of the theory to observation. What is not sound scientific methodology is then making these theories more complicated than need be, e.g. by replacing dark energy with complicated scalar fields even though there is no observation that calls for it, or by inventing details about particles that make up dark matter even though these details are irrelevant to fit existing data.

But let me come back to the climate change deniers. You may call me naïve, and I’ll take that, but I believe most of these people are genuinely confused about how science works. It’s of little use to throw evidence at people who don’t understand how scientists make evidence-based predictions. When it comes to climate change, therefore, I think we would all benefit if philosophers of science were given more airtime.

Thursday, February 06, 2020

Ivory Tower [I've been singing again]

I caught a cold and didn't come around to record a new physics video this week. Instead I finished a song that I wrote some weeks ago. Enjoy!

Monday, February 03, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 1.” by Tim Palmer

[Tim Palmer is a Royal Society Research Professor in Climate Physics at the University of Oxford, UK. He is only half as crazy as it seems.]


[Screenshot from Tim’s public lecture at Perimeter Institute]


Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other.

The difficulty of combining general relativity and quantum theory into a common theory of “quantum gravity” is legendary; some of our greatest minds have despaired – and still despair – over it.

Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell’s inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein’s notion of relativistic causality.

Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates.

So, do we simply have to accept that, “What God hath put asunder, let no man join together”? I don’t think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of “Undecidability, Uncomputability and Unpredictability”. I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post.

To start, I need to say what undecidability and uncomputability are in the first place. The concepts go back to the work of Alan Turing who in 1936 showed that no algorithm exists that will take as input a computer program (and its input data), and output 0 if the program halts and 1 if the program does not halt. This “Halting Problem” is therefore undecidable by algorithm. So, a key way to know whether a problem is algorithmically undecidable – or equivalently uncomputable – is to see if the problem is equivalent to the Halting Problem.

Let’s return to thinking about chaotic systems. As mentioned, these are deterministic systems whose evolution is effectively unpredictable (because the evolution is sensitive to the starting conditions). However, what is relevant here is not so much this property of unpredictability, but the fact that no matter what initial condition you start from, there is a class of chaotic system where eventually (technically after an infinite time) the state evolves on a fractal subset of state space, sometimes known as a fractal attractor.

One defining characteristic of a fractal is that its dimension is not a simple integer (like that of a one-dimensional line or the two-dimensional surface of a sphere). Now, the key result I need is a theorem that there is no algorithm that will take as input some point x in state space, and halt if that point belongs to a set with fractional dimension. This implies that the fractal attractor A of a chaotic system is uncomputable and the proposition “x belongs to A” is algorithmically undecidable.

How does this help unify physics?

Firstly, defining chaos in terms of the geometry of its fractal attractor (e.g. through the fractional dimension of the attractor) is a coordinate-independent and hence more relativistic way to characterise chaos than defining it in terms of the exponential divergence of nearby trajectories. Hence the uncomputable fractal attractor provides a way to unify general relativity and chaos theory.

That was easy! The rest is not so easy which is why I need two guest posts and not one!

When it comes to combining chaos theory with quantum mechanics, the first step is to realize that the linearity of the Schrödinger equation is not at all incompatible with the nonlinearity of chaos.

To understand this, consider an ensemble of integrations of a particular chaotic model based on the Lorenz equations – see Fig 1. These Lorenz equations describe fluid dynamical motion, but the details need not concern us here. The fractal Lorenz attractor is shown in the background in Fig 1. These ensembles can be thought of as describing the evolution of probability – something of practical value when we don’t know the initial conditions precisely (as is the case in weather forecasting).

Fig 1: Evolution of a contour of probability, based on ensembles of integrations of the Lorenz equations, is shown evolving in state space for different initial conditions, with the Lorenz attractor as background. 

In the first panel in Fig 1, small uncertainties do not grow much and we can therefore be confident in the predicted evolution. In the third panel, small uncertainties grow explosively, meaning we can have little confidence in any specific prediction. The second panel is somewhere in between.
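For readers who want to play with this themselves, here is a minimal sketch of such an ensemble integration, using the standard Lorenz parameters; the starting region, integration time and ensemble size are arbitrary choices, not those used for Fig 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch in the spirit of Fig 1: evolve an ensemble of nearby initial
# conditions under the Lorenz equations (standard parameters) and watch how
# much the ensemble spreads. How fast it spreads depends on where it starts.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(42)
center = np.array([1.0, 1.0, 20.0])          # an arbitrary starting region
ensemble = center + 1e-3 * rng.normal(size=(50, 3))

finals = np.array([solve_ivp(lorenz, (0, 10), member, rtol=1e-8).y[:, -1]
                   for member in ensemble])
print("initial spread:", 1e-3)
print("final spread:  ", finals.std(axis=0))   # typically grown by orders of magnitude
```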

Now it turns out that the equation which describes the evolution of probability in such chaotic systems, known as the Liouville equation, is itself a linear equation. The linearity of the Liouville equation ensures that probabilities are conserved in time. Hence, for example, if there is an 80% chance that the actual state of the fluid (as described by the Lorenz equation state) lies within a certain contour of probability at initial time, then there is an 80% chance that the actual state of the fluid lies within the evolved contour of probability at the forecast time.

The remarkable thing is that the Liouville equation is formally very similar to the so-called von-Neumann form of the Schrödinger equation – too similar, in my view, for this to be a coincidence. So, just as the linearity of the Liouville equation says nothing about the nonlinearity of the underlying deterministic dynamics which generate such probability, so too the linearity of the Schrödinger equation need say nothing about the nonlinearity of some underlying dynamics which generates quantum probabilities.

However, as I wrote above, in order to satisfy Bell’s theorem, it would appear that, being deterministic, a chaotic model will have to violate relativistic causality, seemingly thwarting the aim of trying to unify our theories of physics. At least, that’s the usual conclusion. However, the undecidable uncomputable properties of fractal attractors provide a novel route to allow us to reassess this conclusion. I will explain how this works in the second part of this post.

Sunday, February 02, 2020

Does nature have a minimal length?

Molecules are made of atoms. Atomic nuclei are made of neutrons and protons. And the neutrons and protons are made of quarks and gluons. Many physicists think that this is not the end of the story, but that quarks and gluons are made of even smaller things, for example the tiny vibrating strings that string theory is all about. But then what? Are strings made of smaller things again? Or is there a smallest scale beyond which nature just does not have any further structure? Does nature have a minimal length?

This is what we will talk about today.



When physicists talk about a minimal length, they usually mean the Planck length, which is about 10^-35 meters. The Planck length is named after Max Planck, who introduced it in 1899. 10^-35 meters sounds tiny and indeed it is damned tiny.

To give you an idea, think of the tunnel of the Large Hadron Collider. It’s a ring with a diameter of about 10 kilometers. The Planck length compares to the diameter of a proton as the radius of a proton to the diameter of the Large Hadron Collider.

Currently, the smallest structures that we can study are about ten to the minus nineteen meters. That’s what we can do with the energies produced at the Large Hadron Collider and that is still sixteen orders of magnitude larger than the Planck length.

What’s so special about the Planck length? The Planck length seems to be setting a limit to how small a structure can be so that we can still measure it. That’s because to measure small structures, we need to compress more energy into small volumes of space. That’s basically what we do with particle accelerators. Higher energy allows us to find out what happens on shorter distances. But if you stuff too much energy into a small volume, you will make a black hole.

More concretely, if you have an energy E, that will in the best case allow you to resolve a distance of about ℏc/E. I will call that distance Δx. Here, c is the speed of light and ℏ is a constant of nature, called Planck’s constant. Yes, that’s the same Planck! This relation comes from the uncertainty principle of quantum mechanics. So, higher energies let you resolve smaller structures.

Now you can ask, if I turn up the energy and the size I can resolve gets smaller, when do I get a black hole? Well that happens, if the Schwarzschild radius associated with the energy is similar to the distance you are trying to measure. That’s not difficult to calculate. So let’s do it.

The Schwarzschild radius is approximately M times G/c^2 where G is Newton’s constant and M is the mass. We are asking, when is that radius similar to the distance Δx. As you almost certainly know, the mass associated with the energy is E=Mc^2. And, as we previously saw, that energy is just ℏc/Δx. You can then solve this equation for Δx. And this is what we call the Planck length. It is associated with an energy called the Planck energy. If you go to higher energies than that, you will just make larger black holes. So the Planck length is the shortest distance you can measure.
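You can check the outcome of this estimate numerically. With the standard values of the constants, the Planck length ℓ = √(ℏG/c^3) comes out at about 1.6 × 10^-35 meters and the Planck energy √(ℏc^5/G) at about 10^19 GeV; here is a small sketch with approximate constants.

```python
import math

# Numerical check of the estimate above: the Planck length
# l_P = sqrt(hbar*G/c^3) and the associated Planck energy E_P = sqrt(hbar*c^5/G).
hbar = 1.055e-34   # reduced Planck constant in J*s
G = 6.674e-11      # Newton's constant in m^3/(kg*s^2)
c = 2.998e8        # speed of light in m/s

l_planck = math.sqrt(hbar * G / c**3)
E_planck = math.sqrt(hbar * c**5 / G)
print(f"Planck length: {l_planck:.2e} m")                  # about 1.6e-35 m
print(f"Planck energy: {E_planck:.2e} J "
      f"= {E_planck / 1.602e-19 / 1e9:.1e} GeV")           # about 1.2e19 GeV
```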

Now, this is a neat estimate and it’s not entirely wrong, but it’s not a rigorous derivation. If you start thinking about it, it’s a little handwavy, so let me assure you there are much more rigorous ways to do this calculation, and the conclusion remains basically the same. If you combine quantum mechanics with gravity, then the Planck length seems to set a limit to the resolution of structures. That’s why physicists think nature may have a fundamentally minimal length.

Max Planck by the way did not come up with the Planck length because he thought it was a minimal length. He came up with that simply because it’s the only unit of dimension length you can create from the fundamental constants, c, the speed of light, G, Newton’s constant, and ℏ. He thought that was interesting because, as he wrote in his 1899 paper, these would be natural units that also aliens would use.

The idea that the Planck length is a minimal length only came up after the development of general relativity when physicists started thinking about how to quantize gravity. Today, this idea is supported by attempts to develop a theory of quantum gravity, which I told you about in an earlier video.

In string theory, for example, if you squeeze too much energy into a string it will start spreading out. In Loop Quantum Gravity, the loops themselves have a finite size, given by the Planck length. In Asymptotically Safe Gravity, the gravitational force becomes weaker at high energies, so beyond a certain point you can no longer improve your resolution.

When I speak about a minimal length, a lot of people seem to have a particular image in mind, which is that the minimal length works like a kind of discretization, a pixelation of a photo or something like that. But that is most definitely the wrong image. The minimal length that we are talking about here is more like an unavoidable blur on an image, some kind of fundamental fuzziness that nature has. It may, but does not necessarily, come with a discretization.

What does this all mean? Well, it means that we might be close to finding a final theory, one that describes nature at its most fundamental level and there is nothing more beyond that. That is possible. But remember that the arguments for the existence of a minimal length rest on extrapolating 16 orders of magnitude below the distances that we have tested so far. That’s a lot. That extrapolation might just be wrong. Even though we do not currently have any reason to think that there should be something new at distances even shorter than the Planck length, that situation might change in the future.

Still, I find it intriguing that for all we currently know, it is not necessary to think about distances shorter than the Planck length.

Friday, January 24, 2020

Do Black Holes Echo?

What happens with the event horizon of two black holes if they merge? Might gravitational waves emitted from such a merger tell us if Einstein’s theory of general relativity is wrong? Yes, they might. But it’s unlikely. In this video, I will explain why. In more detail, I will tell you about the possibility that a gravitational wave signal from a black hole merger has echoes.


But first, some context. We know that Einstein’s theory of general relativity is incomplete. We know that because it cannot handle quantum properties. To complete General Relativity, we need a theory of quantum gravity. But progress in theory development has been slow and experimental evidence for quantum gravity is hard to come by because quantum fluctuations of space-time are so damn tiny. In my previous video I told you about the most promising ways of testing quantum gravity. Today I want to tell you about testing quantum gravity with black hole horizons in particular.

The effects of quantum gravity become large when space and time are strongly curved. This is the case towards the center of a black hole, but it is not the case at the horizon of a black hole. Most people get this wrong, so let me repeat this. The curvature of space is not strong at the horizon of a black hole. It can, in fact, be arbitrarily weak. That’s because the curvature at the horizon is inversely proportional to the square of the black hole’s mass. This means the larger the black hole, the weaker the curvature at the horizon. It also means we have no reason to think that there are any quantum gravitational effects near the horizon of a black hole. It’s an almost flat and empty space.

Black holes do emit radiation by quantum effects. This is the Hawking radiation named after Stephen Hawking. But Hawking radiation comes from the quantum properties of matter. It is an effect of ordinary quantum mechanics and not an effect of quantum gravity.

However, one can certainly speculate that maybe General Relativity does not correctly describe black hole horizons. So how would you do that? In General Relativity, the horizon is the boundary of a region that you can get into but never out of. The horizon itself has no substance and indeed you would not notice crossing it. But quantum effects could change the situation. And that might be observable.

Just what you would observe has been studied by Niayesh Afshordi and his group at Perimeter Institute. They try to understand what happens if quantum effects turn the horizon into a physical obstacle that partly reflects gravitational waves. If that was so, the gravitational waves produced in a black hole merger would bounce back and forth between the horizon and the black hole’s photon sphere.

The photon sphere is a potential barrier at about one and a half times the radius of the horizon. The gravitational waves would slowly leak out during each bounce rather than escape in one bang. And if that is what is really going on, then gravitational wave interferometers like LIGO should detect echoes of the original merger signal.

And here is the thing! Niayesh and his group did find an echo signal in the gravitational wave data. This signal is in the first event ever detected by LIGO in September 2015. The statistical significance of this echo was originally at 2.5 σ. This means that only roughly one in a hundred times would random fluctuations conspire to look like the observed echo. So, it’s not a great level of significance, at least not by physics standards. But it’s still 2.5σ better than nothing.
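In case you want to convert between sigmas and probabilities yourself, here is a small sketch using the one-sided Gaussian tail; note that the exact number depends on the convention (one-sided or two-sided), which is why such statements are always only rough.

```python
from scipy.stats import norm

# Convert a significance in "sigma" into the probability that random noise
# alone produces a signal at least that strong (one-sided Gaussian tail).
for sigma in (2.5, 3.0, 5.0):
    p = norm.sf(sigma)          # survival function = 1 - CDF
    print(f"{sigma} sigma -> false-alarm probability {p:.2e} (about 1 in {1/p:.0f})")
```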

Some members of the LIGO collaboration then went and did their own analysis of the data. And they also found the echo, but at a somewhat smaller significance. There has since been some effort by several groups to extract a signal from the data with different techniques of analysis using different models for the exact type of echo signal. The signal could for example be damped over time, or its frequency distribution could change. The reported false alarm rate of these findings ranges from 5% to 0.002%, the latter being close to discovery level.

However, if you know anything about statistical analysis, then you know that trying out different methods of analysis and different models until you find something is not a good idea. Because if you try long enough, you will eventually find something. And in the case of black hole echoes, I suspect that most of the models that gave negative results never appeared in the literature. So the statistical significance may be misleading.

I also have to admit that as a theorist, I am not enthusiastic about black hole echoes because there are no compelling theoretical reasons to expect them. We know that quantum gravitational effects become important towards the center of the black hole. But that’s hidden deep inside the horizon and the gravitational waves we detect are not sensitive to what is going on there. That quantum gravitational effects are also relevant at the horizon is speculative and pure conjecture, and yet that’s what it takes to have black hole echoes.

But theoretical misgivings aside, we have never tested the properties of black hole horizons before, and on unexplored territory all stones should be turned. You find a summary of the current status of the search for black hole echoes in Afshordi’s most recent paper.

Wednesday, January 22, 2020

Travel and Book Update

My book “Lost in Math” has meanwhile also been translated to Hungarian and Polish. Previous translations have appeared in German, Spanish, Italian, and French, I believe. I have somewhat lost track. There should have been a Chinese and Romanian translation too, I think, but I’m not sure what happened to these. In case someone spots them, please let me know. The paperback version of the US-Edition is scheduled to appear in June.

My upcoming trips are to Cambridge, UK, for a public debate on the question “How is the scientific method doing?” (on Jan 28th) and a seminar about Superdeterminism (on Jan 29). On Feb 13, I am in Oxford (again) giving a talk about Superfluid Dark Matter (again), but this time at the physics department. On Feb 24th, I am in London for the Researcher to Reader Conference 2020.

On March 9th I am giving a colloquium at Brown University. On March 19th I am in Zurich for some kind of panel discussion, details of which I have either forgotten or never knew. On April 8, I am in Gelsenkirchen for a public lecture.

Our Superdeterminism workshop is scheduled for the first week of May (details to come soon). In mid-May I am in Copenhagen for a public lecture. In June I’ll be on Long Island for a conference on peer review organized by the APS.

The easiest way to keep track of my whatabouts and whereabouts is to follow me on Twitter or on Facebook.

Thursday, January 16, 2020

How to test quantum gravity

Today I want to talk about a topic that most physicists get wrong: How to test quantum gravity. Most physicists believe it just is not possible. But it is possible.


Einstein’s theory of general relativity tells us that gravity is due to the curvature of space and time. But this theory is strictly speaking wrong. It is wrong because according to general relativity, gravity does not have quantum properties. I told you all about this in my earlier videos. This lack of quantum behavior of gravity gives rise to mathematical inconsistencies that make no physical sense. To really make sense of gravity, we need a theory of quantum gravity. But we do not have such a theory yet. In this video, we will look at the experimental possibilities that we have to find the missing theory.

But before I do that, I want to tell you why so many physicists think that it is not possible to test quantum gravity.

The reason is that gravity is a very weak force and its quantum effects are even weaker. Gravity does not seem weak in everyday life. But that is because gravity, unlike all the other fundamental forces, does not neutralize. So, on long distances, it is the only remaining force and that’s why we notice it so prominently. But if you look at, for example, the gravitational force between an electron and a proton and the electromagnetic force between them, then the electromagnetic force is a factor 10^40 stronger.

One way to see what this means is to look at a fridge magnet. The magnetic force of that tiny thing is stronger than the gravitational pull of the whole planet.
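You can check this factor yourself; the distance between the two particles drops out, because both forces fall off with the square of the distance. Here is a small sketch using the constants shipped with scipy.

```python
from scipy.constants import G, e, epsilon_0, pi, m_e, m_p

# Ratio of the electrostatic to the gravitational force between an electron
# and a proton. The separation cancels, since both forces go as 1/r^2.
F_em_over_F_grav = e**2 / (4 * pi * epsilon_0 * G * m_e * m_p)
print(f"electromagnetic / gravitational force: {F_em_over_F_grav:.1e}")
# about 2e39, i.e. roughly the factor 10^40 quoted above
```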

Now, in most approaches to quantum gravity, the gravitational force is mediated by a particle. This particle is called the graviton, and it belongs to the gravitational force the same way that the photon belongs to the electromagnetic force. But since gravity is so much weaker than the electromagnetic force, you need ridiculously high energies to produce a measurable amount of gravitons. With the currently available technology, it would take a particle accelerator about the size of the Milky Way to reach sufficiently high energies.

And this is why most physicists think that one cannot test quantum gravity. It is testable in principle, all right, but not in practice, because one needs these ridiculously large accelerators or detectors.

However, this argument is wrong. It is wrong because one does not need to produce a quantum of a field to demonstrate that the field must be quantized. Take electromagnetism as an example. We have evidence that it must be quantized right here. Because if it was not quantized, then atoms would not be stable. Somewhat more quantitatively, the discrete energy levels of atomic spectral lines demonstrate that electromagnetism is quantized. And you do not need to detect individual photons for that.

With the quantization of gravity, it’s somewhat more difficult, but not impossible. A big advantage of gravity is that the gravitational force becomes stronger for larger systems because, recall, gravity, unlike the other forces, does not neutralize and therefore adds up. So, we can make quantum gravitational effects stronger by just taking larger masses and bringing them into quantum states, for example into a state in which the masses are in two places at once. One should then be able to tell whether the gravitational field is also in two places at once. And if one can do that, one can demonstrate that gravity has quantum behavior.

But the trouble is that quantum effects for large objects quickly fade away, or “decohere” as the physicists say. So the challenge to measuring quantum gravity comes down to producing and maintaining quantum states of heavy objects. “Heavy” here means something like a milligram. That doesn’t sound heavy, but it is very heavy compared to the masses of elementary particles.

The objects you need for such an experiment have to be heavy enough so that one can actually measure the gravitational field. There are a few experiments attempting to measure this. But presently the masses that one can bring into quantum states are not quite high enough. However, it is something that will reasonably become possible in the coming decades.

Another good chance to observe quantum gravitational effects is to use the universe as laboratory. Quantum gravitational effects should be strong right after the big bang and inside of black holes. Evidence from what happened in the early universe could still be around today, for example in the cosmic microwave background. Indeed, several groups are trying to find out whether the cosmic microwave background can be analyzed to show that gravity must have been quantized. But at least for now the signal is well below measurement precision.

With black holes, it’s more complicated, because the region where quantum gravity is strong is hidden behind the event horizon. But some computer simulations seem to show that stars can collapse without forming a horizon. In this case we could look right at the quantum gravitational effects. The challenge with this idea is to find out just how the observations would differ between a “normal” black hole and a singularity without a horizon but with quantum gravitational effects. Again, that’s a subject of current research.

And there are other options. For example, the theory of quantum gravity may violate symmetries that are respected by general relativity. Symmetry violations can show up in high-precision measurements at low energies, even if they are very small. This is something that one can look for with particle decays or particle interactions and indeed something that various experimental groups are looking for.

There are several other ways to test quantum gravity, but these are more speculative in that they look for properties that a theory of quantum gravity may not have.

For example, the way in which gravitational waves are emitted in a black hole merger is different if the black hole horizon has quantum effects. However, this may just not be the case. The same goes for the idea that space itself may have the properties of a medium and give rise to dispersion, which means that light of different colors travels at different speeds, or that it may have viscosity. Again, this is something that one can look for, and that physicists are looking for. It’s not our best shot though, because quantum gravity may not give rise to these effects.

In any case, as you can see, clearly it is possible to test quantum gravity. Indeed I think it is possible that we will see experimental evidence for quantum gravity in the next 20 years, most likely by the type of test that I talked about first, with the massive quantum objects.

Wednesday, January 08, 2020

Update January 2020

A quick update on some topics that I previously told you about.


Remember I explained the issue with the missing electromagnetic counterparts to gravitational wave detections? In a recent paper, a group of physicists from Russia claimed to have evidence for a gamma-ray event coincident with the gravitational wave signal from a binary neutron star merger. They say they found it in the data from the INTEGRAL satellite mission.

Their analysis was swiftly criticized informally by other experts in the field, but so far there is no formal correspondence about this. So the current status is that we are still missing confirmation that the LIGO and Virgo gravitational wave interferometers indeed detect signals from outer space.

So much for gravitational waves. There is also news about dark energy. Last month I told you that a new analysis of the supernova data showed they can be explained without dark energy. The supernova data, to remind you, are the major evidence that physicists have for dark energy. And if that evidence does not hold up, that’s a big deal, because the discovery of dark energy was awarded a Nobel Prize in 2011.

However, that new analysis of the supernova data was swiftly criticized by another group. This criticism, to be honest, did not make much sense to me, because it picked on the choice of coordinate system, which was basically the whole point of the original analysis. In any case, the authors of the original paper then debunked the criticism. And that is still the status today.

Quanta Magazine was happy to quote a couple of astrophysicists saying that the evidence for dark energy from supernovae is sound without giving further reasons.

Unfortunately, this happens very commonly. Someone, or a group, goes and challenges a widely accepted result. Then someone else criticizes the new work. So far, so good. But after this, what frequently happens is that everybody else, scientists as well as the popular science press, will simply quote the criticism as having settled the matter, so that they do not have to think about the problem themselves. I do not know, but I am afraid that this is what’s going on.

I was about to tell you more about this, but something better came to my mind. The lead author of the supernova paper, Subir Sarkar, is based in Oxford, and I will be visiting Oxford next month. So I asked if he would be up for an interview, and he kindly agreed. So you will have him explain his work himself.

Speaking of supernovae: There was another paper just a few days ago that claimed that supernovae are actually not very good standard candles, and that indeed their luminosity might depend on the average age of the star that goes supernova.

Now, if you look at more distant supernovae, the light has had to travel for a long time to reach us, which means they are on the average younger. So, if younger stars that go bang have a different luminosity than older ones, that introduces a bias in the analysis that can mimic the effect of dark energy. Indeed, the authors of that new paper also claim that one does not need dark energy to explain the observations.

This gives me somewhat of a headache, because these are two different reasons for why dark energy might not exist. Which raises the question of what happens if you combine them. Maybe that makes the expansion too slow? Also, I said this before, but let me emphasize again that the supernova data are not the only evidence for dark energy. Someone’s got to do a global fit of all the available data before we can draw conclusions.

One final point for today, the well-known particle physicist Mikhail Shifman has an article on the arXiv that could best be called an opinion piece. It is titled “Musings on the current status of high energy physics”. In this article he writes “Low energy-supersymmetry is ruled out, and gone with it is the concept of naturalness, a basic principle which theorists cherished and followed for decades.” And in a footnote he adds “By the way, this principle has never been substantiated by arguments other than aesthetical.”

This is entirely correct and one of the main topics in my book “Lost in Math”. Naturalness, to remind you, was the main reason so many physicists thought that the Large Hadron Collider should see new particles besides the Higgs boson. Which has not happened. The principle of naturalness is now pretty much dead because it’s just in conflict with observation.

However, the particle physics community has still not analyzed how it could possibly be that such a large group of people for such a long time based their research on an argument that was so obviously non-scientific. Something has seriously gone wrong here and if we do not understand what, it can happen again.

Friday, January 03, 2020

The Real Butterfly Effect

If a butterfly flaps its wings in China today, it may cause a tornado in America next week. Most of you will be familiar with this “Butterfly Effect” that is frequently used to illustrate a typical behavior of chaotic systems: Even the smallest disturbances can grow and have big consequences.


The name “Butterfly Effect” was popularized by James Gleick in his 1987 book “Chaos” and is usually attributed to the meteorologist Edward Lorenz. But I recently learned that this is not what Lorenz actually meant by Butterfly Effect.

I learned this from a paper by Tim Palmer, Andreas Döring, and Gregory Seregin called “The Real Butterfly Effect” and that led me to dig up Lorenz’ original paper from 1969.

Lorenz, in this paper, does not write about butterfly wings. He instead refers to a sea gull’s wings, but then attributes that to a meteorologist whose name he can’t recall. The reference to a butterfly seems to have come from a talk that Lorenz gave in 1972, which was titled “Does the Flap of a Butterfly’s Wings in Brazil set off a Tornado in Texas?”

The title of this talk was actually suggested by the session chair, a meteorologist by the name of Phil Merilees. In any case, it was the butterfly that stuck instead of the sea gull. And what was the butterfly talk about? It was a summary of Lorenz’s 1969 paper. So what’s in that paper?

In that paper, Lorenz made a much stronger claim than that a chaotic system is sensitive to the initial conditions. The usual butterfly effect says that any small inaccuracy in the knowledge that you have about the initial state of the system will eventually blow up and make a large difference. But if you did precisely know the initial state, then you could precisely predict the outcome, and if only you had good enough data you could make predictions as far ahead as you like. It’s chaos, alright, but it’s still deterministic.
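For contrast, here is the usual butterfly effect in action. This is my own toy illustration with the famous Lorenz-1963 equations and the standard textbook parameter values, not anything from the 1969 paper discussed here.

```python
# A minimal sketch of the *usual* butterfly effect (my own illustration,
# using the classic Lorenz-1963 toy model). Two trajectories that start
# a hair's breadth apart end up completely different, even though the
# equations are fully deterministic.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def evolve(s, dt=0.001, steps=40000):
    # plain fourth-order Runge-Kutta; good enough for illustration
    for _ in range(steps):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2)
        k4 = lorenz(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s

a = evolve(np.array([1.0, 1.0, 1.0]))
b = evolve(np.array([1.0, 1.0, 1.0 + 1e-9]))  # perturbed by one part in a billion
print(np.linalg.norm(a - b))  # order-ten separation: the tiny error has blown up
```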

Now, in the 1969 paper, Lorenz looks at a system that has an even worse behavior. He talks about weather, so the system he considers is the Earth, but that doesn’t really matter, it could be anything. He says, let us divide up the system into pieces of equal size. In each piece we put a detector that makes a measurement of some quantity. That quantity is what you need as input to make a prediction. Say, air pressure and temperature. He further assumes that these measurements are arbitrarily accurate. Clearly unrealistic, but that’s just to make a point.

How well can you make predictions using the data from your measurements? You have data on that finite grid. But that does not mean you can generally make a good prediction on the scale of that grid, because errors will creep into your prediction from scales smaller than the grid. You expect that to happen of course because that’s chaos; the non-linearity couples all the different scales together and the error on the small scales doesn’t stay on the small scales.

But you can try to combat this error by making the grid smaller and putting in more measurement devices. For example, Lorenz says, if you have a typical grid of some thousand kilometers, you can make a prediction that’s good for, say, 5 days. After these 5 days, the errors from smaller distances screw you up. So then you go and decrease your grid length by a factor of two.

Now you have many more measurements and much more data. But, and here comes the important point: Lorenz says this may only increase the time for which you can make a good prediction by half of the original time. So now you have 5 days plus 2 and a half days. Then you can go and make your grid finer again. And again you will gain half of the time. So now you have 5 days plus 2 and half plus 1 and a quarter. And so on.

Most of you will know that if you sum up this series all the way to infinity it will converge to a finite value, in this case that’s 10 days. This means that even if you have an arbitrarily fine grid and you know the initial condition precisely, you will only be able to make predictions for a finite amount of time.
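For the record, this is just a geometric series, and one can write the sum out explicitly:

$$T_{\text{max}} = 5\ \text{days}\times\left(1 + \tfrac{1}{2} + \tfrac{1}{4} + \dots\right) = 5\ \text{days}\times\sum_{n=0}^{\infty}\left(\tfrac{1}{2}\right)^{n} = \frac{5\ \text{days}}{1-\tfrac{1}{2}} = 10\ \text{days}.$$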

And this is the real butterfly effect: that a chaotic system may be deterministic and yet still not be predictable beyond a finite amount of time.

This of course raises the question whether there actually is any system that has such properties. There are differential equations that have such behavior. But whether the real butterfly effect occurs for any equation that describes nature is unclear. The Navier-Stokes equation, which Lorenz was talking about, may or may not suffer from the “real” butterfly effect. No one knows. This is presently one of the big unsolved problems in mathematics.

However, the Navier-Stokes equation, and really any other equation for macroscopic systems, is strictly speaking only an approximation. On the most fundamental level it’s all particle physics and, ultimately, quantum mechanics. And the equations of quantum mechanics do not have butterfly effects because they are linear. Then again, no one would use quantum mechanics to predict the weather, so that’s a rather theoretical answer.

The brief summary is that even in a deterministic system predictions may only be possible for a finite amount of time and that is what Lorenz really meant by “Butterfly Effect.”

Friday, December 27, 2019

How did the universe begin?

The year is almost over and a new one is about to begin. So today I want to talk about the beginning of everything, the whole universe. What do scientists think about how it all started?


We know that the universe expands, and as the universe expands, the matter and energy in it dilute. So when the universe was younger, matter and energy were much denser. Because they were denser, the temperature was higher. And a higher temperature means that, on average, particles collided at higher energies.

Now you can ask, what do we know about particles colliding at high energies? Well, the highest collision energies between particles that we have experimentally tested are those produced at the Large Hadron Collider. These are energies of about a tera-electronvolt, or TeV for short, which, if you convert it into a temperature, comes out to be 10^16 Kelvin. In words that’s ten million billion Kelvin, which sounds awkward and is the reason no one quotes such temperatures in Kelvin.
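If you want to check the conversion yourself, here is a quick sketch (my own, using the standard value of the Boltzmann constant):

```python
# Converting a collision energy into a temperature, T = E / k_B.
# A quick check (my own) of the number quoted above.
E_eV = 1.0e12            # 1 TeV in electron volts
k_B  = 8.617333e-5       # Boltzmann constant in eV per Kelvin

T = E_eV / k_B
print(f"{T:.1e} K")      # ~1.2e16 K, i.e. about 10^16 Kelvin
```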

So, up to a temperature of about a TeV, we understand the physics of the early universe and we can reliably tell what happened. Before that, we have only speculation.

The simplest way to speculate about the early universe is just to extrapolate the known theories back to even higher temperatures, assuming that the theories do not change. What happens then is that you eventually reach energy densities so high that the quantum fluctuations of space and time become relevant. To calculate what happens then, we would need a theory of quantum gravity, which we do not have. So, in brief, the scientific answer is that we have no idea how the universe began.

But that’s a boring answer and one you cannot publish, so it’s not how the currently most popular theories for the beginning of the universe work. The currently most popular theories assume that the electromagnetic interaction must have been unified with the strong and the weak nuclear force at high energies. They also assume that an additional field exists, which is the so-called inflaton field.

The purpose of the inflaton is to cause the universe to expand very rapidly early on, in a period which is called “inflation”. The inflaton field then has to create all the other matter in the universe and basically disappear because we don’t see it today. In these theories, our universe was born from a quantum fluctuation of the inflaton field and this birth event is called the “Big Bang”.

Actually, if you believe this idea, the quantum fluctuations still go on outside of our universe, so there are constantly other universes being created.

How scientific is this idea? Well, we have zero evidence that the forces were ever unified and have equally good evidence, namely none, that the inflaton field exists. The idea that the early universe underwent a phase of rapid expansion fits to some data, but the evidence is not overwhelming, and in any case, what the cause of this rapid expansion would have been – an inflaton field or something else – the data don’t tell us.

So, that the universe began from a quantum fluctuation is one story. Another story has it that the universe was not born once but is born over and over again, in what is called a “cyclic” model. In cyclic models, the Big Bang is replaced by an infinite sequence of Big Bounces.

There are several types of cyclic models. One is called the Ekpyrotic Universe. The idea of the Ekpyrotic Universe was originally borrowed from string theory and had it that higher-dimensional membranes collided and our universe was created from that collision.

Another idea of a cyclic universe is due to Roger Penrose and is called Conformal Cyclic Cosmology. Penrose’s idea is basically that when the universe gets very old, it loses all sense of scale, so really there is no point in distinguishing the large from the small anymore, and you can then glue together the end of one universe with the beginning of a new one.

Yet another theory has it that new universes are born inside black holes. You can speculate about this because no one has any idea what goes on inside black holes anyway.

An idea that sounds similar but is actually very different is that the universe started from a black hole in 4 dimensions of space. This is a speculation that was put forward by Niayesh Afshordi some years ago.

Then there is the possibility that the universe didn’t really “begin” but that before a certain time there was only space without any time. This is called the “no-boundary proposal” and it goes back to Jim Hartle and Stephen Hawking. A very similar disappearance of time was more recently found in calculations based on loop quantum cosmology where the researchers referred to it as “Asymptotic Silence”.

Then we have String Gas Cosmology, in which the early universe lingered in an almost steady state for an infinite amount of time before beginning to expand, and then there is the so-called Unicorn Cosmology, according to which our universe grew out of unicorn shit. Nah, I made this one up.

So, as you see, physicists have many ideas about how the universe began. The trouble is that not a single one of those ideas is backed up by evidence. And they may never be backed up by evidence, because the further back in time you try to look, the fewer data we have. While some of those speculations for the early universe result in predictions, confirming those predictions would not allow us to conclude that the theory must have been correct because there are many different theories that could give rise to the same prediction.

This is a way in which our scientific endeavors are fundamentally limited. Physicists may simply have produced a lot of mathematical stories about how it all began, but these aren’t any better than traditional tales of creation.

Friday, December 20, 2019

What does a theoretical physicist do?

This week, I am on vacation and so I want to answer a question that I get a lot but that doesn’t really fit into the usual program: What does a theoretical physicist do? Do you sit around all day and dream up new particles or fantasize about the beginning of the universe? How does it work?


Research in theoretical physics generally does one of two things: Either we have some data that require explanation for which a theory must be developed. Or we have a theory that requires improvement, and the improved theory leads to a prediction which is then experimentally tested.

I have noticed that some people think theoretical physics is something special to the foundations of physics. But that isn’t so. All subdisciplines of physics have an experimental part and a theoretical part. How much the labor is divided among different groups of people depends strongly on the field. In some parts of astrophysics, for example, data collection, analysis, and theory development are done by pretty much the same people. That’s also the case in some parts of condensed matter physics. In these areas many experimentalists are also theorists. But if you look at fields like cosmology or high energy particle physics, people tend to specialize either in experiment or in theory development.

Theoretical physics is pretty much a job like any other in that you get an education and then you put your knowledge to work. You find theoretical physicists in public higher education institutions, which is probably what you are most familiar with, but you also find them in industry or in non-profit research institutions like the one I work at. Just what the job entails depends on the employer. Besides the research, a theoretical physicist may have administrative duties, or may teach, mentor students, do public outreach, organize scientific meetings, sit on committees, and so on.

When it comes to the research itself, theoretical physics doesn’t work any differently from other disciplines of science. The largest part of research, ninety-nine percent, is learning what other people have done. This means you read books and papers, go to seminars, attend conferences, listen to lectures, and you talk to people until you understand what they have done.

And as you do that, you probably come across some open problems. And from those you pick one for your own research. You would pick a problem that, well, you are interested in, but also something that you think will move the field forward and, importantly, you pick a problem that you think you have a reasonable chance of solving with what you know. Picking a research topic that is both interesting and feasible is not easy and requires quite some familiarity with the literature, which is why younger researchers usually rely on more senior colleagues to pick a topic.

Where theoretical physics is special is in the amount of mathematics that we use in our research. In physics all theories are mathematical. This means both that you must know how to model a natural system with mathematics and you must know how to do calculations within that model. Of course we now do a lot of calculations numerically, on a computer, but you still have to understand the mathematics that goes into this. There is really no way around it. So that’s the heart of the job, you have to find, understand, and use the right mathematics to describe nature.

The thing that a lot of people don’t understand is just how constraining mathematics is in theory development. You cannot just dream up a particle, because almost everything that you can think of will not work once you write down the mathematics. It’s either just nonsense, or you quickly find that it is already in conflict with observation.

But the job of a theoretical physicist is not done with finishing a calculation. Once you have your results, you have to write them up and publish them and then you will give lectures about it so that other people can understand what you have done and hopefully build on your work.

What’s fascinating about theoretical physics is just how remarkably well mathematics describes nature. I am always surprised if people tell me that they never understood physics because I would say that physics is the only thing you can really understand. It’s the rest of the world that doesn’t make sense to me.

Monday, December 16, 2019

The path we didn’t take


“There are only three people in the world who understand Superdeterminism,” I used to joke, “Me, Gerard ‘t Hooft, and a guy whose name I can’t remember.” In all honesty, I added the third person just in case someone would be offended I hadn’t heard of them.

What the heck is Superdeterminism?, you ask. Superdeterminism is what it takes to solve the measurement problem of quantum mechanics. And not only this. I have become increasingly convinced that our failure to solve the measurement problem is what prevents us from making progress in the foundations of physics overall. Without understanding quantum mechanics, we will not understand quantum field theory, and we will not manage to quantize gravity. And without progress in the foundations of physics, we are left to squeeze incrementally better applications out of the already known theories.

The more I’ve been thinking about this, the more it seems to me that quantum measurement is the mother of all problems. And the more I am talking about what I have been thinking about, the crazier I sound. I’m not even surprised no one wants to hear what I think is the obvious solution: Superdeterminism! No one besides ‘t Hooft, that is. And that no one listens to ‘t Hooft, despite him being a Nobel laureate, doesn’t exactly make me feel optimistic about my prospects of getting someone to listen to me.

The big problem with Superdeterminism is that the few people who know what it is seem to have never thought about it much, and now they are stuck on the myth that it’s an unscientific “conspiracy theory”. Superdeterminism, so their story goes, is the last resort of the dinosaurs who still believe in hidden variables. According to these arguments, Superdeterminism requires encoding the outcome of every quantum measurement in the initial data of the universe, which is clearly outrageous. Not only that, it deprives humans of free will, which is entirely unacceptable.

If you have followed this blog for some while, you have seen me fending off this crowd that someone once aptly described to me as “Bell’s Apostles”. Bell himself, you see, already disliked Superdeterminism. And the Master cannot err, so it must be me who is erring. Me and ‘t Hooft. And that third person whose name I keep forgetting.

Last time I made my 3-people-joke was in February during a Facebook discussion about the foundations of quantum mechanics. On this occasion, someone offered in response the name “Tim Palmer?” Alas, the only Tim Palmer I’d heard of is a British music producer from whose videos I learned a few things about audio mixing. Seemed like an unlikely match.

But the initial conditions of the universe had a surprise in store for me.

The day of that Facebook comment I was in London for a dinner discussion on Artificial Intelligence. How I came to be invited to this event is a mystery to me. When the email came, I googled the sender, who turned out to be not only the President of the Royal Society of London but also a Nobel Prize winner. Thinking this must be a mistake, I didn’t reply. A few weeks later, I tried to politely decline, pointing out, I paraphrase, that my knowledge about Artificial Intelligence is pretty much exhausted by it being commonly abbreviated AI. In return, however, I was assured no special expertise was expected of me. And so I thought, well, free trip to London, dinner included. Would you have said no?

When I closed my laptop that evening and got on the way to the AI meeting, I was still wondering about the superdeterministic Palmer. Maybe there was a third person after all? The question was still circling in my head when the guy seated next to me introduced himself as... Tim Palmer.

Imagine my befuddlement.

This Tim Palmer, however, talked a lot about clouds, so I filed him under “weather and climate.” Then I updated my priors for British men to be called Tim Palmer. Clearly a more common name than I had anticipated.

But the dinner finished and our group broke up and, as we walked out, the weather-Palmer began talking about free will! You’d think it would have dawned on me then I’d stumbled over the third Superdeterminist. However, I was merely thinking I’d had too much wine. Also, I was now somewhere in London in the middle of the night, alone with a man who wanted to talk to me about free will. I excused myself and left him standing in the street.

But Tim Palmer turned out to not only be a climate physicist with an interest in the foundations of quantum mechanics, he also turned out to be remarkably persistent. He wasn’t remotely deterred by my evident lack of interest. Indeed, I later noticed he had sent me an email already two years earlier. Just that I dumped it unceremoniously in my crackpot folder. Worse, I seem to vaguely recall telling my husband that even the climate people now have ideas for how to revolutionize quantum mechanics, hahaha.

Cough.

Tim, in return, couldn’t possibly have known I was working on Superdeterminism. In February, I had just been awarded a small grant from the Fetzer Franklin Fund to hire a postdoc to work on the topic, but the details weren’t public information.

Indeed, Tim and I didn’t figure out we have a common interest until I interviewed him on a paper he had written about something entirely different, namely how to quantify the uncertainty of climate models.

I’d rather not quote cranks, so I usually spend some time digging up information about people before interviewing them. That’s when I finally realized that Tim had been writing about Superdeterminism when I was still in high school, long before even ‘t Hooft got into the game. Even more interestingly, he wrote his PhD thesis in the 1970s about general relativity, before gracefully deciding that working with Stephen Hawking would not be a good investment of his time (a story you can hear here at 1:12:15). Even I was awed by that amount of foresight.

Tim and I then spent some months accusing each other of not really understanding how Superdeterminism works. In the end, we found we agree on more points than not and wrote a paper to explain what Superdeterminism is and why the objections often raised against it are ill-founded. Today, this paper is on the arXiv.


Thanks to support from the Fetzer Franklin Fund, we are also in the midst of organizing a workshop on Superdeterminism and Retrocausality. So this isn’t the end of the story, it’s the beginning.