Reasons to Panic about the Hierarchy Problem

This is intended to be kind of a sequel to one of my previous posts, which attempted to convey the vibes surrounding renormalization: the systematic ignorance of physics at small scales.

If you read the thing, you may recall that I justified renormalization with the argument that different scales of physics mostly don't affect each other. Galileo's pendulum wasn't affected by quantum mechanics or the gravitational pull of Jupiter.

There is an outstanding problem in particle physics at the moment that, if not resolved, may send that whole philosophy down the toilet. The problem has been around for a while, but it has got a lot worse in the last two or three years, sending particle physics into a bit of a crisis.

I speak of the hierarchy problem. Buckle your seatbelts and all that.

We Need to Talk About Mass

The hierarchy problem has its origin in interpreting the mass of the recently discovered Higgs boson. To get down to what the problem is about, we have to first think about mass more generally.

If you only know one equation from physics, it’s probably E=mc2. This says that energy and mass are basically the same thing, just in different forms. An object of mass m at rest contains energy equal to mc2, where c is the speed of light. That means that in principle you can extract mc2 worth of energy from the object.

The total energy of an object is the energy contained in its mass plus the energy associated with its motion, i.e., its kinetic energy. When the object is at rest it has no kinetic energy, so all of its energy can be associated with its mass. Flipping this argument on its head, you can say that the energy E inside an object at rest tells you its mass m, via m=E/c2.
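If you fancy putting numbers on this, here is a tiny Python sketch of the rest-energy formula; the 1 kg mass is just an illustrative choice:

# Rest energy from E = mc2, and mass from m = E/c2.
c = 2.998e8                      # speed of light in metres per second

mass_kg = 1.0                    # an illustrative 1 kg object
rest_energy = mass_kg * c**2     # energy locked up in its mass, in joules
print(f"E = {rest_energy:.2e} J")          # roughly 9.0e16 J

# Flipping it around: the energy at rest tells you the mass.
print(f"m = {rest_energy / c**2:.2f} kg")  # recovers 1.00 kg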

This may seem like an obvious and redundant thing to say, but consider the following example. A proton, one of the constituents of an atomic nucleus, is not simply a single particle but can be thought of as three smaller particles (called quarks) bound together. The quarks are in general wobbling around, moving in relation to each other, so they contain some kinetic energy. Quantum field theory tells us that the quarks interact by emitting and absorbing other particles called gluons, which are very similar to photons. Gluons can spontaneously create new quark-antiquark pairs, which disappear again an instant later. The motion of all these extra particles contributes to the overall energy of the proton.

Since the mass is given by the total energy it contains when it’s at rest, it includes all of this extra energy due to the motion and interactions. As a consequence, the mass of the proton is larger than just the sum of the quark masses. In fact, the quark masses only account for around 1% of the total proton mass!
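To get a feel for how extreme this is, here is a rough back-of-the-envelope check in Python; the quark and proton masses are ballpark textbook values, so treat the exact percentage loosely:

# The proton is (roughly) two up quarks and one down quark bound together.
# Masses below are approximate, in MeV/c2.
m_up = 2.2
m_down = 4.7
m_proton = 938.3

quark_sum = 2 * m_up + m_down    # the bare quark masses alone
print(f"quarks alone: {quark_sum:.1f} MeV/c2")
print(f"fraction of proton mass: {quark_sum / m_proton:.1%}")
# -> around 1%; the rest is energy of motion and interactions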


Fig. 1: The inner workings of a proton. The mass of the proton is given by all of the energy enclosed by the dotted line (divided by c2)

A similar effect occurs for individual particles. Namely, working out the mass of the Higgs boson requires an analogous consideration.

The Higgs can both emit and absorb many different types of particle, including quarks, electrons, you name it. It could emit a quark, which exists for a tiny period of time, then absorb it again before it gets the chance to go anywhere. The result is that the Higgs is covered with a cloud of extra particles popping in and out of existence. The mass and motion of these particles all contribute to the overall energy of the Higgs, thereby enhancing its mass.


Fig. 2: The Higgs, dressed with emissions. The effective Higgs mass is given by all the energy enclosed in the dotted line (divided by c2)

Similar things occur for other particles, like electrons, but not to the extent that they do for the Higgs. To get into the reasons for this difference requires some deep discussions about symmetries in particle physics, a subject I should really do a post about at some point. But I won’t go into it here.

From this point of view the Higgs really has two masses: the apparent mass m, which is measured in experiments, and the bare mass m0, the mass the Higgs would have if it weren’t coated in emissions.

m0 is the more fundamental of the two, a parameter of the underlying theory. However, only m is accessible by experiment. How can one deduce m0 from only knowing m? If we define E to be the energy contained in the emissions, the extra mass it gives to the Higgs will be E/c2. Then we can write:

m = m0 + E/c2

But how do we work out E? We can make an approximation according to the following argument.

Just like in the Feynman diagrams in the previous article, the cloud of particles surrounding the Higgs can have any momentum, so the energy gets contributions from emissions with all possible momenta. But recall that, in order to make sure probabilities can’t become infinite, we need to restrict particles from having momentum above Λ. This corresponds to ignoring scales below 1/Λ. So we only need to consider emissions with momentum up to Λ. Most of the bonus energy in this case comes from the most energetic possible particles, the ones with momentum Λ. Assuming this to be large, we can say that most of their energy is kinetic, and ignore the energy due to their masses. The kinetic energy of the most energetic allowed particles is then roughly Λ, leading to an overall bonus energy for the Higgs of roughly Λ. So we end up with

m ≈ m0 + Λ/c2

This equation is where the hierarchy problem comes from.

One in a Hundred Million Billion Trillion Trillion

Imagine the scene. We’ve measured the mass of the Higgs, m, to be the famous number 125 GeV (GeV is just a unit of mass particle physicists use). Looking at the above equation, you can see that if we decide to set Λ at some value, we then have to tune the value of m0 in order to produce the observed 125 GeV for m. Λ effectively dictates what theory we’re using to model reality, and each theory has a different value for m0.
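As a minimal sketch of that trade-off, here is the rearrangement in Python; the bonus mass below is completely made up, just to show the shape of the relationship:

# Once a cutoff is chosen, the bonus mass Λ/c2 is fixed, and m0 is forced:
# m = m0 + bonus  =>  m0 = m - bonus
m_observed = 125.0          # GeV, the measured Higgs mass
bonus = 1.0e10              # GeV, an illustrative (not physical) Λ/c2
m0 = m_observed - bonus     # the bare mass this choice of theory requires
print(m0)                   # -9999999875.0: huge and negative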

What are the possibilities for choosing Λ? Λ is meant to be chosen to cut out effects at scales where we don’t know what’s going on, so we can choose Λ such that 1/Λ is anything down to scales where “new physics” appears.

What if there were no new physics at all, and our current model were valid at all scales? Then we could take 1/Λ to be the theoretically smallest possible length: the Planck length LP. In this case, we have Λ = 1/LP, leading to a new equation:

m ≈ m0 + 1/LPc2

LP is a very very small number, the smallest possible length. As a result, this new bonus mass 1/LPc2 is a fucking huge number; in fact, it’s a hundred-million-billion-trillion-trillion times larger than m. That’s not just a generically big-sounding number, it’s literally how much bigger it is.

For this theory to be consistent with the observed Higgs mass m, m0 needs to be a number which, when added to this huge number 1/LPc2, results in m. So, firstly, m0 needs to be negative (this isn’t a huge problem, since m0 isn’t an observable mass; only masses that you can observe strictly need to be positive). Its magnitude also needs to be almost exactly the same as 1/LPc2, so that the two cancel precisely enough to leave the much smaller number m.

Imagine you changed the 33rd digit of m0, in other words, the number was shifted up by a hundred-million-billion-trillion-trillionth of its size. The value of m would go from 125 GeV to double that size, a huge change. If you increased m0 in just its 3rd digit, m would still become a million billion trillion trillion times bigger. And so on. This is referred to as the fine-tuning of m0.
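To see just how delicate this cancellation is, here is a toy numerical demonstration using Python’s decimal module. The bonus mass below is an arbitrary huge number, not the real Planck-scale value; the point is only the sensitivity:

from decimal import Decimal, getcontext

getcontext().prec = 60                  # enough digits to track the cancellation

bonus = Decimal("1e36")                 # an arbitrary huge bonus mass, in GeV
m_target = Decimal("125")               # the observed Higgs mass, in GeV
m0 = m_target - bonus                   # the bare mass has to be tuned to this

print(m0 + bonus)                       # 125: the cancellation works

# Now shift m0 up by just one part in 1e34 of its size...
m0_nudged = m0 * (1 - Decimal("1e-34"))
print(m0_nudged + bonus)                # jumps to roughly 225 GeV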


Fig. 3: The above equation visualized. The towers Λ/c2 and m0 need to match up almost exactly in order to produce the small m. Not to scale.

The universe would be radically different if that value of m0 were changed even a tiny bit. The Higgs particle is what gives mass to all the other particles, and the mass of all the other particles is decided in part by the Higgs mass. If m were billions of times larger, all the other particles would become billions of times heavier too. We certainly couldn’t have stars, planets and all that; the universe would be too busy collapsing in on itself. It seems like, to generate a universe remotely like the one we live in, nature needs to decide on a parameter m0 that is highly tuned to 33 digits.

This disturbs a lot of people because it is very unnatural. It seems like an incredible coincidence that m0 ended up with the exact value it did, the exact value needed for a universe where stars could burn, planets could form and life could frolic. It’s a bit like saying someone dropped a pencil and it landed on its point and stayed there, perfectly balanced. Except in this case, to get the same degree of coincidence, the pencil would have to be as long as the solar system and have a millimetre-wide tip [source].

This is concerning by itself, but its consequences go further. It represents a breakdown of our assumption that physics at different scales is mostly independent. m0 is a parameter of the theory which describes physics down to the scale of LP, so it includes whatever physics is happening at the Planck length. In that case, m0 is, in a sense, decided by whatever is happening at the Planck length. Physics at large scales seems to be incredibly strongly dependent on m0, which comes from the Planck scale.

Before, we thought that physics at very small scales shouldn’t strongly affect physics at larger scales, but this changes all that. Is renormalization valid if this is the case?

Supersymmetry to the Rescue

In constructing the hierarchy problem above, we made an assumption that our current theory of particle physics is valid all the way down to the Planck length. This may be true, but it may not be. There may be new unknown laws of physics that appear as you go down to smaller scales, before you get anywhere near the Planck length.

If we assume some new physics appears at a new length scale we’ll call LN, then our current theory is only valid at scales larger than this, and can only contain particles of momentum smaller than 1/LN. This changes the bonus Higgs mass, turning the above equation into:

m ≈ m0 + 1/LNc2

If the scale LN is much bigger than the Planck length LP, then 1/LNc2 is much smaller, and m0 requires less fine-tuning.

Still, if 1/LNc2 is only a million times the size of m instead of a hundred-million-billion-trillion-trillion times, m0 needs to be tuned to an accuracy of a millionth of its size… What we really need to solve this problem is some new laws of physics to appear at scales very close to the ones we’ve already probed.

It is for this reason that a popular candidate theory of smaller scales, supersymmetry, is hoped to become apparent at length scales not much smaller than those we’ve already tested. This would solve our problem, as 1/LNc2 would end up being roughly the same size as m.

Since the LHC at CERN started bashing together protons at higher momenta than ever before, we’ve been keeping an eye out for signs of supersymmetry. We’ve now searched at length scales quite a lot smaller than where we discovered the Higgs. Unfortunately, no evidence of supersymmetry’s existence has appeared. With every year of experiments that passes without supersymmetry being found, the possible scale where it appears gets pushed smaller and smaller, making LN smaller and smaller. The smaller LN gets, the more fine-tuned m0 needs to be.

People are starting to worry. Even if supersymmetry is found tomorrow, it looks like it’ll only become important at scales where 1/LNc2 is a hundred times the size of the Higgs mass. So a tuning of one part in a hundred… Is that already too much of a coincidence? The further up the energy scale we have to go to find supersymmetry, the less power it has to resolve the issue.
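As a rough sketch of how the required tuning tracks the scale of new physics, here is a short Python loop using the two ratios mentioned above, a hundred and a million, purely for illustration:

# If the bonus mass 1/LNc2 is X times the observed mass m, then m0 has to
# cancel it to roughly one part in X.
m = 125.0                        # GeV, the observed Higgs mass
for X in (1e2, 1e6):             # ratios taken from the discussion above
    bonus = X * m
    m0 = m - bonus               # the bare mass needed to reproduce m
    print(f"bonus = {X:.0e} x m  ->  cancellation to ~1 part in {abs(m0) / m:.0e}")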

The hierarchy problem is one of the biggest driving forces in particle physics research today, giving hints that there is more physics to be found at scales close to us. If supersymmetry is not found at the LHC, we’re going to have to do a proper re-think about our philosophy of renormalization. Could there be something wrong with our understanding of scales? And could the stars, planets and life really exist by virtue of a massive coincidence?

More on the proton mass

More on naturalness

2 thoughts on “Reasons to Panic about the Hierarchy Problem

  1. Who is the author of this article: Reasons to Panic about the Hierarchy Problem? How do I reference it?
    Thanks

    Like

    1. Hi Sadeg,
      if you want to reference it, feel free to just use the url for the main page of the blog. The name is Euan McLean if you want to include that too.
      cheers

      Like

Leave a comment