Hitchhiker’s guide to an infinity-free theory

Quantum field theory is the theoretical framework of particle physics. Without it, we never could have worked out what an atom is made of, understood the forces that govern its contents, or predicted the Higgs boson.

But when it was first being established in the first half of the 20th century, it ran into an apparently fatal flaw: it was plagued with infinities. And infinities don’t belong in physics. Following the rules of quantum field theory, you could end up predicting an electron with an infinite electric charge. Gasp. Resolving this problem led to a revolutionary way of thinking that now underpins all of particle physics.

I may go into a little bit of maths, but don’t worry, it’s all easy. Promise.

Infinitely Probable Events

Science is about making predictions, given initial conditions.

If our system is in state A at time 1, what is the probability of it being in state B at time 2?

In particle physics, we read “system” to mean the universe at its most bare-bones fundamental level. The question becomes the following:

At time 1, there exists a given set of particles, each with a particular momentum. What is the probability of a new set of particles, each with a new particular momentum, at time 2?

Quantum field theory is intended to be the machinery one can use to answer such a question. A nice simple challenge we can give it is this: given two electrons, hurtling towards each other at momenta p1 and p2, what is the likelihood of them ricocheting off each other and coming out of the collision with the new momenta q1 and q2?

Fig. 1: Feynman diagram of two electrons exchanging a photon

The easiest (most likely) way for this to happen is shown in fig. 1; this thing is called a Feynman diagram. Electron 1 emits a photon (the particle responsible for the electromagnetic force). This flies over to electron 2 with momentum k, and gets absorbed. We can use the principle of conservation of momentum to uniquely determine k. The principle states that total momentum must be the same at the beginning and end of all events. Applying this to electron 1 emitting the photon, initial momentum = final momentum implies p1 = q1 + k. Then, rearranging gets us to k = p1 – q1. Since we’re given p1 and q1, we can use this equation to work out exactly what k will be.
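If you’d like to see that with concrete numbers, here’s a minimal sketch; the momenta are invented toy values, treated as simple three-component vectors, nothing more.

```python
# Toy illustration of momentum conservation at the emission vertex.
# The numbers are made up purely for illustration.
p1 = (5.0, 0.0, 0.0)   # electron 1 before the collision
q1 = (3.0, 1.0, 0.0)   # electron 1 after the collision

# k = p1 - q1: the photon's momentum is fixed completely by p1 and q1.
k = tuple(p - q for p, q in zip(p1, q1))
print(k)  # (2.0, -1.0, 0.0) -- one unique answer, nothing left to choose
```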

Quantum field theory can be used to work out the probability of each individual part of the Feynman diagram: electron 1 emitting the photon, the photon travelling from electron 1 to 2 with momentum k, and electron 2 absorbing it. This produces the so-called Feynman rules, a translation between parts of the diagram and the probabilities of each part taking place. The probability of the entire event can be found by just multiplying the probabilities of each component event. The probability of the photon emission, multiplied by the probability of its travel to electron 2, multiplied by the probability of its absorption, gets you the overall probability. Nobel prizes all ’round.
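In the same simplified spirit (treating each piece of the diagram as a probability), the bookkeeping looks something like the sketch below; the numerical factors are invented placeholders, not the values the real Feynman rules would give.

```python
# Toy bookkeeping for fig. 1, with made-up numbers standing in for the
# factors the real Feynman rules would produce.
prob_emission   = 0.25   # electron 1 emits the photon
prob_travel     = 0.5    # the photon travels over to electron 2
prob_absorption = 0.25   # electron 2 absorbs the photon

# The probability of the whole diagram is the product of its parts.
prob_fig1 = prob_emission * prob_travel * prob_absorption
print(prob_fig1)  # 0.03125
```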

But wait. This is not the only way you can put in two electrons of momenta p1 and p2 and get two electrons out with momenta q1 and q2. There are a number of different ways the two electrons could interact, in order to produce the same outcome. For example, this:

Fig. 2: Feynman diagram of two electrons exchanging a photon which splits into an electron-positron pair on the way

The photon splits into two new particles, which recombine to return the photon. As before, we know exactly what the photon momentum k is, using k = p1 – q1 and the values of p1 and q1 we are given in the problem. But now, there is no guiding principle to decide what momenta the electron and the positron in the middle will have. We know that k1 + k2 = k from conservation of momentum, but this is one equation containing two unknowns. Compare it to how we worked out k in the first diagram, where there was only one unknown, so we could use all the other known values (p1 and q1) to get the unknown one (k). If we fix k2 by saying k2 = k – k1, we have one unfixed degree of freedom left, k1, which could take on any value. k1 could even have negative values, which represent the electron moving in the opposite direction to all the other particles.
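To see the “one equation, two unknowns” problem concretely, here’s a small sketch with an invented value of k: every choice of k1 is perfectly consistent with momentum conservation, so nothing pins it down.

```python
# With k fixed (an invented value of 2.0 in some units), any k1 will do:
k = 2.0
for k1 in (-10.0, 0.0, 2.0, 100.0):
    k2 = k - k1               # once we pick k1, conservation fixes k2
    print(k1, k2, k1 + k2)    # k1 + k2 always comes out as 2.0
```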

k1 is not uniquely determined by the given initial and final momenta of the electrons. This becomes significant when working out the overall probability of fig.2 occurring.

To work out the overall probability, one needs to use the Feynman rules to translate each part of the diagram into a probability, then combine them: the probability of electron 1 emitting the photon, multiplied by the probability of the photon moving to where it splits up, multiplied by the probability of the photon splitting into the electron and positron, and so on. But this time, since the middle electron could have any momentum, one needs to add up the probability of that part for every value of k1. There is an infinite spectrum of possible k1 values, so there are an infinite number of ways fig. 2 could occur.

Let’s step back for a moment. In general, if there are lots of different events (call them E1, E2, …) that could cause the same overall outcome O to occur, then the probability of O, written prob(O), is

prob(O) = prob(E1) + prob(E2) + …

If there are an infinite number of ways O could occur, then this becomes an infinite sum of probabilities, and as long as the individual probabilities are not zero and do not tend towards zero, prob(O) becomes infinite.

This is what happens with our particles. Since there are an infinite number of momentum values the middle electron could have, there are an infinite number of probabilities that must be added up to get the probability of fig. 2 occurring, so the probability of fig. 2 is infinite.
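Here’s a deliberately crude toy model of what goes wrong (not the real QED calculation, which involves an integral over k1): pretend every allowed value of k1 chips in the same tiny probability, and watch what happens as we allow more and more of them.

```python
# Toy model of the divergence (NOT the real QED integral): pretend every
# allowed value of k1 contributes the same small probability.
contribution_per_k1 = 1e-6

def prob_fig2(number_of_k1_values):
    """Total probability after summing that many k1 contributions."""
    return number_of_k1_values * contribution_per_k1

for n in (10**3, 10**6, 10**9, 10**12):
    print(n, prob_fig2(n))
# The total never settles down: allow infinitely many k1 values and
# the "probability" becomes infinite.
```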

What could that even mean? A probability should be a number between 0 (definitely won’t happen) and 1 (definitely will happen). Predictions of infinite probabilities render a theory useless; quantum field theory is doomed. The Higgs boson is a conspiracy invented by the Chinese.

Renormalization, or How to ignore all your problems

This wasn’t the end of quantum field theory, because there is a way of resolving the problem. Kind of. The solution, or rather the family of solutions, is referred to as renormalization. It comes in many different manifestations, but they all boil down to something along the lines of the following. We pretend that k1, our unconstrained electron momentum, can only have a value below some maximum allowed size we’ll call Λ. Then we don’t need to add up probabilities from situations where k1 goes arbitrarily high. We’re left with a finite range of possibilities, and therefore a finite probability for the whole event. More generally, we can solve all problems like this by making Λ a universal maximum momentum for all particles involved in an interaction. Λ is called a momentum cutoff.
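Continuing the same toy model as before (still nothing like the real calculation), imposing the cutoff looks like this: only k1 values below Λ get counted, and the answer comes out finite.

```python
# The same toy model, but now k1 may only take values below the cutoff
# LAMBDA, spaced dk apart. All numbers are invented for illustration.
LAMBDA = 100.0          # maximum allowed momentum
dk = 0.01               # spacing between the allowed k1 values
contribution_per_k1 = 1e-6

number_of_k1_values = round(LAMBDA / dk)   # 10000 allowed values: a finite list
prob_fig2 = number_of_k1_values * contribution_per_k1
print(prob_fig2)        # roughly 0.01 -- a perfectly finite, sensible number
```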

This solves the issue: we end up with sensible predictions for all processes. And as long as we make Λ suitably larger than the momenta of the initial and final electrons, the answer matches the results of experiments to high precision. But I’ll understand if you feel a little unsatisfied by this. How come we can just ignore the possibility of electrons having momentum higher than Λ? To win you over, I’ll tell you a bit about what Λ physically means.

In quantum mechanics, an electron is both a particle and a wave. One of the first realisations in quantum mechanics was that the wavelength of an electron wave is inversely proportional to its momentum: wavelength = 1/momentum. A high momentum corresponds to a small wavelength, and vice versa. Ignoring particles with momentum higher than Λ is the same as ignoring waves with wavelength smaller than 1/Λ. Since all particles can also be seen as waves, the universe is made completely out of waves. If you ignore all waves of wavelength smaller than 1/Λ, you’re effectively ignoring “all physics” at lengths smaller than 1/Λ.
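In the natural units particle physicists like to use, where the relation really is just wavelength = 1/momentum, translating a momentum cutoff into a shortest kept length is one line (the value of Λ below is invented):

```python
# A momentum cutoff is the same thing as a shortest wavelength we keep.
LAMBDA = 100.0                      # invented cutoff, in natural units
shortest_wavelength = 1.0 / LAMBDA  # wavelength = 1/momentum
print(shortest_wavelength)          # 0.01: anything smaller gets ignored
```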

Renormalization is a “coarse graining” or “pixelation” of our description of space: the calculation has swept details smaller than 1/Λ under the rug.

Making exceptions like this has in fact been a feature of all models of nature throughout history. When you’re in physics class doing experiments with pendulums, you know that the gravitational pull of Jupiter isn’t going to affect the outcome of your experiment, so broadly speaking, long-range interactions aren’t relevant. You also know that the exact nature of the bonds between atoms in the weight of your pendulum isn’t worth thinking about, so short-range interactions aren’t relevant either. The swing of the pendulum can be modelled accurately by considering only physics at the same scale as the pendulum itself; stuff happening at much larger and much smaller scales can be ignored. In essence you are also using renormalization.

Renormalization is just a mathematically explicit formulation of this principle.

The Gradual Probing of Scales

Renormalization teaches us how to think about the discovery of new laws of physics.

The fact that experiments on the pendulum aren’t affected by small scales means we cannot use the pendulum to test small-scale theories like quantum mechanics. In order to find out what’s happening at small scales, you need to study small things.

Since particles became a thing, physicists have been building more and more powerful particle accelerators, which accelerate particles to high momenta and watch them interact. As momenta increase, the wavelengths of the particles get smaller, and the results of the experiments probe smaller and smaller length scales. Each step requires a bigger accelerator to reach higher momenta, and each jump is a huge engineering challenge. This race to the small scales has culminated in the gargantuan 27 km ring buried under Geneva called the Large Hadron Collider (LHC). It has achieved particle momenta high enough to probe distances of around 10 zeptometers (0.00000000000000000001 meters), the current world record.

Galileo didn’t know anything about quantum mechanics when he did his pioneering pendulum experiments. But it didn’t stop him from understanding those pendulums damn well. In the present day, we still don’t know how physics works at distances under 10 zeptometers, but we can still make calculations about electrons interacting.

From this point of view, it seems like we absolutely should impose a maximum momentum/minimum distance when working out the probabilities of Feynman diagrams. We don’t know what’s going on at distances smaller than 1/Λ. We need to remain humble and always keep in mind that any theory of nature we build is only right within its regime of validity. If we didn’t impose this momentum cutoff, we would be claiming that our theory still works at those smaller scales, which we don’t know to be true. Making such a mistake causes infinite probabilities, which suggests that there is indeed something lurking at those small scales that is beyond what we know now…

The road to the Planck scale

There are currently a bunch of theories about what is going on at the small untested length scales. We can make educated guesses about what scales these prospective new features of nature should become detectable at.

Fig. 3: Length scales

There has been a fashionable theory floating around for a while called supersymmetry, which says, broadly, that matter and the forces between bits of matter are in a sense interchangeable. It’s some well sick theoretical physics that I won’t go into here. The effects of this theory are believed to become visible at scales only slightly smaller than the ones we’ve already tested. It may even be detected at the LHC!

There’s a family of theories pertaining to even smaller sizes, called grand unified theories. These claim that if we could see processes at some far smaller scale, many of the fundamental forces would be revealed to be manifestations of a single unified force. The expected scale where this happens is about a billion billion times smaller than what we’ve currently tested, so it will take a billion billion times more energy to probe. Don’t hold your breath on that one.

Finally, there’s reason to believe that there exists a smallest possible scale. This is known as the Planck length. If any object were smaller than the Planck length, it would collapse into a quantum black hole, then immediately evaporate, removing any evidence of its existence. This is the scale where the quantum nature of gravity becomes important, and if you want to test that, you’ll need a particle collider 100 billion billion times more powerful than the LHC.
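For the curious, the standard estimate of the Planck length combines the constants governing quantum mechanics, gravity and relativity; here’s a quick sketch of the arithmetic using the usual textbook formula.

```python
from math import sqrt

# Standard estimate of the Planck length: l_P = sqrt(hbar * G / c**3),
# using the usual SI values of the constants.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

planck_length = sqrt(hbar * G / c**3)
print(planck_length)     # roughly 1.6e-35 metres
```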

If we want to learn about these mysterious smaller scales, we’re going to need some mighty big colliders. Perhaps impossibly big. Maybe we need some new innovation that makes the probing of small scales easier. Maybe the challenge for the next generation of particle physicists will be a rethink of how we test particle physics altogether.

More on drawing Feynman diagrams

More on renormalisation

Supersymmetry

Grand unified theories

Planck length
