Sightseeing in the Large Hadron Collider


The Large Hadron Collider (LHC) is a big round thing that has protons in it and the protons smash together making smaller things and then we look at the smaller things.

The LHC is arguably the best tool we have currently for discovering new particles. According to the old chestnut of E=mc2, energy can be converted into mass, and vice versa. When protons collide, the energy in the collision can be converted into new particles. The more energy you throw into the collision, the heavier the particle you can make. It’s likely that many of the particles that we haven’t discovered yet are heavier than the ones we know already. That’s why the LHC needed to be mighty big and mighty powerful, so the collisions could create new particles never before seen.

The new particles will only last a fraction of a second before decaying into more familiar ones. But by studying the aftermath of the proton collisions, we can find footprints of these undiscovered particles, thereby peeling back more layers of reality and peering in.

Figure 1: A cross-section of the ATLAS detector at the LHC. Think of the beam of protons as coming out of the page. All the words will be explained below.

But what’s actually going on when those protons collide? And, more importantly, how can we use said aftermath to learn that juicy new physics? Before jumping into these two questions, we need a little primer about hadrons.

Tell me what a Hadron is

Yes I’ll tell you what a Hadron is. New heavy particles will often decay into hadrons, and it’s the hadrons that we detect. To explain hadrons I must first explain the colour force.

The colour force is the force responsible for holding together the constituents of the proton: quarks and gluons. Quarks are the meat of the proton, and gluons glue the quarks together. The colour force is more often called the strong nuclear force, but this is a bit of a confusing historical hangover.

There are two things I’d like you to know about the colour force:

Thing 1

The strength of the colour force varies depending on the energy of the particles that feel it.

Just after the big bang, the universe was extremely hot, and all the particles contained huge amounts of energy. The colour force was weak at this time. As a result the universe was made of a soup of freely moving quarks and gluons. Over time, as the universe cooled and particles started to mellow out, the colour force became strong and bound quarks and gluons into tightly bound clumps. We call these clumps hadrons. Protons and neutrons are examples of hadrons, but there are many more kinds.

The behavior of quarks and gluons is well understood, but only at high energies. The theory describing their interactions at high energies is called quantum chromodynamics or QCD. It’s an extremely simple and elegant theory, explaining a veritable smörgåsbord of phenomena with just a tiny number of parameters and a single equation.

At low energies, however, it becomes difficult to explain everything in terms of quarks and gluons; we don’t have a good understanding of how they behave in bound states. We can however forget about the quarks and gluons and just treat hadrons as the fundamental particles. The theory that goes with this is, in comparison to QCD, quite messy and unappealing. But hadrons are where the real physics is, since we can only ever do experiments on hadrons. If you want to do an experiment directly on quarks and gluons, the detector you design better be mighty small (smaller than a proton) or be able to withstand mighty high temperatures (like, literally a bajillion degrees).

There is a grey area between high energies and low energies, where it is not quite right to explain things either in terms of quarks and gluons or in terms of hadrons. How exactly did the individual quarks and gluons turn into hadrons at the beginning of the universe? That is very poorly understood.

Some particles are completely immune to the colour force, so the above discussion does not apply to them. Particles like the humble electron exist as a concept no matter what energy they have. We refer to these particles that don’t feel the colour force as leptons.

Thing 2 (a consequence of thing 1)

At low energies, it is impossible to see a quark or a gluon by itself.

Say you went back to the big bang, harvested a single quark in a jar, and brought it back to the present day. As it cooled down, it would emit a whole bunch of gluons and other quarks, resulting in not just a single quark but a soup of quarks and gluons. Then, as the soup cooled down and the colour force became strong, all the inhabitants of the soup would bind together into hadrons. If you tried to pull one of the hadrons apart into its individual quarks, those individual quarks would just immediately emit more soup and form into hadrons again.


Figure 2: What would happen as a single quark cooled down after being transported to the current day.

You now know all the things about the colour force. At high energies, nature is described by individual quarks and gluons interacting. The colour force is weak, so no binding together, no hadrons. At low energies, everything is bound into hadrons, and everything can be described by hadrons alone.

Now we can start to talk about what happens when protons collide at the LHC.

The Proton Collision

There are millions of billions of protons passing each other every second when the LHC is turned on. Most of the time they just shoot past each other, but occasionally (on about every trillionth pass), the protons will collide.

The protons are given lots of energy when they are accelerated around the ring, so when they collide, we need to be thinking about quarks and gluons rather than hadrons. Quarks and gluons from one proton will interact with quarks and gluons from the other, which produces a bunch of other particles. If you’re lucky, one of the new particles will be one that has never been seen before.


Figure 3: A proton collision involving a short-lived Higgs boson.

A new mysterious particle won’t last very long; it’ll be around for a tiny fraction of a second before it decays into something else. This is true almost by definition: a particle we haven’t seen before can’t be something that hangs around after it is produced – otherwise there would be loads of them just lying around and getting in the way.

The new particle will inevitably decay into particles we already know. These familiar particles will shoot off away from the event, and smash into one of the detectors (which we’ll get onto later). In order to work out what happened at the proton collision, we need to be able to work out what particles emerged, and their initial trajectories. More on this later.

So what will come out of the collision? Sometimes it will create leptons, the particles resistant to the colour force. These will likely travel undisturbed until they reach the detector. Since the reaction is bursting with energy, it can also create individual quarks and gluons. In this case, it can’t be as simple as a quark shooting off and hitting a detector. An individual quark could never reach the detector, as by that time it would have cooled down to low energies, and at low energies quarks are no longer a thing. Somewhere along the way, that lonely little quark must somehow become part of a hadron.

Jets

We have a single energetic quark, and before it gets anywhere near the detectors, it’s going to turn into a bunch of hadrons. As I said, the grey area between quarks/gluons and hadrons is not very well understood. We want to be able to deduce the presence of that quark from the hadrons that hit the detectors, but the presence of this grey area may give you the impression that it’s an impossible task.

Luckily, there is something about quarks and gluons that will make this problem easy. Easyish. Consider the very first thing the quark emits: it’ll probably be a gluon. The theory of QCD can tell us the probability of the emission:

Pemission ≃ 1 / (Eθ)

where E is the energy inside the gluon, and θ is the angle between the two particles’ trajectories. Look at this equation a little and you’ll see that the most likely angle θ of an emitted gluon is very small. In other words, most of the time the gluon ends up traveling in basically the same direction as the quark. It’s also apparent that the gluon is most likely to have a small energy. The importance of this I’ll get onto in a minute.
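If you fancy seeing that in numbers, here’s a little Python toy that samples emissions from the 1/(Eθ) distribution and counts how many come out soft (low energy) and collinear (small angle). The units and the cutoffs are completely made up by me, purely for illustration:

```python
import random

# Toy sampler for the emission probability P ~ 1 / (E * theta).
# The energy units and the upper/lower cutoffs are invented for illustration.
E_MIN, E_MAX = 0.1, 100.0    # gluon energy range (arbitrary units)
TH_MIN, TH_MAX = 0.01, 1.0   # emission angle range (radians)

def sample_emission():
    """Draw (E, theta) with density proportional to 1/(E*theta).

    A 1/x density between a and b is sampled by x = a * (b/a)**u
    for uniform u in [0, 1), i.e. log-uniform sampling.
    """
    u, v = random.random(), random.random()
    return E_MIN * (E_MAX / E_MIN) ** u, TH_MIN * (TH_MAX / TH_MIN) ** v

samples = [sample_emission() for _ in range(100_000)]
soft = sum(1 for e, _ in samples if e < 1.0)       # low-energy emissions
collinear = sum(1 for _, t in samples if t < 0.1)  # small-angle emissions
print(f"soft fraction:      {soft / len(samples):.2f}")      # ~0.33
print(f"collinear fraction: {collinear / len(samples):.2f}")  # ~0.50
```

For a flat distribution those fractions would be about 0.01 and 0.09, so the soft and collinear corners really do hog the probability.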


Figure 4: a quark emitting a gluon.

This won’t just apply to the first emission. Either the quark or the gluon could go on to have another emission, of a new quark or a new gluon. In that case, our equation above can be used again. This will lead you to the conclusion that the vast majority of new particles will travel in the same direction as their mothers, and will have low energies.


Figure 5: A jet.

This lets us make some broad statements about the end products of our original quark. To start off with, all the quark/gluon soup resulting from the quark will be concentrated in a narrow beam. Since new particles in general have smaller energies, the soup quickly approaches the low energy regime where it will clump up into hadrons, and these resulting hadrons will also be moving in that one specific direction. The result is what is called a jet, a narrow beam of hadrons. The detector will eventually be hit with a bunch of hadrons all clustered in a small area.

Detectors

So what about these detectors then? There are a number of locations around the LHC where detectors are placed. Each is designed slightly differently and tuned to spot different things. The largest and most famous of them is called the ATLAS experiment, so we’ll focus on that as an example and well you know, fuck the rest of them.

ATLAS is a veritable onion of detectors. It’s basically a cylinder that encloses the beam of protons with a number of layers of detectors. Each layer is a different kind of detector, specialized to detecting certain types of particle.

The only things that survive long enough to get to the detectors are hadrons and leptons. The first layer samples the energy of leptons as they interact with the electrically charged particles in the detector. Successive detectors measure the energy of hadrons as they interact with the nuclei of atoms in the detector.

It’s not just the energy of the particles we can measure, we can also work out exactly where the particle hit the detector. ATLAS can be thought of as a rather expensive 100 megapixel camera.

There are some types of particle, for example the neutrino, that are well sneaky and will shoot straight through all the detectors unnoticed. The presence of these particles can however be deduced, using a little thing called conservation of energy. We know how much energy is in the initial protons (since we gave them that energy), and we can measure the energy in all of the gunk that hits our detectors. If a neutrino escapes detection, then there will be energy missing between the initial protons and the energy measured by the detectors; from that we can deduce the presence and energy of the neutrino.
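Here’s the bookkeeping as a cartoon in Python. Real analyses balance momentum in the plane transverse to the beam (the bits along the beam are harder to pin down), and every number below is invented, but the idea is the same:

```python
# Cartoon of the missing-energy argument: add up the transverse momentum of
# everything the detector saw; whatever is needed to balance it back to zero
# gets attributed to invisible particles like neutrinos.
# All numbers are invented for illustration.
detected = [
    {"name": "jet 1",    "px":  60.0, "py":  15.0},   # GeV
    {"name": "jet 2",    "px": -20.0, "py": -40.0},
    {"name": "electron", "px": -10.0, "py":  35.0},
]

sum_px = sum(p["px"] for p in detected)
sum_py = sum(p["py"] for p in detected)

# The colliding protons carry essentially no momentum transverse to the beam,
# so the visible particles should balance; anything left over is "missing".
missing_px, missing_py = -sum_px, -sum_py
missing_pt = (missing_px**2 + missing_py**2) ** 0.5
print(f"missing transverse momentum: {missing_pt:.1f} GeV")
```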

From Hadrons in the detector to Quarks at the collision

So, imagine we have measured the energy and location on the detector of a bunch of hadrons and leptons that came from the proton collision. We need to deduce exactly what happened just as the protons collided, to see if anything new and exciting happened; the production of a shiny new particle perhaps.

For leptons, it’s pretty easy to trace things back. An electron doesn’t tend to do much as it flies through space, so if we know where it hit the detector and how fast it was moving, we can just extrapolate backwards to work out how it emerged from the collision.

Hadrons are, as you now know, a different story. The detectors can receive hundreds of hadrons from a single jet. We use things called jet finding algorithms; these are an attempt to deduce what quarks flew out of the collision given the hadrons hitting the detector. It is a highly not-so-easy problem, since we don’t really understand that grey area between quarks/gluons and hadrons. The original attempts amounted to just adding up all the energy picked up from some region of the detector. The most popular algorithms these days attempt to trace back the process step by step, emission by emission.

These more recent algorithms are designed according to the equation we had for the probability of emission, Pemission ≃ 1 / (Eθ). Hence, the algorithm decides that two particles came from the same mother particle if they are traveling in a similar direction, and at least one of them has a low energy.
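To make that concrete, here’s a toy jet finder in Python. It is not one of the algorithms actually used at the LHC (those use more careful distance measures and collider-friendly variables); it’s just a sketch of the “undo the most likely emission first” idea, with all the particle energies and directions invented:

```python
import math

# Toy sequential-recombination jet finder. Real LHC algorithms (kt, anti-kt,
# Cambridge/Aachen) differ in the details; this sketch only captures the idea
# of merging the pair that looks most like a single emission (small angle,
# at least one low energy) until only well-separated jets remain.

def angle_between(a, b):
    """Opening angle between two particles given as (energy, unit direction)."""
    dot = sum(x * y for x, y in zip(a[1], b[1]))
    return math.acos(max(-1.0, min(1.0, dot)))

def cluster(particles, r_cut=0.4):
    """particles: list of (energy, (nx, ny, nz)) tuples. Returns the jets."""
    parts = list(particles)
    while True:
        # Pairs still close enough in angle to plausibly be one emission.
        pairs = [(i, j)
                 for i in range(len(parts))
                 for j in range(i + 1, len(parts))
                 if angle_between(parts[i], parts[j]) < r_cut]
        if not pairs:
            return parts  # whatever is left, we call a jet
        # Undo the most likely emission first: smallest min(E_i, E_j) * theta.
        i, j = min(pairs, key=lambda ij: min(parts[ij[0]][0], parts[ij[1]][0])
                                         * angle_between(parts[ij[0]], parts[ij[1]]))
        # Merge the pair into a single pseudo-particle.
        e = parts[i][0] + parts[j][0]
        direction = tuple((parts[i][0] * a + parts[j][0] * b) / e
                          for a, b in zip(parts[i][1], parts[j][1]))
        norm = math.sqrt(sum(c * c for c in direction)) or 1.0
        merged = (e, tuple(c / norm for c in direction))
        parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]

# Two sprays of hadrons, one roughly along z and one roughly along x,
# come out as two jets:
hadrons = [(30.0, (0.0, 0.0, 1.0)), (5.0, (0.0, 0.1, 0.995)),
           (20.0, (1.0, 0.0, 0.0)), (3.0, (0.995, 0.1, 0.0))]
print(len(cluster(hadrons)), "jets found")
```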

Now that we’re talking about the practicalities of applying this equation, there’s something about it I didn’t bring up before that we now have to pay attention to. You may feel slightly unnerved by the possibility of a quark emitting a gluon that has zero energy (E=0). Or a tad unhinged by a gluon moving in exactly the same direction as the quark (θ=0). In either of these cases, Pemission is infinite. It doesn’t make any sense for a probability to be infinite; probabilities must be between 0 (definitely won’t happen) and 1 (definitely will happen).

If we don’t pay attention to these infinities, they will mess up our jet finding algorithms. We need to understand what is going on. There must be some deep physics reason this is happening, right?

Let’s deal with the θ=0 problem first. The situation it describes is a quark emitting a gluon moving in exactly the same direction. Due to conservation of energy, the energy contained in the final quark/gluon pair is the same as the energy that was in the mother quark. Think about this in terms of where the energy is: a little point of energy (the quark) changes into a point containing the same energy (the quark and gluon). As far as we’re concerned, this is indistinguishable from the outcome of the quark emitting nothing, since the outcome will be the appearance of a hadron in exactly the same place containing exactly the same energy. So actually, this kind of event in a sense “doesn’t exist”; we don’t need to include the possibility of it happening in our jet algorithm.

It’s a similar story for the E=0 problem. If a gluon with no energy is emitted in the woods and no one is around to hear it, does it make a sound? In this case no, that gluon can never be seen by the detectors, and it won’t ever contribute to the creation of any hadron. So that event is just the same as no emission at all.

The algorithm must be designed so that it does not take these θ=0 and E=0 possibilities into account. It’s safe for an algorithm to completely ignore these possibilities, since they don’t exist.

The deep physics shit going on here is this – some things about quarks and gluons are unknowable. As a result, thinking too hard about the quarks and gluons inside the jets themselves leads you to nonsense. Asking something like “how many gluons are inside the jet?” is a nonsense question, there exists no answer. It’s a question we can never answer with any experiment (since we can never measure a gluon by itself), and it can never be predicted theoretically. Quarks and gluons are very much mathematical concepts, while it’s only really the hadrons that have a solid physical interpretation.

How to Find the Higgs Boson

Plugging the energy and location of detected hadrons into jet algorithms, we can deduce how many quarks initially emerged from the collision, their direction of travel, and their energy. By tracing back the trajectory of the leptons that hit the detector, we can also deduce the number of leptons produced by the collision, direction of travel and energy. Our task now is to translate this knowledge into information about the collision itself.

To do this we need to ask: what possible events at the proton collision could have produced these outgoing quarks and leptons? I’ll refer to a specific combination of quarks and leptons, with some given directions and energies, as the collision’s final state. Usually there are not one, but a number of possibilities that could have resulted in our deduced final state. Some of those possible events will only include boring old familiar particles. These possibilities are referred to as background. A possibility that includes the production of an undiscovered particle is called the signal.

We can predict how often a background event will lead to our given final state, since we already understand how all the particles in a background event work. Therefore, if our final state occurs more often than we expect, this is evidence of a new particle – the signal is shining through. The final state is appearing more often than expected because there is a “new unexpected way” that the colliding protons can produce that given final state.

Figure 6: Bump representing a new particle. “GeV” is just a measure of energy that particle physicists use.

The probability of a given final state varies with the total energy of the final state, i.e., the energy of each outgoing particle added up. This can be seen from fig. 6. We can predict the frequency of our background events at each energy, defining the dotted curve on the plot. Most of the time the observed curve from the LHC agrees with the background curve. But if there’s a bump that cannot be explained by the background, this would be evidence of a new particle.

Moreover, the energy at which the bump appears is important. To see this, consider that the probability of the final state occurring will be roughly proportional to the number of different ways the final state can be created. The creation and destruction of a new particle represents a new way that the final state could be created, so, when the right amount of energy is involved to make the new particle, the probability of the final state increases.

From fig. 6, it looks like there’s an “extra way” to create a final state of energy 125GeV; this extra way is a new particle being created and destroyed. Via E=mc2 we can work out what mass the new particle would need to have to create a final state of that energy. By dividing 125GeV by c2, one can find the mass of this new particle. The numbers I’ve chosen here lead us to the Higgs mass, since this is how its mass was deduced.
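If you want to see the bump-hunting logic stripped down to its bones, here’s a little Python sketch. All of the counts are invented, and the statistics (a crude √N rule of thumb) are vastly simpler than what a real analysis does:

```python
import math

# Toy bump hunt: compare observed event counts to the predicted background
# in bins of final-state energy. Every number here is made up.
bins     = [100, 110, 120, 125, 130, 140, 150]     # final-state energy (GeV)
expected = [1000, 820, 680, 600, 540, 450, 380]    # predicted background events
observed = [1010, 815, 690, 720, 545, 452, 378]    # what the detector recorded

for energy, exp, obs in zip(bins, expected, observed):
    excess = obs - exp
    significance = excess / math.sqrt(exp)  # rough number of standard deviations
    flag = "  <-- bump?" if significance > 3 else ""
    print(f"{energy:>3} GeV: {obs:>4} observed vs {exp:>4} expected "
          f"({significance:+.1f} sigma){flag}")
```

Run it and the 125GeV bin sticks out well above the statistical noise, which is exactly the kind of excess the Higgs searches were looking for.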

Figure 6 is essentially a cartoonist’s impression of one of the plots used to discover the Higgs. You can have a look at the real plot in the original discovery paper, on page 10, figure 4.

The discovery of the Higgs was a huge success, but the search for new particles at the LHC is far from over. Physicists are still squinting away at plots like the above, hoping to find the next bump.



Clever Demons and Hungry Black Holes

The French scholar Pierre-Simon Laplace once told the story of a demon. The demon knows all the laws of physics, and is so smart that he can do an infinite number of calculations in his head. If you told him the exact state of the universe at one point in time, then he would be able to predict with certainty the exact state of the universe at some later time. He would always win bets.

He could also use his physics knowledge to turn the clocks back, and deduce, given the state of the universe at some time, the state it had at some earlier time. If you wanted to destroy a document containing information you’d rather no one ever find out, and, say, burned it, you still wouldn’t be safe. The demon could look at the smoke coming off the flames, and use it to deduce what was on the page.

Laplace told this story in order to convey the idea that

“We may regard the present state of the universe as the effect of its past and the cause of its future.”

This seems like a pretty sensible way to view nature to most physicists. The universe is in principle predictable. If it wasn’t the case, what’s the point in physics?


Fig 1: Given everything that’s happening at the present, one can in principle predict the future or deduce the past.

I’m going to tell you about a recent(ish) strange discovery that causes problems with this way of thinking. It concerns the bat-shit behavior of black holes, and is referred to as the black hole information paradox.

Entanglement

We first have to understand a wee bit of quantum mechanics. The main thing about quantum mechanics is that physical things can exist in superposition. This is when the system exists as a mixture of different states that we would usually consider to be mutually exclusive, i.e., states it should only ever be in one of at a time.

For example, consider a single particle flying along through space. It can exist as a mixture of, say, an electron and a positron (the positively charged version of the negatively charged electron). It could be just as much positron as electron, or mostly electron and only a little bit positron, or the other way around. The quantum state of the particle can be encapsulated in one number ψ, telling you where it lies on the spectrum between electron and positron. ψ = 0 means it’s an electron, ψ = 1 means it’s a positron, ψ = 1/2 means it’s half and half.

What do I mean when I say it’s a mixture of electron and positron? Imagine the particle hits a detector, which can be used to deduce its charge. When it hits the detector, and the reading comes up on a screen, it needs to make up its mind. The chance of the detector registering a positron is ψ, and the chance of it registering an electron is 1-ψ.

Now let’s complicate the picture a little. Let’s say there are two such particles, call them A and B, which both sprang from the breaking up of some original particle. The original particle had zero electric charge, so the charge of A and B need to add up to zero. Both are in a superposition, both a mix of electron and positron. But, the requirement that their charges add up to zero limits the quantum states they are allowed to have. If particle A is an electron (negative charge), then B must be a positron (positive charge), and vice versa. They can’t both be electron or both be positron, as that would mean the overall charge not adding up to zero.

Both particles have a number specifying their quantum state; ψA and ψB. But this time, due to the requirement of overall zero charge, ψA depends on ψB, and vice versa. You need to know what ψB is to know what ψA is. A and B are said to be entangled.
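If it helps, here’s a tiny Python sketch of measuring such a pair, using the same simplified convention as above where ψ is treated directly as the probability of “positron”. Real quantum mechanics works with complex amplitudes; this only mimics the statistics of the combined measurement:

```python
import random

# Toy measurement of the entangled pair: A is a mix of electron and positron,
# and B is forced to be whatever makes the total charge zero.

def measure_pair(psi_a):
    """Measure particles A and B, whose charges must add up to zero."""
    a = "positron" if random.random() < psi_a else "electron"
    b = "electron" if a == "positron" else "positron"  # B has no freedom left
    return a, b

for a, b in (measure_pair(psi_a=0.5) for _ in range(10)):
    print(f"A: {a:8s}  B: {b:8s}  total charge: 0")
```

Notice there is no independent ψB anywhere in that function: once A has made up its mind, B’s answer is fixed, which is the entanglement.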

If you left particle B out of the picture, then the quantum state of A is not well defined. It would seem like there is information missing from its quantum state; that information is being held hostage by particle B.

Let me elaborate on this a little to show what I mean by missing information. If we told Laplace’s demon the quantum states of A and B (i.e. the values ψA and ψB), he could use the laws of quantum mechanics to predict exactly what their quantum states would be at some later time. However, what if we were only interested in particle A? What if we wanted to only tell the demon the quantum state of particle A, and ask him to deduce its quantum state at some later time? This couldn’t be done, since particle A has information missing from its quantum state, so he couldn’t work out what would happen to particle A in the future. If the demon can’t see particle B, then his powers of perfect prediction are lost.

IMG_3777.JPG

Fig 2: If you only know about particle A at time 1, this isn’t enough to predict its state at time 2. Only if you know the state of both particles at time 1 will you be able to predict either’s state at time 2.

This is kind of weird, but it doesn’t get in the way of Laplace’s belief. As long as the demon is given all the information available in the universe at a given time (which includes the states of both particles A and B) he can make perfect predictions of the future and deductions of the past. However, what if there was a way to, not just ignore the information in particle B, but physically destroy it?

Evaporating Black Holes

Ok, black hole 101. When a star dies, it collapses under its own gravity into a very dense and compact object. Some of the more massive ones will collapse into something that’s almost infinitely small and dense. Such a thing is called a singularity. Its gravitational pull will be so strong, it will prevent even light escaping from it. Get too close to it, and it becomes physically impossible to escape. You can imagine a sphere around the compact object signifying the point of no return, this is called the event horizon.

The weird nature of strong gravitational fields can make particles seem to be created out of nowhere. At the event horizon, particles appear in pairs. One flies outward, away from the black hole, and the other falls inwards toward the singularity. These pairs are entangled in a similar way to particles A and B above. The quantum states of the particles radiating out of the hole depend on the states of those falling into the hole, which end up hiding behind the event horizon.

The black hole is always radiating these entangled particles, an effect referred to as Hawking radiation. If something is constantly throwing out energy, it will eventually run out of energy, and disappear completely. The black hole will evaporate leaving only the outgoing radiation as evidence of its existence. Information about the radiation’s quantum state, that was being held inside the black hole, has now been obliterated. Could it have somehow escaped before the black hole disappeared? No, it’s impossible for anything to cross the event horizon from the inside to the outside.

We are left with only a cloud of radiation that has a poorly defined quantum state. In fact, it is extremely poorly defined. Since it was so strongly entangled with the interior of the black hole, it contains almost no information. Compare this radiation to the radiation coming from a star (light, radio waves etc). If Laplace’s demon could collect up all the radiation from a star, he could deduce exactly the nature of all the reactions going on inside the star that led to the emission of the radiation. This is because, while the radiation seems random and messy, there are in fact subtle features hiding in it, delicate interactions between the constituent particles that can be used to deduce the nature of their origins. In this sense, the light from a star contains information.

Hawking radiation is not like this. It contains virtually no information; it doesn’t just look messy and disorganized, it is intrinsically messy and disorganized. The demon could collect up all the radiation left from the black hole, but he couldn’t deduce anything about the black hole from it.

The Information Paradox

Remember that document you really wanted destroyed, so no one could ever see, or even deduce, the information on it? Throwing it into a black hole seems a sure-fire way of doing that. Any information that falls into a black hole is permanently erased, since the end-state of a black hole is Hawking radiation containing no information.

The current laws of physics, or even any conceivable law of physics we could come up with in the future, are powerless to deduce what was going on before the black hole formation, even given the exact state of everything after the black hole evaporates.


Fig. 3: If you know everything at time 2, this will not be enough to deduce the information on the incriminating document at time 1, since all you have is Hawking radiation carrying insufficient information.

This also causes problems in the opposite direction in time. It seems likely at the moment that the fundamental laws of physics are symmetric in time, i.e., behave in the same way going both forward and backward in time. A video of the moon orbiting Earth would look just as sensible if played in reverse, since the equations governing gravity look the same if time is reversed.

If this is the case, then the laws of physics must allow the reverse of black hole evaporation to take place, i.e. fig.3 but flipped upside-down. Such a thing may never have happened in the history of the universe, and may never happen in the future, but the point is that such an event is allowed to happen in nature. This event would consist of radiation clumping together to produce a reverse-black-hole, and totally unpredictable things falling out of it. Our knowledge of the universe before the creation of the reverse-black-hole would not be sufficient to predict what would fall out of it. It could be anything, a sperm whale or a bowl of petunias for all we know, and no law of physics could ever tell us why they appeared.

Again, this type of thing may never happen, but the fact that our current laws of physics seem to allow this type of thing is deeply troubling to physicists. If information can be destroyed in a process like the above, who’s to say there isn’t a plethora of other possible processes in which information is destroyed?

Is it really true that the universe is fundamentally unpredictable? The debate has been ongoing since this problem was first uncovered in the 70s. A number of solutions to this problem have been proposed, for example, modifying the physical laws to let the information in the black hole somehow escape. So far none of the solutions have been conclusively shown to work, so the debate continues.

Some of the most notable attempts at a solution include: black hole complementarity, the existence of firewalls at the event horizon, an appeal to the principle of holography from string theory, and most recently, the theory of supertranslations.

We may be a long way from solving this problem, but I suspect when it is finally solved, it will come with some dramatic overturning of some of the most deep-rooted ideas in physics today.


Reasons to Panic about the Hierarchy Problem

This is intended to be kind of a sequel to one of my previous posts, which attempted to convey the vibes surrounding renormalization: the systematic ignorance of physics at small scales.

If you read the thing, you may recall that I justified renormalization with the argument that physics at different scales mostly don’t affect each other. Galileo’s pendulum wasn’t affected by quantum mechanics or the gravitational pull of Jupiter.

There is an outstanding problem in particle physics at the moment that, if not resolved, may send that whole philosophy down the toilet. The problem has been around for a while, but it has got a lot worse in the last two or three years, sending particle physics into a bit of a crisis.

I speak of the hierarchy problem. Buckle your seatbelts and all that.

We Need to Talk About Mass

The hierarchy problem has its origin in interpreting the mass of the recently discovered Higgs boson. To get down to what the problem is about, we have to first think about mass more generally.

If you only know one equation from physics, it’s probably E=mc2. This says that energy and mass are basically the same thing, just in different forms. An object of mass m at rest contains energy equal to mc2, where c is the speed of light. That means that in principle you can extract mc2 worth of energy from the object.

The total energy of an object is the energy contained in its mass plus the energy associated with its motion, i.e., its kinetic energy. When the object is at rest it has no kinetic energy, so all of its energy can be associated with its mass. Flipping this argument on its head, you can say that the energy E inside an object at rest tells you its mass m, via m=E/c2.

This may seem like an obvious and redundant thing to say, but consider the following example. A proton, one of the constituents of an atomic nucleus, is not simply a single particle but can be thought of as three smaller particles (called quarks) bound together. The quarks are in general wobbling around, moving in relation to each other, so they contain some kinetic energy. Quantum field theory tells us that the quarks interact by emitting and absorbing other particles called gluons, which are very similar to photons. Gluons can spontaneously create new quarks, then destroy them again an instant later. The motion of all these extra particles contribute to the overall energy of the proton.

Since the mass is given by the total energy it contains when it’s at rest, it includes all of this extra energy due to the motion and interactions. As a consequence, the mass of the proton is larger than just the sum of the quark masses. In fact, the quark masses only account for around 1% of the total proton mass!
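You can check the 1% claim yourself with ballpark numbers. Quark masses are slippery, scheme-dependent things with sizeable uncertainties, so treat the values below as rough:

```python
# Rough check of the "1%" claim, using typical quoted quark masses in MeV/c^2.
m_up, m_down = 2.2, 4.7     # approximate up and down quark masses (MeV/c^2)
m_proton = 938.3            # proton mass (MeV/c^2)

quark_sum = 2 * m_up + m_down   # a proton is two up quarks plus one down quark
print(f"sum of quark masses:     {quark_sum:.1f} MeV/c^2")
print(f"fraction of proton mass: {quark_sum / m_proton:.1%}")
# Roughly 1%; the other 99% is kinetic and interaction energy.
```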


Fig. 1: The inner workings of a proton. The mass of the proton is given by all of the energy enclosed by the dotted line (divided by c2)

A similar effect occurs for individual particles. Namely, working out the mass of the Higgs boson requires an analogous consideration.

The Higgs can both emit and absorb many different types of particle, including quarks, electrons, you name it. It could emit a quark, which exists for a tiny period of time, then absorb it again before it gets the chance to go anywhere. The result is that the Higgs is covered with a cloud of extra particles popping in and out of existence. The mass and motion of these particles all contribute to the overall energy of the Higgs, therefore enhancing its mass.


Fig. 2: The Higgs, dressed with emissions. The effective Higgs mass is given by all the energy enclosed in the dotted line (divided by c2)

Similar things occur for other particles, like electrons, but not to the extent that it happens to the Higgs. To get into the reasons for this difference requires some deep discussions about symmetries in particle physics, a subject I should really do a post about at some point. But I won’t go into it here.

From this point of view the Higgs really has two masses, the apparent mass m which is measured in an experiment, and the bare mass m0, the mass of the Higgs if it wasn’t coated in emissions.

m0 is the more fundamental of the two, a parameter of the underlying theory. However, only m is accessible by experiment. How can one deduce m0 from only knowing m? If we define E to be the energy contained in the emissions, the extra mass it gives to the Higgs will be E/c2. Then we can write:

m = m0 + E/c2

But how do we work out E? We can make an approximation according to the following argument.

Just like in the Feynman diagrams in the previous article, the cloud of particles surrounding the Higgs can have any momentum, so the energy gets contributions from emissions with all possible momenta. But recall that, in order to make sure probabilities can’t become infinite, we need to restrict particles from having momentum above Λ. This corresponds to ignoring scales below 1/Λ. So we only need to consider emissions having momentum up to Λ. Most of the bonus energy in this case comes from the most energetic possible particles, the ones with momentum Λ. Assuming this to be large, we can say that most of their energy is kinetic, and can ignore the energy due to their masses. The kinetic energy of the most energetic allowed particles is then roughly Λ, leading to an overall bonus energy for the Higgs of round about Λ. So we end up with

m ≈ m0 + Λ/c2

This equation is where the hierarchy problem comes from.

One in a Hundred Million Billion Trillion Trillion

Imagine the scene. We’ve measured the mass of the Higgs m, to be the famous number 125GeV (GeV is just a unit of mass particle physicists use). Looking at the above equation, you can see that if we decide to set Λ at some value, we then have to tune the value of m0 in order to produce the observed 125GeV for m. Λ effectively dictates what theory we’re using to model reality, and each theory has a different value for m0.

What are the possibilities for choosing Λ? Λ is meant to be chosen to cut out effects at scales where we don’t know what’s going on, so we can choose Λ such that 1/Λ is anything down to scales where “new physics” appears.

What if there was no new physics at all, our current model is valid at all scales? Then we could take 1/Λ to be the theoretically smallest possible length – the Planck length LP. In this case, we have that Λ = 1/LP, leading to a new equation:

m ≈ m0 + 1/LPc2

LP is a very very small number, the smallest possible length. As a result, this new bonus mass 1/LPc2 is a fucking huge number, in fact, it’s a hundred-million-billion-trillion-trillion times larger than m. That’s not just a generically big sounding number, it’s literally how much bigger it is.

For this theory to be consistent with the observed Higgs mass m, m0 needs to be a number which when added to this huge number 1/LPc2, results in m. So, firstly m0 needs to be negative (this isn’t a huge problem, since m0 isn’t an observable mass, only masses that you can observe strictly need to be positive). It also needs to be almost exactly the same as 1/LPc2, so the two cancel out exactly enough to create the much smaller number m.

Imagine you changed the 33rd decimal place of m0, in other words, the number was shifted up by a hundred-million-billion-trillion-trillionth of its size. The value of m would go from being 125GeV to double that size, a huge change. If you increased m0 at just the 3rd decimal place, m would still become a million billion trillion trillion times bigger. And so on. This is referred to as the fine-tuning of m0.
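To see that absurdity in actual digits, here’s a schematic bit of Python, working in GeV with c = 1. The size of the “bonus” mass is invented, chosen only to make the 33-digit point visible:

```python
from decimal import Decimal, getcontext

# Schematic fine-tuning arithmetic. The bonus mass below is a made-up stand-in
# for the huge Lambda/c^2 contribution; the real value depends on the cutoff.
getcontext().prec = 80      # enough digits that nothing gets rounded away

bonus = Decimal("1.25e34")      # enormous contribution from the cloud
m_observed = Decimal("125")     # the Higgs mass we actually measure
m0 = m_observed - bonus         # the bare mass has to cancel almost all of it

print(m0)  # a huge negative number, tuned so the sum lands exactly on 125

# Nudge m0 upward by one part in 10^32 of its size...
m0_nudged = m0 + abs(m0) * Decimal("1e-32")
print(m0_nudged + bonus)  # ...and the predicted Higgs mass roughly doubles
```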


Fig. 3: Above equation visualized. The towers Λ/c2 and m0 need to almost exactly match up in order to produce the small m. Not to scale.

The universe would be radically different if that value of m0 was changed even a tiny bit. The Higgs particle is what gives mass to all the other particles, and the mass of all the other particles is decided in part by the Higgs mass. If m was billions of times larger, all the other particles would become billions of times heavier also. We certainly couldn’t have stars, planets and all that, the universe would be too busy collapsing in on itself. It seems like, to generate a universe remotely like the one we live in, nature needs to decide on a parameter m0, highly tuned to 33 decimal places.

This disturbs a lot of people because it is very unnatural. It seems like an incredible coincidence that m0 ended up with the exact value it did, the exact value needed for a universe where stars could burn, planets could form and life could frolic. It’s a bit like saying someone dropped a pencil and it landed on its point and stayed there, perfectly balanced. Except in this case to get the same degree of coincidence, the pencil would have to be as long as the solar system and have a millimeter wide tip [source].

This is concerning by itself, but its consequences go further. It represents a breakdown of our assumption of physics at different scales being mostly independent. m0 is a parameter of the theory which describes physics down to the scale of LP, so it includes whatever physics is happening at the Planck length. In that case, m0 is in a sense decided by whatever is happening at the Planck length. Physics at large scales seems to be incredibly strongly dependent on m0, which comes from the Planck scale.

Before, we thought that physics at very small scales shouldn’t strongly affect physics at larger scales, but this changes all that. Is renormalization valid if this is the case?

Supersymmetry to the Rescue

In constructing the hierarchy problem above, we made an assumption that our current theory of particle physics is valid all the way down to the Planck length. This may be true, but it may not be. There may be new unknown laws of physics that appear as you go down to smaller scales, before you get anywhere near the Planck length.

If we assume some new physics appears at a new length scale we’ll call LN, then our current theory is only valid at scales larger than this, and can only contain particles of momentum smaller than 1/LN. This changes the bonus Higgs mass, changing the above equation to:

m ≈ m0 + 1/LNc2

If the scale LN is much bigger than the Planck length, LP, then  1/LNc2 is much smaller, and m0 requires less fine tuning.

Still, if 1/LNc2 is only a million times the size of m instead of ten-million-billion-trillion-trillion times, m0 still needs to be tuned to an accuracy of a millionth of its size… What we really need to solve this problem is for some new laws of physics to appear at scales very close to the ones we’ve already probed.

It is for this reason that a popular candidate theory of smaller scales, supersymmetry, is hoped to become apparent at length scales not much smaller than what we’ve already tested. This would solve our problem, as 1/LNc2 would end up being roughly the same size as m.

Since the LHC at CERN started bashing together protons at higher momenta than ever before, we’ve been keeping an eye out for signs of supersymmetry. We’ve now searched for signs at length scales quite a lot smaller than where we discovered the Higgs. Unfortunately, no evidence of supersymmetry’s existence has appeared. With every year of experiments that passes without supersymmetry being found, the possible scale where supersymmetry appears gets pushed smaller and smaller, making LN smaller and smaller. The smaller LN gets, the more fine-tuned m0 needs to be.

People are starting to worry. Even if supersymmetry is found tomorrow, it looks like it’ll only become important at scales where 1/LNc2 is a hundred times the size of the Higgs mass. So a tuning of one part in a hundred… Is that already too much of a coincidence? The further up the energy scale we have to go to find supersymmetry, the less power it has to resolve the issue.

The Hierarchy problem is one of the biggest driving forces in particle physics research today, giving hints that there is more physics to be found at scales close to us. If supersymmetry is not found at the LHC, we’re going to have to do a proper re-think about our philosophy of renormalization. Could there be something wrong with our understanding of scales? And could the stars, planets and life really exist on merit of a massive coincidence?


A Little Patch of Spacetime

Recently there’s been a lot of buzz around the idea that the universe is a big simulation. The idea is pretty out there, right?

What if I was to tell you that us humans have been creating universes on computers, taking into account the most fundamental of physics, detailed down to some of the smallest length scales that we understand? They’re not quite the size of our universe, or even of something smaller like a planet; current computers would struggle somewhat. They’re only about 10 femtometers across, smaller than an atom. But it’s a start!

They’re called Lattice simulations, and belong to a subgenre of particle physics called Lattice Gauge Theory.

To illustrate what this is and the drive behind it, let’s consider a very simple and general problem in physics: working out the trajectory of, say, an electron. Deducing a trajectory means being able to say where the electron is at any given point in time.

In classical mechanics (a.k.a how the world looked before quantum mechanics became a thing), given all the forces acting on the electron, along with the initial conditions (i.e. its position and velocity) there exists one unique trajectory the particle can take. One can plug the initial conditions into an equation of motion (like Newton’s 2nd law, F=ma) and solve it to deduce with certainty the position of that particle after some arbitrary period of time.

Taking quantum mechanics into account the water becomes muddied. The electron is no longer bound to follow the unique trajectory, but can take other trajectories which disobey its classical equation of motion. Before, the probability of the electron following the classical path was 1, and following any other path was 0. Now, each path has some non-binary probability between 0 and 1.

Not even the most versed physicist can predict with certainty where the particle will be after a period of time. As a consolation prize, it is possible to deduce the probability of the particle arriving at a certain point in space at a given time.

To do this, a physicist would basically sum up the probabilities of each of the many trajectories that result in the electron arriving at your chosen location at your chosen time. This is called a path integral, the sum of all probabilities of a particle taking each possible path between two points. In general there is an infinite number of possible paths. The classical path is always the most likely, paths that are close to the classical path have a smaller probability but still contribute, and completely deviant paths that go to Jupiter and back are incredibly unlikely and basically don’t contribute.
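Here’s a toy version of that sum in Python. The genuine path integral adds up complex amplitudes weighted by the action; this cartoon just gives each lattice path a weight that punishes wiggling, which is enough to show the straight-line path dominating:

```python
import itertools, math

# Toy "sum over paths" for a particle going from x=0 back to x=0 in four time
# steps, hopping on a 1D grid. Each path is weighted by exp(-S), with S a
# crude discretised action that penalises movement. (The real path integral
# uses complex amplitudes exp(iS/hbar); this only illustrates which paths
# matter most.)
N_STEPS = 4
paths = [steps for steps in itertools.product((-1, 0, +1), repeat=N_STEPS)
         if sum(steps) == 0]            # must end up where it started

def weight(steps):
    action = sum(dx * dx for dx in steps)   # kinetic-energy-like term
    return math.exp(-action)

total = sum(weight(p) for p in paths)
for p in sorted(paths, key=weight, reverse=True)[:5]:
    print(p, f"probability {weight(p) / total:.3f}")
# The do-nothing path (0, 0, 0, 0) comes out on top; wigglier paths
# contribute less and less, just like the figure suggests.
```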

One of the reasons quantum mechanics is ‘hidden’ at sizes bigger than an atom is that the perturbed paths become so unlikely that the classical path is basically the only path the particle can take.

Fig.1: Particle travelling from A to B. Rightmost: a particle on the quantum scale, e.g. an electron. Leftmost: a particle on the classical scale, e.g. a baseball. Solid lines are “very likely paths”, and dotted lines are less likely.

Now let’s complicate the picture further by moving from quantum mechanics to quantum field theory. This takes into account the possibility of the electron emitting and absorbing particles, or decaying into different particles only to reappear somewhere down the line before reaching its destination. Things become more complicated, but the principle of the path integral still holds, with the new feature that the bunch of paths we need to add up now include all combinations of emissions and decays. I’ll refer to a ‘path’ as a trajectory + any specific interaction including other particles.

Fig.2: Similar to figure 1, now with the possibility of other particles being created and destroyed.

Once we’re in quantum field theory we are getting into some real fundamental shit. The standard model of particle physics, containing the recently discovered Higgs boson, is expressed in the language of quantum field theory.

In practice it’s not possible to work out probabilities for an infinite number of paths.  Happily, as I discussed, there are a small number of dominant paths which account for the majority of the probability, the classical path and small perturbations of it. In particle physics, the way we usually work out the probability of some process is to consider only these dominant paths, and we get to a result which is pretty close to the ‘true’ answer. It can be done with just a pen, paper and the knowhow. The method is referred to as perturbation theory.

This doesn’t work for everything: for instance, computing the path integral for a quark rather than an electron. The electron interacts mostly with the electromagnetic force (electricity+magnetism). Quarks feel not only the electromagnetic force, but also the strong nuclear force, its horrendously more complicated cousin. The strong force is ‘purely quantum’ in the sense that there isn’t really one dominant path plus subdominant perturbations of it; there are many different dominant paths and there is no good way to order them in terms of probability.

It’s possible to use perturbation theory on quarks, but since it’s difficult to find all the dominant paths, uncertainties usually lie at around 10% (i.e., the true answer could be the answer we worked out give or take 10% of that answer). Compare this with what one can achieve with the electron, with uncertainties dipping well below 1%.

The solution: lattice simulations!

Simulate a small period of time playing out on a small patch of space on a powerful computer, give the patch a little prod so it has enough energy for a quark to appear, and let your tiny universe play out all the possible paths, decays, interactions, whatever.

Inside the patch it’s necessary to approximate space and time as a lattice of discrete points. Each point has some numbers attached to it, signifying the probability of a quark being at that point, the probability of the presence of other quarks, and the strength of the strong force. With this little universe, we can forget about which paths are dominant and which aren’t since all paths occur automatically in the simulation.

Like perturbation theory, it is also an approximation. Spacetime is not discrete (as far as we know), nor is it confined to a finite patch. People often will perform the simulation at many different ‘lattice spacings’ (the distance between each discrete point), and look at the trend of these numbers to extrapolate the answer to a zero lattice spacing, representing a continuous space. Similarly, in the real world there are no ‘walls’ like there are on the edges of the patch. So folk will use a range of patch sizes, and extrapolate results to an infinite size where there are no walls.
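Here’s what that extrapolation looks like in miniature, in Python. The numbers and the assumed behaviour (result = continuum value + slope × a²) are invented; real lattice groups do this with proper fits and error bars:

```python
# Sketch of a continuum extrapolation: run the "same" calculation at several
# lattice spacings a, assume the result drifts like a**2, and read off the
# value at a = 0. All numbers are invented.
spacings = [0.12, 0.09, 0.06]       # lattice spacing a (fm)
results  = [0.742, 0.731, 0.724]    # some simulated quantity

# Least-squares straight line in the variable x = a**2.
xs = [a * a for a in spacings]
x_mean = sum(xs) / len(xs)
y_mean = sum(results) / len(results)
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, results))
         / sum((x - x_mean) ** 2 for x in xs))
continuum_value = y_mean - slope * x_mean   # the fitted line evaluated at a = 0

print(f"extrapolated continuum value: {continuum_value:.3f}")
```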

Uncertainties in lattice simulations are in many cases a lot smaller than in perturbation theory, at about the 1% level. The method has proven shockingly effective in understanding how quarks bind together to form mesons. A meson is like the little brother of the proton and neutron: while these contain 3 bound quarks, a meson contains 2. Lattice people have their sights set on making a simulation big enough that a whole proton can fit inside its walls, but we’re not quite there yet.

I think lattice gauge theory is still only in its ‘calibration phase’. The motivation of a lot of the work lattice people do is to show it works, by matching its predictions to experiments. As computers become faster, our methods become more efficient, and our understanding of the physics improves, the lattice could end up being the tool which uncovers the next big discovery in particle physics. Watch this space.


No seriously what is Entropy

I always found the popular science description of entropy as ‘disorder’ a bit unsatisfying.

It has a level of subjectivity that the other physical quantities don’t.  Temperature, for example, is easy- we all experience low and high temperatures, so can readily accept that there’s a number which quantifies it. It’s a similar story for things like pressure and energy. But no one ever said ‘ooh this coffee tastes very disordered.’

Yet entropy is in a way one of the most important concepts in physics, not least because it’s attached to the famous second law of thermodynamics, whose significance towers over the other laws of thermodynamics (which are, in comparison, boring as shit). It states that the entropy of a closed system can only increase over time.

But what does that mean?! What is entropy really? If you dig deep enough, it has an intuitive definition. I’ll start with the most general definition of entropy. Then, applying it to some every day situations, we can build up an idea of what physicists mean when they say ‘entropy’.

Missing Information

Fundamentally, entropy is not so much the property of a physical system, but a property of our description of that system. It quantifies the difference between the amount of information stored in our description, and the total quantity of information in the system, i.e. the maximum information that could in principle be extracted via an experiment.

Usually in physics, it’s too difficult to model things mathematically without approximations. If you make approximations in your model, or description, you can no longer make exact predictions of how the system will behave. You can instead work out the probabilities of various outcomes.

Consider the general setup of an experiment with n possible outcomes, which one can label outcome 1, outcome 2, … outcome n. Each outcome has a probability p1, p2, p3, … pn assigned to it. Each p has a value between 0 (definitely won’t happen) and 1 (definitely will happen). The Gibbs entropy S one assigns to the description of a system is a function of these p‘s (see mathsy section at the end for the explicit definition).

Imagine we thought up a perfect theory to describe the system under study using no approximations, so we could use this theory to predict with certainty that outcome 1 would occur. Then p1 = 1 and all other p‘s would be zero. One can plug these probabilities into the Gibbs entropy, and find that in this case, S = 0. There is no missing information. In contrast, if we had no information at all, then all probabilities would have the same value. How could we say one outcome is more or less likely than any other? In that case, S ends up with its maximum possible value.

There’s a classic example that’s always used to describe the Gibbs entropy – tossing a coin. There are two possible outcomes- heads or tails. Usually we consider the outcome to be pretty much random, so we say that they’re equally likely: p(heads) = 1/2, p(tails) = 1/2. This description contains no information; a prediction using these probabilities is no better than a random guess. What if we discovered that one of the sides of the coin was weighted? Then one outcome would be more likely than the other, and we could make a more educated prediction of the outcome. The entropy of the description has been reduced.

Going further, if we modelled the whole thing properly with Newton’s laws, and knew exactly how strongly it was flipped, its initial position, etc, we could make a precise prediction of the outcome and S would shoot towards zero.
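For the curious, here are those coin predictions run through the standard textbook form of the Gibbs entropy, S = −Σ p ln p (with Boltzmann’s constant set to 1), in a few lines of Python:

```python
import math

def gibbs_entropy(probs):
    """S = -sum of p*ln(p) over the outcomes (terms with p = 0 contribute nothing)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

print(gibbs_entropy([0.5, 0.5]))   # fair coin: maximum entropy, ln(2) ~ 0.69
print(gibbs_entropy([0.8, 0.2]))   # weighted coin: some information, ~ 0.50
print(gibbs_entropy([1.0, 0.0]))   # Newton's-laws-level prediction: S = 0
```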


Fig.1: S for different predictions of the outcome of a coin flip.

Working with this definition, the second law of thermodynamics comes pretty naturally. Imagine we were studying the physics of a cup of coffee. If we had perfect information, and knew the exact positions and velocities of all the particles in the coffee, and exactly how they would evolve in time, then S=0, and it would stay at 0. We would always know exactly where all the particles are at all times. However, if there were a rogue particle we didn’t have information about, then S would be small but non-zero. As that particle (possibly) collides with other particles around it, we become less sure what the position and velocity of those neighbours could be. The neighbours may collide with further particles, so we don’t know their velocities either. The uncertainty would spread like a virus, and S can only increase. It can never go the other way.

I said before that this S is about a description, rather than a physical quantity. But entropy is usually considered to be a property of the stuff we’re studying. What’s going on there? This brings us to…

Microstates and Macrostates

In physics, we can separate models into two broad groups. The first, with “perfect” information, is aiming to produce exact predictions. This is the realm containing, for example, Newton’s laws. The specification of a “state” in one of these models contains all possible information about what it’s trying to describe, and is called a microstate.

The second group of models are those with “imperfect information”, containing only some of the story. Included in the second set is thermodynamics. Thermodynamics seeks not to describe the positions and velocities of every particle in the coffee, but more coarse quantities like the temperature and total energy, which only give an overall impression of the system. A thermodynamic description is missing any microscopic information about particles and forces between them, so is called a macrostate.

A microstate specifies the position and velocity of all the atoms in the coffee.

A macrostate specifies temperature, pressure on the side of the cup, total energy, volume, and stuff like that.

In general, one macrostate corresponds to many microstates. There are many different ways you could rearrange the atoms in the coffee, and it would still have the same temperature. Each of those configurations of atoms corresponds to a microstate, but they all represent a single macrostate.

Some macrostates are “bigger” than others, containing lots of microstates, and some contain little. We can loosely refer to the number of ways you could rearrange the atoms while remaining in a macrostate as its size.

What does all this have to do with entropy? If I were to tell you that your coffee is in a certain macrostate, this gives you information. It narrows down the set of possible microstates the coffee could be in. But you still don’t know for sure exactly what’s going on in the coffee, so there is missing information, and a non-zero entropy. But if the coffee was in a smaller macrostate, our thermodynamic description would give more information, since we’ve narrowed down further the number of microstates the coffee could be in. Then our description contains more information, so this is a lower entropy macrostate.
Hence, the entropy of a macrostate (called the Boltzmann entropy) is defined in terms of its size: it is proportional to the logarithm of the number of microstates it contains. For messy thermodynamic systems like the coffee, entropy is a measure of how many different ways you can rearrange its constituents without changing its macroscopic behaviour. The Boltzmann entropy can be derived from the Gibbs entropy. It is not a different definition, but a special case of Gibbs: the case where we're interested only in macroscopic physics.

With this definition, the second law of thermodynamics again comes out naturally. Over time, a hot and messy system like a cup of coffee wanders randomly through its microstates: the molecules move around, producing different configurations. Without any knowledge of what's going on with the individual atoms, we can only assume that each microstate is equally likely. What macrostate is the system most likely to end up in? The one containing the most microstates. Which is also the one with the highest entropy.

Consider the milk in your coffee. Soon after you added the milk, it ended up evenly spread throughout the coffee, because the macrostate of 'evenly spread out milk' is the biggest, and so has the highest entropy. There are many different ways the molecules in the milk could arrange themselves while conspiring to present a macroscopic air of spreadoutedness.
You don't expect all the milk to suddenly pool up on one side of your cup, since this would be a state of low entropy. There are few ways the milk molecules could configure themselves while making sure they all stayed on that side. The second law predicts that you will basically never see your coffee naturally partition itself like this.
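
Here is a toy version of that counting (my own sketch, not from the post): treat each of N milk molecules as sitting in either the left or right half of the cup, take the macrostate to be "how many molecules are on the left", and use the Boltzmann entropy S = ln W with Boltzmann's constant set to 1.

```python
from math import comb, log

def boltzmann_entropy(n_left, n_total):
    """S = ln W, where W counts the ways of choosing which
    molecules sit in the left half of the cup."""
    return log(comb(n_total, n_left))

N = 100  # a laughably small splash of milk, but enough to see the trend
print(boltzmann_entropy(50, N))   # evenly spread out: S ≈ 66.8
print(boltzmann_entropy(90, N))   # mostly pooled on one side: S ≈ 30.5
print(boltzmann_entropy(100, N))  # all pooled on one side: S = ln(1) = 0
```

The evenly spread macrostate wins comfortably, and the gap only widens as N heads towards the trillions of trillions of molecules in a real cup.
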
Fig. 2: Cups of coffee in different microstates. The little blobs represent molecules of milk.

The Mystery Function of Thermodynamics

When one talks about the Boltzmann entropy, there is a natural transition from treating entropy as a property of a description to treating it as a property of the physics. Different thermodynamic states can be assigned different entropies depending on how many microstates they represent.

Once we stop thinking at all about what is going on with individual atoms, we are left with a somewhat mysterious quantity, S.

The "original" entropy, now known as the thermodynamic entropy, is a property of a system related to its temperature and energy. It was defined by Clausius in 1854, before the microscopic nature of matter was even understood. Back then, not everyone was convinced that atoms were even a thing.

Thermodynamic entropy is what people most commonly mean when they refer to entropy, but, since it is defined without any consideration of the microscopic world, its true meaning is obscured. I hope it’s slightly less obscured for you now.


Hitchhiker’s guide to an infinity-free theory

Quantum field theory is the theoretical framework of particle physics. Without it, we never could have worked out what an atom is made of, understood the forces that govern its content, or predicted the Higgs boson.

But when it was first being established, in the first half of the 20th century, it ran into an apparently fatal flaw: it was plagued with infinities. And infinities don't belong in physics. Following the rules of quantum field theory, you could end up predicting that an electron has an infinite electric charge. Gasp. The resolution of this problem led to a revolution in thinking that now underpins all of particle physics.

I may go into a little bit of maths, but don't worry, it's all easy. Promise.

Infinitely Probable Events

Science is about making predictions, given initial conditions.

If our system is in state A at time 1, what is the probability of it being in state B at time 2?

In particle physics, we read “system” to mean the universe at its most bare-bones fundamental level. The question becomes the following:

At time 1, there exists a given set of particles, each with a particular momentum. What is the probability of finding a new set of particles, each with its own particular momentum, at time 2?

Quantum field theory is intended to be the machinery one can use to answer such a question. A nice simple challenge we can give it is this: given two electrons, hurtling towards each other at momenta p1 and p2, what is the likelihood of them ricocheting off each other and coming out of the collision with the new momenta q1 and q2?

Fig. 1: Feynman diagram of two electrons exchanging a photon.

The easiest (most likely) way for this to happen is shown in fig. 1; this thing is called a Feynman diagram. Electron 1 emits a photon (the particle responsible for the electromagnetic force). This flies over to electron 2 with momentum k, and gets absorbed. We can use the principle of conservation of momentum to uniquely determine k. The principle states that total momentum must be the same at the beginning and end of any event. Applying this to electron 1 emitting the photon, initial momentum = final momentum gives p1 = q1 + k. Rearranging gets us to k = p1 - q1. Since we're given p1 and q1, this equation tells us exactly what k must be.
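
As a sanity check, here is that momentum bookkeeping in a couple of lines (the numbers are made up purely for illustration):

```python
import numpy as np

p1 = np.array([0.0, 0.0, 50.0])   # incoming electron 1 (made-up momentum, in GeV)
q1 = np.array([5.0, -3.0, 48.0])  # outgoing electron 1 (also made up)

# Conservation at the emission vertex: p1 = q1 + k  =>  k = p1 - q1
k = p1 - q1
print(k)  # [-5.  3.  2.] -- the photon's momentum is completely pinned down
```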

Quantum field theory can be used to work out the probability of each individual part of the Feynman diagram. Electron 1 emitting the photon, the photon travelling from electron 1 to 2 with momentum k, and electron 2 absorbing it. This produces the so-called Feynman rules, a translation between parts of the diagram and probabilities of each part taking place. The probability of the entire event can be found by just multiplying probabilities of each component event. The probability of the photon emission, multiplied by the probability of its travel to electron 2, multiplied by the probability of its absorption, gets you the overall probability. Nobel prizes all ’round.

But wait. This is not the only way you can put in two electrons of momenta p1 and p2 and get two electrons out with momenta q1 and q2. There are a number of different ways the two electrons could interact, in order to produce the same outcome. For example, this:

Fig. 2: Feynman diagram of two electrons exchanging a photon which splits into an electron-positron pair on the way.

The photon splits into two new particles, which then recombine to give back the photon. As before, we know exactly what the photon momentum k is, using k = p1 - q1 and the values of p1 and q1 we are given in the problem. But now there is no guiding principle to decide what momenta the electron and the positron in the middle will have. We know that k1 + k2 = k from conservation of momentum, but this is one equation containing two unknowns. Compare it to how we worked out k in the first diagram, where there was only one unknown, so we could use all the other known values (p1 and q1) to get the unknown one (k). If we fix k2 by saying k2 = k - k1, we have one unfixed degree of freedom left, k1, which could take on any value. k1 could even be negative; negative values represent the electron moving in the opposite direction to all the other particles.

k1 is not uniquely determined by the given initial and final momenta of the electrons. This becomes significant when working out the overall probability of fig.2 occurring.

To work out the overall probability, one needs to use the Feynman rules to translate each part of the diagram into a probability, then combine them: the probability of electron 1 emitting the photon, multiplied by the probability of the photon moving to where it splits up, multiplied by the probability of the photon splitting into the electron and positron, and so on. But this time, since the middle electron could have any momentum, one needs to add up the probability of that part for every possible value of k1. There is an infinite spectrum of possible k1 values, so there are an infinite number of ways fig. 2 could occur.

Let's step back for a moment. In general, if there are lots of different events (call them E1, E2, …) that could cause the same overall outcome O to occur, then the probability of O, written prob(O), is

prob(O) = prob(E1) + prob(E2) + …

If there are an infinite number of ways O could occur, then prob(O) becomes an infinite sum of probabilities, and unless the individual probabilities die away towards zero fast enough, prob(O) comes out infinite.

This is what happens with our particles. Since there is an infinite number of momentum values the middle electron could have, there is an infinite number of probabilities that must be added up to get the probability of fig.2 occurring, so the probability of fig.2 is infinite.
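
Here is a toy version of that runaway sum (my own sketch; the numbers have nothing to do with the real Feynman rules). Pretend each allowed slice of k1 contributes some made-up probability that falls off only slowly, and watch what happens as more and more slices are included:

```python
import numpy as np

def toy_contribution(k1):
    # made-up stand-in for "the probability of the diagram with loop momentum k1"
    return 1.0 / (k1 + 1.0)

for n_slices in (10**2, 10**4, 10**6):
    k1 = np.arange(n_slices)  # momentum slices 0, 1, 2, ...
    print(n_slices, toy_contribution(k1).sum())
# 100     -> ~5.2
# 10000   -> ~9.8
# 1000000 -> ~14.4 ... and it keeps climbing however far you go: no finite answer
```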

What could that even mean? A probability should be a number between 0 (definitely won't happen) and 1 (definitely will happen). A prediction of infinite probability renders a theory useless; quantum field theory is doomed. The Higgs boson is a conspiracy invented by the Chinese.

Renormalization or How to ignore all your problems

This wasn't the end of quantum field theory, since there is a way of resolving this problem. Kind of. The solution, or rather the family of solutions, is referred to as renormalization. It comes in many different manifestations, but they all boil down to something along the lines of the following. We pretend that k1, our unconstrained electron momentum, can only have a value below some maximum allowed size we'll call Λ. Then we don't need to add up probabilities from situations where k1 goes arbitrarily high. We're left with a finite range of possibilities, and therefore a finite probability for the whole event. More generally, we can solve all problems like this by making Λ a universal maximum momentum for all particles involved in an interaction. Λ is called a momentum cutoff.
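
Reusing the toy sum from the previous sketch, the cutoff does exactly what you'd hope: refuse to include any slice with k1 above Λ, and every answer is finite, creeping up only slowly as Λ is raised.

```python
import numpy as np

def toy_contribution(k1):
    # same made-up integrand as in the previous sketch
    return 1.0 / (k1 + 1.0)

def total_below_cutoff(cutoff):
    k1 = np.arange(cutoff)  # only momentum slices below the cutoff are allowed
    return toy_contribution(k1).sum()

for cutoff in (10**3, 10**5, 10**7):
    print(cutoff, total_below_cutoff(cutoff))
# 1000     -> ~7.5
# 100000   -> ~12.1
# 10000000 -> ~16.7  -- always finite, growing only logarithmically with the cutoff
```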

This solves the issue: we end up with sensible predictions for all processes. And as long as we make Λ suitably larger than the momenta of the initial and final electrons, the answers match the results of experiments to high precision. But I'll understand if you feel a little unsatisfied by this. How come we can just ignore the possibility of electrons having momentum higher than Λ? To win you over, I'll tell you a bit about what Λ physically means.

In quantum mechanics, an electron is both a particle and a wave. One of the first realisations in quantum mechanics was that the wavelength of an electron wave is inversely proportional to its momentum: wavelength = 1/momentum. A high momentum corresponds to a small wavelength, and vice versa. Ignoring particles with momentum higher than Λ is the same as ignoring waves with wavelength smaller than 1/Λ. Since all particles can also be seen as waves, the universe is made completely out of waves. If you ignore all waves of wavelength smaller than 1/Λ, you're effectively ignoring "all physics" at lengths smaller than 1/Λ.

Renormalization is a "coarse graining" or "pixelation" of our description of space: the calculation has swept details smaller than 1/Λ under the rug.

Making approximations like this has in fact been a feature of all models of nature throughout history. When you're in physics class doing experiments with pendulums, you know that the gravitational pull of Jupiter isn't going to affect the outcome of your experiment, so broadly speaking, long-range interactions aren't relevant. You also know that the exact nature of the bonds between atoms in the weight of your pendulum isn't worth thinking about, so short-range interactions aren't relevant either. The swing of the pendulum can be modelled accurately by considering only physics at the same scale as the pendulum itself; stuff happening on much larger and much smaller scales can be ignored. In essence, you are also using renormalization.

Renormalization is just a mathematically explicit formulation of this principle.

The Gradual Probing of Scales

Renormalization teaches us how to think about the discovery of new laws of physics.

The fact that experiments on the pendulum aren't affected by small scales means we cannot use the pendulum to test small-scale theories like quantum mechanics. In order to find out what's happening at small scales, you need to study small things.

Since particles became a thing, physicists have been building more and more powerful particle accelerators, which accelerate particles to high momenta and watch them interact. As momenta increase, the wavelengths of the particles get smaller, and the results of the experiments probe smaller and smaller length scales. Each time, a bigger accelerator is required to push particles to higher momenta, and each jump is a huge engineering challenge. This race to the small scales has culminated in the gargantuan 27 km ring buried under Geneva called the Large Hadron Collider (LHC). It has achieved particle momenta high enough to probe distances of around 10 zeptometers (a hundredth of a billionth of a billionth of a meter), the current world record.
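
As a very rough back-of-the-envelope check (my numbers; it ignores the fact that only a fraction of the collision energy goes into any single sub-collision), the probing scale follows from wavelength = 1/momentum, which in everyday units reads λ ≈ ħc/E:

```python
hbar_c = 1.973e-16   # GeV * meters (hbar times the speed of light)
lhc_energy = 1.3e4   # GeV, roughly the LHC's 13 TeV collision energy

wavelength = hbar_c / lhc_energy
print(wavelength)    # ≈ 1.5e-20 meters, i.e. around 10 zeptometers
```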

Galileo didn’t know anything about quantum mechanics when he did his pioneering pendulum experiments. But it didn’t stop him from understanding those pendulums damn well. In the present day, we still don’t know how physics works at distances under 10 zeptometers, but we can still make calculations about electrons interacting.

From this point of view, it seems like we absolutely should impose a maximum momentum/minimum distance when working out the probabilities of Feynman diagrams. We don’t know what’s going on at distances smaller than 1/Λ. We need to remain humble and always have in mind that any theory of nature we build is only right within its regime of validity. If we didn’t involve this momentum cutoff, we would be claiming that our theory still works at those smaller scales, which we don’t know to be true. Making such a mistake causes infinite probabilities, which suggests that there is indeed something lurking in those small scales that is beyond what we know now…

The road to the Planck scale

There are currently a bunch of theories about what is going on at the small untested length scales. We can make educated guesses about what scales these prospective new features of nature should become detectable at.

Fig. 3: Length scales.

There has been a fashionable theory floating around for a while called supersymmetry, which says, broadly, that matter and the forces between bits of matter are in a sense interchangeable. It's some well sick theoretical physics that I won't go into here. The effects of this theory are believed to become visible at scales only slightly smaller than the ones we've already tested. It may even be detected at the LHC!

There's a family of theories pertaining to even smaller sizes, called grand unified theories. These claim that if we could see processes at some far smaller scale, many of the fundamental forces would be revealed to be manifestations of a single unified force. The expected scale where this happens is about a trillion times smaller than what we've currently tested, so it will take about a trillion times more energy to probe. Don't hold your breath on that one.

Finally, there's reason to believe that there exists a smallest possible scale. This is known as the Planck length. If any object were smaller than the Planck length, it would collapse into a quantum black hole, then immediately evaporate, removing any evidence of its existence. This is the scale where the quantum nature of gravity becomes important, and if you want to test that, you'll need a particle collider 100 billion billion times more powerful than the LHC.
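
For the curious, the Planck length can be estimated by combining the constants governing gravity, quantum mechanics and relativity; a quick back-of-the-envelope calculation (not in the original post) gives:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, Newton's gravitational constant
hbar = 1.055e-34   # J s, the reduced Planck constant
c = 2.998e8        # m/s, the speed of light

planck_length = math.sqrt(hbar * G / c**3)
print(planck_length)  # ≈ 1.6e-35 meters
```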

If we want to learn about these mysterious smaller scales, we're going to need some mighty big colliders. Perhaps impossibly big. Maybe we need some new innovation which makes the probing of scales easier. Maybe the challenge for the next generation of particle physicists will be a rethink of how we test particle physics altogether.


What’s the Matter with Antimatter?

Why are we here? Ok this is a uselessly vague question; I’ll rephrase. Under what mechanism did the stars, planets, life, and all that come about?

Ask a particle physicist this, and they may be tempted to drag you to the nearest blackboard, write down four lines of maths, and stare at you expectantly as if you're meant to understand what on earth it means. These four lines represent the standard model of particle physics, which is our most up-to-date attempt to mathematically describe the fundamental constituents of matter, and the forces with which they interact. By matter, I mean what makes up everything around us: planets, stars, life, all composed of atoms, which in turn are composed of smaller particles like electrons and quarks. These are the particles that the standard model governs.

Fig. 1: The Standard Model.

The standard model has been highly successful at predicting the outcome of experiments, for example in the Large Hadron Collider (LHC) at CERN, and in fact it has withstood basically every test that has been thrown at it. It explains the nature of matter on a very fundamental level. But it’s actually pretty useless at explaining how all that matter got here in the first place.

So what do these equations have to say about how matter came into existence? I hope you'll agree with me that electrons definitely exist. How is an electron created? The standard model says one can come into existence when, for example, a photon converts into an electron-positron pair [see fig. 2]. In this process another particle, a positron, must necessarily appear at the same place and time. The positron is the electron's antiparticle, meaning it has the same mass as the electron but the opposite charge. Such an event is not the only way electrons can be created, but whatever the process, the electron must always emerge with its oppositely charged sibling. The universe seems to run a two-for-one deal on fundamental particles.

Fig. 2: The only way to get your hands on an electron.

It seems that if the standard model is the true description of reality, there should be just as many positrons in the universe as electrons. We know there are plenty of electrons around today; every atom has a bunch of them in orbit. But positrons are a rather rare spectacle; they appear in cosmic rays but not in many other places. We have arrived at a paradox.

Perhaps there is some way the positrons could have been destroyed but the electrons survived. The only way you can destroy a positron, according to the standard model, is to annihilate it with an electron, e.g. the process in fig. 2, but in reverse.

The only way to create or destroy electrons or positrons is via processes similar to the above. This is not just the case for electrons; the same applies to all particles in the standard model that make up matter.

While a proton is made up of three bound quarks, there can exist anti-protons consisting of three antiquarks. With anti-protons, anti-neutrons and positrons, anti-atoms can form, which, just like atoms, can bind in all the ways necessary to end up with anti-stars, anti-planets, anti-life, the whole shebang. It's all referred to as antimatter, which is a little misleading since it behaves rather a lot like matter, just with the charges reversed.

If we had the same number of antiprotons as protons and antineutrons as neutrons, we would end up with the same number of anti-atoms, anti-stars and anti-planets.

Where is all of this antimatter? In 1998 an experiment (the Alpha Magnetic Spectrometer, or AMS) was flown into space to compare the abundance of helium and anti-helium in cosmic rays. It found no anti-helium among roughly three million helium nuclei.

Perhaps matter and antimatter have become separated somehow. Perhaps we just live in some huge region of space containing matter, and there are antimatter regions elsewhere in the universe.

Could we detect the presence of an antimatter region, say, by finding an anti-galaxy? To detect normal galaxies, we rely on nuclear fusion in their stars to emit light that can be picked up by our telescopes.

Since anti-fusion in anti-stars would emit the same frequencies of radiation as normal fusion, we can't really tell whether a galaxy is made of matter or antimatter. However, we know that galaxy collisions are reasonably common. The collision between a galaxy and an anti-galaxy would result in the annihilation of essentially every particle-antiparticle pair that came into contact: the reverse of figure 2 on a grand scale.

This would be a truly awesome event, making the black hole merger recently detected via gravitational waves look like a sneeze. The energy emitted would be E = mc2, where m is the combined mass of the two galaxies. In a back-of-the-envelope calculation, one simply adds the masses of the two galaxies and multiplies by c2 to find an energy output roughly 1,000,000,000,000,000 times that of your average supernova. Suffice to say we definitely would have noticed this.
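
Here is that back-of-the-envelope calculation sketched out. The inputs are my own order-of-magnitude guesses: roughly 10^11 solar masses of ordinary matter per galaxy, and roughly 10^44 joules for a typical supernova.

```python
c = 3.0e8                        # m/s
solar_mass = 2.0e30              # kg
galaxy_mass = 1e11 * solar_mass  # rough stellar mass of one large galaxy (assumption)
supernova_energy = 1e44          # J, typical supernova output (order of magnitude)

energy = 2 * galaxy_mass * c**2  # E = mc^2 for the combined, annihilating mass
print(energy / supernova_energy) # ~ 4e14, i.e. hundreds of trillions of supernovae
```

Within the sloppiness of those inputs, that lands in the same ballpark as the figure quoted above.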

However you theorise that matter and antimatter are distributed, the inevitable annihilations of objects and anti-objects would produce far more radiation from the skies than we observe. It seems matter rules our universe.

Is it curtains for the standard model? Not quite. Over the years a number of experimental results have emerged that may account for the matter/antimatter imbalance. The first concerned a strange particle called the Kaon. While a proton is made of three quarks bound together, a Kaon consists of just a quark and an antiquark. It's possible for a (neutral) Kaon to spontaneously transform into an anti-Kaon. It was originally thought that the probability of such a transition was the same as that of the reverse, an anti-Kaon turning into a Kaon, so overall the matter/antimatter balance would be preserved.

In 1964 it was discovered that in fact the two probabilities are different; the particles prefer to be Kaons rather than anti-Kaons. The technical expression for phenomena like these is “CP-violation” if you want to do some hardcore technical reading. These types of processes, combined with some extra conditions about thermodynamics in the early universe, could be enough to explain the imbalance we see.

For some time Kaons seemed to be the only particles which could behave in such a CP-violating way. However, in 2001 CP violation made a comeback, when it was found that a whole new family of particles, called the B mesons, was capable of a plethora of such processes.

The imbalance caused by Kaons and B mesons is still nowhere near enough to explain our matter-dominated universe. The race to find more of these processes, and to understand the mysterious underlying mechanisms that cause them, is still on today. Experiments around the world, like the LHCb detector at CERN, are dedicated to this goal. In tandem, theoretical physicists are searching for extensions to the standard model that induce an imbalance of the size we need.

It looks like the search may have only just begun, and could lead us to some new and profound realisations about why the universe is the way it is.
