
Physics for the 21st Century

The Basic Building Blocks of Matter Online Textbook

Online Text by Natalie Roe

The videos and online textbook units can be used independently. When using both, you may start with either one, although watching the video first and then reading the unit from the online textbook is recommended.

Each unit was written by a prominent physicist who describes the cutting edge advances in his or her area of research and the potential impacts of those advances on everyday life. The classical physics related to each new topic is covered briefly to help the reader better understand the research, its effects, and our current understanding of physics.


1. Introduction

The physical universe challenges us over a wide span of distances, ranging over more than 35 orders of magnitude, from subatomic scales (< 10⁻¹⁴ meters) to the dimensions of galaxies (10²¹ meters) and beyond. In recent years, scientists working at both ends of the scale—particle physicists probing the basic building blocks of matter and cosmologists studying the structure of the universe on the largest observable scales—have started to converge on a common picture of how the universe expanded from a hot, dense “particle soup” shortly after the Big Bang to form galaxies, stars, and planets. Impressive as this “cosmic convergence” is, important questions still remain: Is there a Higgs particle responsible for giving particles mass? What is the nature of the dark matter that dominates the mass in galaxies, including our own Milky Way? And, why is a mysterious force dubbed dark energy causing the expansion of the universe to speed up? To address these questions, physicists have planned a variety of experiments that use accelerators, telescopes, and detectors deep underground. They hope to find some of the answers in the next decade.

Figure 1: Fundamental particles of the Standard Model.
Source: © Wikimedia Commons, License: CC 3.0 Unported. Author: MissMJ, 27 June 2006.

Particle physicists have already made significant progress in understanding the subatomic end of the scale. They have enshrined their discoveries in the Standard Model of particle physics. This theory is so apparently perfect that no crack has yet appeared despite experimentalists’ best efforts to devise ever-more precise tests. Yet, at the same time, it is so fatally flawed as to convince theorists that behind the Standard Model must lie a better theory that encompasses and expands upon it.

Perhaps the most significant hints that the Standard Model is incomplete come from the evidence for dark matter and dark energy, which remain completely mysterious. We shall learn about that evidence and the theoretical problems it causes in Units 10 and 11. But even before these cosmological clues surfaced, observations of the behavior of particles called neutrinos and theoretical problems in extending the Standard Model to much higher energies had suggested that something was missing. Literally thousands of theoretical papers in the literature propose everything from string theory to extra dimensions and from supersymmetry to multiple universes as remedies for the Standard Model’s known flaws. The Large Hadron Collider (LHC) at the CERN laboratory in Geneva, Switzerland—the highest-energy particle accelerator ever built—will put the Standard Model to its most rigorous tests ever and tell us which, if any, of the many theories beyond the Standard Model bear any resemblance to reality. This unit details the discoveries of successive subatomic particles and what each contributed to the Standard Model.

2. The First Subatomic Particles

The Large Hadron Collider (LHC) is the culmination of a long and illustrious tradition of crunching particles together to figure out their components. Skeptics have likened the process to smashing a delicate Swiss-made watch to find out how it works. Nevertheless, this brute force approach has worked remarkably well.

 

Figure 2: Inside the LHC tunnel during construction.
Source: © Wikipedia Commons, GNU License. Author: Juhanson. 19 October 2004.

As accelerators have become ever bigger and more powerful during the past century, they have given physicists two advantages. First, the more energetic the accelerated particle, the more deeply it can probe into the structure of matter. Second, the relationship between mass and energy that Albert Einstein formulated in his famous equation indicates that higher-energy collisions can produce more massive particles. With each advance in accelerator technology, therefore, new energy frontiers have delivered dramatic new discoveries and opened up new conceptual frontiers. See The Math on Einstein’s Equation below in Section 9.

A particle accelerator uses an electric field to propel electrically charged particles in a desired direction. An electron accelerated across a potential of one volt acquires a kinetic energy of one electron-volt (eV). In the LHC, an oscillating electric field accelerates two hair-thin beams of protons to 7 trillion electron volts (7 TeV). Superconducting magnets direct the beams in a circular path with a circumference of 27 kilometers. The two beams of protons race around the ring in opposite directions at 0.999999991 times the speed of light. When two protons collide, they have a center-of-mass energy of 14 TeV. The total energy stored in the two beams is equivalent to the energy released by about 173 kilograms of TNT (trinitrotoluene).
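The arithmetic behind those last figures is easy to check. Below is a minimal sketch; the bunch count and protons-per-bunch are nominal LHC design values assumed for illustration, not numbers given in the text.

```python
# Back-of-the-envelope check of the beam-energy figures quoted above.
EV_TO_JOULES = 1.602e-19      # one electron-volt expressed in joules
PROTON_ENERGY_EV = 7e12       # each proton carries 7 TeV (from the text)
BUNCHES_PER_BEAM = 2808       # nominal LHC design value (assumed, not in the text)
PROTONS_PER_BUNCH = 1.15e11   # nominal LHC design value (assumed, not in the text)
TNT_JOULES_PER_KG = 4.184e6   # energy released by one kilogram of TNT

protons_per_beam = BUNCHES_PER_BEAM * PROTONS_PER_BUNCH
energy_per_beam = protons_per_beam * PROTON_ENERGY_EV * EV_TO_JOULES  # joules
total_energy = 2 * energy_per_beam        # two counter-rotating beams

print(f"energy stored per beam: {energy_per_beam / 1e6:.0f} MJ")
print(f"TNT equivalent of both beams: {total_energy / TNT_JOULES_PER_KG:.0f} kg")
# -> about 360 MJ per beam and roughly 173 kg of TNT, matching the figure above
```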

The earliest accelerators

We can trace the lineage of the LHC back to an accelerator that was basically a primitive version of the cathode ray tube in an old-fashioned television set. The early experiments with simple accelerators like this led to an increasingly sophisticated understanding of the structure of the atom. In doing so, they provided a blueprint for a method of discovery that generations of Nobel Prize-winning physicists have used ever since.

Figure 3: Thomson used the cathode ray tube in three different experiments.
Source: © Wikimedia Commons, Public Domain.

Physicists applied the first accelerators to understanding, and then using, the mysterious forms of radiation that were first detected in the 1890s. English physicist J.J. Thomson used an evacuated glass tube fitted with an anode and a cathode to show that the cathode rays streaming from the negative electrode were actually particles with negative electric charge. Further studies indicated that these electrons had very small masses compared with that of the hydrogen atom. Thomson theorized that an atom resembled a plum pudding, with electrons distributed throughout a uniform, positively charged sphere.

A student of Thomson’s, New Zealander Ernest Rutherford, extended the study of atoms by firing alpha rays emitted in certain radioactive decays at thin gold foil. He concluded that the mass of an atom was concentrated in a very small region, which he called the “nucleus,” surrounded by a cloud of electrons. Alpha rays turned out to be helium nuclei. Rutherford estimated the diameter of the nucleus to be less than 10⁻¹³ meters, compared with the atomic size of about 10⁻¹⁰ meters (1 Ångström). More recent measurements give values for the nucleus that range from about 10⁻¹⁴ meters to 10⁻¹⁵ meters depending on the atomic number.

Modified by Danish physicist Niels Bohr’s application of the principles of quantum mechanics that we shall meet in Unit 5, the atomic model led directly to our modern view of the atom: a nucleus consisting of protons and electrically neutral neutrons (discovered in 1932), surrounded by a swarm of electrons, equal in number to the protons. This is a remarkably simple system. By taking different combinations of just three constituents—protons, neutrons, and electrons—we can account for all the elements seen in nature.

An organizing principle

Atomic theory also explained the physics underlying the structure of the periodic table, which Russian chemist Dmitri Mendeleev had first proposed in 1869. The table provided an organizing principle, whose power was shown by the discovery of the noble gases. Long before Rutherford and Bohr explained the underlying structure of the atom, gaps in the periodic table had enabled chemists to predict where new elements might be found. For example, in 1894 Sir William Ramsay and John Strutt, Lord Rayleigh, discovered a new gas in ordinary air. They named it “argon” after the Greek word argos, or “lazy one,” because it did not interact readily with other elements. Argon was assigned a place according to its atomic weight, where it stuck out like a sore thumb without any obvious neighbors with similar properties. This prompted chemists to search for other nonreactive gases. Within the next five years, they discovered the noble gases helium, krypton, radon, neon, and xenon.

Figure 4: The periodic table of elements.
Source: © Lawrence Berkeley National Laboratory.

The search for new elements continues even today, still based on Bohr’s atomic model. Uranium, the heaviest element that naturally occurs on Earth, has an atomic number of 92, meaning that it contains 92 protons and 92 electrons. In 1940, a team at the Lawrence Berkeley Laboratory led by Ed McMillan produced the first transuranic element. Named, like uranium, after one of the outermost planets, neptunium had an atomic number of 93.

In the subsequent 20 years, physicists using Berkeley’s 60-inch cyclotron to create intense beams of slow neutrons created 10 more transuranic elements, with atomic numbers 94 through 103. The elements were mostly named for people and places connected to physics research. Starting in the 1960s, groups in Russia and Germany joined the hunt, creating the next eight transuranic elements. In 2006, a research team working in Dubna, Russia, announced the indirect detection of three nuclei of element 118. This discovery still awaits confirmation and an official name from the International Union of Pure and Applied Chemistry.

3. The Particle Zoo in Cosmic Rays

The satisfyingly simple view that all matter consisted of three subatomic particles—electrons, protons, and neutrons—did not last long. A veritable zoo of new subatomic particles began to emerge in the 1930s when physicists started to study cosmic rays. These are particles produced by nature’s accelerators: energetic protons from the Sun, neutron stars, supernovae, and extra-galactic sources. The particles impinge on our upper atmosphere, collide with the nuclei of oxygen or nitrogen, and produce showers of newly created particles. Although most of these newly created particles have relatively short lifetimes, the effects of special relativity allow many of them, traveling at extremely high speeds, to reach the Earth before they decay. This effect, which physicists call “time dilation,” increases with the particle’s speed and is described by the Lorentz factor γ = 1/√(1 − v²/c²). See The Math below in Section 9.

Figure 5: Carl Anderson, Paul Dirac, and a positron track observed in a cloud chamber.

To detect cosmic rays, physicists relied on cloud chambers—sealed compartments filled with vapor that is cooled and kept very near the dew point. Charged particles passing through the vapor create tracks of ionization and cause tiny droplets to condense. The vapor in the cloud chamber reveals the particles’ track, much as the contrails behind a jet show the path of an airplane. By applying an external magnetic field to bend the tracks, physicists gleaned more clues about the particles’ momentum and charge.
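The link between a bent track and the particle's properties is the standard relation between momentum and radius of curvature, not spelled out in the text but useful to keep in mind:

```latex
% Momentum from track curvature: a charged particle of charge q moving
% perpendicular to a magnetic field B follows a circle of radius r with
\[
  p = qBr
\]
% The direction of bending gives the sign of the charge, and a straighter
% track (larger r) means higher momentum.
```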

California Institute of Technology physicist Carl Anderson started the riot of discovery in 1932. He identified a stable, positively charged particle, called the positron, in a cloud chamber. The find came four years after English theorist Paul Dirac had predicted the existence of antiparticles. Working on the relativistic equation of motion for the electron, Dirac found a mysterious second solution with negative energy. The correct interpretation, he postulated, was a particle with the same mass as the electron but the opposite charge. In other words, the positron is the electron’s antiparticle. When a positron and an electron meet, they annihilate each other with a flash of energy in the form of radiation—another demonstration of Einstein’s equation, E = mc².

Dirac later speculated about the existence of other worlds made of antimatter that ought to exist if the laws of physics were completely symmetric with respect to matter and antimatter. As we shall see later in this unit, this was prescient speculation. It has spurred experiments that still continue today.

An astonishing new particle

The existence of antimatter was a shocking development that many scientists and nonscientists found difficult to accept, even though theorists could readily accommodate the positron. But the next particle to be discovered, the muon, really came out of left field. Discovered in 1936, also in a cloud chamber experiment, it behaved like an electron but had about 200 times more mass. “Who ordered that?” asked the Nobel Prize-winning Columbia University physicist I.I. Rabi.

 

Figure 6: The muon’s most common decay mode. Source: © Wikimedia Commons, Public Domain. Author: Thymo, 6 April 2009.

Studies showed that the muon is long-lived, decaying in about a microsecond. That makes it one of the most common particles from cosmic ray showers that survive all the way to the Earth before decaying. The particle was actually the first member of a second generation of Standard Model particles to be discovered, although it would take decades for physicists to appreciate that fact. A “generation” is a family of related subatomic particles; the first generation consists of particles that do not decay, such as the electron. We shall meet more of the second and further generations later in this unit.

About ten years after the discovery of the muon, photographic emulsions taken of cosmic rays revealed the particles called pions and kaons. Experimentalists had eagerly sought the pion, to fulfill the prediction of Japanese physicist Hideki Yukawa. It stemmed from his effort to understand why the electrical repulsion of all the protons packed into a tiny space did not tear apart atomic nuclei. Yukawa postulated the existence of a short-range strong nuclear force, attractive between two protons, which could overcome their electrostatic repulsion. As the carrier of that force, he proposed the pion, with a mass about one-sixth that of the proton. The discovery of the pion confirmed the existence of this new force, as we shall see in Unit 2.

Figure 7: Pions play an important role in explaining why atomic nuclei do not split apart.

On the other hand, nobody predicted the kaon, whose unusual behavior quickly earned it the nickname “the strange particle.” (Theorists later formalized the concept of strangeness; it applies to particles such as the kaon that decay more slowly than expected.) Since pions and kaons have masses intermediate between those of the electron and the proton, scientists called them mesons, from the Greek mesos, for “medium.” The electron and muon were named leptons, from the Greek leptos, or “thin.”

4. From Cloud Chambers to Bubble Chambers

Physicists became impatient waiting for cosmic rays to produce the rare events that led to new discoveries. So after World War II, research shifted to national laboratories where accelerators were built to produce intense beams of energetic protons. To record the particles and their decay tracks, physicists built large bubble chambers. These liquid versions of cloud chambers recorded thousands of photographs of particle tracks.

Figure 8: An abandoned bubble chamber at Fermilab.
Source: © Fermilab.

The new accelerators represented greatly improved versions of the crude accelerators that J.J. Thomson and Ernest Rutherford had used in their pioneering studies of atomic structure. Those original instruments had a significant disadvantage: The naturally produced alpha and beta particles that provided the projectiles for the accelerators had relatively little energy. In 1927, Rutherford upped the ante by calling for ways of creating “a copious supply” of higher-energy particles. Ernest Lawrence, a young physics professor at the University of California, Berkeley, found a unique way to take up the challenge. It involved a circular device in which a magnetic field confined particles to orbiting in a horizontal plane while an alternating electric potential applied to each half of the circular plane would give the particles an energy boost twice per orbit. This ingenious technique avoided the use of very high voltages—an achievement both difficult and dangerous. Instead, it applied a modest voltage many times.
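The reason a fixed-frequency voltage can stay in step with the circulating particles is the standard cyclotron resonance condition, not given in the text: for speeds well below the speed of light, the orbital frequency does not depend on the particle's energy.

```latex
% Cyclotron resonance condition for a non-relativistic particle of charge q
% and mass m in a magnetic field B: the orbital frequency is
\[
  f = \frac{qB}{2\pi m}
\]
% Because f does not depend on the particle's speed or energy, a voltage
% alternating at this fixed frequency keeps giving the particles a push at
% just the right moment on every half-turn as they spiral outward.
```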

The first cyclotron built by Lawrence and his student M. Stanley Livingston measured 4.5 inches in diameter. As soon as they proved that it worked, they built a larger version. With a diameter of 11 inches, this accelerated protons to energies of more than one million electron volts. Eventually, Lawrence founded the Radiation Laboratory at Berkeley (now the Lawrence Berkeley National Laboratory) and oversaw the construction of ever-larger cyclotrons. That group of devices, which included an accelerator called the Bevatron, led to the discovery of new mesons, enabled the first detection of the antiproton, created transuranic elements, and even provided beams of particles for cancer treatment.

New species for the particle zoo

The Bevatron at Berkeley and the Cosmotron at Brookhaven National Laboratory on Long Island led the way to the new surge of discovering subatomic particles. Reaching full power in 1953, the Cosmotron became the first particle accelerator to give single particles kinetic energies of more than 1 giga-electron volt (GeV, or 10⁹ electron volts). Once it started operation in 1954, meanwhile, the Bevatron accelerated protons to energies up to 6.2 GeV and smashed them into a fixed metal target.

Figure 9: The first cyclotron, the Bevatron, and particle tracks. Source: Cyclotron: © Lawrence Berkeley National Laboratory, courtesy AIP Emilio Segre Visual Archives, Bevatron and Particle Tracks: © Lawrence Berkeley National Laboratory.

The studies added several new species to the particle zoo, with names like sigma (Σ), cascade (Ξ), and delta (Δ). Since these particles were heavier than the proton, physicists dubbed them baryons (meaning heavy ones in Greek). The research also revealed particles of different electrical charges—positive, negative, and neutral—with the same mass and decay properties, suggesting that they were members of a family. Physicists even identified a Δ++ particle that had a charge of +2 (i.e., twice the proton charge)!

The situation now resembled that faced by chemists before the advent of the Rutherford-Bohr model of the atom. To impose some order, physicists followed Dmitri Mendeleev’s example and constructed tables that organized the eight known mesons and nine known baryons according to their electric charges and amounts of strangeness (as determined by the number of kaons in the decay chain). They plainly needed a new theory to find the underlying symmetry in this particle zoo.

Three fundamental building blocks

In 1964, theorists Murray Gell-Mann and George Zweig independently suggested that all of the observed mesons and baryons could be constructed from just three fundamental building blocks, which Gell-Mann called “quarks.” The pair regarded these quarks as mathematical constructs that were useful for explaining the observed data, but not necessarily as fundamental particles corresponding to physical reality.

The model postulated that the three types, or flavors, of quark—that physicists named up, down, and strange—had fractional electric charges. It assigned the up quark a charge of +2/3 (two-thirds of the charge on the proton), and the down and strange quarks charges of -1/3 (one-third of the electron’s charge). All baryons, the model suggested, consisted of three quarks, combined in such a way that they have integral or zero electric charge. Protons, for example, contained two up quarks and a down quark, providing a net electric charge of +1. Neutrons stemmed from one up and two down quarks, netting out at zero charge.

Mesons, meanwhile, were created from just two constituent quarks. They gained their integral electric charges by combining quarks and anti-quarks. Anti-quarks are quarks’ antimatter partners; they have the opposite electric charge and bear the same relation to quarks as positrons to electrons. For example, the pi+ consisted of an up quark and an anti-down quark with a charge of +1; the pi-zero stemmed from an up and an anti-up (or down and anti-down) quark; and the pi- from a down quark and an anti-up quark. And if you wanted kaons, you simply changed the down quarks to strange quarks.
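As a check on these assignments, the charge bookkeeping for a few of the combinations described above works out as follows (a worked example added here for illustration):

```latex
% Charge bookkeeping for the quark combinations described above
\[
\begin{aligned}
\text{proton } (uud)&: \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1\\[2pt]
\text{neutron } (udd)&: \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0\\[2pt]
\pi^{+}\ (u\bar{d})&: \tfrac{2}{3} + \tfrac{1}{3} = +1
\end{aligned}
\]
% An anti-quark carries the opposite charge of its quark, so the anti-down
% quark contributes +1/3.
```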

Table 1: How quarks create baryons.
Quark 1 Quark 2 Quark 3 Baryon
up up down proton
up down down neutron
up down strange lambda

 

Elegant in its simplicity, the theory echoed the atomic model that had posited the proton, neutron, and electron as the basic building blocks for more than 100 different elements. The quark model saw the proton and neutron as no longer fundamental but composite particles created from quarks. The model accounted for the entire particle zoo by combining three types of quarks and anti-quarks in all possible allowed combinations.

However, one combination had so far defied observation: the tenth baryon, constructed from three strange quarks, that Gell-Mann dubbed the “Omega minus (Ω⁻).” Just as a gap in the periodic table suggested an element waiting to be discovered, the prediction of the quark model set off a search to find the missing baryon. Within the year, it culminated in the discovery of the Omega minus in Brookhaven National Laboratory’s 80-inch bubble chamber. Just like the periodic table, the quark model had predictive power.

Figure 10: The periodic table for heavier mesons and baryons. Source: © Wikimedia Commons, GNU license version 1.2. Authors: Laurascudder, 2007 (Meson octet and Baryon decuplet) and Dr_Eric_Simon, 2006 (Baryon octet).

 

 

Despite this triumph, most physicists still did not believe that quarks really existed. Rather, they merely provided a useful artifice to explain the pattern of particles observed in nature. That opinion gained strength when experimentalists failed to find fractionally charged particles. But a new and powerful electron accelerator in California overturned that view.

5. The Discovery of Quarks

The accelerators at Berkeley and Brookhaven were designed to accelerate protons. Physicists at Stanford University had a different idea: an electron accelerator. After all, they reasoned, the proton was not a fundamental particle. And because the electron appeared to have no substructure, it should make a cleaner probe. So Stanford designed and built several generations of linear electron accelerators, culminating in the Mark III accelerator, which grew to over 300 feet in length.

 

Then in 1951, a diminutive firebrand named Wolfgang Panofsky arrived from Berkeley, after refusing to sign the McCarthy-era loyalty oath required by the state of California. Panofsky led the Stanford faculty in developing a proposal to construct a new two-mile-long linear accelerator, dubbed Project M—for Monster. In 1962, the Atomic Energy Commission provided $114 million to build the Monster under the more benign name of the Stanford Linear Accelerator Center (SLAC). Four years later, the linac (for linear accelerator) began accelerating intense beams of electrons up to energies of 20 billion electron volts.

Figure 11: Overview of the Stanford Linear Accelerator Center. Source: © SLAC National Accelerator Laboratory historical photo index.

A beam switchyard at the end of the linac directed the beam to different experimental areas, or end stations, much like a railroad switchyard. In End Station A, an enormous version of Rutherford’s scattering experiment used liquid hydrogen and deuterium (or heavy hydrogen) as targets. And just as Rutherford had discovered a small hard nucleus that occasionally caused an alpha particle to scatter at a large angle or even backwards, researchers at SLAC observed electrons scattering at wide angles much more frequently than expected. By the early 1970s, detailed analyses of the distribution of the scattered electrons measured in the giant magnetic spectrometers in End Station A revealed three scattering centers within the nucleon—the first experimental evidence that quarks were in fact real. Physicists Jerome Friedman, Henry Kendall, and Richard Taylor received the Nobel Prize for this discovery in 1990.

Unfortunately, physicists can’t take the next step of observing isolated individual quarks. The reason: a property known as color confinement. If you try to pluck a single quark out of a proton, a new quark-anti-quark pair will suddenly pop out of the vacuum; it turns the single quark into a hadron and shields its nakedness from view. Particles called gluons bind the quarks together and play the same role in strong interactions that the photon plays in electromagnetic interactions. We shall discuss this in more detail in the next unit.

 

Rapid development of quark theory

Although quarks remain locked inside composite particles, the confirmation of fractionally charged constituents inside the neutron and proton set the stage for rapid development over the next two decades. The three flavors of quark—up, down, and strange—were soon augmented by the discovery of a fourth. In 1974, two scientific teams almost simultaneously discovered the so-called “charm quark,” in the form of a meson made up of a charm and an anti-charm quark. The fact that the teams used entirely different approaches to the discovery gave the find added credibility.

Figure 12: Computer reconstruction of a psi-prime decay in the SLAC Mark I detector. Source: © SLAC National Accelerator Laboratory.

 

A team at SLAC headed by Burton Richter caused collisions between beams of electrons and their antiparticles, positrons, creating showers of particle-antiparticle pairs. The SLAC team tuned the beam energy, watching for any change in the number of particles produced in the collision. The new meson revealed itself as a huge spike, called a resonance, in the probability of interactions between particles. The resonance appeared when the energy produced in the collision was near the new meson’s mass. The other group, led by Samuel Ting of MIT, took a different tack. They fired protons onto a fixed target at Brookhaven National Laboratory and identified the meson’s signature against the background of other particles.

Intriguingly, the two teams first gave the new meson different names. The SLAC physicists called it the “psi particle” because one of its characteristic decay modes produced four particles that curved in their detector’s magnetic field to look like the Greek letter psi. Ting took an equally symbolic approach. He chose the name “J,” owing to the similarity in shape between that letter and the ideogram for his Chinese name. Once they realized that they had discovered the same particle, the two teams agreed to name it “J/psi.”

 

At this point in the story, the fundamental constituents of matter were once again manageable in number. There were two generations of particles, each consisting of a lepton with charge -1 and two quarks with charges +2/3 and -1/3. The first generation has the three fundamental building blocks of entirely stable matter: the electron and the up and down quarks. The second generation consists of the muon and the charm and strange quarks. All are unstable and eventually decay into particles of the first generation. Why does a second generation exist? This remains a mystery that has only deepened with the discoveries that followed.

More surprising particles

Surprises continued throughout the 1970s. The next was the third lepton, the tau (τ, from the first letter of the Greek word for “third”). SLAC made the find within a couple of years after the discovery of the charm quark. Initially, the tau lepton confused the situation by making it much more difficult for experimenters to understand the detailed properties of mesons containing a charm quark. Eventually, however, the story fell into place. It became clear that the electron and muon had a third, much heavier cousin. While the muon is about 200 times heavier than the electron, the tau is about 3,500 times more massive. This immediately raised the question of whether a third generation of quarks exists, setting off another of those rushes to be the first to discover the missing puzzle pieces that were so clearly waiting to be found.

Experimenters at the Fermi National Accelerator Laboratory (Fermilab) near Chicago sought evidence of the bottom quark using a newer, more powerful accelerator that delivered higher-energy protons. They looked for it among the particles produced in proton collisions with a stationary target, a process known as “bump hunting.” The resonances appear as small bumps in the probability of particles being produced in a collision, and identifying the bumps requires careful statistical analysis.

 

Figure 13: Aerial view of the Tevatron at Fermilab.
Source: © Fermilab.

The Fermilab team searched for a resonance bump that would reveal the existence of the meson known as the “upsilon,” consisting of bottom and anti-bottom quarks. After a false alarm due to statistical fluctuations that became known as the “Oops-Leon,” the team led by Leon Lederman was finally successful in discovering the upsilon.

Tracking down the top quark

The existence of the sixth quark, known as the “top quark,” was now all but a certainty. Several groups around the world built accelerators that theorists regarded as energetic enough to produce and detect it, but not until 1995 did the top quark finally reveal itself. The Tevatron revealed it by producing top-anti-top quark pairs. Measurements showed that the top quark is about as heavy as a nucleus of gold. That’s 40 times more massive than the bottom quark.

Figure 14: The Collider Detector at Fermilab (CDF).
Source: © Fermilab.

If creating enough energy to produce the top quark presented a huge challenge, so did identifying it. The top quark decays immediately to a bottom quark, which then usually decays to a charm quark. That, in turn, usually decays to a strange quark. These quarks are “clothed” as mesons, and the decay chain produces a variety of particles that finally live long enough to be seen inside the enormous detectors built around the collision point. Physicists must reconstruct the decay chain in order to determine if it reveals a top quark rather than a random combination of unrelated particles. Digging this rare signal out of the much noisier background caused by random combinations was a major success of the Fermilab program. It put the capstone on the Standard Model of fundamental particles.

The discovery of the sixth quark also completed the three families of quarks. It still leaves some unanswered questions, however. Why three families, when only the first generation of up and down quarks is necessary for ordinary matter? What does the pattern of masses mean, especially the very heavy top quark? And is there a fourth generation of quarks and leptons? Numerous searches have failed to find one, indicating that it must be very heavy if it exists. And evidence coming from the neutrino sector indicates that there are probably only three generations of quarks and leptons, as we shall now explain.

6. The Little Neutral Ones: Neutrinos

We have leapfrogged ahead in our story, ignoring an important but easily overlooked particle: the neutrino. Austrian theorist Wolfgang Pauli first proposed the concept of neutrinos in the 1930s to explain a puzzling feature observed in nuclear beta decay. A beta decay happens when a neutron in the nucleus converts (decays) into a proton, an electron, and (for reasons outlined below) an anti-neutrino. The proton remains bound to the nucleus by the strong nuclear force, but the electron and the anti-neutrino escape as radiation. These radioactive decays emit a negative beta ray (that is, an electron). As a result, the nucleus gains one unit of positive charge, which transforms it into the next element in the periodic table. Because energy is conserved, the electron should carry off a well-defined amount of kinetic energy corresponding to the mass difference between the two nuclear states. However, the emitted electrons did not exhibit a sharp peak in energy; instead, their measured energies spread over a broad range, rather than taking the single value expected if the electron were the only emitted particle.
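Written as a reaction, a compact restatement of the process just described, the beta decay of a neutron is:

```latex
% Beta decay of a neutron, with the decay energy shared between the
% electron and the anti-neutrino
\[
  n \;\to\; p + e^{-} + \bar{\nu}_{e}
\]
% Because the released energy is shared among the decay products, the
% electron alone does not emerge with a single, well-defined energy.
```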

Figure 15: Beta decay spectrum: The puzzling process explained by the detection of the neutrino.
Source: © Michelle Leber, 2009.

It appeared that the sacrosanct principle of energy conservation was violated in beta decay. Niels Bohr even suggested that perhaps energy conservation did not hold inside the nucleus. Pauli offered an alternative suggestion: An undetected, electrically neutral particle could be emitted, so that it and the emitted electron could share the energy of the decay process between them.

At the time, nobody regarded this ghost particle explanation as satisfactory, though it was certainly better than Bohr’s alternative. But in 1932, British physicist James Chadwick discovered the neutron and confirmed that the electrons emitted in beta decay do not have a well-defined energy. Chadwick’s work prompted the great Italian physicist Enrico Fermi to write down what turned out to be the correct theory of beta decay: A neutron decays into a proton, an electron—and a ghost. Fermi named the ghost a “neutrino.” This particle possessed no mass and no charge, and hardly ever interacted—just like Pauli’s ghost particle. Fermi’s theory worked not only for beta decay, but also for a variety of other processes with missing energy, including decays of pions and muons. The process would later be called the weak interaction, because of the very low probability that it would occur.

 

Neutrinos detected

Inverse beta decay is the most commonly used process to detect electron neutrinos.

Physicists did not directly detect the neutrino until 1956, using the standard technique of fixed-target scattering that had previously led to the discoveries of the nucleus and, later, the quark. In this case, the challenge was not to probe inside the target but to detect the neutrinos themselves, which could be discovered only through the products of their scattering interactions. A single neutrino with 1 GeV of energy will travel, on average, through one million Earths before interacting; so to catch one in the act requires both a copious source of neutrinos and a massive detector to increase the odds. Frederick Reines and Clyde Cowan Jr. designed an experiment to do just that. They used a large water tank located next to the Savannah River nuclear reactor in South Carolina, which produced a flux of about 10¹²–10¹³ neutrinos per square centimeter per second. Reines and Cowan looked for evidence of the “inverse beta decay reaction” that occurs when a neutrino interacts with a proton, producing a neutron and a positron: ν̄ₑ + p → n + e⁺. See The Math in Section 9 below.

 

Figure 17: Aerial view of South Carolina’s Savannah River nuclear reactor. Source: © NASA, visibleearth.nasa.gov.

In the water tank, the positron will immediately annihilate with an electron, emitting two photons, each with the same characteristic energy. Cadmium dissolved in the water absorbs the neutron and undergoes gamma decay, which emits a third photon with a different energy a few microseconds later. Reines and Cowan devised a way of distinguishing this characteristic signature—two photons of the same energy, followed by a third photon at a different energy—from the many accidental background coincidences caused by cosmic rays and other extraneous signals. Despite the huge flux of neutrinos, they observed only a handful of events per day. So as a check, they verified that the signal went away when the reactor was turned off. Technically, the pair discovered the anti-neutrino. However, as we shall see later in this unit, certain types of neutrinos may be identical to their anti-neutrinos.

Many open questions

This experiment conclusively established the existence of the elusive neutrino, but many open questions remained. It would take several more decades of challenging experiments using neutrinos from reactors, cosmic rays, the Sun, and accelerators to establish the existence of three different kinds, or flavors, of neutrinos, corresponding to the three different types of lepton: electron neutrinos, muon neutrinos, and tau neutrinos. All three neutrino flavors are light in mass. Indeed, they were originally assumed to be massless.

 

Figure 1: Fundamental particles of the Standard Model.

A positron-electron collider called LEP at the European Organization for Nuclear Research (CERN) played a critical role in putting neutrinos into the broad context of the Standard Model. CERN scientists studied the “invisible” decays of the Z boson—the neutral carrier of the weak force that we shall meet in the next unit—to a pair of neutrinos. The observed rate of these decays showed that only three generations of light neutrinos exist. This important result suggests that there are only three generations of particles in the Standard Model, organized in the “periodic table” of fundamental particles shown in the accompanying figure.

This is the happy family of quarks and leptons that all particle physicists know and love. But in it, there lurked a big surprise in the neutrino sector. Some call it the first evidence of physics beyond the Standard Model.

The evidence first showed up in experiments conducted deep underground in South Dakota’s Homestake Gold Mine, away from cosmic ray backgrounds, to detect the neutrino flux from the Sun. This started as a way to study the properties of the Sun, by monitoring the neutrinos from the nuclear reactions that power the Sun’s energy. The initial experiments, pioneered by Raymond Davis Jr. of Brookhaven National Laboratory, reported far too few neutrinos. The shortfall wasn’t trifling. Davis detected only one-third as many neutrinos as expected.

Figure 19: Drawing of the underground Brookhaven Solar Neutrino Observatory.
Source: © Courtesy Brookhaven National Laboratory.

This discrepancy spurred new questions—was the solar model wrong, or was something strange going on with neutrinos? It also generated new types of experiments to unravel the puzzle. Studies that used neutrinos produced in the decay of cosmic rays provided the surprising answer: Neutrinos could change from one flavor into another.

A Japanese-led experiment called Super-Kamiokande showed that the flux of muon neutrinos from cosmic rays differed depending on whether the detected neutrinos were moving down or up. Upward-moving neutrinos are produced by cosmic rays that impact the atmosphere on the opposite side of the Earth to the detector. They travel all the way through the Earth before being detected. That gives them more time to change flavors. During this time, about half of the muon neutrinos had changed into tau neutrinos. The same effect explained the reduced neutrino flux from the Sun: Electron neutrinos produced in the Sun were changing into muon and tau neutrinos before they reached the Earth. Since the early solar neutrino experiments were sensitive only to electron neutrinos, they could not detect the two-thirds that had mutated. Later, more sophisticated experiments sensitive to all three flavors of neutrinos confirmed that all three types of neutrinos can change, or oscillate, into one another.

The mass of neutrinos

Physicists had already observed this type of mixing behavior in neutral mesons, but they had no reason to expect it in neutrinos. After all, the Standard Model assumed that neutrinos had no mass. However, oscillation between neutrino flavors, which means that individual neutrinos change their identities, is theoretically possible only if different flavors of neutrinos have different masses. Physicists still do not know the absolute mass scale of neutrinos, but they have measured the mass differences between pairs of neutrino flavors through careful study of their oscillation properties. These differences are very tiny, suggesting that neutrinos may be a million times lighter than the electron. Now theorists face the challenge of explaining why nature should have given neutrinos such minuscule masses.
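To see why oscillation experiments measure mass differences rather than the masses themselves, it helps to look at the standard two-flavor approximation for the oscillation probability (a textbook formula not derived in this unit):

```latex
% Two-flavor neutrino oscillation probability (standard approximation)
\[
  P_{\alpha\to\beta} = \sin^{2}(2\theta)\,
  \sin^{2}\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV^{2}}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
\]
% theta is the mixing angle between the two flavors and Delta m^2 is the
% difference of their squared masses. Measuring P as a function of the
% travel distance L and energy E therefore pins down Delta m^2 -- a mass
% difference -- but not the absolute masses.
```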

Figure 20: The Sudbury Neutrino Observatory led to the discovery of neutrino mass.

Experiments to make more accurate measurements of neutrinos’ mass differences and their mixing rates are under way in several countries. Some use nuclear reactors as the sources of neutrino beams. Others rely on neutrinos produced in accelerators by the decay of a secondary beam of mesons produced when high-energy protons smash into a target. The results of both types of studies may make it experimentally feasible for the next generation of projects to look for CP violation in neutrinos—a phenomenon that we shall explain in the next section.

Some experimenters are trying to pin down the absolute mass scale of neutrinos by making precision measurements of the highest-energy electrons emitted in beta decay. These experiments are performed on large and small scales, using a spectrometer as large as a house, or making careful measurements of a single atom as a neutron in its nucleus decays. Other experimenters are trying to measure the absolute mass scale of the neutrino through a process called “neutrino-less double-beta decay.” In this phenomenon, two beta decays occur simultaneously; the neutrino emitted in one decay is absorbed in the second, so that only two electrons emerge. This type of decay is possible only if neutrinos are their own antiparticles, otherwise known as Majorana neutrinos. (If neutrinos and anti-neutrinos are distinct from one another, they are called “Dirac neutrinos.”) Because neutrino-less double-beta decay is extremely rare, experiments intended to differentiate between the Majorana and Dirac scenarios take place deep underground, insulated from cosmic rays and other radioactive backgrounds. The distinction is significant because it might have played a role in the asymmetry between matter and antimatter, as we shall discuss in the following section.

7. Matter and Antimatter

In his speech accepting the 1933 Nobel Prize, Paul Dirac speculated on the existence of anti-worlds in which everything consisted of antimatter. More than three-quarters of a century later, we have experimentally observed that every particle has a corresponding antiparticle with the opposite quantum properties. Particle physicists have collided electrons with positrons, as well as protons with anti-protons, to produce new kinds of particle-antiparticle pairs. This was how scientists at Fermilab’s Tevatron collider discovered the top quark.

There are a few possible exceptions to the general rule that an antiparticle exists for every particle. As we saw in the last section, the neutrino may be its own antiparticle. But this remains an open question that experimenters will try to answer in the next decade.

Figure 21: Matter and antimatter: An imperfect mirror.
Source: © Fermilab.

However, astronomers have not detected the smoking gun for anti-worlds: energetic forms of high-frequency radiation known as gamma rays that would be produced when anti-hydrogen and hydrogen gas annihilate each other along the boundary region between clumps of matter and antimatter. The lack of any signal suggests that Dirac’s anti-worlds do not exist in our universe. But the biggest problem for physicists today is not the absence of antimatter. Rather, it is how to explain why the universe contains any matter at all. To understand this, we need to go back to the beginning.

Astrophysicists have strong circumstantial evidence that the universe started with a Big Bang, an explosion assumed to have produced matter and antimatter. Conservation principles require that matter and antimatter pairs appear together. But if every particle created in the Big Bang had its own antiparticle, why did they not eventually annihilate, leaving an empty universe filled only with radiation? Today, ordinary matter accounts for only about 4 percent of the universe’s total energy budget. (Dark matter and dark energy make up the rest, as we shall see in later units.) Four percent does not seem like much, but the Standard Model cannot explain how even this much matter remained after the fiery particle soup of the early universe cooled and expanded to form the galaxies, stars, and planets we see today.

Pondering the question of how any matter could have survived, Russian physicist and dissident Andrei Sakharov concluded that our world could have come about only if there exists an asymmetry between matter and antimatter known as “CP violation.” CP is the acronym for charge conjugation, C, and parity, P. Charge conjugation is an operation that changes a matter particle to its corresponding antiparticle. Parity creates a mirror image of a particle or system, reversing left and right. Both charge and parity must be flipped to change matter to antimatter with the correct particle “helicity,” the term that indicates left- or right-handedness.

Differences in behavior

Broken CP symmetry would imply that matter and antimatter behave differently. It would mean, for example, that if we were to discover intelligent life in a distant part of the universe, we could ask their physicists about particle reactions they had observed and from their answers tell if their world consisted of matter or antimatter. That would be a good thing to know before embarking on a visit, even if we are quite sure the universe does not contain a lot of antimatter.

 

Figure 22: Neutral kaon oscillation.
Source: © Wikimedia Commons, GNU Free Documentation License, Version 1.2. Author: Bambaiah, 22 June 2005.

Physicists know that particle reactions involving the electromagnetic and strong forces are symmetric with respect to C, P, and their product, CP. In other words, they conserve CP. But it turns out that weak interactions, such as beta decay, are not symmetric with respect to CP. Princeton University physicists James Cronin and Val Fitch first demonstrated that in 1964 in an experiment involving neutral kaons. These mesons can oscillate between matter and antimatter states: The combination of a strange quark and an anti-down quark changes into an anti-strange quark and a down quark.

This oscillation, or mixing, is analogous to that observed more than three decades later between neutrino flavors. But Cronin and Fitch found that the oscillation rate was not exactly the same in both directions—a clear violation of the expected symmetry between matter and antimatter.

More recently, physicists have measured CP violation with very high precision in B mesons. These differ from kaons by substituting a bottom quark for the strange quark. They are produced in copious quantities in machines called B factories. These contain particle colliders to produce the B mesons and detectors that identify the particles produced when the mesons decay. By producing literally hundreds of millions of the mesons each year, they give scientists a picture of the processes at work in the early universe—and enough unusual decays to provide some understanding of that environment.

In the 1990s, engineers at SLAC and in Japan built B factories for precision studies of CP violation in B decay. Those studies, they hoped, would provide a window into physics beyond the Standard Model. That’s because, although CP violation is necessary to create a matter-dominated universe, the amount of CP violation in the Standard Model falls orders of magnitude too short to account for the makeup of our world. Yet, despite successful runs that have produced hundreds of millions of mesons, the B factories have observed no detectable difference from the predictions of the Standard Model.

This is why physicists have expressed so much interest in the possibility of CP violation in neutrinos. The early universe was flooded with neutrinos. Perhaps, the speculation goes, they could have caused the tiny asymmetry between matter and antimatter that eventually allowed roughly one in 10 billion matter particles to escape annihilation—producing enough excess matter to create the universe, including our little blue orb circling around a modest star on the outskirts of an ordinary galaxy that we call “the Milky Way.”

8. The Origin of Mass

The mystery of CP violation and the origin of our matter-dominated universe represent two of the basic issues in 21st century physics. But thousands of physicists are working night and day to solve an even more fundamental problem: How do particles acquire mass? Although many of us would like to have less mass, particle theorists find it extremely difficult to explain how we have any at all.

British theorist Peter Higgs postulated that particles acquire mass by scattering off of a particle that fills all space, now called the Higgs boson. The heavier the individual particle, the more often it will interact with the Higgs. Think of a politician moving through a crowd. The more popular she is, the more people will try to shake her hand. In analogy, the heavy top quark interacts constantly by scattering off of Higgs particles, while the light electron moves through the crowd with only an occasional handshake.

Physicists have sought the Higgs boson for decades, hoping to find it each time a new, more powerful accelerator opened up another window on the production of heavier particles. CERN’s $9 billion Large Hadron Collider (LHC) is the latest and greatest vehicle, replacing Fermilab’s Tevatron as the most powerful accelerator on Earth. Many hopes ride on the LHC. However, the collider’s promise suffered an early blow. In July 2009, less than ten months after the machine generated its first proton beams, physicists identified problems in its electrical connections that threatened its ability to run at full power. Those problems delayed the LHC’s experimental timetable and, in doing so, increased the—admittedly small—chance that the Tevatron might find the first evidence for the Higgs boson.

Early in 2009, scientists working at the Tevatron reported precise studies of the mass of the W boson, which carries the weak nuclear force. Those measurements put strict bounds on the mass of the Higgs boson, suggesting that it is probably quite light, and implying that the LHC will have some difficulty detecting it. Plainly, the race for the Holy Grail of particle physics will continue unabated.

 

Figure 24: Jan Stark presenting limits on the Higgs boson’s mass. Source: © Fermilab

Of course, it is quite possible that neither the Tevatron nor the LHC will observe the Higgs boson. There may even be several Higgs particles, in addition to new partners for all of the known fundamental particles. And, if neutrinos are confirmed to be their own antiparticle in double beta-decay experiments, the Higgs mechanism cannot explain neutrino masses, replacing one mystery with another. This may provide the most exciting scenario of all for particle physicists: the opportunity to discover new particles and the laws that govern them.

9. The Math

Einstein’s Equation

The well-known equation, E = mc², is a shorthand version of Einstein’s full equation:

E² = (mc²)² + (pc)²    or    E = √((mc²)² + (pc)²)

where m is the particle’s mass, p is the particle’s momentum, and c is the speed of light.

The equation states that the total energy of a particle has two contributions that add in quadrature: one from the particle’s mass (mc²), the other from the particle’s motion (pc).

If a particle is not moving (or is moving very slowly compared to the speed of light), the equation reduces to the familiar shorthand version. When a particle is accelerated to near the speed of light, the p²c² term makes a significant contribution to the particle energy.

This is how high-energy collisions of light particles in a particle accelerator can create much heavier particles. For example, the LHC will accelerate protons to 0.999999991 times the speed of light. Traveling at this speed, a proton has a total energy of 7 TeV, while a proton sitting still has an energy of 0.938 GeV.
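These numbers can be checked directly. The sketch below combines the full equation with the standard relativistic relation v/c = pc/E, an extra relation assumed here rather than stated above:

```python
import math

# Numbers quoted in the text
total_energy = 7000.0    # total energy of an LHC proton, in GeV (7 TeV)
rest_energy = 0.938      # proton rest energy m*c^2, in GeV

# From E^2 = (m c^2)^2 + (p c)^2, solve for the momentum term pc
pc = math.sqrt(total_energy**2 - rest_energy**2)

# Standard relativistic relation (assumed, not stated in the text): v/c = pc / E
beta = pc / total_energy

print(f"pc  = {pc:.1f} GeV")
print(f"v/c = {beta:.9f}")
# -> v/c comes out to about 0.999999991, the speed quoted for LHC protons
```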

 

Lorentz Factor

If a particle moving near the speed of light carried a clock with it, the clock would run slow compared to a stationary clock on Earth. In the terminology of Einstein’s theory of special relativity, the particle experiences time dilation. The factor by which time slows is:

γ = 1 / √(1 − v²/c²)

Gamma (γ), called the “Lorentz factor,” grows very large as the particle velocity (v) approaches the speed of light (c). This means that the faster a decaying particle travels, the longer it appears to live to a stationary observer. This counterintuitive and confusing aspect of the theory of special relativity is demonstrated every day by cosmic rays. If time weren’t dilated for the fast-moving cosmic ray particles, they would decay long before reaching the Earth’s surface.
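A rough illustration of that last point, using the Lorentz factor above; the muon energy chosen here is a typical cosmic-ray value assumed for illustration, not a number from the text:

```python
import math

C = 3.0e8                  # speed of light, m/s
MUON_LIFETIME = 2.2e-6     # muon lifetime at rest, seconds
MUON_REST_ENERGY = 0.106   # muon rest energy m*c^2, in GeV

muon_energy = 4.0          # GeV; typical cosmic-ray muon energy (illustrative assumption)
gamma = muon_energy / MUON_REST_ENERGY  # Lorentz factor, using E = gamma*m*c^2 (standard relation, assumed)
beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c, from the Lorentz factor formula above

without_dilation = beta * C * MUON_LIFETIME        # range if the muon's clock ran normally
with_dilation = beta * C * gamma * MUON_LIFETIME   # range with the dilated lifetime

print(f"gamma = {gamma:.0f}")
print(f"range without time dilation: {without_dilation / 1000:.1f} km")
print(f"range with time dilation:    {with_dilation / 1000:.1f} km")
# Without dilation the muon would decay after less than a kilometer; with
# dilation it can cross the roughly 15 km from the upper atmosphere to the ground.
```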

 

Inverse Beta Decay

ν̄ₑ + p → n + e⁺

Inverse beta decay is the reaction most commonly used to detect electron anti-neutrinos. It takes place when an anti-neutrino (ν̄ₑ) interacts with a proton (p) to produce a neutron (n) and a positron (e⁺). This is an example of a weak interaction—meaning that the weak nuclear force is acting on these particles—which proceeds via the exchange of a W⁺ boson.

 

 

10. Further Reading


Credits

Produced by the Harvard-Smithsonian Center for Astrophysics Science Media Group in association with the Harvard University Department of Physics. 2010.
  • ISBN: 1-57680-891-2