subatomic particle
Introduction
also called elementary particle, any of various self-contained units of matter or energy that are the fundamental constituents of all matter. Subatomic particles include electrons, the negatively charged, almost massless particles that nevertheless account for most of the size of the atom, and they include the heavier building blocks of the small but very dense nucleus of the atom, the positively charged protons and the electrically neutral neutrons. But these basic atomic components are by no means the only known subatomic particles. Protons and neutrons, for instance, are themselves made up of elementary particles called quarks, and the electron is only one member of a class of elementary particles that also includes the muon and the neutrino. More-unusual subatomic particles—such as the positron, the antimatter counterpart of the electron—have been detected and characterized in cosmic-ray interactions in the Earth's atmosphere. The field of subatomic particles has expanded dramatically with the construction of powerful particle accelerators to study high-energy collisions of electrons, protons, and other particles with matter. As particles collide at high energy, the collision energy becomes available for the creation of subatomic particles such as mesons and hyperons. Finally, completing the revolution that began in the early 20th century with theories of the equivalence of matter and energy, the study of subatomic particles has been transformed by the discovery that the actions of forces are due to the exchange of “force” particles such as photons and gluons. More than 200 subatomic particles have been detected—most of them highly unstable, existing for less than a millionth of a second—as a result of collisions produced in cosmic-ray reactions or particle-accelerator experiments. Theoretical and experimental research in particle physics, the study of subatomic particles and their properties, has given scientists a clearer understanding of the nature of matter and energy and of the origin of the universe.
The current understanding of the state of particle physics is integrated within a conceptual framework known as the Standard Model. The Standard Model provides a classification scheme for all the known subatomic particles based on theoretical descriptions of the basic forces of matter.
Basic concepts of particle physics
The divisible atom
The physical study of subatomic particles became possible only during the 20th century, with the development of increasingly sophisticated apparatuses to probe matter at scales of 10⁻¹⁵ metre and less (that is, at distances comparable to the diameter of the proton or neutron). Yet the basic philosophy of the subject now known as particle physics dates to at least 500 BC, when the Greek philosopher Leucippus and his pupil Democritus put forward the notion that matter consists of invisibly small, indivisible particles, which they called atoms. For more than 2,000 years the idea of atoms lay largely neglected, while the opposing view that matter consists of four elements—earth, fire, air, and water—held sway. But by the beginning of the 19th century, the atomic theory of matter had returned to favour, strengthened in particular by the work of John Dalton, an English chemist whose studies suggested that each chemical element consists of its own unique kind of atom. As such, Dalton's atoms are still the atoms of modern physics. By the close of the century, however, the first indications began to emerge that atoms are not indivisible, as Leucippus and Democritus had imagined, but that they instead contain smaller particles.
In 1896 the French physicist Henri Becquerel discovered radioactivity, and in the following year J.J. Thomson, a professor of physics at the University of Cambridge in England, demonstrated the existence of tiny particles much smaller in mass than hydrogen, the lightest atom. Thomson had discovered the first subatomic particle, the electron. Six years later Ernest Rutherford and Frederick Soddy, working at McGill University in Montreal, found that radioactivity occurs when atoms of one type transmute into those of another kind. The idea of atoms as immutable, indivisible objects had become untenable.
The basic structure of the atom became apparent in 1911, when Rutherford showed that most of the mass of an atom lies concentrated at its centre, in a tiny nucleus. Rutherford postulated that the atom resembled a miniature solar system, with light, negatively charged electrons orbiting the dense, positively charged nucleus, just as the planets orbit the Sun. The Danish theorist Niels Bohr refined this model in 1913 by incorporating the new ideas of quantization that had been developed by the German physicist Max Planck at the turn of the century. Planck had theorized that electromagnetic radiation, such as light, occurs in discrete bundles, or “quanta,” of energy now known as photons. Bohr postulated that electrons circled the nucleus in orbits of fixed size and energy and that an electron could jump from one orbit to another only by emitting or absorbing specific quanta of energy. By thus incorporating quantization into his theory of the atom, Bohr introduced one of the basic elements of modern particle physics and prompted wider acceptance of quantization to explain atomic and subatomic phenomena.
Size
Subatomic particles play two vital roles in the structure of matter. They are both the basic building blocks of the universe and the mortar that binds the blocks. Although the particles that fulfill these different roles are of two distinct types, they do share some common characteristics, foremost of which is size.
The small size of subatomic particles is perhaps most convincingly expressed not by stating their absolute units of measure but by comparing them with the complex particles of which they are a part. An atom, for instance, is typically 10⁻¹⁰ metre across, yet almost all of the size of the atom is unoccupied “empty” space available to the point-charge electrons surrounding the nucleus. The distance across an atomic nucleus of average size is roughly 10⁻¹⁴ metre—only 1/10,000 the diameter of the atom. The nucleus, in turn, is made up of positively charged protons and electrically neutral neutrons, collectively referred to as nucleons, and a single nucleon has a diameter of about 10⁻¹⁵ metre—that is, about 1/10 that of the nucleus and 1/100,000 that of the atom. (The distance across the nucleon, 10⁻¹⁵ metre, is known as a fermi, in honour of the Italian-born physicist Enrico Fermi, who did much experimental and theoretical work on the nature of the nucleus and its contents.)
The sizes of atoms, nuclei, and nucleons are measured by firing a beam of electrons at an appropriate target. The higher the energy of the electrons, the farther they penetrate before being deflected by the electric charges within the atom. For example, a beam with an energy of a few hundred electron volts (eV) scatters from the electrons in a target atom. The way in which the beam is scattered (electron scattering) can then be studied to determine the general distribution of the atomic electrons.
At energies of a few hundred megaelectron volts (MeV; 10⁶ eV), electrons in the beam are little affected by atomic electrons; instead, they penetrate the atom and are scattered by the positive nucleus. Therefore, if such a beam is fired at liquid hydrogen, whose atoms contain only single protons in their nuclei, the pattern of scattered electrons reveals the size of the proton. At energies greater than a gigaelectron volt (GeV; 10⁹ eV), the electrons penetrate within the protons and neutrons, and their scattering patterns reveal an inner structure. Thus, protons and neutrons are no more indivisible than atoms are; indeed, they contain still smaller particles, which are called quarks.
Quarks are as small as or smaller than physicists can measure. In experiments at very high energies, equivalent to probing protons in a target with electrons accelerated to nearly 50,000 GeV, quarks appear to behave as points in space, with no measurable size; they must therefore be smaller than 10⁻¹⁸ metre, or less than 1/1,000 the size of the individual nucleons they form. Similar experiments show that electrons too are smaller than it is possible to measure.
Elementary particles
Electrons and quarks contain no discernible structure; they cannot be reduced or separated into smaller components. It is therefore reasonable to call them “elementary” particles, a name that in the past was mistakenly given to particles such as the proton, which is in fact a complex particle that contains quarks. The term subatomic particle refers both to the true elementary particles, such as quarks and electrons, and to the larger particles that quarks form.
Although both are elementary particles, electrons and quarks differ in several respects. Whereas quarks together form nucleons within the atomic nucleus, the electrons generally circulate toward the periphery of atoms. Indeed, electrons are regarded as distinct from quarks and are classified in a separate group of elementary particles called leptons. There are several types of lepton, just as there are several types of quark (see below Quarks and antiquarks). Only two types of quark are needed to form protons and neutrons, however, and these, together with the electron and one other elementary particle, are all the building blocks that are necessary to build the everyday world. The last particle required is an electrically neutral particle called the neutrino.
Neutrinos do not exist within atoms in the sense that electrons do, but they play a crucial role in certain types of radioactive decay. In a basic process of one type of radioactivity, known as beta decay, a neutron changes into a proton. In making this change, the neutron acquires one unit of positive charge. To keep the overall charge in the beta-decay process constant and thereby conform to the fundamental physical law of charge conservation, the neutron must emit a negatively charged electron. In addition, the neutron also emits a neutrino (strictly speaking, an antineutrino), which has little or no mass and no electric charge. Beta decays are important in the transitions that occur when unstable atomic nuclei change to become more stable, and for this reason neutrinos are a necessary component in establishing the nature of matter.
The neutrino, like the electron, is classified as a lepton. Thus, it seems at first sight that only four kinds of elementary particles—two quarks and two leptons—should exist. In the 1930s, however, long before the concept of quarks was established, it became clear that matter is more complicated.
Spin
The concept of quantization led during the 1920s to the development of quantum mechanics, which appeared to provide physicists with the correct method of calculating the structure of the atom. In his model Niels Bohr had postulated that the electrons in the atom move only in orbits in which the angular momentum (the product of the electron's mass, orbital velocity, and orbital radius) has certain fixed values. Each of these allowed values is characterized by a quantum number that can have only integer values. In the full quantum mechanical treatment of the structure of the atom, developed in the 1920s, three quantum numbers relating to angular momentum arise because there are three independent variable parameters in the equation describing the motion of atomic electrons.
In 1925, however, two Dutch physicists, Samuel Goudsmit and George Uhlenbeck, realized that, in order to explain fully the spectra of light emitted by the atoms of alkali metals, such as sodium, which have one outer valence electron beyond the main core, there must be a fourth quantum number that can take only two values, −1/2 and +1/2. Goudsmit and Uhlenbeck proposed that this quantum number refers to an internal angular momentum, or spin, that the electrons possess. This implies that the electrons, in effect, behave like spinning electric charges. Each therefore creates a magnetic field and has its own magnetic moment. The internal magnet of an atomic electron orients itself in one of two directions with respect to the magnetic field created by the rest of the atom. It is either parallel or antiparallel; hence, there are two quantized states—and two possible values of the associated spin quantum number.
The concept of spin is now recognized as an intrinsic property of all subatomic particles. Indeed, spin is one of the key criteria used to classify particles into two main groups: fermions, with half-integer values of spin (1/2, 3/2,…), and bosons, with integer values of spin (0, 1, 2,…). In the Standard Model all of the “matter” particles (quarks and leptons) are fermions, whereas “force” particles such as photons are bosons. These two classes of particles have different symmetry properties that affect their behaviour.
Antiparticles
Two years after the work of Goudsmit and Uhlenbeck, the English theorist P.A.M. Dirac provided a sound theoretical background for the concept of electron spin. In order to describe the behaviour of an electron in an electromagnetic field, Dirac introduced the German-born physicist Albert Einstein's theory of special relativity into quantum mechanics. Dirac's relativistic theory showed that the electron must have spin and a magnetic moment, but it also made what seemed a strange prediction. The basic equation describing the allowed energies for an electron would admit two solutions, one positive and one negative. The positive solution apparently described normal electrons. The negative solution was more of a mystery; it seemed to describe electrons with positive rather than negative charge.
The mystery was resolved in 1932, when Carl Anderson, an American physicist, discovered the particle called the positron. Positrons are very much like electrons: they have the same mass and the same spin, but they have opposite electric charge. Positrons, then, are the particles predicted by Dirac's theory, and they were the first of the so-called antiparticles to be discovered. Dirac's theory, in fact, applies to any subatomic particle with spin 1/2; therefore, all spin-1/2 particles should have corresponding antiparticles. Matter cannot be built from both particles and antiparticles, however. When a particle meets its appropriate antiparticle, the two disappear in an act of mutual destruction known as annihilation. Atoms can exist only because there is an excess of electrons, protons, and neutrons in the everyday world, with no corresponding positrons, antiprotons, and antineutrons.
Positrons do occur naturally, however, which is how Anderson discovered their existence. High-energy subatomic particles in the form of cosmic rays continually rain down on the Earth's atmosphere from outer space, colliding with atomic nuclei and generating showers of particles that cascade toward the ground. In these showers the enormous energy of the incoming cosmic ray is converted to matter, in accordance with Einstein's theory of special relativity, which states that E = mc², where E is energy, m is mass, and c is the velocity of light. Among the particles created are pairs of electrons and positrons. The positrons survive for a tiny fraction of a second until they come close enough to electrons to annihilate. The total mass of each electron-positron pair is then converted to energy in the form of gamma-ray photons.
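The arithmetic of this mass-to-energy conversion is easy to check. The short sketch below (in Python, using standard reference values for the electron mass, the speed of light, and the electron volt, none of which are quoted in the article) applies E = mc² to an electron-positron pair.

```python
# Minimal sketch: energy released when an electron-positron pair annihilates,
# computed from E = m * c**2. All constants are approximate reference values.

m_e = 9.109e-31   # electron (and positron) mass, kilograms
c = 2.998e8       # speed of light, metres per second
eV = 1.602e-19    # joules per electron volt

E_pair = 2 * m_e * c**2           # total rest energy of the pair, joules
E_pair_MeV = E_pair / eV / 1e6    # the same energy expressed in MeV

print(f"Total energy released: {E_pair_MeV:.3f} MeV")     # about 1.022 MeV
print(f"Energy per photon:     {E_pair_MeV / 2:.3f} MeV")  # about 0.511 MeV
```

The familiar 0.511-MeV annihilation gamma rays are simply the electron's rest mass re-expressed as energy.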
[Figure caption: Electrons and positrons produced simultaneously from individual gamma rays curl in opposite …]
Using particle accelerators, physicists can mimic the action of cosmic rays and create collisions at high energy (see the figure). In 1955 a team led by the Italian-born scientist Emilio Segrè and the American Owen Chamberlain found the first evidence for the existence of antiprotons in collisions of high-energy protons produced by the Bevatron, an accelerator at what is now the Lawrence Berkeley National Laboratory in California. Shortly afterward, a different team working on the same accelerator discovered the antineutron.
Since the 1960s physicists have discovered that protons and neutrons consist of quarks with spin 1/2 and that antiprotons and antineutrons consist of antiquarks. Neutrinos too have spin 1/2 and therefore have corresponding antiparticles known as antineutrinos. Indeed, it is an antineutrino, rather than a neutrino, that emerges when a neutron changes by beta decay into a proton. This reflects an empirical law regarding the production and decay of quarks and leptons: in any interaction the total numbers of quarks and leptons seem always to remain constant. Thus, the appearance of a lepton—the electron—in the decay of a neutron must be balanced by the simultaneous appearance of an antilepton, in this case the antineutrino.
In addition to such familiar particles as the proton, neutron, and electron, studies have slowly revealed the existence of more than 200 other subatomic particles. These “extra” particles do not appear in the low-energy environment of everyday human experience; they emerge only at the higher energies found in cosmic rays or particle accelerators. Moreover, they immediately decay to the more-familiar particles after brief lifetimes of only fractions of a second. The variety and behaviour of these extra particles initially bewildered scientists but have since come to be understood in terms of the quarks and leptons. In fact, only six quarks, six leptons, and their corresponding antiparticles are necessary to explain the variety and behaviour of all the subatomic particles, including those that form normal atomic matter.
Four basic forces
Quarks and leptons are the building blocks of matter, but they require some sort of mortar to bind themselves together into more-complex forms, whether on a nuclear or a universal scale. The particles that provide this mortar are associated with four basic forces that are collectively referred to as the fundamental interactions of matter. These four basic forces are gravity (or the gravitational force), the electromagnetic force, and two forces more familiar to physicists than to laypeople: the strong force and the weak force.
On the largest scales the dominant force is gravity. Gravity governs the aggregation of matter into stars and galaxies and influences the way that the universe has evolved since its origin in the big bang. The best-understood force, however, is the electromagnetic force, which underlies the related phenomena of electricity and magnetism. The electromagnetic force binds negatively charged electrons to positively charged atomic nuclei and gives rise to the bonding between atoms to form matter in bulk.
Gravity and electromagnetism are well known at the macroscopic level. The other two forces act only on subatomic scales, indeed on subnuclear scales. The strong force binds quarks together within protons, neutrons, and other subatomic particles. Rather as the electromagnetic force is ultimately responsible for holding bulk matter together, so the strong force also keeps protons and neutrons together within atomic nuclei. Unlike the strong force, which acts only between quarks, the weak force acts on both quarks and leptons. This force is responsible for the beta decay of a neutron into a proton and for the nuclear reactions that fuel the Sun and other stars.
Field theory
Since the 1930s physicists have recognized that they can use field theory to describe the interactions of all four basic forces with matter. In mathematical terms a field describes something that varies continuously through space and time. A familiar example is the field that surrounds a piece of magnetized iron. The magnetic field maps the way that the force varies in strength and direction around the magnet. The appropriate fields for the four basic forces appear to have an important property in common: they all exhibit what is known as gauge symmetry. Put simply, this means that certain changes can be made that do not affect the basic structure of the field. It also implies that the relevant physical laws are the same in different regions of space and time.
At a subatomic, quantum level these field theories display a significant feature. They describe each basic force as being in a sense carried by its own subatomic particles. These “force” particles are now called gauge bosons, and they differ from the “matter” particles—the quarks and leptons discussed earlier—in a fundamental way. Bosons are characterized by integer values of their spin quantum number, whereas quarks and leptons have half-integer values of spin.
The most familiar gauge boson is the photon, which transmits the electromagnetic force between electrically charged objects such as electrons and protons. The photon acts as a private, invisible messenger between these particles, influencing their behaviour with the information it conveys, rather as a ball influences the actions of children playing catch. Other gauge bosons, with varying properties, are involved with the other basic forces.
In developing a gauge theory for the weak force in the 1960s, physicists discovered that the best theory, which would always yield sensible answers, must also incorporate the electromagnetic force. The result was what is now called electroweak theory. It was the first workable example of a unified field theory linking forces that manifest themselves differently in the everyday world. Unified theory reveals that the basic forces, though outwardly diverse, are in fact separate facets of a single underlying force. The search for a unified theory of everything, which incorporates all four fundamental forces, is one of the major goals of particle physics. It is leading theorists to an exciting area of study that involves not only subatomic particle physics but also cosmology and astrophysics.
The basic forces and their messenger particles
The previous section of this article presented an overview of the basic issues in particle physics, including the four fundamental interactions that affect all of matter. In this section the four interactions, or basic forces, are treated in greater detail. Each force is described on the basis of the following characteristics: (1) the property of matter on which each force acts; (2) the particles of matter that experience the force; (3) the nature of the messenger particle (gauge boson) that mediates the force; and (4) the relative strength and range of the force.
Gravity
The weakest, and yet the most pervasive, of the four basic forces is gravity. It acts on all forms of mass and energy and thus acts on all subatomic particles, including the gauge bosons that carry the forces. The 17th-century English scientist Isaac Newton was the first to develop a quantitative description of the force of gravity. He argued that the force that binds the Moon in orbit around the Earth is the same force that makes apples and other objects fall to the ground, and he proposed a universal law of gravitation.
According to Newton's law, all bodies are attracted to each other by a force that depends directly on the mass of each body and inversely on the square of the distance between them. For a pair of masses, m₁ and m₂, a distance r apart, the strength of the force F is given by
F = Gm₁m₂/r².
G is called the constant of gravitation and is equal to 6.67 × 10⁻¹¹ newton-metre²-kilogram⁻².
The constant G gives a measure of the strength of the gravitational force, and its smallness indicates that gravity is weak. Indeed, on the scale of atoms the effects of gravity are negligible compared with the other forces at work. Although the gravitational force is weak, its effects can be extremely long-ranging. Newton's law shows that at some distance the gravitational force between two bodies becomes negligible but that this distance depends on the masses involved. Thus, the gravitational effects of large, massive objects can be considerable, even at distances far outside the range of the other forces. The gravitational force of the Earth, for example, keeps the Moon in orbit some 384,400 km (238,900 miles) distant.
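As a rough check of the formula, the sketch below evaluates the force that holds the Moon in its orbit; the masses of the Earth and the Moon are standard reference values assumed here, not figures given in the article.

```python
# Sketch: Newton's law F = G * m1 * m2 / r**2 applied to the Earth-Moon pair.

G = 6.67e-11        # gravitational constant, N m^2 kg^-2
m_earth = 5.97e24   # mass of the Earth, kg (assumed reference value)
m_moon = 7.35e22    # mass of the Moon, kg (assumed reference value)
r = 3.844e8         # Earth-Moon distance, metres (384,400 km)

F = G * m_earth * m_moon / r**2
print(f"Gravitational force on the Moon: {F:.1e} N")   # roughly 2e20 newtons
```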
Newton's theory of gravity proves adequate for many applications. In 1915, however, the German-born physicist Albert Einstein developed the theory of general relativity, which incorporates the concept of gauge symmetry and yields subtle corrections to Newtonian gravity. Despite its importance, Einstein's general relativity remains a classical theory in the sense that it does not incorporate the ideas of quantum mechanics. In a quantum theory of gravity, the gravitational force must be carried by a suitable messenger particle, or gauge boson. No workable quantum theory of gravity has yet been developed, but general relativity determines some of the properties of the hypothesized “force” particle of gravity, the so-called graviton. In particular, the graviton must have a spin quantum number of 2 and no mass, only energy.
Electromagnetism
The first proper understanding of the electromagnetic force dates to the 18th century, when a French physicist, Charles Coulomb, showed that the electrostatic force between electrically charged objects follows a law similar to Newton's law of gravitation. According to Coulomb's law, the force F between one charge, q₁, and a second charge, q₂, is proportional to the product of the charges divided by the square of the distance r between them, or F = kq₁q₂/r². Here k is the proportionality constant, equal to 1/4πε₀ (ε₀ being the permittivity of free space). An electrostatic force can be either attractive or repulsive, because the source of the force, electric charge, exists in opposite forms: positive and negative. The force between opposite charges is attractive, whereas bodies with the same kind of charge experience a repulsive force. Coulomb also showed that the force between magnetized bodies varies inversely as the square of the distance between them. Again, the force can be attractive (opposite poles) or repulsive (like poles).
Magnetism and electricity are not separate phenomena; they are the related manifestations of an underlying electromagnetic force. Experiments in the early 19th century by, among others, Hans Ørsted (in Denmark), André-Marie Ampère (in France), and Michael Faraday (in England) revealed the intimate connection between electricity and magnetism and the way the one can give rise to the other. The results of these experiments were synthesized in the 1850s by the Scottish physicist James Clerk Maxwell in his electromagnetic theory. Maxwell's theory predicted the existence of electromagnetic waves—undulations in intertwined electric and magnetic fields, traveling with the velocity of light.
Max Planck's work in Germany at the turn of the 20th century, in which he explained the spectrum of radiation from a perfect emitter (blackbody radiation), led to the concept of quantization and photons. In the quantum picture, electromagnetic radiation has a dual nature, existing both as Maxwell's waves and as streams of particles called photons. The quantum nature of electromagnetic radiation is encapsulated in quantum electrodynamics, the quantum field theory of the electromagnetic force. Both Maxwell's classical theory and the quantized version contain gauge symmetry, which now appears to be a basic feature of the fundamental forces.
The electromagnetic force is intrinsically much stronger than the gravitational force. If the strength of the electromagnetic force between two protons separated by a typical nuclear distance were set equal to one, the strength of the gravitational force between them would be only 10⁻³⁶. At an atomic level the electromagnetic force is almost completely in control; gravity dominates on a large scale only because matter as a whole is electrically neutral.
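That ratio can be verified with a short calculation. The sketch below assumes standard reference values for the proton's mass and charge and a separation of about one fermi, and it applies Coulomb's law and Newton's law from the preceding sections.

```python
# Sketch: electrostatic versus gravitational force between two protons
# separated by roughly one fermi (1e-15 m). Constants are reference values.

k = 8.99e9        # Coulomb constant, N m^2 C^-2 (k = 1 / (4 * pi * eps0))
G = 6.67e-11      # gravitational constant, N m^2 kg^-2
e = 1.602e-19     # proton charge, C
m_p = 1.673e-27   # proton mass, kg
r = 1e-15         # separation, metres

F_em = k * e**2 / r**2
F_grav = G * m_p**2 / r**2

print(f"Electromagnetic force: {F_em:.2e} N")
print(f"Gravitational force:   {F_grav:.2e} N")
print(f"Ratio (grav / em):     {F_grav / F_em:.0e}")   # about 1e-36
```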
The gauge boson of electromagnetism is the photon, which has zero mass and a spin quantum number of 1. Photons are exchanged whenever electrically charged subatomic particles interact. The photon has no electric charge, so it does not experience the electromagnetic force itself; in other words, photons cannot interact directly with one another. Photons do carry energy and momentum, however, and, in transmitting these properties between particles, they produce the effects known as electromagnetism.
In these processes energy and momentum are conserved overall (that is, the totals remain the same, in accordance with the basic laws of physics), but, at the instant one particle emits a photon and another particle absorbs it, energy is not conserved. Quantum mechanics allows this imbalance, provided that the photon fulfills the conditions of Heisenberg's uncertainty principle. This rule, described in 1927 by the German scientist Werner Heisenberg, states that it is impossible, even in principle, to know all the details about a particular quantum system. For example, if the exact position of an electron is identified, it is impossible to be certain of the electron's momentum. This fundamental uncertainty allows a discrepancy in energy, ΔE, to exist for a time, Δt, provided that the product of ΔE and Δt is no greater than about Planck's constant divided by 2π, or 1.05 × 10⁻³⁴ joule-second. The energy of the exchanged photon can thus be thought of as “borrowed,” within the limits of the uncertainty principle (i.e., the more energy borrowed, the shorter the time of the loan). Such borrowed photons are called “virtual” photons to distinguish them from real photons, which constitute electromagnetic radiation and can, in principle, exist forever. This concept of virtual particles in processes that fulfill the conditions of the uncertainty principle applies to the exchange of other gauge bosons as well.
The weak force
Since the 1930s physicists have been aware of a force within the atomic nucleus that is responsible for certain types of radioactivity that are classed together as beta decay. A typical example of beta decay occurs when a neutron transmutes into a proton. The force that underlies this process is known as the weak force to distinguish it from the strong force that binds quarks together (see below The strong force).
The correct gauge field theory for the weak force incorporates the quantum field theory of electromagnetism (quantum electrodynamics) and is called electroweak theory. It treats the weak force and the electromagnetic force on an equal footing by regarding them as different manifestations of a more-fundamental electroweak force, rather as electricity and magnetism appear as different aspects of the electromagnetic force.
The electroweak theory requires four gauge bosons. One of these is the photon of electromagnetism; the other three are involved in reactions that occur via the weak force. These weak gauge bosons include two electrically charged versions, called W+ and W−, where the signs indicate the charge, and a neutral variety called Z0, where the zero indicates no charge. Like the photon, the W and Z particles have a spin quantum number of 1; unlike the photon, they are very massive. The W particles have a mass of about 80.4 GeV, while the mass of the Z0 particle is 91.187 GeV. By comparison, the mass of the proton is 0.94 GeV, or about one-hundredth that of the Z particle. (Strictly speaking, mass should be given in units of energy/c², where c is the velocity of light. However, common practice is to set c = 1 so that mass is quoted simply in units of energy, eV, as in this paragraph.)
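The parenthetical remark about units can be made concrete with a small conversion; the sketch below, which assumes standard values for the electron volt and the speed of light, turns masses quoted in GeV back into kilograms.

```python
# Sketch: converting a mass quoted in GeV (with c = 1) into kilograms,
# i.e. treating the quoted figure as an energy and dividing by c**2.

eV = 1.602e-19   # joules per electron volt
c = 2.998e8      # speed of light, m/s

def gev_to_kg(mass_gev):
    """Convert a mass given in GeV/c^2 to kilograms."""
    energy_joules = mass_gev * 1e9 * eV
    return energy_joules / c**2

print(f"W boson (about 80.4 GeV): {gev_to_kg(80.4):.2e} kg")
print(f"Proton (about 0.94 GeV):  {gev_to_kg(0.94):.2e} kg")
print(f"W mass in proton masses:  {80.4 / 0.94:.0f}")   # roughly 86
```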
The charged W particles are responsible for processes, such as beta decay, in which the charge of the participating particles changes hands. For example, when a neutron transmutes into a proton, it emits a W−; thus, the overall charge remains zero before and after the decay process. The W particle involved in this process is a virtual particle. Because its mass is far greater than that of the neutron, the only way that it can be emitted by the lightweight neutron is for its existence to be fleetingly short, within the requirements of the uncertainty principle. Indeed, the W− immediately transforms into an electron and an antineutrino, the particles that are observed in the laboratory as the products of neutron beta decay. Z particles are exchanged in similar reactions that involve no change in charge.
In the everyday world the weak force is weaker than the electromagnetic force but stronger than the gravitational force. Its range, however, is very short. Because of the large amounts of energy needed to create the large masses of the W and Z particles, the uncertainty principle ensures that a weak gauge boson cannot be borrowed for long, which limits the range of the force to distances less than 10⁻¹⁷ metre. The weak force between two protons in a nucleus is only 10⁻⁷ the strength of the electromagnetic force. As the electroweak theory reveals and as experiments confirm, however, this weak force becomes effectively stronger as the energies of the participating particles increase. When the energies reach 100 GeV or so—roughly the energy equivalent to the mass of the W and Z particles—the strength of the weak force becomes comparable to that of the electromagnetic force. This means that reactions that involve the exchange of a Z0 become as common as those in which a photon is exchanged. Moreover, at these energies real W and Z particles, as opposed to virtual ones, can be created in reactions.
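The quoted range follows directly from the W mass and the uncertainty-principle argument given above. The sketch below uses the convenient combination ħc ≈ 197 MeV·fm, a standard value not quoted in the article, to make the estimate.

```python
# Sketch: range of a force carried by a massive boson, estimated as
# range ~ c * delta_t with delta_t ~ hbar / (m * c**2),
# which gives range ~ (hbar * c) / (m * c**2).

hbar_c_mev_fm = 197.3   # hbar * c in MeV femtometres (1 fm = 1e-15 m)
m_w_mev = 80.4e3        # W mass in MeV

range_fm = hbar_c_mev_fm / m_w_mev
range_m = range_fm * 1e-15

print(f"Estimated range of the weak force: {range_m:.1e} m")   # about 2.5e-18 m
```

The result, a few times 10⁻¹⁸ metre, is comfortably below the 10⁻¹⁷-metre figure quoted above.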
Unlike the photon, which is stable and can in principle live forever, the heavy weak gauge bosons decay to lighter particles within an extremely brief lifetime of about 10⁻²⁵ second. This is roughly a million million times shorter than experiments can measure directly, but physicists can detect the particles into which the W and Z particles decay and can thus infer their existence.
The strong force
Although the aptly named strong force is the strongest of all the fundamental interactions, it, like the weak force, is short-ranged and is ineffective much beyond nuclear distances of 10⁻¹⁵ metre or so. Within the nucleus and, more specifically, within the protons and other particles that are built from quarks, however, the strong force rules supreme; between quarks in a proton, it can be almost 100 times stronger than the electromagnetic force, depending on the distance between the quarks.
During the 1970s physicists developed a theory for the strong force that is similar in structure to quantum electrodynamics. In this theory quarks are bound together within protons and neutrons by exchanging gauge bosons called gluons. The quarks carry a property called “colour” that is analogous to electric charge. Just as electrically charged particles experience the electromagnetic force and exchange photons, so colour-charged, or coloured, particles feel the strong force and exchange gluons. This property of colour gives rise in part to the name of the theory of the strong force: quantum chromodynamics.
Gluons are massless and have a spin quantum number of 1. In this respect they are much like photons, but they differ from photons in one crucial way. Whereas photons do not interact among themselves—because they are not electrically charged—gluons do carry colour charge. This means that gluons can interact together, which has an important effect in limiting the range of gluons and in confining quarks within protons and other particles.
There are three types of colour charge, called red, green, and blue, although there is no connection between the colour charge of quarks and gluons and colour in the usual sense. Quarks each carry a single colour charge, while gluons carry both a colour and an anticolour charge.
The strong force acts in such a way that quarks of different colour are attracted to one another; thus, red attracts green, blue attracts red, and so on. Quarks of the same colour, on the other hand, repel each other. The quarks can combine only in ways that give a net colour charge of zero. In particles that contain three quarks, such as protons, this is achieved by adding red, blue, and green. An alternative, observed in particles called mesons (see below Hadrons), is for a quark to couple with an antiquark of the same basic colour. In this case the colour of the quark and the anticolour of the antiquark cancel each other out. These combinations of three quarks (or three antiquarks) or of quark-antiquark pairs are the only combinations that the strong force seems to allow.
[Figure caption: Three “jets” of particles streaming out from an electron-positron collision at the …]
The constraint that only colourless objects can appear in nature seems to limit attempts to observe single quarks and free gluons. Although a quark can radiate a real gluon just as an electron can radiate a real photon, the gluon never emerges on its own into the surrounding environment. Instead, it somehow creates additional gluons, quarks, and antiquarks from its own energy and materializes as normal particles built from quarks. (See the figure.) Similarly, it appears that the strong force keeps quarks permanently confined within larger particles. Attempts to knock quarks out of protons by, for example, knocking protons together at high energies succeed only in creating more particles—that is, in releasing new quarks and antiquarks that are bound together and are themselves confined by the strong force.
Classes of subatomic particles
From the early 1930s to the mid-1960s, studies of the composition of cosmic rays and experiments using particle accelerators revealed more than 200 types of subatomic particles. In order to comprehend this rich variety, physicists began to classify the particles according to their properties (such as mass, charge, and spin) and to their behaviour in response to the fundamental interactions—in particular, the weak and strong forces. The aim was to discover common features that would simplify the variety, much as the periodic table of chemical elements had done for the wealth of atoms discovered in the 19th century. An important result was that many of the particles, those classified as hadrons, were found to be composed of a much smaller number of more-elementary particles, the quarks. Today the quarks, together with the group of leptons, are recognized as fundamental particles of matter.
Leptons and antileptons
Leptons are a group of subatomic particles that do not experience the strong force. They do, however, feel the weak force and the gravitational force, and electrically charged leptons interact via the electromagnetic force. In essence, there are three types of electrically charged leptons and three types of neutral leptons, together with six related antileptons. In all three cases the charged lepton has a negative charge, whereas its antiparticle is positively charged. Physicists coined the name lepton from the Greek word for “slender” because, before the discovery of the tau in 1975, it seemed that the leptons were the lightest particles. Although the name is no longer appropriate, it has been retained to describe all spin-1/2 particles that do not feel the strong force.
Charged leptons (electron, muon, tau)
Probably the most-familiar subatomic particle is the electron, the component of atoms that makes interatomic bonding and chemical reactions—and hence life—possible. The electron was also the first particle to be discovered. Its negative charge of 1.6 × 10⁻¹⁹ coulomb seems to be the basic unit of electric charge, although theorists have a poor understanding of what determines this particular size.
The electron, with a mass of 0.511 megaelectron volts (MeV; 10⁶ eV), is the lightest of the charged leptons. The next-heavier charged lepton is the muon. It has a mass of 106 MeV, which is some 200 times greater than the electron's mass but is significantly less than the proton's mass of 938 MeV. Unlike the electron, which appears to be completely stable, the muon decays after an average lifetime of 2.2 millionths of a second into an electron, a neutrino, and an antineutrino. This process, like the beta decay of a neutron into a proton, an electron, and an antineutrino, occurs via the weak force. Experiments have shown that the intrinsic strength of the underlying reaction is the same in both kinds of decay, thus revealing that the weak force acts equally upon leptons (electrons, muons, neutrinos) and quarks (which form neutrons and protons).
There is a third, heavier type of charged lepton, called the tau. The tau, with a mass of 1,777 MeV, is even heavier than the proton and has a very short lifetime of about 10⁻¹³ second. Like the electron and the muon, the tau has its associated neutrino. The tau can decay into a muon, plus a tau-neutrino and a muon-antineutrino; or it can decay directly into an electron, plus a tau-neutrino and an electron-antineutrino. Because the tau is heavy, it can also decay into particles containing quarks. In one example the tau decays into particles called pi-mesons (see below Quarks and antiquarks), which are accompanied by a tau-neutrino.
Neutral leptons (neutrino)
Unlike the charged leptons, the electrically neutral leptons, the neutrinos, do not come under the influence of the electromagnetic force. They experience only the weakest two of nature's forces, the weak force and gravity. For this reason neutrinos react extremely weakly with matter. They can, for example, pass through the Earth without interacting, which makes it difficult to detect neutrinos and to measure their properties.
Although electrically neutral, the neutrinos seem to carry an identifying property that associates them specifically with one type of charged lepton. In the example of the muon's decay, the antineutrino produced is not simply the antiparticle of the neutrino that appears with it. The neutrino carries a muon-type hallmark, while the antineutrino, like the antineutrino emitted when a neutron decays, is always an electron-antineutrino. In interactions with matter, such electron-neutrinos and antineutrinos never produce muons, only electrons. Likewise, muon-neutrinos give rise to muons only, never to electrons.
Theory does not require the mass of neutrinos to be any specific amount, and in the past it was assumed to be zero. Experiments indicate that the mass of the antineutrino emitted in beta decay must be less than 10 eV, or less than 1/30,000 the mass of an electron. However, it remains possible that any or all of the neutrinos have some tiny mass. If so, both the tau-neutrino and the muon-neutrino, like the electron-neutrino, have masses that are much smaller than those of their charged counterparts. There is growing evidence that neutrinos can change from one type to another, or “oscillate.” This can happen only if the neutrino types in question have small differences in mass—and hence must have mass.
Hadrons
The name hadron comes from the Greek word for “strong”; it refers to all those particles that are built from quarks and therefore experience the strong force. The most common examples of this class are the proton and the neutron, the two types of particle that build up the nucleus of every atom.
Stable and resonant hadrons
Experiments have revealed a large number of hadrons, of which only the proton appears to be stable. Indeed, even if the proton is not absolutely stable, experiments show that its lifetime is at least in excess of 10³² years. In contrast, a single neutron, free from the forces at work within the nucleus, lives an average of nearly 15 minutes before decaying. Within a nucleus, however—even the simple nucleus of deuterium, which consists of one proton and one neutron—the balance of forces is sufficient to prolong the neutron's lifetime so that many nuclei are stable and a large variety of chemical elements exist.
Some hadrons typically exist for only 10⁻¹⁰ to 10⁻⁸ second. Fortunately for experimentalists, these particles are usually born in such high-energy collisions that they are moving at velocities close to the speed of light. Their timescale is therefore “stretched” or “slowed down” so that, in the high-speed particle's frame of reference, its lifetime may be 10⁻¹⁰ second, but, in a stationary observer's frame of reference, the particle lives much longer. This effect, known as time dilation in the theory of special relativity, allows stationary particle detectors to record the tracks left by these short-lived particles. These hadrons, which number about a dozen, are usually referred to as “stable” to distinguish them from still shorter-lived hadrons with lifetimes typically in the region of a mere 10⁻²³ second.
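A short calculation shows how large the stretching can be. The sketch below assumes, purely for illustration, a particle with a rest mass of 1 GeV carried to a total energy of 10 GeV; neither figure is taken from the article.

```python
# Sketch: time dilation for a fast-moving, short-lived particle.

import math

rest_lifetime = 1e-10   # proper lifetime, seconds
E = 10.0                # total energy, GeV (assumed for illustration)
m = 1.0                 # rest mass, GeV (assumed for illustration; c = 1 units)

gamma = E / m                            # Lorentz factor
lab_lifetime = gamma * rest_lifetime     # lifetime seen by a stationary observer
beta = math.sqrt(1.0 - 1.0 / gamma**2)   # speed as a fraction of c
track_length = beta * 2.998e8 * lab_lifetime

print(f"Lorentz factor:       {gamma:.1f}")
print(f"Lifetime in the lab:  {lab_lifetime:.1e} s")
print(f"Typical track length: {track_length:.2f} m")   # about 0.3 m
```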
The stable hadrons usually decay via the weak force. In some cases they decay by the electromagnetic force, which results in somewhat shorter lifetimes because the electromagnetic force is stronger than the weak force. The very-short-lived hadrons, however, which number 200 or more, decay via the strong force. This force is so strong that it allows the particles to live only for about the time it takes light to cross the particle; the particles decay almost as soon as they are created.
[Figure caption: The “footprint” of a D0 meson in a bubble chamber sensitive enough to reveal …]
These very-short-lived particles are called “resonant” because they are observed as a resonance phenomenon; they are too short-lived to be observed in any other way (see the figure). Resonance occurs when a system absorbs more energy than usual because the energy is being supplied at the system's own natural frequency. For example, soldiers break step when they cross a bridge because their rhythmic marching could make the bridge resonate—set it vibrating at its own natural frequency—so that it absorbs enough energy to cause damage. Subatomic-particle resonances occur when the net energy of colliding particles is just sufficient to create the rest mass of the new particle, which the strong force then breaks apart within 10⁻²³ second. The absorption of energy, or its subsequent emission in the form of particles as the resonance decays, is revealed as the energy of the colliding particles is varied.
Baryons and mesons
The hadrons, whether stable or resonant, fall into two classes: baryons and mesons. Originally the names referred to the relative masses of the two groups of particles. The baryons (from the Greek word for “heavy”) included the proton and heavier particles; the mesons (from the Greek word for “between”) were particles with masses between those of the electron and the proton. Now, however, the name baryon refers to any particle built from three quarks, such as the proton and the neutron. Mesons, on the other hand, are particles built from a quark combined with an antiquark. As described in the section The strong force, these are the only two combinations of quarks and antiquarks that the strong force apparently allows.
The two groups of hadrons are also distinguished from one another in terms of a property called baryon number. The baryons are characterized by a baryon number, B, of 1; antibaryons have a baryon number of −1; and the baryon number of the mesons, leptons, and messenger particles is 0. Baryon numbers are additive; thus, an atom containing one proton and one neutron (each with a baryon number of 1) has a baryon number of 2. Quarks therefore must have a baryon number of 1/3, and the antiquarks a baryon number of −1/3, in order to give the correct values of 1 or 0 when they combine to form baryons and mesons.
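The additivity of baryon number can be spelled out explicitly: assigning B = +1/3 to every quark and B = −1/3 to every antiquark, as above, reproduces the values quoted for baryons, antibaryons, mesons, and nuclei. A minimal sketch:

```python
# Sketch: baryon-number bookkeeping with B = +1/3 per quark, -1/3 per antiquark.

from fractions import Fraction

B_QUARK = Fraction(1, 3)
B_ANTIQUARK = Fraction(-1, 3)

def baryon_number(n_quarks, n_antiquarks=0):
    """Total baryon number of a combination of quarks and antiquarks."""
    return n_quarks * B_QUARK + n_antiquarks * B_ANTIQUARK

print(baryon_number(3))        # baryon (e.g. proton or neutron): 1
print(baryon_number(0, 3))     # antibaryon: -1
print(baryon_number(1, 1))     # meson (quark plus antiquark): 0
print(baryon_number(3) + baryon_number(3))   # deuterium nucleus: 2
```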
The empirical law of baryon conservation states that in any reaction the total number of baryons must remain constant. If any baryons are created, then so must be an equal number of antibaryons, whose baryon number of −1 cancels that of the new baryons in the total. Conservation of baryon number explains the apparent stability of the proton. The proton does not decay into lighter positive particles, such as the positron or the mesons, because those particles have a baryon number of 0. Neutrons and other heavy baryons can decay into the lighter protons, however, because the total number of baryons present does not change.
At a more-detailed level, baryons and mesons are differentiated from one another in terms of their spin. The basic quarks and antiquarks have a spin of 1/2 (which may be oriented in either of two directions). When three quarks combine to form a baryon, their spins can add up to only half-integer values. In contrast, when quarks and antiquarks combine to form mesons, their spins always add up to integer values. As a result, baryons are classified as fermions within the Standard Model of particle physics, whereas mesons are classified as bosons.
Quarks and antiquarks
The baryons and mesons are complex subatomic particles built from more-elementary objects, the quarks. Six types of quark, together with their corresponding antiquarks, are necessary to account for all the known hadrons. The six varieties, or “flavours,” of quark have acquired the names up, down, charm, strange, top, and bottom. The meaning of these somewhat unusual names is not important; they have arisen for a number of reasons. What is important is the way that the quarks contribute to matter at different levels and the properties that they bear.
The quarks are unusual in that they carry electric charges that are smaller in magnitude than e, the size of the charge of the electron (1.6 × 10⁻¹⁹ coulomb). This is necessary if quarks are to combine together to give the correct electric charges for the observed particles, usually 0, +e, or −e. Only two types of quark are necessary to build protons and neutrons, the constituents of atomic nuclei. These are the up quark, with a charge of +2/3e, and the down quark, which has a charge of −1/3e. The proton consists of two up quarks and one down quark, which gives it a total charge of +e. The neutron, on the other hand, is built from one up quark and two down quarks, so that it has a net charge of zero. The other properties of the up and down quarks also add together to give the measured values for the proton and neutron. For example, the quarks have spins of 1/2. In order to form a proton or a neutron, which also have spin 1/2, the quarks must align in such a way that two of the three spins cancel each other, leaving a net value of 1/2.
Up and down quarks can also combine to form particles other than protons and neutrons. For example, the spins of the three quarks can be arranged so that they do not cancel. In this case they form short-lived resonance states, which have been given the name delta, or Δ. The deltas have spins of 3/2, and the up and down quarks combine in four possible configurations—uuu, uud, udd, and ddd—where u and d stand for up and down. The charges of these Δ states are +2e, +e, 0, and −e, respectively.
The up and down quarks can also combine with their antiquarks to form mesons. The pi-meson, or pion, which is the lightest meson and an important component of cosmic rays, exists in three forms: with charge e (or 1), with charge 0, and with charge −e (or −1). In the positive state an up quark combines with a down antiquark; a down quark together with an up antiquark compose the negative pion; and the neutral pion is a quantum mechanical mixture of two states—uu̅ and dd̅, where the bar over the top of the letter indicates the antiquark.
Up and down are the lightest varieties of quarks. Somewhat heavier are a second pair of quarks, charm (c) and strange (s), with charges of +2/3e and −1/3e, respectively. A third, still heavier pair of quarks consists of top (or truth, t) and bottom (or beauty, b), again with charges of +2/3e and −1/3e, respectively. These heavier quarks and their antiquarks combine with up and down quarks and with each other to produce a range of hadrons, each of which is heavier than the basic proton and pion, which represent the lightest varieties of baryon and meson, respectively. For example, the particle called lambda (Λ) is a baryon built from u, d, and s quarks; thus, it is like the neutron but with a d quark replaced by an s quark.
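The charge assignments described in the preceding paragraphs can be verified by simple addition. The sketch below tabulates the quark charges given above (in units of e) and sums them for the proton, the neutron, one of the deltas, the positive pion, and the lambda.

```python
# Sketch: hadron charges obtained by adding quark charges (in units of e).
# Antiquarks carry the opposite charge of the corresponding quark.

from fractions import Fraction

CHARGE = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks, antiquarks=""):
    """Total charge, in units of e, of a quark/antiquark combination."""
    return sum(CHARGE[q] for q in quarks) - sum(CHARGE[q] for q in antiquarks)

print(hadron_charge("uud"))                  # proton: 1
print(hadron_charge("udd"))                  # neutron: 0
print(hadron_charge("uuu"))                  # delta++: 2
print(hadron_charge("u", antiquarks="d"))    # positive pion (u with anti-d): 1
print(hadron_charge("uds"))                  # lambda: 0
```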
The development of modern particle theory
Quantum electrodynamics: Describing the electromagnetic force
The year of the birth of particle physics is often cited as 1932. Near the beginning of that year James Chadwick, working in England at the Cavendish Laboratory in Cambridge, discovered the existence of the neutron. This discovery seemed to complete the picture of atomic structure that had begun with Ernest Rutherford's work at the University of Manchester, England, in 1911, when it became apparent that almost all of the mass of an atom was concentrated in a nucleus. The elementary particles seemed firmly established as the proton, the neutron, and the electron. By the end of 1932, however, Carl Anderson in the United States had discovered the first antiparticle—the positron, or antielectron. Moreover, Patrick Blackett and Giuseppe Occhialini, working, like Chadwick, at the Cavendish Laboratory, had revealed how positrons and electrons are created in pairs when cosmic rays pass through dense matter. It was becoming apparent that the simple picture provided by electrons, protons, and neutrons was incomplete and that a new theory was needed to explain fully the phenomena of subatomic particles.
The English physicist P.A.M. Dirac had provided the foundations for such a theory in 1927 with his quantum theory of the electromagnetic field. Dirac's theory treated the electromagnetic field as a “gas” of photons (the quanta of light), and it yielded a correct description of the absorption and emission of radiation by electrons in atoms. It was the first quantum field theory.
A year later Dirac published his relativistic electron theory, which took correct account of Albert Einstein's theory of special relativity. Dirac's theory showed that the electron must have a spin quantum number of 1/2 and a magnetic moment. It also predicted the existence of the positron, although Dirac did not at first realize this and puzzled over what seemed like extra solutions to his equations. Only with Anderson's discovery of the positron did the picture become clear: radiation, a photon, can produce electrons and positrons in pairs, provided the energy of the photon is greater than the total mass-energy of the two particles—that is, about 1 megaelectron volt (MeV; 10⁶ eV).
Dirac's quantum field theory was a beginning, but it explained only one aspect of the electromagnetic interactions between radiation and matter. During the following years other theorists began to extend Dirac's ideas to form a comprehensive theory of quantum electrodynamics (QED) that would account fully for the interactions of charged particles not only with radiation but also with one another. One important step was to describe the electrons in terms of fields, in analogy to the electromagnetic field of the photons. This enabled theorists to describe everything in terms of quantum field theory. It also helped to cast light on Dirac's positrons.
According to QED, a vacuum is filled with electron-positron fields. Real electron-positron pairs are created when energetic photons, represented by the electromagnetic field, interact with these fields. Virtual electron-positron pairs, however, can also exist for minute durations, as dictated by Heisenberg's uncertainty principle, and this at first led to fundamental difficulties with QED.
During the 1930s it became clear that, as it stood, QED gave the wrong answers for quite simple problems. For example, the theory said that the emission and reabsorption of the same photon would occur with an infinite probability. This led in turn to infinities occurring in many situations; even the mass of a single electron was infinite according to QED because, on the timescales of the uncertainty principle, the electron could continuously emit and absorb virtual photons.
It was not until the late 1940s that a number of theorists working independently resolved the problems with QED. Julian Schwinger and Richard Feynman in the United States and Tomonaga Shin'ichirō in Japan proved that they could rid the theory of its embarrassing infinities by a process known as renormalization. Basically, renormalization acknowledges all possible infinities and then allows the positive infinities to cancel the negative ones; the mass and charge of the electron, which are infinite in theory, are then defined to be their measured values.
Once these steps have been taken, QED works beautifully. It is the most accurate quantum field theory scientists have at their disposal. In recognition of their achievement, Feynman, Schwinger, and Tomonaga were awarded the Nobel Prize for Physics in 1965; Dirac had been similarly honoured in 1933.
Quantum chromodynamics: Describing the strong force
The nuclear binding force
As early as 1920, when Ernest Rutherford named the proton and accepted it as a fundamental particle, it was clear that the electromagnetic force was not the only force at work within the atom. Something stronger had to be responsible for binding the positively charged protons together and thereby overcoming their natural electrical repulsion. The discovery in 1932 of the neutron showed that there are (at least) two kinds of particles subject to the same force. Later in the same year, Werner Heisenberg in Germany made one of the first attempts to develop a quantum field theory that was analogous to QED but appropriate to the nuclear binding force.
According to quantum field theory, particles can be held together by a “charge-exchange” force, which is carried by charged intermediary particles. Heisenberg's application of this theory gave birth to the idea that the proton and neutron were charged and neutral versions of the same particle—an idea that seemed to be supported by the fact that the two particles have almost equal masses. Heisenberg proposed that a proton, for example, could emit a positively charged particle that was then absorbed by a neutron; the proton thus became a neutron, and vice versa. The nucleus was no longer viewed as a collection of two kinds of immutable billiard balls but rather as a continuously changing collection of protons and neutrons that were bound together by the exchange particles flitting between them.
Heisenberg believed that the exchange particle involved was an electron (he did not have many particles from which to choose). This electron had to have some rather odd characteristics, however, such as no spin and no magnetic moment, and this made Heisenberg's theory ultimately unacceptable. Quantum field theory did not seem applicable to the nuclear binding force. Then in 1935 a Japanese theorist, Yukawa Hideki, took a bold step: he invented a new particle as the carrier of the nuclear binding force.
The size of a nucleus shows that the binding force must be short-ranged, confining protons and neutrons within distances of about 10^−14 metre. Yukawa argued that, to give this limited range, the force must involve the exchange of particles with mass, unlike the massless photons of QED. According to the uncertainty principle, exchanging a particle with mass sets a limit on the time allowed for the exchange and therefore restricts the range of the resulting force. Yukawa calculated a mass of about 200 times the electron's mass, or 100 MeV, for the new intermediary. Because the predicted mass of the new particle was between those of the electron and the proton, the particle was named the mesotron, later shortened to meson.
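The connection between range and mass can be made semi-quantitative with the order-of-magnitude relation mc^2 ≈ ħc/R that follows from the uncertainty principle. The short Python sketch below is only an illustration: the value ħc ≈ 197 MeV·fm is standard, but the range of roughly 2 femtometres is an assumed, representative figure for the nuclear binding force.

    # Order-of-magnitude Yukawa estimate: a force carried by a particle of mass m
    # has range R ~ hbar/(m*c), so the carrier's rest energy is m*c^2 ~ hbar*c/R.
    HBAR_C_MEV_FM = 197.3          # hbar * c in MeV * femtometre (1 fm = 10^-15 m)
    assumed_range_fm = 2.0         # illustrative range of the nuclear binding force
    carrier_rest_energy_mev = HBAR_C_MEV_FM / assumed_range_fm
    print(f"carrier rest energy ~ {carrier_rest_energy_mev:.0f} MeV")   # about 100 MeV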
Yukawa's work was little known outside Japan until 1937, when Carl Anderson and his colleague Seth Neddermeyer announced that, five years after Anderson's discovery of the positron, they had found a second new particle in cosmic radiation. The new particle seemed to have exactly the mass Yukawa had prescribed and thus was seen as confirmation of Yukawa's theory by the Americans J. Robert Oppenheimer and Robert Serber, who made Yukawa's work more widely known in the West.
In the following years, however, it became clear that there were difficulties in reconciling the properties expected for Yukawa's intermediary particle with those of the new cosmic-ray particle. In particular, as a group of Italian physicists succeeded in demonstrating (while hiding from the occupying German forces during World War II), the cosmic-ray particles penetrate matter far too easily to be related to the nuclear binding force. To resolve this apparent paradox, theorists both in Japan and in the United States had begun to think that there might be two mesons. The two-meson theory proposed that Yukawa's nuclear meson decays into the penetrating meson observed in the cosmic rays.
In 1947 scientists at Bristol University in England found the first experimental evidence of two mesons in cosmic rays high on the Pic du Midi in France. Using detectors equipped with special photographic emulsion that can record the tracks of charged particles, the physicists at Bristol found the decay of a heavier meson into a lighter one. They called the heavier particle π (or pi), and it has since become known as the pi-meson, or pion. The lighter particle was dubbed μ (or mu) and is now known simply as the muon. (According to the modern definition of a meson as a particle consisting of a quark bound with an antiquark, the muon is not actually a meson. It is classified as a lepton—a relation of the electron.)
Studies of pions produced in cosmic radiation and in the first particle accelerators showed that the pion behaves precisely as expected for Yukawa's particle. Moreover, experiments confirmed that positive, negative, and neutral varieties of pions exist, as predicted by Nicholas Kemmer in England in 1938. Kemmer regarded the nuclear binding force as symmetrical with respect to the charge of the particles involved. He proposed that the nuclear force between protons and protons or between neutrons and neutrons is the same as the one between protons and neutrons. This symmetry required the existence of a neutral intermediary that did not figure in Yukawa's original theory. It also established the concept of a new “internal” property of subatomic particles—isospin.
Kemmer's work followed to some extent the trail Heisenberg had begun in 1932. Close similarities between nuclei containing the same total number of protons and neutrons, but in different combinations, suggest that protons can be exchanged for neutrons and vice versa without altering the net effect of the nuclear binding force. In other words, the force recognizes no difference between protons and neutrons; it is symmetrical under the interchange of protons and neutrons, rather as a square is symmetrical under rotations through 90°, 180°, and so on.
To introduce this symmetry into the theory of the nuclear force, it proved useful to adopt the mathematics describing the spin of particles. In this respect the proton and neutron are seen as different states of a single basic nucleon. These states are differentiated by an internal property that can have two values, +1/2 and −1/2, in analogy with the spin of a particle such as the electron. This new property is called isotopic spin, or isospin for short, and the nuclear binding force is said to exhibit isospin symmetry.
Symmetries are important in physics because they simplify the theories needed to describe a range of observations. For example, as far as physicists can tell, all physical laws exhibit translational symmetry. This means that the results of an experiment performed at one location in space and time can be used to predict correctly the outcome of the same experiment in another part of space and time. This symmetry is reflected in the conservation of momentum—the fact that the total momentum of a system remains constant unless it is acted upon by an external force.
Isospin symmetry is an important symmetry in particle physics, although it occurs only in the action of the nuclear binding force—or, in modern terminology, the strong force. The symmetry leads to the conservation of isospin in nuclear interactions that occur via the strong force and thereby determines which reactions can occur.
“Strangeness”
The discovery of the pion in 1947 seemed to restore order to the study of particle physics, but this order did not last long. Later in the year Clifford Butler and George Rochester, two British physicists studying cosmic rays, discovered the first examples of yet another type of new particle. The new particles were heavier than the pion or muon but lighter than the proton, with a mass of about 800 times the electron's mass. Within the next few years, researchers found copious examples of these particles, as well as other new particles that were heavier even than the proton. The evidence seemed to indicate that these particles were created in strong interactions in nuclear matter, yet the particles lived for a relatively long time without themselves interacting strongly with matter. This strange behaviour in some ways echoed the earlier problem with Yukawa's supposed meson, but the solution for the new “strange” particles proved to be different.
By 1953 at least four different kinds of strange particles had been observed. In an attempt to bring order into this increasing number of subatomic particles, Murray Gell-Mann in the United States and Nishijima Kazuhiko in Japan independently suggested a new conservation law. They argued that the strange particles must possess some new property, dubbed “strangeness,” that is conserved in the strong nuclear reactions in which the particles are created. In the decay of the particles, however, a different, weaker force is at work, and this weak force does not conserve strangeness—as with isospin symmetry, which is respected only by the strong force.
According to this proposal, particles are assigned a strangeness quantum number, S, which can have only integer values. The pion, proton, and neutron have S = 0. Because the strong force conserves strangeness, it can produce strange particles only in pairs, in which the net value of strangeness is zero. This phenomenon, the importance of which was recognized by both Nishijima and the American physicist Abraham Pais in 1952, is known as associated production.
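A concrete example of associated production is the strong reaction π− + p → K0 + Λ, in which the K0 (standard assignment S = +1) is produced together with the Λ (S = −1). The bookkeeping sketch below simply adds the strangeness quantum numbers before and after the reaction; it is an illustration of the conservation rule, nothing more.

    # Strangeness bookkeeping for the associated-production reaction
    #   pi-  +  p   ->   K0  +  Lambda
    strangeness = {"pi-": 0, "p": 0, "K0": +1, "Lambda": -1}
    before = strangeness["pi-"] + strangeness["p"]
    after = strangeness["K0"] + strangeness["Lambda"]
    print(before, after, before == after)    # 0 0 True: net strangeness is conserved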
SU(3) symmetry
With the introduction of strangeness, physicists had several properties with which they could label the various subatomic particles. In particular, values of mass, electric charge, spin, isospin, and strangeness gave physicists a means of classifying the strongly interacting particles—or hadrons—and of establishing a hierarchy of relationships between them. In 1962 Gell-Mann and Yuval Neʾeman, an Israeli scientist, independently showed that a particular type of mathematical symmetry provides the kind of grouping of hadrons that is observed in nature. The name of the mathematical symmetry is SU(3), which stands for “special unitary group in three dimensions.”
[Figure: Combinations of the quarks u, d, and s and their corresponding antiquarks …]
SU(3) contains subgroups of objects that are related to each other by symmetrical transformations, rather as a group describing the rotations of a square through 90° contains the four symmetrical positions of the square. Gell-Mann and Neʾeman both realized that the basic subgroups of SU(3) contain either 8 or 10 members and that the observed hadrons can be grouped together in 8s or 10s in the same way. (The classification of the hadron class of subatomic particles into groups on the basis of their symmetry properties is also referred to as the Eightfold Way.) For example, the proton, neutron, and their relations with spin 1/2 fall into one octet, or group of 8, while the pion and its relations with spin 0 fit into another octet (see the figure). A group of 9 very short-lived resonance particles with spin 3/2 could be seen to fit into a decuplet, or group of 10, although at the time the classification was introduced, the 10th member of the group, the particle known as the Ω− (or omega-minus), had not yet been observed. Its discovery early in 1964, at the Brookhaven National Laboratory in Upton, New York, confirmed the validity of the SU(3) symmetry of the hadrons.
The development of quark theory
The beauty of the SU(3) symmetry does not, however, explain why it holds true. Gell-Mann and another American physicist, George Zweig, independently decided in 1964 that the answer to that question lies in the fundamental nature of the hadrons. The most basic subgroup of SU(3) contains only three objects, from which the octets and decuplets can be built. The two theorists made the bold suggestion that the hadrons observed at the time were not simple structures but were instead built from three basic particles. Gell-Mann called these particles quarks—the name that remains in use today.
By the time Gell-Mann and Zweig put forward their ideas, the list of known subatomic particles had grown from the three of 1932—electron, proton, and neutron—to include most of the stable hadrons and a growing number of short-lived resonances, as well as the muon and two types of neutrino. That the seemingly ever-increasing number of hadrons could be understood in terms of only three basic building blocks was remarkable indeed. For this to be possible, however, those building blocks—the quarks—had to have some unusual properties.
These properties were so odd that for a number of years it was not clear whether quarks actually existed or were simply a useful mathematical fiction. For example, quarks must have charges of +2/3e or −1/3e, which should be very easy to spot in certain kinds of detectors; but intensive searches, both in cosmic rays and using particle accelerators, have never revealed any convincing evidence for fractional charge of this kind. By the mid-1970s, however, 10 years after quarks were first proposed, scientists had compiled a mass of evidence that showed that quarks do exist but are locked within the individual hadrons in such a way that they can never escape as single entities.
This evidence resulted from experiments in which beams of electrons, muons, or neutrinos were fired at the protons and neutrons in such target materials as hydrogen (protons only), deuterium, carbon, and aluminum. The incident particles used were all leptons, particles that do not feel the strong binding force and that were known, even then, to be much smaller than the nuclei they were probing. The scattering of the beam particles caused by interactions within the target clearly demonstrated that protons and neutrons are complex structures that contain structureless, pointlike objects, which were named partons because they are parts of the larger particles. The experiments also showed that the partons can indeed have fractional charges of +2/3e or −1/3e and thus confirmed one of the more surprising predictions of the quark model.
Gell-Mann and Zweig required only three quarks to build the particles known in 1964. These quarks are the ones known as up (u), down (d), and strange (s). Since then, experiments have revealed a number of heavy hadrons—both mesons and baryons—which show that there are more than three quarks. Indeed, the SU(3) symmetry is part of a larger mathematical symmetry that incorporates quarks of several “flavours”—the term used to distinguish the different quarks. In addition to the up, down, and strange quarks, there are quarks known as charm (c), bottom (or beauty, b), and top (or truth, t). These quark flavours are all conserved during reactions that occur through the strong force; in other words, charm must be created in association with anticharm, bottom with antibottom, and so on. This implies that the quarks can change from one flavour to another only by way of the weak force, which is responsible for the decay of particles.
The up and down quarks are distinguished mainly by their differing electric charges, while the heavier quarks each carry a unique quantum number related to their flavour. The strange quark has strangeness, S = −1, the charm quark has charm, C = +1, and so on. Thus, three strange quarks together give a particle with an electric charge of −e and a strangeness of −3, just as is required for the omega-minus (Ω−) particle; and the neutral strange particle known as the lambda (Λ) particle contains uds, which gives the correct total charge of 0 and a strangeness of −1. Using this system, the lambda can be viewed as a neutron with one down quark changed to a strange quark; charge and spin remain the same, but the strange quark makes the lambda heavier than the neutron. Thus, the quark model reveals that nature is not arbitrary when it produces particles but is in some sense repeating itself on a more-massive scale.
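The charge and strangeness arithmetic in this paragraph can be written out explicitly. The sketch below uses the standard quark assignments (u: +2/3 e; d: −1/3 e; s: −1/3 e with S = −1) and simply sums them for the quark contents quoted in the text; it is an illustration of the bookkeeping, not a model of the dynamics.

    from fractions import Fraction   # exact thirds, to avoid floating-point clutter

    charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}
    strangeness = {"u": 0, "d": 0, "s": -1}

    def totals(quark_content):
        q = sum(charge[x] for x in quark_content)
        s = sum(strangeness[x] for x in quark_content)
        return f"charge = {q} e, strangeness = {s}"

    print("sss:", totals("sss"))   # charge = -1 e, strangeness = -3  (the omega-minus)
    print("uds:", totals("uds"))   # charge = 0 e,  strangeness = -1  (the lambda)
    print("udd:", totals("udd"))   # charge = 0 e,  strangeness = 0   (the neutron, for comparison)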
Colour
The realization in the late 1960s that protons, neutrons, and even Yukawa's pions are all built from quarks changed the direction of thinking about the nuclear binding force. Although at the level of nuclei Yukawa's picture remained valid, at the more-minute quark level it could not satisfactorily explain what held the quarks together within the protons and pions or what prevented the quarks from escaping one at a time.
The answer to questions like these seems to lie in the property called colour. Colour was originally introduced to solve a problem raised by the exclusion principle that was formulated by the Austrian physicist Wolfgang Pauli in 1925. This rule does not allow two identical particles with half-integer spin, such as quarks of the same flavour, to occupy the same quantum state. However, the omega-minus particle, for example, contains three quarks of the same flavour, sss, and has spin 3/2, so the quarks must also all be in the same spin state. The omega-minus particle, according to the Pauli exclusion principle, should not exist.
To resolve this paradox, in 1964–65 Oscar Greenberg in the United States and, independently, Yoichiro Nambu and his collaborators proposed the existence of a new property with three possible states. In analogy to the three primary colours of light, the new property became known as colour and the three varieties as red, green, and blue.
The three colour states and the three anticolour states (ascribed to antiquarks) are comparable to the two states of electric charge and anticharge (positive and negative), and hadrons are analogous to atoms. Just as atoms contain constituents whose electric charges balance overall to give a neutral atom, hadrons consist of coloured quarks that balance to give a particle with no net colour. Moreover, nuclei can be built from colourless protons and neutrons, rather as molecules form from electrically neutral atoms. Even Yukawa's pion exchange can be compared to exchange models of chemical bonding.
This analogy between electric charge and colour led to the idea that colour could be the source of the force between quarks, just as electric charge is the source of the electromagnetic force between charged particles. The colour force was seen to be working not between nucleons, as in Yukawa's theory, but between quarks. In the late 1960s and early 1970s, theorists turned their attention to developing a quantum field theory based on coloured quarks. In such a theory colour would take the role of electric charge in QED.
It was obvious that the field theory for coloured quarks had to be fundamentally different from QED because there are three kinds of colour as opposed to two states of electric charge. To give neutral objects, electric charges combine with an equal number of anticharges, as in atoms where the number of negative electrons equals the number of positive protons. With colour, however, three different charges must add together to give zero. In addition, because SU(3) symmetry (the same type of mathematical symmetry that Gell-Mann and Neʾeman used for three flavours) applies to the three colours, quarks of one colour must be able to transform into another colour. This implies that a quark can emit something—the quantum of the field due to colour—that itself carries colour. And if the field quanta are coloured, then they can interact between themselves, unlike the photons of QED, which are electrically neutral.
Despite these differences, the basic framework for a field theory based on colour already existed by the late 1960s, owing in large part to the work of theorists, particularly Chen Ning Yang and Robert Mills in the United States, who had studied similar theories in the 1950s. The new theory of the strong force was called quantum chromodynamics, or QCD, in analogy to quantum electrodynamics, or QED. In QCD the source of the field is the property of colour, and the field quanta are called gluons. Eight gluons are necessary in all to make the changes between the coloured quarks according to the rules of SU(3).
Asymptotic freedom
In the early 1970s the American physicists David J. Gross and Frank Wilczek (working together) and H. David Politzer (working independently) discovered that the strong force between quarks becomes weaker at smaller distances and that it becomes stronger as the quarks move apart, thus preventing the separation of an individual quark. This is completely unlike the behaviour of the electromagnetic force. The quarks have been compared to prisoners on a chain gang. When they are close together, they can move freely and do not notice the chains binding them. If one quark/prisoner tries to move away, however, the strength of the chains is felt, and escape is prevented. This behaviour has been attributed to the fact that the virtual gluons that flit between the quarks within a hadron are not neutral but carry mixtures of colour and anticolour. The farther away a quark moves, the more gluons appear, each contributing to the net force. When the quarks are close together, they exchange fewer gluons, and the force is weaker. Only at infinitely close distances are quarks free, an effect known as asymptotic freedom. For their discovery of this effect, Gross, Wilczek, and Politzer were awarded the 2004 Nobel Prize for Physics.
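The weakening of the force at short distances corresponds to a "running" of the strong coupling with energy. The sketch below evaluates the standard one-loop formula for the QCD coupling, α_s(Q) = 12π / [(33 − 2n_f) ln(Q²/Λ²)]; the choices of the QCD scale Λ ≈ 0.2 GeV and of five quark flavours are illustrative assumptions, not fitted values.

    import math

    def alpha_s(q_gev, n_flavours=5, lambda_qcd_gev=0.2):
        """One-loop running of the strong coupling; parameter values are illustrative."""
        return 12 * math.pi / ((33 - 2 * n_flavours) * math.log(q_gev ** 2 / lambda_qcd_gev ** 2))

    for q in (2, 10, 100):                              # probing scale in GeV
        print(f"alpha_s({q} GeV) ~ {alpha_s(q):.3f}")   # the coupling shrinks as the scale grows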
The strong coupling between the quarks and gluons makes QCD a difficult theory to study. Mathematical procedures that work in QED cannot be used in QCD. The theory has nevertheless had a number of successes in describing the observed behaviour of particles in experiments, and theorists are confident that it is the correct theory to use for describing the strong force.
Electroweak theory: Describing the weak force
Beta decay
The strong force binds particles together; by binding quarks within protons and neutrons, it indirectly binds protons and neutrons together to form nuclei. Nuclei can, however, break apart, or decay, naturally in the process known as radioactivity. One type of radioactivity, called beta decay, in which a nucleus emits an electron and thereby increases its net positive charge by one unit, has been known since the late 1890s; but it was only with the discovery of the neutron in 1932 that physicists could begin to understand correctly what happens in this radioactive process.
The most basic form of beta decay involves the transmutation of a neutron into a proton, accompanied by the emission of an electron to keep the balance of electric charge. In addition, as Wolfgang Pauli realized in 1930, the neutron emits a neutral particle that shares the energy released by the decay. This neutral particle has little or no mass and is now known to be an antineutrino, the antiparticle of the neutrino. On its own, a neutron will decay in this way after an average lifetime of 15 minutes; only within the confines of certain nuclei does the balance of forces prevent neutrons from decaying and thereby keep the entire nucleus stable.
A universal weak force
The rates of nuclear decay indicate that any force involved in beta decay must be much weaker than the force that binds nuclei together. It may seem counterintuitive to think of a nuclear force that can disrupt the nucleus; however, the transformation of a neutron into a proton that occurs in neutron decay is comparable to the transformations by exchange of pions that Yukawa suggested to explain the nuclear binding force. Indeed, Yukawa's theory originally tried to explain both kinds of phenomena—weak decay and strong binding—with the exchange of a single type of particle. To give the different strengths, he proposed that the exchange particle couples strongly to the heavy neutrons and protons and weakly to the light electrons and neutrinos.
Yukawa was foreshadowing future developments in unifying the two nuclear forces in this way; however, as is explained below, he had chosen the wrong two forces. He was also bold in incorporating two “new” particles in his theory—the necessary exchange particle and the neutrino predicted by Pauli only five years previously.
Pauli had been hesitant in suggesting that a second particle must be emitted in beta decay, even though that would explain why the electron could leave with a range of energies. Such was the prejudice against the prediction of new particles that theorists as eminent as Danish physicist Niels Bohr preferred to suggest that the law of conservation of energy might break down at subnuclear distances.
By 1934, however, Pauli's new particle had found a champion in Enrico Fermi. Fermi named the particle the neutrino and incorporated it into his theory for beta decay, published that year. Like Yukawa, Fermi drew on an analogy with QED; but Fermi regarded the emission of the neutrino and electron by the neutron as the direct analog of the emission of a photon by a charged particle, and he did not invoke a new exchange particle. Only later did it become clear that, strictly speaking, the neutron emits an antineutrino.
Fermi's theory, rather than Yukawa's, proved highly successful in describing nuclear beta decay, and it received added support in the late 1940s with the discovery of the pion and its relationship with the muon (see above Quantum chromodynamics). In particular, the muon decays to an electron, a neutrino, and an antineutrino in a process that has exactly the same basic strength as the neutron's decay to a proton. The idea of a “universal” weak interaction that, unlike the strong force, acts equally upon light and heavy particles (or leptons and hadrons) was born.
Early theories
The nature of the weak force began to be further revealed in 1956 as the result of work by two Chinese American theorists, Tsung-Dao Lee and Chen Ning Yang. Lee and Yang were trying to resolve some puzzles in the decays of the strange particles. They discovered that they could solve the mystery, provided that the weak force does not respect the symmetry known as parity.
The parity operation is like reflecting something in a mirror; it involves changing the coordinates (x, y, z) of each point to the “mirror” coordinates (−x, −y, −z). Physicists had always assumed that such an operation would make no difference to the laws of physics. Lee and Yang, however, proposed that the weak force is exceptional in this respect, and they suggested ways that parity violation might be observed in weak interactions. Early in 1957, just a few months after Lee and Yang's theory was published, experiments involving the decays of neutrons, pions, and muons showed that the weak force does indeed violate parity symmetry. Later that year Lee and Yang were awarded the Nobel Prize for Physics for their work.
Parity violation and the concept of a universal form of weak interaction were combined into one theory in 1958 by the American physicists Murray Gell-Mann and Richard Feynman. They established the mathematical structure of the weak interaction in what is known as V−A, or vector minus axial vector, theory. This theory proved highly successful experimentally, at least at the relatively low energies accessible to particle physicists in the 1960s. It was clear that the theory had the correct kind of mathematical structure to account for parity violation and related effects, but there were strong indications that, in describing particle interactions at higher energies than experiments could at the time access, the theory began to go badly wrong.
The problems with V−A theory were related to a basic requirement of quantum field theory—the existence of a gauge boson, or messenger particle, to carry the force. Yukawa had attempted to describe the weak force in terms of the same intermediary that is responsible for the nuclear binding force, but this approach did not work. A few years after Yukawa published his theory, a Swedish theorist, Oskar Klein, proposed a slightly different kind of carrier for the weak force.
In contrast to Yukawa's particle, which had spin 0, Klein's intermediary had spin 1 and therefore would give the correct spins for the antineutrino and the electron emitted in the beta decay of the neutron. Moreover, within the framework of Klein's concept, the known strength of the weak force in beta decay showed that the mass of the particle must be approximately 100 times the proton's mass, although the theory could not predict this value. All attempts to introduce such a particle into V−A theory, however, encountered severe difficulties, similar to those that had beset QED during the 1930s and early '40s. The theory gave infinite probabilities to various interactions, and it defied the renormalization process that had been the salvation of QED.
Hidden symmetry
Throughout the 1950s, theorists tried to construct field theories for the nuclear forces that would exhibit the same kind of gauge symmetry inherent in James Clerk Maxwell's theory of electrodynamics and in QED. There were two major problems, which were in fact related. One concerned the infinities and the difficulty in renormalizing these theories; the other concerned the mass of the intermediaries. Straightforward gauge theory requires particles of zero mass as carriers, such as the photon of QED, but Klein had shown that the short-ranged weak force requires massive carriers.
In short, physicists had to discover the correct mathematical symmetry group for describing the transformations between different subatomic particles and then identify for the known forces the messenger particles required by fields with the chosen symmetry. Early in the 1960s Sheldon Glashow in the United States and Abdus Salam and John Ward in England decided to work with a combination of two symmetry groups—namely, SU(2) × U(1). Such a symmetry requires four spin-1 messenger particles, two electrically neutral and two charged. One of the neutral particles could be identified with the photon, while the two charged particles could be the messengers responsible for beta decay, in which charge changes hands, as when the neutron decays into a proton. The fourth messenger, a second neutral particle, seemed at the time to have no obvious role; it apparently would permit weak interactions with no change of charge—so-called neutral current interactions—which had not yet been observed.
This theory, however, still required the messengers to be massless, which was all right for the photon but not for the messengers of the weak force. Toward the end of the 1960s, Salam and Steven Weinberg, an American theorist, independently realized how to introduce massive messenger particles into the theory while at the same time preserving its basic gauge symmetry properties. The answer lay in the work of the English theorist Peter Higgs and others, who had discovered the concept of symmetry breaking, or, more descriptively, hidden symmetry.
A physical field can be intrinsically symmetrical, although this may not be apparent in the state of the universe in which experiments are conducted. On the Earth's surface, for example, gravity seems asymmetrical—it always pulls down. From a distance, however, the symmetry of the gravitational field around the Earth becomes apparent. At a more-fundamental level, the fields associated with the electromagnetic and weak forces are not overtly symmetrical, as is demonstrated by the widely differing strengths of weak and electromagnetic interactions at low energies. Yet, according to Higgs's ideas, these forces can have an underlying symmetry. It is as if the universe lies at the bottom of a wine bottle; the symmetry of the bottle's base is clear from the top of the dimple in the centre, but it is hidden from any point in the valley surrounding the central dimple.
Higgs's mechanism for symmetry breaking provided Salam and Weinberg with a means of explaining the masses of the carriers of the weak force. Their theory, however, also predicted the existence of one or more new “Higgs” particles, which would carry additional fields needed for the symmetry breaking and would have spin 0. With this sole proviso the future of the electroweak theory began to look more promising. In 1971 a young Dutch theorist, Gerardus 't Hooft, building on work by Martinus Veltman, proved that the theory is renormalizable (in other words, that all the infinities cancel out). Many particle physicists became convinced that the electroweak theory was, at last, an acceptable theory for the weak force.
Finding the messenger particles
In addition to the Higgs particle, or particles, electroweak theory also predicts the existence of an electrically neutral carrier for the weak force. This neutral carrier, called the Z0, should mediate the neutral current interactions—weak interactions in which electric charge is not transferred between particles. The search for evidence of such reactions, which would confirm the validity of the electroweak theory, began in earnest in the early 1970s.
The first signs of neutral currents came in 1973 from experiments at the European Organization for Nuclear Research (CERN) near Geneva. A team of more than 50 physicists from a variety of countries had diligently searched through the photographs taken of tracks produced when a large bubble chamber called Gargamelle was exposed to a beam of muon-antineutrinos. In a neutral current reaction an antineutrino would simply scatter from an electron in the liquid contents of the bubble chamber. The incoming antineutrino, being neutral, would leave no track, nor would it leave a track as it left the chamber after being scattered off an electron. But the effect of the neutral current—the passage of a virtual Z0 between the antineutrino and the electron—would set the electron in motion, and, being electrically charged, the electron would leave a track, which would appear as if from nowhere. Examining approximately 1.4 million pictures, the researchers found three examples of such a neutral current reaction. Although the reactions occurred only rarely, there were enough to set hopes high for the validity of electroweak theory.
In 1979 Glashow, Salam, and Weinberg, the theorists who had done much of the work in developing electroweak theory in the 1960s, were awarded the Nobel Prize for Physics; 't Hooft and Veltman were similarly honoured in 1999. By that time, enough information on charged and neutral current interactions had been compiled to predict that the masses of the weak messengers required by electroweak theory should be about 80 gigaelectron volts (GeV; 10^9 eV) for the charged W+ and W− particles and 90 GeV for the Z0. There was, however, still no sign of the direct production of the weak messengers, because no accelerator was yet capable of producing collisions energetic enough to create real particles of such large masses (nearly 100 times as massive as the proton).
A scheme to find the W and Z particles was under way at CERN, however. The plan was to accelerate protons in one direction around CERN's largest proton synchrotron (a circular accelerator) and antiprotons in the opposite direction. At an appropriate energy (initially 270 GeV per beam), the two sets of particles would be made to collide head-on. The total energy of the collision would be far greater than anything that could be achieved by directing a single beam at a stationary target, and physicists hoped it would be sufficient to produce a small but significant number of W and Z particles.
In 1983 the researchers at CERN, working on two experiments code-named UA1 and UA2, were rewarded with the discovery of the particles they sought. The Ws and Zs that were produced did not live long enough to leave tracks in the detectors, but they decayed to particles that did leave tracks. The total energy of those decay particles, moreover, equaled the energy corresponding to the masses of the transient W and Z particles, just as predicted by electroweak theory. It was a triumph both for CERN and for electroweak theory. Hundreds of physicists and engineers were involved in the project, and in 1984 the Italian physicist Carlo Rubbia and Dutch engineer Simon van der Meer received the Nobel Prize for Physics for their leading roles in making the discovery of the W and Z particles possible.
The W particles play a crucial role in interactions that turn one flavour of quark or lepton into another, as in the beta decay of a neutron, where a down quark turns into an up quark to form a proton. Such flavour-changing interactions occur only through the weak force and are described by the SU(2) symmetry that underlies electroweak theory along with U(1). The basic representation of this mathematical group is a pair, or doublet, and, according to electroweak theory, the quarks and leptons are each grouped into pairs of increasing mass: (u, d), (c, s), (t, b) and (e, νe), (μ, νμ), (τ, ντ). This underlying symmetry does not, however, indicate how many pairs of quarks and leptons should exist in total. This question was answered in experiments at CERN in 1989, when the colliding-beam storage ring particle accelerator known as the Large Electron-Positron (LEP) collider came into operation.
When LEP started up, it could collide electrons and positrons at total energies of about 90 GeV, producing copious numbers of Z particles. Through accurate measurements of the “width” of the Z—that is, the intrinsic variation in its mass, which is related to the number of ways the particle can decay—researchers at the LEP collider have found that the Z can decay to no more than three types of light neutrino. This in turn implies that there are probably no more than three pairs of leptons and three pairs of quarks.
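The logic of this neutrino count can be sketched numerically. In the sketch below, the total Z width and the partial widths to quarks and charged leptons are approximate published values, quoted only for illustration; subtracting the visible widths leaves an "invisible" width, and dividing by the width predicted for a single light neutrino species gives a number close to three.

    # Counting light neutrino species from the Z width (approximate, illustrative values in GeV).
    total_width = 2.495                   # measured total Z width
    hadronic_width = 1.744                # partial width to quark-antiquark pairs
    charged_lepton_width = 3 * 0.084      # e, mu, and tau pairs
    width_per_neutrino = 0.167            # predicted width for one light neutrino species

    invisible_width = total_width - hadronic_width - charged_lepton_width
    print(f"number of light neutrino species ~ {invisible_width / width_per_neutrino:.2f}")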
Current research in particle physics
Experiments
Testing the Standard Model
Electroweak theory, which describes the electromagnetic and weak forces, and quantum chromodynamics, the gauge theory of the strong force, together form what particle physicists call the Standard Model. The Standard Model, which provides an organizing framework for the classification of all known subatomic particles, works well as far as can be measured by means of present technology, but several points still await experimental verification or clarification. Furthermore, the model is still incomplete.
Prior to 1994 one of the main missing ingredients of the Standard Model was the top quark, which was required to complete the set of three pairs of quarks. Searches for this sixth and heaviest quark failed repeatedly until in April 1994 a team working on the Collider Detector Facility (CDF) at Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, announced tentative evidence for the top quark. This was confirmed the following year, when not only the CDF team but also an independent team working on a second experiment at Fermilab, code-named DZero, or D0, published more convincing evidence. The results indicated that the top quark has a mass between 170 and 190 gigaelectron volts (GeV; 10^9 eV). This is almost as heavy as a nucleus of lead, so it was not surprising that previous experiments had failed to find the top quark. The discovery had required the highest-energy particle collisions available—those at Fermilab's Tevatron, which collides protons with antiprotons at a total energy of 1,800 GeV, or 1.8 teraelectron volts (TeV; 10^12 eV).
The discovery of the top quark in a sense completed another chapter in the history of particle physics; it also focused the attention of experimenters on other questions unanswered by the Standard Model. For instance, why are there six quarks and not more or fewer? It may be that only this number of quarks allows for the subtle difference between particles and antiparticles that occurs in the neutral K mesons (K0 and its antiparticle, K̄0), which contain an s quark (or antiquark) bound with a d antiquark (or quark). This asymmetry between particle and antiparticle could in turn be related to the domination of matter over antimatter in the universe (see cosmos: Matter-antimatter asymmetry). Experiments studying neutral B mesons, which contain a b quark or its antiquark, may eventually reveal similar effects and so cast light on this fundamental problem that links particle physics with cosmology and the study of the origin of matter in the universe.
Testing supersymmetry
Much of current research, meanwhile, is centred on important precision tests that may reveal effects that lie outside the Standard Model—in particular, those that are due to supersymmetry. These studies include measurements based on millions of Z particles produced in the LEP collider at the European Organization for Nuclear Research (CERN) and in the Stanford Linear Collider (SLC) at the Stanford Linear Accelerator Center (SLAC) in Menlo Park, California, and on large numbers of W particles produced in the Tevatron synchrotron at Fermilab and later at the LEP collider. The precision of these measurements is such that comparisons with the predictions of the Standard Model constrain the allowed range of values for quantities that are otherwise unknown. The predictions depend, for example, on the mass of the top quark, and in this case comparison with the precision measurements indicates a value in good agreement with the mass measured at Fermilab. This agreement makes another comparison all the more interesting, for the precision data also provide hints as to the mass of the Higgs particle—a major ingredient of the Standard Model that has yet to be discovered.
The Higgs particle is the particle associated with the mechanism that allows the symmetry of the electroweak force to be broken, or hidden, at low energies and that gives the W and Z particles, the carriers of the weak force, their mass. The particle is necessary to electroweak theory because the Higgs mechanism requires a new field to break the symmetry, and, according to quantum field theory, all fields have particles associated with them. Researchers know that the Higgs particle must have spin 0, but that is virtually all that can be definitely predicted. Theory provides a poor guide as to the particle's mass or even the number of different varieties of Higgs particles involved. However, comparisons with the precision measurements from the LEP collider suggest that the mass of the Higgs particle may be quite light, perhaps less than 200 GeV, although the data do not rule out a much heavier Higgs particle with a mass greater than 1 TeV.
Further new particles are predicted by theories that include supersymmetry. This symmetry relates quarks and leptons, which have spin 1/2 and are collectively called fermions, with the bosons of the gauge fields, which have spins 1 or 2, and with the Higgs particle, which has spin 0. This symmetry appeals to theorists in particular because it allows them to bring together all the particles—quarks, leptons, and gauge bosons—in theories that unite the various forces (see below Theory). The price to pay is a doubling of the number of fundamental particles, as the new symmetry implies that the known particles all have supersymmetric counterparts with different spin. Thus, the leptons and quarks with spin 1/2 have supersymmetric partners, dubbed sleptons and squarks, with integer spin; and the photon, W, Z, gluon, and graviton have counterparts with half-integer spins, known as the photino, wino, zino, gluino, and gravitino, respectively.
If they indeed exist, all these new supersymmetric particles must be heavy to have escaped detection so far. Theory suggests that some of the lightest of them could be created in collisions at the particle accelerators with the highest energies—that is, at the Tevatron and at the Hadron-Electron Ring Accelerator (HERA) at the DESY (German Electron Synchrotron) laboratory in Hamburg, Germany. Experiments at HERA and at the Tevatron also hold the promise of revealing any substructure within quarks or electrons. There is still a chance of more discoveries, including that of one or more Higgs particles, at the Large Hadron Collider planned to start up at CERN about 2007. This machine, which will be built in the same tunnel that housed the LEP collider until 2000, is designed to collide protons at energies of 7 TeV per beam.
Investigating neutrinos
Other hints of physics beyond the present Standard Model concern the neutrinos. In the Standard Model these particles have zero mass, so any measurement of a nonzero mass, however small, would indicate the existence of processes that are outside the Standard Model. Experiments to measure directly the masses of the three neutrinos yield only limits; that is, they give no sign of a mass for the particular neutrino type but do rule out any values above the smallest mass the experiments can measure. Other experiments attempt to measure neutrino mass indirectly by investigating whether neutrinos can change from one type to another. Such neutrino “oscillations”—a quantum phenomenon due to the wavelike nature of the particles—can occur only if there is a difference in mass between the basic neutrino types.
The first indications that neutrinos might oscillate came from experiments to detect solar neutrinos. By the mid-1980s several different types of experiment, such as those conducted by the American physical chemist Raymond Davis, Jr., in a gold mine in South Dakota, had consistently observed only one-third to two-thirds the number of electron-neutrinos arriving at Earth from the Sun, where they are emitted by the nuclear reactions that convert hydrogen to helium in the solar core. A popular explanation was that the electron-neutrinos had changed to another type on their way through the Sun—for example, to muon-neutrinos. Muon-neutrinos would not have been detected by the original experiments, which were designed to capture electron-neutrinos. Then in 2002 the Sudbury Neutrino Observatory (SNO) in Ontario, Canada, announced the first direct evidence for neutrino oscillations in solar neutrinos. The experiment, which is based on 1,000 tons of heavy water, detects electron-neutrinos through one reaction, but it can also detect all types of neutrinos through another reaction. SNO finds that, while the number of neutrinos detected of any type is consistent with calculations based on the physics of the Sun's interior, the number of electron-neutrinos observed is about one-third the number expected. This implies that the “missing” electron-neutrinos have changed to one of the other types. According to theory, the amount of oscillation as neutrinos pass through matter (as in the Sun) depends on the difference between the squares of the masses of the basic neutrino types (which are in fact different from the observed electron-, muon-, and tau-neutrino “flavours”). Taking all available solar neutrino data together (as of 2002) and fitting them to a theoretical model based on oscillations between two basic types indicates a difference in the mass-squared of 5 × 10^−5 eV^2.
Earlier evidence for neutrino oscillations came in 1998 from the Super-Kamiokande detector in the Kamioka Mine, Gifu prefecture, Japan, which was studying neutrinos created in cosmic-ray interactions on the opposite side of the Earth. The detector found fewer muon-neutrinos relative to electron-neutrinos coming up through the Earth than coming down through the atmosphere. This suggested the possibility that, as they travel through the Earth, muon-neutrinos change to tau-neutrinos, which could not be detected in Super-Kamiokande. These efforts won a Nobel Prize for Physics in 2002 for Super-Kamiokande's director, Koshiba Masatoshi. Davis was awarded a share of the prize for his earlier efforts in South Dakota.
Experiments at particle accelerators and nuclear reactors have found no conclusive evidence for oscillations over much-shorter distance scales, from tens to hundreds of metres. Since 2000 three “long-baseline” experiments have searched over longer distances of a few hundred kilometres for oscillations of muon-neutrinos created at accelerators. The aim is to build up a self-consistent picture that indicates clearly the values of neutrino masses.
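For orientation, the dependence on the mass-squared difference described above can be seen in the standard two-flavour vacuum oscillation formula, P = sin²(2θ) sin²(1.27 Δm²[eV²] L[km] / E[GeV]). The sketch below evaluates it for a long-baseline, accelerator-style configuration; the mixing angle, the mass-squared difference, and the sample baseline and energy are purely illustrative, and matter effects (important for neutrinos inside the Sun) are ignored.

    import math

    def oscillation_probability(delta_m2_ev2, baseline_km, energy_gev, sin2_2theta=1.0):
        """Two-flavour vacuum oscillation probability (matter effects ignored)."""
        phase = 1.267 * delta_m2_ev2 * baseline_km / energy_gev
        return sin2_2theta * math.sin(phase) ** 2

    # Illustrative numbers only: a 1-GeV neutrino after a 500-km flight,
    # with an atmospheric-scale mass-squared difference of a few times 10^-3 eV^2.
    print(f"P(change of type) ~ {oscillation_probability(2.5e-3, 500.0, 1.0):.2f}")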
Linking to the cosmos
Massive neutrinos and supersymmetric particles both provide possible explanations for the nonluminous, or “dark,” matter that is believed to constitute 90 percent or more of the mass of the universe. This dark matter must exist if the motions of stars and galaxies are to be understood, but it has not been observed through radiation of any kind. It is possible that some, if not all, of the dark matter may be due to normal matter that has failed to ignite as stars, but most theories favour more-exotic explanations, in particular those involving new kinds of particles. Such particles would have to be both massive and very weakly interacting; otherwise, they would already be known. A variety of experiments, set up underground to shield them from other effects, are seeking to detect such “weakly interacting massive particles,” or WIMPs, as the Earth moves through the dark matter that may exist in the Milky Way Galaxy.
Other current research involves the search for a new state of matter called the quark-gluon plasma. This should have existed for only 10 microseconds or so after the birth of the universe in the big bang, when the universe was too hot and energetic for quarks to coalesce into particles such as neutrons and protons. The quarks, and the gluons through which they interact, should have existed freely as a plasma, akin to the more-familiar plasma of ions and electrons that forms when conditions are too energetic for electrons to remain attached to atomic nuclei, as, for example, in the Sun. In experiments at CERN and at the Brookhaven National Laboratory in Upton, New York, physicists collide heavy nuclei at high energies in order to achieve temperatures and densities that may be high enough for the matter in the nuclei to change phase from the normal state, with quarks confined within protons and neutrons, to a plasma of free quarks and gluons. One way that this new state of matter should reveal itself is through the creation of more strange quarks, and hence more strange particles, than in normal collisions. CERN has claimed to have observed hints of quark-gluon plasma, but clear evidence will come only from experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider at CERN. These experiments, together with those that search for particles of dark matter and those that investigate the differences between matter and antimatter, illustrate the growing interdependence between particle physics and cosmology—the sciences of the very small and the very large.
Theory
Limits of quantum chromodynamics and the Standard Model
While electroweak theory allows extremely precise calculations to be made, problems arise with the theory of the strong force, quantum chromodynamics (QCD), despite its being a gauge theory of similar structure. As mentioned in the section Asymptotic freedom, at short distances or, equivalently, at high energies, the effects of the strong force become weaker. This means that complex interactions between quarks, involving many gluon exchanges, become highly improbable, and the basic interactions can be calculated from relatively few exchanges, just as in electroweak theory. As the distance between quarks increases, however, the increasing effect of the strong force means that the multiple interactions must be taken into account, and the calculations quickly become intractable. The outcome is that it is difficult to calculate the properties of hadrons, in particular their masses, which depend on the energy tied up in the interactions between the quarks they contain.
Since the 1980s, however, the advent of supercomputers with increased processing power has enabled theorists to make some progress in calculations that are based on a lattice of points in space-time. This is clearly an approximation to the continuously varying space-time of the real gauge theory, but it reduces the amount of calculation required. The greater the number of points in the lattice, the better the approximation. The computation times involved are still long, even for the most powerful computers available, but theorists are beginning to have some success in calculating the masses of hadrons from the underlying interactions between the quarks.
Meanwhile, the Standard Model combining electroweak theory and quantum chromodynamics provides a satisfactory way of understanding most experimental results in particle physics, yet it is far from satisfying as a theory. In addition to the missing Higgs particle, many problems and gaps in the model have been explained in a rather ad hoc manner. Values for such basic properties as the fractional charges of quarks or the masses of quarks and leptons must be inserted “by hand” into the model—that is, they are determined by experiment and observation rather than by theoretical predictions.
Toward a grand unified theory
Many theorists working in particle physics are therefore looking beyond the Standard Model in an attempt to find a more-comprehensive theory. One important approach has been the development of grand unified theories, or GUTs, which seek to unify the strong, weak, and electromagnetic forces in the way that electroweak theory does for two of these forces.
Such theories were initially inspired by evidence that the strong force is weaker at shorter distances or, equivalently, at higher energies. This suggests that at a sufficiently high energy the strengths of the weak, electromagnetic, and strong interactions may become the same, revealing an underlying symmetry between the forces that is hidden at lower energies. This symmetry must incorporate the symmetries of both QCD and electroweak theory, which are manifest at lower energies. There are various possibilities, but the simplest and most-studied GUTs are based on the mathematical symmetry group SU(5).
As all GUTs link the strong interactions of quarks with the electroweak interactions between quarks and leptons, they generally bring the quarks and leptons together into the overall symmetry group. This implies that a quark can convert into a lepton (and vice versa), which in turn leads to the conclusion that protons, the lightest stable particles built from quarks, are not in fact stable but can decay to lighter leptons. These interactions between quarks and leptons occur through new gauge bosons, generally called X, which must have masses comparable to the energy scale of grand unification. The mean life for the proton, according to the GUTs, depends on this mass; in the simplest GUTs based on SU(5), the mean life varies as the fourth power of the mass of the X boson.
Experimental results, principally from the LEP collider at CERN, suggest that the strengths of the strong, weak, and electromagnetic interactions should converge at energies of about 10^16 GeV. This tremendous mass means that proton decay should occur only rarely, with a mean life of about 10^35 years. (This result is fortunate, as protons must be stable on timescales of at least 10^17 years; otherwise, all matter would be measurably radioactive.) It might seem that verifying such a lifetime experimentally would be impossible; however, particle lifetimes are only averages. Given a large-enough collection of protons, there is a chance that a few may decay within an observable time. This encouraged physicists in the 1980s to set up a number of proton-decay experiments in which large quantities of inexpensive material—usually water, iron, or concrete—were surrounded by detectors that could spot the particles produced should a proton decay. Such experiments confirmed that the proton lifetime must be greater than 10^32 years, but detectors capable of measuring a lifetime of 10^35 years have yet to be established.
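The counting behind such experiments is straightforward. The sketch below estimates how many protons one kiloton of water contains and how many decays per year would be expected for an assumed mean life; all numbers are round, illustrative figures.

    # Rough counting for a proton-decay search (illustrative round numbers).
    AVOGADRO = 6.022e23
    water_mass_g = 1.0e9                  # one kiloton of water, in grams
    protons_per_molecule = 10             # H2O: 2 hydrogen protons + 8 protons in the oxygen nucleus
    molar_mass_water_g = 18.0

    n_protons = water_mass_g / molar_mass_water_g * AVOGADRO * protons_per_molecule
    assumed_mean_life_years = 1.0e33
    expected_decays_per_year = n_protons / assumed_mean_life_years
    print(f"{n_protons:.1e} protons, ~{expected_decays_per_year:.2f} expected decays per year")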
The experimental results from the LEP collider also provide clues about the nature of a realistic GUT. The detailed extrapolation from the LEP collider's energies of about 100 GeV to the grand unification energies of about 10^16 GeV depends on the particular GUT used in making the extrapolation. It turns out that, for the strengths of the strong, weak, and electromagnetic interactions to converge properly, the GUT must include supersymmetry—the symmetry between fermions (quarks and leptons) and the gauge bosons that mediate their interactions. Supersymmetry, which predicts that every known particle should have a partner with different spin, also has the attraction of relieving difficulties that arise with the masses of particles, particularly in GUTs. The problem in a GUT is that all particles, including the quarks and leptons, tend to acquire masses of about 10^16 GeV, the unification energy. The introduction of the additional particles required by supersymmetry helps by canceling out other contributions that lead to the high masses and thus leaves the quarks and leptons with the masses measured in experiment. This important effect has led to the strong conviction among theorists that supersymmetry should be found in nature, although evidence for the supersymmetric particles has yet to be found.
A theory of everything
While GUTs resolve some of the problems with the Standard Model, they remain inadequate in a number of respects. They give no explanation, for example, for the number of pairs of quarks and leptons; they even raise the question of why such an enormous gap exists between the masses of the W and Z bosons of the electroweak force and the X bosons of lepton-quark interactions. Most important, they do not include the fourth force, gravity.
The dream of theorists is to find a totally unified theory—a theory of everything, or TOE. Attempts to derive a quantum field theory containing gravity always ran aground, however, until a remarkable development in 1984 first hinted that a quantum theory that includes gravity might be possible. The new development brought together two ideas that originated in the 1970s. One was supersymmetry, with its abilities to remove nonphysical infinite values from theories; the other was string theory, which regards all particles—quarks, leptons, and bosons—not as points in space, as in conventional field theories, but as extended one-dimensional objects, or “strings.”
The incorporation of supersymmetry into string theory is known as superstring theory, and its importance was recognized in the mid-1980s when the English theorist Michael Green and the American theoretical physicist John Schwarz showed that in certain cases superstring theory is entirely self-consistent. All potential problems cancel out, even though the theory requires a massless particle of spin 2—in other words, the graviton, the gauge boson of gravity—so superstring theory automatically contains a quantum description of gravity. It soon seemed, however, that there were many superstring theories that included gravity, and this appeared to undermine the claim that superstrings would yield a single theory of everything. In the late 1980s new ideas emerged concerning two-dimensional membranes and higher-dimensional “branes,” rather than strings, that also encompass supergravity. Among the many efforts to resolve these seemingly disparate treatments of superstring space in a coherent and consistent manner was that of Edward Witten of the Institute for Advanced Study in Princeton, New Jersey. Witten proposed that the existing superstring theories are actually limits of a more-general underlying 11-dimensional “M-theory” that offers the promise of a self-consistent quantum treatment of all particles and forces.
Christine Sutton
Additional Reading
Books for the nonphysicist
Frank Close, Michael Marten, and Christine Sutton, The Particle Odyssey (2002), is a full-colour illustrated guide to developments and discoveries in particle physics from 1895 to the beginning of the 21st century. Barry Parker, Search for a Supertheory: From Atoms to Superstrings (1987), describes the search for a unified theory of the fundamental forces and elementary particles. Leon M. Lederman and David N. Schramm, From Quarks to the Cosmos: Tools of Discovery (1989), is an illustrated account of modern particle physics and cosmology that links the small and large scales in the universe. Gordon Fraser, Egil Lillestøl, and Inge Sellevåg, The Search for Infinity: Solving the Mysteries of the Universe (1994), is a beautifully illustrated tour from the smallest particles of matter to the vast expanses of the universe. Gordon Kane, The Particle Garden: Our Universe as Understood by Particle Physicists (1995), describes how particle physicists have come to understand underlying laws of the universe and where future developments may lie. Gordon Fraser, The Quark Machines (1997), tells the story of the transatlantic “race” for discoveries in particle physics, with emphasis on the role of CERN. Gerard 't Hooft, In Search of the Ultimate Building Blocks (1997), is a Nobel laureate's firsthand account of particle physics from the 1960s to the 1990s.
Textbooks
Jonathan Allday, Quarks, Leptons, and the Big Bang (1998), is an introductory textbook aimed at high-school students with no previous knowledge of particle physics. G.D. Coughlan and J.E. Dodd, The Ideas of Particle Physics: An Introduction for Scientists (1994), bridges the gap between popular accounts and detailed textbooks for readers with some background in the physical sciences. Robert N. Cahn and Gerson Goldhaber, The Experimental Foundations of Particle Physics (1991), a more-technical introductory text, is a collection of important papers on discoveries in particle physics, together with commentary aimed at physics students.
Historical accounts
Abraham Pais, Inward Bound: Of Matter and Forces in the Physical World (1986), is a detailed scholarly account of major developments in subatomic physics, in particular from 1895 to the 1960s. Laurie M. Brown and Lillian Hoddeson (eds.), The Birth of Particle Physics (1983); Laurie M. Brown, Max Dresden, Lillian Hoddeson, and May West (eds.), Pions to Quarks (1989); and Lillian Hoddeson, Laurie M. Brown, Michael Riordan, and Max Dresden (eds.), The Rise of the Standard Model (1997), are three books based on symposia held to consider developments during three major eras in the history of particle physics, from the 1930s to the 1990s, with many firsthand accounts from the scientists involved. Lochlainn O'Raifeartaigh, The Dawning of Gauge Theory (1997), is a study of the development of gauge theory, with commentary on important papers in the field. Gordon Fraser (ed.), The Particle Century (1998), is a collection of essays highlighting the major developments in particle physics, including firsthand accounts from Nobel Prize winners. An interesting collection of important papers on electroweak theory is contained in C.H. Lai (ed.), Selected Papers on Gauge Theory of Weak and Electromagnetic Interactions (1981).