26.2: Vision Correction
Learning Objectives
By the end of this section, you will be able to:
- Identify and discuss common vision defects.
- Explain nearsightedness and farsightedness corrections.
- Explain laser vision correction.
The need for some type of vision correction is very common. Common vision defects are easy to understand, and some are simple to correct. Figure \(\PageIndex{1}\) illustrates two common vision defects. Nearsightedness, or myopia, is the inability to see distant objects clearly while close objects are clear. The eye overconverges the nearly parallel rays from a distant object, and the rays cross in front of the retina. More divergent rays from a close object are converged on the retina for a clear image. The distance to the farthest object that can be seen clearly is called the far point of the eye (normally infinity). Farsightedness, or hyperopia, is the inability to see close objects clearly while distant objects may be clear. A farsighted eye does not converge sufficient rays from a close object to make the rays meet on the retina. Less divergent rays from a distant object can be converged for a clear image. The distance to the closest object that can be seen clearly is called the near point of the eye (normally 25 cm).
Since the nearsighted eye overconverges light rays, the correction for nearsightedness is to place a diverging spectacle lens in front of the eye. This reduces the power of an eye that is too powerful. Another way of thinking about this is that a diverging spectacle lens produces a case 3 image, which is closer to the eye than the object (Figure \(\PageIndex{2}\)). To determine the spectacle power needed for correction, you must know the person’s far point -- that is, you must know the greatest distance at which the person can see clearly. Then the image produced by a spectacle lens must be at this distance or closer for the nearsighted person to be able to see it clearly. It is worth noting that wearing glasses does not change the eye in any way. The eyeglass lens is simply used to create an image of the object at a distance where the nearsighted person can see it clearly. Whereas someone not wearing glasses can see clearly objects that fall between their near point and their far point, someone wearing glasses can see images that fall between their near point and their far point.
Example \(\PageIndex{1}\): Correcting Nearsightedness
What power of spectacle lens is needed to correct the vision of a nearsighted person whose far point is 30.0 cm? Assume the spectacle (corrective) lens is held 1.50 cm away from the eye by eyeglass frames.
Strategy:
You want this nearsighted person to be able to see very distant objects clearly. That means the spectacle lens must produce an image 30.0 cm from the eye for an object very far away. An image 30.0 cm from the eye will be 28.5 cm to the left of the spectacle lens (Figure \(\PageIndex{2}\)). Therefore, we must get \(d_{i} = -28.5 cm\) when \(d_{o} \approx \infty \). The image distance is negative, because it is on the same side of the spectacle as the object.
Solution
Since \(d_{i}\) and \(d_{o}\) are known, the power of the spectacle lens can be found using \(P = \frac{1}{d_{o}} + \frac{1}{d_{i}}\) as written earlier:
\[P = \frac{1}{d_{o}} + \frac{1}{d_{i}} = \frac{1}{\infty} + \frac{1}{-0.285 m}.\]
Since \(1/ \infty = 0\), we obtain: \[P = 0 - 3.51/m = -3.51 D.\]
Discussion:
The negative power indicates a diverging (or concave) lens, as expected. The spectacle produces a case 3 image closer to the eye, where the person can see it. If you examine eyeglasses for nearsighted people, you will find the lenses are thinnest in the center. Additionally, if you examine a prescription for eyeglasses for nearsighted people, you will find that the prescribed power is negative and given in units of diopters.
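As a quick check of the arithmetic, a minimal Python sketch of this calculation follows; the function and variable names are illustrative.

```python
# Minimal sketch: power of a diverging spectacle lens for a nearsighted eye,
# using P = 1/d_o + 1/d_i with distances in meters and power in diopters.

def spectacle_power(d_o_m, d_i_m):
    """Thin-lens power P = 1/d_o + 1/d_i (in diopters)."""
    inv_do = 0.0 if d_o_m == float("inf") else 1.0 / d_o_m
    return inv_do + 1.0 / d_i_m

far_point = 0.300       # far point of the eye, in m
lens_to_eye = 0.015     # spectacle lens held 1.50 cm from the eye, in m
d_i = -(far_point - lens_to_eye)   # image on the object's side of the lens, so negative

print(round(spectacle_power(float("inf"), d_i), 2))   # prints -3.51 (diopters)
```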
Since the farsighted eye underconverges light rays, the correction for farsightedness is to place a converging spectacle lens in front of the eye. This increases the power of an eye that is too weak. Another way of thinking about this is that a converging spectacle lens produces a case 2 image, which is farther from the eye than the object (Figure \(\PageIndex{3}\)). To determine the spectacle power needed for correction, you must know the person’s near point -- that is, you must know the smallest distance at which the person can see clearly. Then the image produced by a spectacle lens must be at this distance or farther for the farsighted person to be able to see it clearly.
Example \(\PageIndex{2}\): Correcting Farsightedness
What power of spectacle lens is needed to allow a farsighted person, whose near point is 1.00 m, to see an object clearly that is 25.0 cm away? Assume the spectacle (corrective) lens is held 1.50 cm away from the eye by eyeglass frames.
Strategy
When an object is held 25.0 cm from the person’s eyes, the spectacle lens must produce an image 1.00 m away (the near point). An image 1.00 m from the eye will be 98.5 cm to the left of the spectacle lens because the spectacle lens is 1.50 cm from the eye (Figure \(\PageIndex{3}\)). Therefore, \(d_{i} = -98.5 cm\). The image distance is negative, because it is on the same side of the spectacle as the object. The object is 23.5 cm to the left of the spectacle, so that \(d_{o} = 23.5 cm\).
Solution
Since \(d_{i}\) and \(d_{o}\) are known, the power of the spectacle lens can be found using \(P = \frac{1}{d_{o}} + \frac{1}{d_{i}}\): \[P = \frac{1}{d_{o}} + \frac{1}{d_{i}} = \frac{1}{0.235 m} + \frac{1}{-0.985 m}\] \[P = 4.26 D - 1.02 D = 3.24 D.\]
Discussion
The positive power indicates a converging (convex) lens, as expected. The convex spectacle produces a case 2 image farther from the eye, where the person can see it. If you examine eyeglasses of farsighted people, you will find the lenses to be thickest in the center. In addition, a prescription of eyeglasses for farsighted people has a prescribed power that is positive.
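The same relation reproduces this result numerically; the sketch below uses illustrative variable names, with all distances measured from the spectacle lens.

```python
# Minimal sketch: power of a converging spectacle lens for a farsighted eye.
# All distances are measured from the spectacle lens, in meters.

near_point = 1.00       # near point of the eye, in m
object_dist = 0.250     # object held 25.0 cm from the eye, in m
lens_to_eye = 0.015     # spectacle lens held 1.50 cm from the eye, in m

d_o = object_dist - lens_to_eye     # 0.235 m: object distance from the lens
d_i = -(near_point - lens_to_eye)   # -0.985 m: virtual image at the near point

P = 1.0 / d_o + 1.0 / d_i
print(round(P, 2))                   # prints 3.24 (diopters)
```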
Another common vision defect is astigmatism, an unevenness or asymmetry in the focus of the eye. For example, rays passing through a vertical region of the eye may focus closer than rays passing through a horizontal region, resulting in the image appearing elongated. This is mostly due to irregularities in the shape of the cornea but can also be due to lens irregularities or unevenness in the retina. Because of these irregularities, different parts of the lens system produce images at different locations. The eye-brain system can compensate for some of these irregularities, but they generally manifest themselves as less distinct vision or sharper images along certain axes. Figure \(\PageIndex{4}\) shows a chart used to detect astigmatism. Astigmatism can be at least partially corrected with a spectacle having the opposite irregularity of the eye. If an eyeglass prescription has a cylindrical correction, it is there to correct astigmatism. The normal corrections for near- or farsightedness are spherical corrections, uniform along all axes.
Contact lenses have advantages over glasses beyond their cosmetic aspects. One problem with glasses is that as the eye moves, it is not at a fixed distance from the spectacle lens. Contacts rest on and move with the eye, eliminating this problem. Because contacts cover a significant portion of the cornea, they provide superior peripheral vision compared with eyeglasses. Contacts also correct some corneal astigmatism caused by surface irregularities. The tear layer between the smooth contact and the cornea fills in the irregularities. Since the index of refraction of the tear layer and the cornea are very similar, you now have a regular optical surface in place of an irregular one. If the curvature of a contact lens is not the same as the cornea (as may be necessary with some individuals to obtain a comfortable fit), the tear layer between the contact and cornea acts as a lens. If the tear layer is thinner in the center than at the edges, it has a negative power, for example. Skilled optometrists will adjust the power of the contact to compensate.
Laser vision correction has progressed rapidly in the last few years. It is the latest and by far the most successful in a series of procedures that correct vision by reshaping the cornea. As noted at the beginning of this section, the cornea accounts for about two-thirds of the power of the eye. Thus, small adjustments of its curvature have the same effect as putting a lens in front of the eye. To a reasonable approximation, the power of multiple lenses placed close together equals the sum of their powers. For example, a concave spectacle lens (for nearsightedness) having \(P = -3.00 D\) has the same effect on vision as reducing the power of the eye itself by 3.00 D. So to correct the eye for nearsightedness, the cornea is flattened to reduce its power. Similarly, to correct for farsightedness, the curvature of the cornea is enhanced to increase the power of the eye -- the same effect as the positive power spectacle lens used for farsightedness. Laser vision correction uses high intensity electromagnetic radiation to ablate (to remove material from the surface) and reshape the corneal surfaces.
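Written compactly, this close-lens approximation is (the subscripts here are illustrative labels):
\[P_{\text{total}} \approx P_{1} + P_{2} + \cdots, \qquad \text{so } P_{\text{eye}} + \left( -3.00 \, D \right) \text{ has the same effect as reducing the eye's own power by } 3.00 \, D.\]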
Today, the most commonly used laser vision correction procedure is Laser in situ Keratomileusis (LASIK). The top layer of the cornea is surgically peeled back and the underlying tissue ablated by multiple bursts of finely controlled ultraviolet radiation produced by an excimer laser. Lasers are used because they not only produce well-focused intense light, but they also emit very pure wavelength electromagnetic radiation that can be controlled more accurately than mixed wavelength light. The 193 nm wavelength UV commonly used is strongly absorbed by corneal tissue, allowing precise evaporation of very thin layers. A computer-controlled program applies more bursts, usually at a rate of 10 per second, to the areas that require deeper removal. Typically a spot less than 1 mm in diameter and about \(0.3 \mu m\) in thickness is removed by each burst. Nearsightedness, farsightedness, and astigmatism can be corrected with an accuracy that produces normal distant vision in more than 90% of the patients, in many cases right away. The corneal flap is replaced; healing takes place rapidly and is nearly painless. More than 1 million Americans per year undergo LASIK (Figure \(\PageIndex{5}\)).
Summary
- Nearsightedness, or myopia, is the inability to see distant objects and is corrected with a diverging lens to reduce power.
- Farsightedness, or hyperopia, is the inability to see close objects and is corrected with a converging lens to increase power.
- In myopia and hyperopia, the corrective lenses produce images at a distance that the person can see clearly—the far point and near point, respectively.
Glossary
- nearsightedness
- another term for myopia, a visual defect in which distant objects appear blurred because their images are focused in front of the retina rather than being focused on the retina
- myopia
- a visual defect in which distant objects appear blurred because their images are focused in front of the retina rather than being focused on the retina
- far point
- the object point imaged by the eye onto the retina in an unaccommodated eye
- farsightedness
- another term for hyperopia, the condition of an eye where incoming rays of light reach the retina before they converge into a focused image
- hyperopia
- the condition of an eye where incoming rays of light reach the retina before they converge into a focused image
- near point
- the point nearest the eye at which an object is accurately focused on the retina at full accommodation
- astigmatism
- the result of an inability of the cornea to properly focus an image onto the retina
- laser vision correction
- a medical procedure used to correct astigmatism and eyesight deficiencies such as myopia and hyperopia
26.3: Color and Color Vision
Learning Objectives
By the end of this section, you will be able to:
- Explain the simple theory of color vision.
- Outline the coloring properties of light sources.
- Describe the retinex theory of color vision.
The gift of vision is made richer by the existence of color. Objects and lights abound with thousands of hues that stimulate our eyes, brains, and emotions. Two basic questions are addressed in this brief treatment -- what does color mean in scientific terms, and how do we, as humans, perceive it?
Simple Theory of Color Vision
We have already noted that color is associated with the wavelength of visible electromagnetic radiation. When our eyes receive pure-wavelength light, we tend to see only a few colors. Six of these (most often listed) are red, orange, yellow, green, blue, and violet. These are the rainbow of colors produced when white light is dispersed according to different wavelengths. There are thousands of other hues that we can perceive. These include brown, teal, gold, pink, and white. One simple theory of color vision implies that all these hues are our eye’s response to different combinations of wavelengths. This is true to an extent, but we find that color perception is even subtler than our eye’s response for various wavelengths of light.
The two major types of light-sensing cells (photoreceptors) in the retina are rods and cones. Rods are more sensitive than cones by a factor of about 1000 and are solely responsible for peripheral vision as well as vision in very dark environments. They are also important for motion detection. There are about 120 million rods in the human retina. Rods do not yield color information. You may notice that you lose color vision when it is very dark, but you retain the ability to discern grey scales.
TAKE-HOME EXPERIMENT: RODS AND CONES
- Go into a darkened room from a brightly lit room, or from outside in the Sun. How long did it take to start seeing shapes more clearly? What about color? Return to the bright room. Did it take a few minutes before you could see things clearly?
- Demonstrate the sensitivity of foveal vision . Look at the letter G in the word ROGERS. What about the clarity of the letters on either side of G?
Cones are most concentrated in the fovea, the central region of the retina. There are no rods here. The fovea is at the center of the macula, a 5 mm diameter region responsible for our central vision. The cones work best in bright light and are responsible for high resolution vision. There are about 6 million cones in the human retina. There are three types of cones, and each type is sensitive to different ranges of wavelengths, as illustrated in Figure \(\PageIndex{1}\). A simplified theory of color vision is that there are three primary colors corresponding to the three types of cones. The thousands of other hues that we can distinguish among are created by various combinations of stimulations of the three types of cones. Color television uses a three-color system in which the screen is covered with equal numbers of red, green, and blue phosphor dots. The broad range of hues a viewer sees is produced by various combinations of these three colors. For example, you will perceive yellow when red and green are illuminated with the correct ratio of intensities. White may be sensed when all three are illuminated. Then, it would seem that all hues can be produced by adding three primary colors in various proportions. But there is an indication that color vision is more sophisticated. There is no unique set of three primary colors. Another set that works is yellow, green, and blue. A further indication of the need for a more complex theory of color vision is that various different combinations can produce the same hue. Yellow can be sensed with yellow light, or with a combination of red and green, and also with white light from which violet has been removed. The three-primary-colors aspect of color vision is well established; more sophisticated theories expand on it rather than deny it.
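As a toy illustration of additive mixing, each light source can be treated as a triple of red, green, and blue intensities; the sketch below is only a numerical caricature of the three-cone idea, not a model of actual cone responses.

```python
# Toy sketch of additive color mixing: each light source is an (R, G, B)
# intensity triple, and overlapping sources simply add channel by channel.

def add_light(*sources):
    """Sum per-channel intensities of overlapping sources, clipped to 1.0."""
    return tuple(min(1.0, sum(channel)) for channel in zip(*sources))

red   = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue  = (0.0, 0.0, 1.0)

print(add_light(red, green))          # (1.0, 1.0, 0.0) -- perceived as yellow
print(add_light(red, green, blue))    # (1.0, 1.0, 1.0) -- perceived as white
```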
Consider why various objects display color -- that is, why are feathers blue and red in a crimson rosella? The true color of an object is defined by its absorptive or reflective characteristics. Figure \(\PageIndex{2}\) shows white light falling on three different objects, one pure blue, one pure red, and one black, as well as pure red light falling on a white object. Other hues are created by more complex absorption characteristics. Pink, for example on a galah cockatoo, can be due to weak absorption of all colors except red. An object can appear a different color under non-white illumination. For example, a pure blue object illuminated with pure red light will appear black, because it absorbs all the red light falling on it. But, the true color of the object is blue, which is independent of illumination.
Similarly, light sources have colors that are defined by the wavelengths they produce. A helium-neon laser emits pure red light. In fact, the phrase “pure red light” is defined by having a sharply constrained spectrum, a characteristic of laser light. The Sun produces a broad yellowish spectrum, fluorescent lights emit bluish-white light, and incandescent lights emit reddish-white hues as seen in Figure 3. As you would expect, you sense these colors when viewing the light source directly or when illuminating a white object with them. All of this fits neatly into the simplified theory that a combination of wavelengths produces various hues.
TAKE-HOME EXPERIMENT: EXPLORING COLOR ADDITION
This activity is best done with plastic sheets of different colors as they allow more light to pass through to our eyes. However, thin sheets of paper and fabric can also be used. Overlay different colors of the material and hold them up to a white light. Using the theory described above, explain the colors you observe. You could also try mixing different crayon colors.
Color Constancy and a Modified Theory of Color Vision
The eye-brain color-sensing system can, by comparing various objects in its view, perceive the true color of an object under varying lighting conditions -- an ability that is called color constancy . We can sense that a white tablecloth, for example, is white whether it is illuminated by sunlight, fluorescent light, or candlelight. The wavelengths entering the eye are quite different in each case, as the graphs in Figure 3 imply, but our color vision can detect the true color by comparing the tablecloth with its surroundings.
Theories that take color constancy into account are based on a large body of anatomical evidence as well as perceptual studies. There are nerve connections among the light receptors on the retina, and there are far fewer nerve connections to the brain than there are rods and cones. This means that there is signal processing in the eye before information is sent to the brain. For example, the eye makes comparisons between adjacent light receptors and is very sensitive to edges as seen in Figure 4. Rather than responding simply to the light entering the eye, which is uniform in the various rectangles in this figure, the eye responds to the edges and senses false darkness variations.
One theory that takes various factors into account was advanced by Edwin Land (1909 – 1991), the creative founder of the Polaroid Corporation. Land proposed, based partly on his many elegant experiments, that the three types of cones are organized into systems called retinexes . Each retinex forms an image that is compared with the others, and the eye-brain system thus can compare a candle-illuminated white table cloth with its generally reddish surroundings and determine that it is actually white. This retinex theory of color vision is an example of modified theories of color vision that attempt to account for its subtleties. One striking experiment performed by Land demonstrates that some type of image comparison may produce color vision. Two pictures are taken of a scene on black-and-white film, one using a red filter, the other a blue filter. Resulting black-and-white slides are then projected and superimposed on a screen, producing a black-and-white image, as expected. Then a red filter is placed in front of the slide taken with a red filter, and the images are again superimposed on a screen. You would expect an image in various shades of pink, but instead, the image appears to humans in full color with all the hues of the original scene. This implies that color vision can be induced by comparison of the black-and-white and red images. Color vision is not completely understood or explained, and the retinex theory is not totally accepted. It is apparent that color vision is much subtler than what a first look might imply.
PHET EXPLORATIONS: COLOR VISION
Make a whole rainbow by mixing red, green, and blue light. Change the wavelength of a monochromatic beam or filter white light. View the light as a solid beam, or see the individual photons.
Summary
- The eye has four types of light receptors -- rods and three types of color-sensitive cones.
- The rods are good for night vision, peripheral vision, and motion changes, while the cones are responsible for central vision and color.
- We perceive many hues, from light having mixtures of wavelengths.
- A simplified theory of color vision states that there are three primary colors, which correspond to the three types of cones, and that various combinations of the primary colors produce all the hues.
- The true color of an object is related to its relative absorption of various wavelengths of light. The color of a light source is related to the wavelengths it produces.
- Color constancy is the ability of the eye-brain system to discern the true color of an object illuminated by various light sources.
- The retinex theory of color vision explains color constancy by postulating the existence of three retinexes or image systems, associated with the three types of cones that are compared to obtain sophisticated information.
Glossary
- hues
- identity of a color as it relates specifically to the spectrum
- rods and cones
- two types of photoreceptors in the human retina; rods are responsible for vision at low light levels, while cones are active at higher light levels
- simplified theory of color vision
- a theory that states that there are three primary colors, which correspond to the three types of cones
- color constancy
- a part of the visual perception system that allows people to perceive color in a variety of conditions and to see some consistency in the color
- retinex
- a theory proposed to explain color and brightness perception and constancies; is a combination of the words retina and cortex, which are the two areas responsible for the processing of visual information
- retinex theory of color vision
- the ability to perceive color in an ambient-colored environment
26.4: Microscopes
Learning Objectives
By the end of this section, you will be able to:
- Investigate different types of microscopes.
- Learn how an image is formed in a compound microscope.
Although the eye is marvelous in its ability to see objects large and small, it obviously has limitations to the smallest details it can detect. Human desire to see beyond what is possible with the naked eye led to the use of optical instruments. In this section we will examine microscopes, instruments for enlarging the detail that we cannot see with the unaided eye. The microscope is a multiple-element system having more than a single lens or mirror (Figure \(\PageIndex{1}\)). A microscope can be made from two convex lenses. The image formed by the first element becomes the object for the second element. The second element forms its own image, which is the object for the third element, and so on. Ray tracing helps to visualize the image formed. If the device is composed of thin lenses and mirrors that obey the thin lens equations, then it is not difficult to describe their behavior numerically.
Microscopes were first developed in the early 1600s by eyeglass makers in The Netherlands and Denmark. The simplest compound microscope is constructed from two convex lenses as shown schematically in Figure 2. The first lens is called the objective lens, and has typical magnification values from \(5 \times\) to \(100 \times\). In standard microscopes, the objectives are mounted such that when you switch between objectives, the sample remains in focus. Objectives arranged in this way are described as parfocal. The second, the eyepiece, also referred to as the ocular, has several lenses which slide inside a cylindrical barrel. The focusing ability is provided by the movement of both the objective lens and the eyepiece. The purpose of a microscope is to magnify small objects, and both lenses contribute to the final magnification. Additionally, the final enlarged image is produced in a location far enough from the observer to be easily viewed, since the eye cannot focus on objects or images that are too close.
To see how the microscope in Figure 2 forms an image, we consider its two lenses in succession. The object is slightly farther away from the objective lens than its focal length \(f_{o}\), producing a case 1 image that is larger than the object. This first image is the object for the second lens, or eyepiece. The eyepiece is intentionally located so it can further magnify the image. The eyepiece is placed so that the first image is closer to it than its focal length \(f_{e}\). Thus the eyepiece acts as a magnifying glass, and the final image is made even larger. The final image remains inverted, but it is farther from the observer, making it easy to view (the eye is most relaxed when viewing distant objects and normally cannot focus closer than 25 cm). Since each lens produces a magnification that multiplies the height of the image, it is apparent that the overall magnification \(m\) is the product of the individual magnifications: \[m = m_{o}m_{e} \label{26.5.1},\] where \(m_{o}\) is the magnification of the objective and \(m_{e}\) is the magnification of the eyepiece. This equation can be generalized for any combination of thin lenses and mirrors that obey the thin lens equations.
OVERALL MAGNIFICATION
The overall magnification of a multiple-element system is the product of the individual magnifications of its elements.
Example \(\PageIndex{1}\): Microscope Magnification
Calculate the magnification of an object placed 6.20 mm from a compound microscope that has a 6.00 mm focal length objective and a 50.0 mm focal length eyepiece. The objective and eyepiece are separated by 23.0 cm.
Strategy and Concept:
This situation is similar to that shown in Figure 2. To find the overall magnification, we must find the magnification of the objective, then the magnification of the eyepiece. This involves using the thin lens equation.
Solution
The magnification of the objective lens is given as \[m_{o} = -\frac{d_{i}}{d_{o}}\label{26.5.2},\] where \(d_{o}\) and \(d_{i}\) are the object and image distances, respectively, for the objective lens as labeled in Figure 2. The object distance is given to be \(d_{o} = 6.20 mm\), but the image distance \(d_{i}\) is not known. Isolating \(d_{i}\), we have \[\frac{1}{d_{i}} = \frac{1}{f_{o}} - \frac{1}{d_{o}} \label{26.5.3},\] where \(f_{o}\) is the focal length of the objective lens. Substituting known values gives \[\frac{1}{d_{i}} = \frac{1}{6.00 mm} - \frac{1}{6.20 mm} = \frac{0.00538}{mm} .\] We invert this to find \(d_{i}\): \[d_{i} = 186 mm.\] Substituting this into the expression for \(m_{o}\) gives \[m_{o} = - \frac{d_{i}}{d_{o}} = - \frac{186 mm}{6.20 mm} = -30.0.\] Now we must find the magnification of the eyepiece, which is given by \[m_{e} = -\frac{d_{i}'}{d_{o}'},\label{26.5.4}\] where \(d_{i}'\) and \(d_{o}'\) are the image and object distances for the eyepiece (see Figure 2). The object distance is the distance of the first image from the eyepiece. Since the first image is 186 mm to the right of the objective and the eyepiece is 230 mm to the right of the objective, the object distance is \(d_{o}' = 230 mm - 186 mm = 44.0 mm\). This places the first image closer to the eyepiece than its focal length, so that the eyepiece will form a case 2 image as shown in the figure. We still need to find the location of the final image \(d_{i}'\) in order to find the magnification. This is done as before to obtain a value for \(1/d_{i}'\): \[\frac{1}{d_{i}'} = \frac{1}{f_{e}} - \frac{1}{d_{o}'} = \frac{1}{50.0 mm} - \frac{1}{44.0 mm} = - \frac{0.00273}{mm}.\] Inverting gives \[d_{i}' = - \frac{mm}{0.00273} = -367 mm.\] The eyepiece’s magnification is thus \[m_{e} = - \frac{d_{i}'}{d_{o}'} = - \frac{-367 mm}{44.0 mm} = 8.33.\] So the overall magnification is \[m = m_{o}m_{e} = \left( -30.0 \right) \left( 8.33 \right) = -250.\]
Discussion:
Both the objective and the eyepiece contribute to the overall magnification, which is large and negative, consistent with Figure 2, where the image is seen to be large and inverted. In this case, the image is virtual and inverted, which cannot happen for a single element (case 2 and case 3 images for single elements are virtual and upright). The final image is 367 mm (0.367 m) to the left of the eyepiece. Had the eyepiece been placed farther from the objective, it could have formed a case 1 image to the right. Such an image could be projected on a screen, but it would be behind the head of the person in the figure and not appropriate for direct viewing. The procedure used to solve this example is applicable in any multiple-element system. Each element is treated in turn, with each forming an image that becomes the object for the next element. The process is not more difficult than for single lenses or mirrors, only lengthier.
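The element-by-element procedure used here lends itself to a short script; the sketch below simply repeats the two thin-lens steps worked above, with illustrative names and all lengths in mm.

```python
# Sketch: two-lens microscope chain, applying 1/d_i = 1/f - 1/d_o to each element.

def image_distance(f, d_o):
    """Thin-lens image distance; a negative result indicates a virtual image."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

# Objective: f_o = 6.00 mm, object at d_o = 6.20 mm
d_i_obj = image_distance(6.00, 6.20)       # about 186 mm
m_obj = -d_i_obj / 6.20                    # about -30.0

# Eyepiece: f_e = 50.0 mm, first image 230 mm - 186 mm = 44 mm in front of it
d_o_eye = 230.0 - d_i_obj
d_i_eye = image_distance(50.0, d_o_eye)    # about -367 mm (virtual image)
m_eye = -d_i_eye / d_o_eye                 # about +8.3

print(round(m_obj * m_eye))                # about -250, as in the example
```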
Normal optical microscopes can magnify up to \(1500 \times \) with a theoretical resolution of about \(0.2 \mu m\). The lenses can be quite complicated and are composed of multiple elements to reduce aberrations. Microscope objective lenses are particularly important as they primarily gather light from the specimen. Three parameters describe microscope objectives: the numerical aperture (\(NA\)), the magnification (\(m\)), and the working distance. The \(NA\) is related to the light gathering ability of a lens and is obtained using the angle of acceptance \(\theta\) formed by the maximum cone of rays focusing on the specimen (see Figure 3a) and is given by \[NA = n \sin{\alpha} \label{26.5.5},\] where \(n\) is the refractive index of the medium between the lens and the specimen and \(\alpha = \theta / 2\). As the angle of acceptance given by \(\theta\) increases, \(NA\) becomes larger and more light is gathered from a smaller focal region giving higher resolution. A \(0.75 NA\) objective gives more detail than a \(0.10 NA\) objective.
While the numerical aperture can be used to compare resolutions of various objectives, it does not indicate how far the lens could be from the specimen. This is specified by the “working distance,” which is the distance (in mm usually) from the front lens element of the objective to the specimen, or cover glass. The higher the \(NA\) the closer the lens will be to the specimen and the more chances there are of breaking the cover slip and damaging both the specimen and the lens. The focal length of an objective lens is different than the working distance. This is because objective lenses are made of a combination of lenses and the focal length is measured from inside the barrel. The working distance is a parameter that microscopists can use more readily as it is measured from the outermost lens. The working distance decreases as the \(NA\) and magnification both increase.
The term \(f/ \#\) in general is called the \(f\)-number and is used to denote the light per unit area reaching the image plane. In photography, an image of an object at infinity is formed at the focal point and the \(f\)-number is given by the ratio of the focal length \(f\) of the lens and the diameter \(D\) of the aperture controlling the light into the lens (see Figure 3b). If the acceptance angle is small the \(NA\) of the lens can also be used as given below. \[f/ \# = \frac{f}{D} \approx \frac{1}{2NA} \label{26.5.6}.\] As the \(f\)-number decreases, the camera is able to gather light from a larger angle, giving wide-angle photography. As usual there is a trade-off. A greater \(f/ \#\) means less light reaches the image plane. A setting of \(f/16\) usually allows one to take pictures in bright sunlight as the aperture diameter is small. In optical fibers, light needs to be focused into the fiber. Figure 4 shows the angle used in calculating the \(NA\) of an optical fiber.
Can the \(NA\) be larger than 1.00? The answer is ‘yes’ if we use immersion lenses in which a medium such as oil, glycerine or water is placed between the objective and the microscope cover slip. This minimizes the mismatch in refractive indices as light rays go through different media, generally providing a greater light-gathering ability and an increase in resolution. Figure 5 shows light rays when using air and immersion lenses.
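A short numerical sketch connects the last few ideas -- the \(NA\) formula, the small-angle \(f\)-number estimate, and the gain from an immersion medium. The acceptance angle and refractive indices used here are illustrative values only.

```python
import math

# Sketch: numerical aperture NA = n sin(theta/2) and the estimate f/# ~ 1/(2 NA).

def numerical_aperture(n, theta_deg):
    """n is the index between lens and specimen; theta_deg is the full acceptance angle."""
    return n * math.sin(math.radians(theta_deg) / 2.0)

def f_number(na):
    """Small-acceptance-angle estimate of the f-number."""
    return 1.0 / (2.0 * na)

na_dry = numerical_aperture(1.00, 60.0)   # dry objective in air
na_oil = numerical_aperture(1.51, 60.0)   # same geometry with oil immersion

print(round(na_dry, 2), round(f_number(na_dry), 2))   # 0.5 and 1.0
print(round(na_oil, 2))                               # 0.76 -- immersion raises the NA
```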
When using a microscope we do not see the entire extent of the sample. Depending on the eyepiece and objective lens we see a restricted region that we call the field of view. The objective is then manipulated in two dimensions above the sample to view other regions of the sample. Electronic scanning of either the objective or the sample is used in scanning microscopy. The image formed at each point during the scanning is combined using a computer to generate an image of a larger region of the sample at a selected magnification.
When using a microscope, we rely on gathering light to form an image. Hence most specimens need to be illuminated, particularly at higher magnifications, when observing details that are so small that they reflect only small amounts of light. To make such objects easily visible, the intensity of light falling on them needs to be increased. Special illuminating systems called condensers are used for this purpose. The type of condenser that is suitable for an application depends on how the specimen is examined, whether by transmission, scattering or reflecting. See Figure 6 for an example of each. White light sources are common and lasers are often used. Laser light illumination tends to be quite intense and it is important to ensure that the light does not result in the degradation of the specimen.
We normally associate microscopes with visible light but x ray and electron microscopes provide greater resolution. The focusing and basic physics is the same as that just described, even though the lenses require different technology. The electron microscope requires vacuum chambers so that the electrons can proceed unimpeded. Magnifications of 50 million times provide the ability to determine positions of individual atoms within materials. An electron microscope is shown in Figure 7. We do not use our eyes to form images; rather images are recorded electronically and displayed on computers. In fact, observing and saving images formed by optical microscopes on computers is now done routinely. Video recordings of what occurs in a microscope can be made for viewing by many people at later dates. Physics provides the science and tools needed to generate the sequence of time-lapse images of meiosis similar to the sequence sketched in Figure 8.
TAKE-HOME EXPERIMENT: MAKE A LENS
Look through a clear glass or plastic bottle and describe what you see. Now fill the bottle with water and describe what you see. Use the water bottle as a lens to produce the image of a bright object and estimate the focal length of the water bottle lens. How is the focal length a function of the depth of water in the bottle?
Summary
- The microscope is a multiple-element system having more than a single lens or mirror.
- Many optical devices contain more than a single lens or mirror. These are analysed by considering each element sequentially. The image formed by the first is the object for the second, and so on. The same ray tracing and thin lens techniques apply to each lens element.
- The overall magnification of a multiple-element system is the product of the magnifications of its individual elements. For a two-element system with an objective and an eyepiece, this is \[m = m_{o}m_{e},\] where \(m_{o}\) is the magnification of the objective and \(m_{e}\) is the magnification of the eyepiece, such as for a microscope.
- Microscopes are instruments for allowing us to see detail we would not be able to see with the unaided eye and consist of a range of components.
- The eyepiece and objective contribute to the magnification. The numerical aperture (\(NA\)) of an objective is given by \[NA = n \sin{\alpha}\] where \(n\) is the refractive index and \(\alpha\) the angle of acceptance.
- Immersion techniques are often used to improve the light gathering ability of microscopes. The specimen is illuminated by transmitted, scattered or reflected light though a condenser.
- The \(f / \#\) describes the light gathering ability of a lens. It is given by \[f / \# = \frac{f}{D} \approx \frac{1}{2NA}.\]
Glossary
- compound microscope
- a microscope constructed from two convex lenses, the first serving as the ocular lens (close to the eye) and the second serving as the objective lens
- objective lens
- the lens nearest to the object being examined
- eyepiece
- the lens or combination of lenses in an optical instrument nearest to the eye of the observer
- numerical aperture
- a number or measure that expresses the ability of a lens to resolve fine detail in an object being observed. Derived by mathematical formula \[NA = n \sin{\alpha},\] where \(n\) is the refractive index of the medium between the lens and the specimen and \(\alpha = \theta /2\)
26.5: Telescopes
Learning Objectives
By the end of this section, you will be able to:
- Outline the invention of a telescope.
- Describe the working of a telescope.
Telescopes are meant for viewing distant objects, producing an image that is larger than the image that can be seen with the unaided eye. Telescopes gather far more light than the eye, allowing dim objects to be observed with greater magnification and better resolution. Although Galileo is often credited with inventing the telescope, he actually did not. What he did was more important. He constructed several early telescopes, was the first to study the heavens with them, and made monumental discoveries using them. Among these are the moons of Jupiter, the craters and mountains on the Moon, the details of sunspots, and the fact that the Milky Way is composed of vast numbers of individual stars.
Figure \(\PageIndex{1a}\) shows a telescope made of two lenses, the convex objective and the concave eyepiece, the same construction used by Galileo. Such an arrangement produces an upright image and is used in spyglasses and opera glasses.
The most common two-lens telescope, like the simple microscope, uses two convex lenses and is shown in Figure 1b. The object is so far away from the telescope that it is essentially at infinity compared with the focal lengths of the lenses (\(d_{o} \approx \infty \)). The first image is thus produced at \(d_{i} = f_{o}\), as shown in the figure. To prove this, note that
\[\frac{1}{d_{i}} = \frac{1}{f_{o}} - \frac{1}{d_{o}} = \frac{1}{f_{o}} - \frac{1}{\infty}.\]
Because \(1/\infty = 0\), this simplifies to
\[\frac{1}{d_{i}} = \frac{1}{f_{o}},\]
which implies that \(d_{i} = f_{o}\), as claimed. It is true that for any distant object and any lens or mirror, the image is at the focal length.
The first image formed by a telescope objective as seen in Figure \(\PageIndex{1b}\) will not be large compared with what you might see by looking at the object directly. For example, the spot formed by sunlight focused on a piece of paper by a magnifying glass is the image of the Sun, and it is small. The telescope eyepiece (like the microscope eyepiece) magnifies this first image. The distance between the eyepiece and the objective lens is made slightly less than the sum of their focal lengths so that the first image is closer to the eyepiece than its focal length. That is, \(d_{o}'\) is less than \(f_{e}\), and so the eyepiece forms a case 2 image that is large and to the left for easy viewing. If the angle subtended by an object as viewed by the unaided eye is \(\theta\), and the angle subtended by the telescope image is \(\theta '\), then the angular magnification \(M\) is defined to be their ratio. That is, \(M = \theta ' / \theta \). It can be shown that the angular magnification of a telescope is related to the focal lengths of the objective and eyepiece and is given by
\[M = \frac{\theta '}{\theta} = - \frac{f_{o}}{f_{e}}.\label{26.6.1}\]
The minus sign indicates the image is inverted. To obtain the greatest angular magnification, it is best to have a long focal length objective and a short focal length eyepiece. The greater the angular magnification \(M\), the larger an object will appear when viewed through a telescope, making more details visible. Limits to observable details are imposed by many factors, including lens quality and atmospheric disturbance.
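As a quick illustration of this relation, the sketch below evaluates \(M = -f_{o}/f_{e}\) for two hypothetical focal-length combinations; the values are made up for illustration.

```python
# Sketch: angular magnification of a two-lens telescope, M = -f_o / f_e.

def angular_magnification(f_objective_mm, f_eyepiece_mm):
    """The minus sign indicates the inverted image."""
    return -f_objective_mm / f_eyepiece_mm

print(angular_magnification(750.0, 25.0))   # -30.0: inverted, 30x greater angular size
print(angular_magnification(750.0, 10.0))   # -75.0: a shorter eyepiece gives more magnification
```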
The image in most telescopes is inverted, which is unimportant for observing the stars but a real problem for other applications, such as telescopes on ships or telescopic gun sights. If an upright image is needed, Galileo’s arrangement in Figure \(\PageIndex{1a}\) can be used. But a more common arrangement is to use a third convex lens as an eyepiece, increasing the distance between the first two and inverting the image once again as seen in Figure \(\PageIndex{2}\).
A telescope can also be made with a concave mirror as its first element or objective, since a concave mirror acts like a convex lens as seen in Figure \(\PageIndex{3}\). Flat mirrors are often employed in optical instruments to make them more compact or to send light to cameras and other sensing devices. There are many advantages to using mirrors rather than lenses for telescope objectives. Mirrors can be constructed much larger than lenses and can, thus, gather large amounts of light, as needed to view distant galaxies, for example. Large and relatively flat mirrors have very long focal lengths, so that great angular magnification is possible.
Telescopes, like microscopes, can utilize a range of frequencies from the electromagnetic spectrum. Figure 4a shows the Australia Telescope Compact Array, which uses six 22-m antennas for mapping the southern skies using radio waves. Figure \(\PageIndex{4b}\) shows the focusing of x rays on the Chandra X-ray Observatory -- a satellite orbiting earth since 1999 and looking at high temperature events such as exploding stars, quasars, and black holes. X rays, with much more energy and shorter wavelengths than RF and light, are mainly absorbed and not reflected when incident perpendicular to the medium. But they can be reflected when incident at small glancing angles, much like a rock will skip on a lake if thrown at a small angle. The mirrors for the Chandra consist of a long barrelled pathway and 4 pairs of mirrors to focus the rays at a point 10 meters away from the entrance. The mirrors are extremely smooth and consist of a glass ceramic base with a thin coating of metal (iridium). Four pairs of precision manufactured mirrors are exquisitely shaped and aligned so that x rays ricochet off the mirrors like bullets off a wall, focusing on a spot.
A current exciting development is a collaborative effort involving 17 countries to construct a Square Kilometre Array (SKA) of telescopes capable of covering from 80 MHz to 2 GHz. The initial stage of the project is the construction of the Australian Square Kilometre Array Pathfinder in Western Australia (see Figure 5). The project will use cutting-edge technologies such as adaptive optics in which the lens or mirror is constructed from lots of carefully aligned tiny lenses and mirrors that can be manipulated using computers. A range of rapidly changing distortions can be minimized by deforming or tilting the tiny lenses and mirrors. The use of adaptive optics in vision correction is a current area of research.
Summary
- Simple telescopes can be made with two lenses. They are used for viewing objects at large distances and utilize the entire range of the electromagnetic spectrum.
- The angular magnification M for a telescope is given by \[M = \frac{\theta '}{\theta} = - \frac{f_{o}}{f_{e}},\] where \(\theta\) is the angle subtended by an object viewed by the unaided eye, \(\theta ' \) is the angle subtended by a magnified image, and \(f_{o}\) and \(f_{e}\) are the focal lengths of the objective and the eyepiece.
Glossary
- adaptive optics
- optical technology in which computers adjust the lenses and mirrors in a device to correct for image distortions
- angular magnification
- a ratio related to the focal lengths of the objective and eyepiece and given as \(M = - \frac{f_{o}}{f_{e}}\)
|
26.6: Aberrations
Learning Objectives
By the end of this section, you will be able to:
- Describe optical aberration.
Real lenses behave somewhat differently from how they are modeled using the thin lens equations, producing aberrations. An aberration is a distortion in an image. There are a variety of aberrations due to a lens's size, material, thickness, and the position of the object. One common type of aberration is chromatic aberration, which is related to color. Since the index of refraction of lenses depends on color or wavelength, images are produced at different places and with different magnifications for different colors. (The law of reflection is independent of wavelength, and so mirrors do not have this problem. This is another advantage for mirrors in optical systems such as telescopes.) Figure \(\PageIndex{1a}\) shows chromatic aberration for a single convex lens and its partial correction with a two-lens system. Violet rays are bent more than red, since they have a higher index of refraction and are thus focused closer to the lens. The diverging lens partially corrects this, although it is usually not possible to do so completely. Lenses of different materials and having different dispersions may be used. For example, an achromatic doublet consisting of a converging lens made of crown glass and a diverging lens made of flint glass in contact can dramatically reduce chromatic aberration (Figure \(\PageIndex{1b}\)).
Quite often in an imaging system the object is off-center. Consequently, different parts of a lens or mirror do not refract or reflect the image to the same point. This type of aberration is called a coma and is shown in Figure \(\PageIndex{2}\). The image in this case often appears pear-shaped. Another common aberration is spherical aberration, where rays converging from the outer edges of a lens converge to a focus closer to the lens and rays closer to the axis focus further (Figure \(\PageIndex{3}\)). Aberrations due to astigmatism in the lenses of the eyes are discussed in "Vision Correction," and a chart used to detect astigmatism is shown in that section. Such aberrations can also be an issue with manufactured lenses.
The image produced by an optical system needs to be bright enough to be discerned. It is often a challenge to obtain a sufficiently bright image. The brightness is determined by the amount of light passing through the optical system. The optical components determining the brightness are the diameter of the lens and the diameter of pupils, diaphragms or aperture stops placed in front of lenses. Optical systems often have entrance and exit pupils to specifically reduce aberrations but they inevitably reduce brightness as well. Consequently, optical systems need to strike a balance between the various components used. The iris in the eye dilates and constricts, acting as an entrance pupil. You can see objects more clearly by looking through a small hole made with your hand in the shape of a fist. Squinting, or using a small hole in a piece of paper, also will make the object sharper.
So how are aberrations corrected? The lenses may also have specially shaped surfaces, as opposed to the simple spherical shape that is relatively easy to produce. Expensive camera lenses are large in diameter, so that they can gather more light, and need several elements to correct for various aberrations. Further, advances in materials science have resulted in lenses with a range of refractive indices -- technically referred to as graded index (GRIN) lenses. Spectacles often have the ability to provide a range of focusing ability using similar techniques. GRIN lenses are particularly important at the end of optical fibers in endoscopes. Advanced computing techniques allow for a range of corrections on images after the image has been collected and certain characteristics of the optical system are known. Some of these techniques are sophisticated versions of what are available on commercial packages like Adobe Photoshop.
Summary
- Aberrations or image distortions can arise due to the finite thickness of optical instruments, imperfections in the optical components, and limitations on the ways in which the components are used.
- The means for correcting aberrations range from better components to computational techniques.
Glossary
- aberration
- failure of rays to converge at one focus because of limitations or defects in a lens or mirror
26.E: Vision and Optical Instruments (Exercise)
Conceptual Questions
26.1: Physics of the Eye
1. If the lens of a person’s eye is removed because of cataracts (as has been done since ancient times), why would you expect a spectacle lens of about 16 D to be prescribed?
2. A cataract is cloudiness in the lens of the eye. Is light dispersed or diffused by it?
3. When laser light is shone into a relaxed normal-vision eye to repair a tear by spot-welding the retina to the back of the eye, the rays entering the eye must be parallel. Why?
4. How does the power of a dry contact lens compare with its power when resting on the tear layer of the eye? Explain.
5. Why is your vision so blurry when you open your eyes while swimming under water? How does a face mask enable clear vision?
26.2: Vision Correction
6. It has become common to replace the cataract-clouded lens of the eye with an internal lens. This intraocular lens can be chosen so that the person has perfect distant vision. Will the person be able to read without glasses? If the person was nearsighted, is the power of the intraocular lens greater or less than the removed lens?
7. If the cornea is to be reshaped (this can be done surgically or with contact lenses) to correct myopia, should its curvature be made greater or smaller? Explain. Also explain how hyperopia can be corrected.
8. If there is a fixed percent uncertainty in LASIK reshaping of the cornea, why would you expect those people with the greatest correction to have a poorer chance of normal distant vision after the procedure?
9. A person with presbyopia has lost some or all of the ability to accommodate the power of the eye. If such a person’s distant vision is corrected with LASIK, will she still need reading glasses? Explain.
26.3: Color and Color Vision
10. A pure red object on a black background seems to disappear when illuminated with pure green light. Explain why.
11. What is color constancy, and what are its limitations?
12. There are different types of color blindness related to the malfunction of different types of cones. Why would it be particularly useful to study those rare individuals who are color blind only in one eye or who have a different type of color blindness in each eye?
13. Propose a way to study the function of the rods alone, given they can sense light about 1000 times dimmer than the cones.
26.4: Microscopes
14. Geometric optics describes the interaction of light with macroscopic objects. Why, then, is it correct to use geometric optics to analyse a microscope’s image?
15. The image produced by the microscope in Figure cannot be projected. Could extra lenses or mirrors project it? Explain.
16. Why not have the objective of a microscope form a case 2 image with a large magnification? (Hint: Consider the location of that image and the difficulty that would pose for using the eyepiece as a magnifier.)
17. What advantages do oil immersion objectives offer?
18. How does the \(\displaystyle NA\) of a microscope compare with the \(\displaystyle NA\) of an optical fiber?
26.5: Telescopes
19. If you want your microscope or telescope to project a real image onto a screen, how would you change the placement of the eyepiece relative to the objective?
26.6: Aberrations
20. List the various types of aberrations. What causes them and how can each be reduced?
Problem & Exercises
26.1: Physics of the Eye
Unless otherwise stated, the lens-to-retina distance is 2.00 cm.
21. What is the power of the eye when viewing an object 50.0 cm away?
Solution
52.0 D
22. Calculate the power of the eye when viewing an object 3.00 m away.
(a) The print in many books averages 3.50 mm in height. How high is the image of the print on the retina when the book is held 30.0 cm from the eye?
(b) Compare the size of the print to the sizes of rods and cones in the fovea and discuss the possible details observable in the letters. (The eye-brain system can perform better because of interconnections and higher order image processing.)
Solution
(a) −0.233 mm
(b) The size of the rods and the cones is smaller than the image height, so we can distinguish letters on a page.
23. Suppose a certain person’s visual acuity is such that he can see objects clearly that form an image 4.00 μm high on his retina. What is the maximum distance at which he can read the 75.0 cm high letters on the side of an airplane?
24. People who do very detailed work close up, such as jewellers, often can see objects clearly at much closer distance than the normal 25 cm.
(a) What is the power of the eyes of a woman who can see an object clearly at a distance of only 8.00 cm?
(b) What is the size of an image of a 1.00 mm object, such as lettering inside a ring, held at this distance?
(c) What would the size of the image be if the object were held at the normal 25.0 cm distance?
Solution
(a) +62.5 D
(b) –0.250 mm
(c) –0.0800 mm
26.2: Vision Correction
25. What is the far point of a person whose eyes have a relaxed power of 50.5 D?
Solution
2.00 m
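This result follows from \(P = 1/d_{o} + 1/d_{i}\) with the image on the retina. A minimal numerical check, as a sketch in Python, assuming the 2.00 cm lens-to-retina distance stated at the top of this problem set:

```python
# Far point of an eye with a relaxed power of 50.5 D (sketch).
# Assumes the standard 2.00 cm lens-to-retina image distance used in these problems.
P_relaxed = 50.5          # diopters
d_i = 0.0200              # image distance in meters (lens to retina)

# Thin-lens equation in diopters: P = 1/d_o + 1/d_i  ->  1/d_o = P - 1/d_i
inv_do = P_relaxed - 1.0 / d_i
far_point = 1.0 / inv_do  # farthest object distance the relaxed eye can focus

print(f"Far point = {far_point:.2f} m")   # expected: 2.00 m
```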
26. What is the near point of a person whose eyes have an accommodated power of 53.5 D?
27. (a) A laser vision correction reshaping the cornea of a myopic patient reduces the power of his eye by 9.00 D, with a \(\displaystyle \pm 5.0\%\) uncertainty in the final correction. What is the range of diopters for spectacle lenses that this person might need after the LASIK procedure?
(b) Was the person nearsighted or farsighted before the procedure? How do you know?
Solution
(a) ±0.45 D
(b) The person was nearsighted: a myopic eye is too powerful, which is why the procedure reduced the power of the eye.
28. In a LASIK vision correction, the power of a patient’s eye is increased by 3.00 D. Assuming this produces normal close vision, what was the patient’s near point before the procedure?
29. What was the previous far point of a patient who had laser vision correction that reduced the power of her eye by 7.00 D, producing normal distant vision for her?
Solution
0.143 m
30. A severely myopic patient has a far point of 5.00 cm. By how many diopters should the power of his eye be reduced in laser vision correction to obtain normal distant vision for him?
31. A student’s eyes, while reading the blackboard, have a power of 51.0 D. How far is the board from his eyes?
Solution
1.00 m
32. The power of a physician’s eyes is 53.0 D while examining a patient. How far from her eyes is the feature being examined?
33. A young woman with normal distant vision has a 10.0% ability to accommodate (that is, increase) the power of her eyes. What is the closest object she can see clearly?
Solution
20.0 cm
34. The far point of a myopic administrator is 50.0 cm. (a) What is the relaxed power of his eyes? (b) If he has the normal 8.00% ability to accommodate, what is the closest object he can see clearly?
35. A very myopic man has a far point of 20.0 cm. What power contact lens (when on the eye) will correct his distant vision?
Solution
–5.00 D
36. Repeat the previous problem for eyeglasses held 1.50 cm from the eyes.
37. A myopic person sees that her contact lens prescription is –4.00 D. What is her far point?
Solution
25.0 cm
38. Repeat the previous problem for glasses that are 1.75 cm from the eyes.
39. The contact lens prescription for a mildly farsighted person is 0.750 D, and the person has a near point of 29.0 cm. What is the power of the tear layer between the cornea and the lens if the correction is ideal, taking the tear layer into account?
Solution
–0.198 D
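One way to see where this value comes from -- a sketch, assuming an ideal correction brings the near point to the normal 25.0 cm and that the tear-layer power simply adds to the contact-lens power:

```python
# Tear-layer power for a mildly farsighted person (sketch).
d_o = 0.250          # desired near point (object distance), meters
near_point = 0.290   # person's actual near point, meters
P_contact = 0.750    # prescribed contact-lens power, diopters

# The correction must image an object at 25.0 cm as a virtual image at the
# person's near point (29.0 cm), so d_i = -0.290 m:
P_needed = 1.0 / d_o + 1.0 / (-near_point)     # about +0.552 D
P_tear = P_needed - P_contact                  # about -0.198 D

print(f"P_needed = {P_needed:.3f} D, tear layer = {P_tear:.3f} D")
```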
40. A nearsighted man cannot see objects clearly beyond 20 cm from his eyes. How close must he stand to a mirror in order to see what he is doing when he shaves?
41. A mother sees that her child’s contact lens prescription is 0.750 D. What is the child’s near point?
Solution
30.8 cm
42. Repeat the previous problem for glasses that are 2.20 cm from the eyes.
43. The contact lens prescription for a nearsighted person is \(\displaystyle –4.00 D\) and the person has a far point of 22.5 cm. What is the power of the tear layer between the cornea and the lens if the correction is ideal, taking the tear layer into account?
Solution
–0.444 D
44. Unreasonable Results
A boy has a near point of 50 cm and a far point of 500 cm. Will a \(\displaystyle –4.00 D\) lens correct his far point to infinity?
26.4: Microscopes
45. A microscope with an overall magnification of 800 has an objective that magnifies by 200.
(a) What is the magnification of the eyepiece?
(b) If there are two other objectives that can be used, having magnifications of 100 and 400, what other total magnifications are possible?
Solution
(a) 4.00
(b) 400 and 1600
46. (a) What magnification is produced by a 0.150 cm focal length microscope objective that is 0.155 cm from the object being viewed?
(b) What is the overall magnification if an 8× eyepiece (one that produces a magnification of 8.00) is used?
47. (a) Where does an object need to be placed relative to a microscope for its 0.500 cm focal length objective to produce a magnification of \(\displaystyle –400\)?
(b) Where should the 5.00 cm focal length eyepiece be placed to produce a further fourfold (4.00) magnification?
Solution
(a) 0.501 cm
(b) Eyepiece should be 204 cm behind the objective lens.
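A sketch of how these placements can be obtained from the thin-lens equation, assuming the eyepiece's "fourfold magnification" is its linear magnification \(m_{e} = 4.00\) (which is how the quoted 204 cm follows):

```python
# Problem 47 sketch: object and eyepiece placement for a compound microscope.
f_obj, m_obj = 0.500, -400.0   # objective focal length (cm) and magnification
f_eye, m_eye = 5.00, 4.00      # eyepiece focal length (cm) and further magnification

# Objective: m = -d_i/d_o and 1/f = 1/d_o + 1/d_i
# => d_o = f*(1 - 1/m)  (from substituting d_i = -m*d_o)
d_o_obj = f_obj * (1.0 - 1.0 / m_obj)      # ~0.501 cm
d_i_obj = -m_obj * d_o_obj                 # ~200.5 cm (real image past the objective)

# Eyepiece: a further (virtual) magnification of 4.00 means d_i = -4*d_o,
# so 1/f = 1/d_o - 1/(4 d_o) = 3/(4 d_o)  =>  d_o = 3*f/4
d_o_eye = 3.0 * f_eye / 4.0                # 3.75 cm from the objective's image
eyepiece_position = d_i_obj + d_o_eye      # distance behind the objective lens

print(f"object at {d_o_obj:.3f} cm; eyepiece about {eyepiece_position:.0f} cm behind objective")
```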
48. You switch from a \(\displaystyle 1.40NA60×\) oil immersion objective to a \(\displaystyle 1.40NA60×\) oil immersion objective. What are the acceptance angles for each? Compare and comment on the values. Which would you use first to locate the target area on your specimen?
49. An amoeba is 0.305 cm away from the 0.300 cm focal length objective lens of a microscope.
(a) Where is the image formed by the objective lens?
(b) What is this image’s magnification?
(c) An eyepiece with a 2.00 cm focal length is placed 20.0 cm from the objective. Where is the final image?
(d) What magnification is produced by the eyepiece?
(e) What is the overall magnification? (See Figure.)
Solution
(a) +18.3 cm (on the eyepiece side of the objective lens)
(b) -60.0
(c) -11.3 cm (on the objective side of the eyepiece)
(d) +6.67
(e) -400
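The chain of thin-lens calculations behind these answers can be checked with a short sketch (values as given in the problem; signs follow the convention used in this chapter):

```python
# Problem 49 sketch: two-lens (objective + eyepiece) image chain.
def image_distance(f, d_o):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

f_obj, d_o1 = 0.300, 0.305          # cm
d_i1 = image_distance(f_obj, d_o1)  # (a) ~ +18.3 cm
m1 = -d_i1 / d_o1                   # (b) ~ -60.0

f_eye, separation = 2.00, 20.0      # cm
d_o2 = separation - d_i1            # objective's image is the eyepiece's object
d_i2 = image_distance(f_eye, d_o2)  # (c) ~ -11.3 cm (virtual, objective side)
m2 = -d_i2 / d_o2                   # (d) ~ +6.67

print(d_i1, m1, d_i2, m2, m1 * m2)  # (e) overall ~ -400
```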
50. You are using a standard microscope with a \(\displaystyle 0.10NA4×\) objective and switch to a \(\displaystyle 0.65NA40×\) objective. What are the acceptance angles for each? Compare and comment on the values. Which would you use first to locate the target area on your specimen? (See Figure.)
51. Unreasonable Results
Your friends show you an image through a microscope. They tell you that the microscope has an objective with a 0.500 cm focal length and an eyepiece with a 5.00 cm focal length. The resulting overall magnification is 250,000. Are these viable values for a microscope?
26.5: Telescopes
52. What is the angular magnification of a telescope that has a 100 cm focal length objective and a 2.50 cm focal length eyepiece?
Solution
−40.0
53. Find the distance between the objective and eyepiece lenses in the telescope in the above problem needed to produce a final image very far from the observer, where vision is most relaxed. Note that a telescope is normally used to view very distant objects.
54. A large reflecting telescope has an objective mirror with a 10.0 m radius of curvature. What angular magnification does it produce when a 3.00 cm focal length eyepiece is used?
Solution
−167
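A quick check of this answer -- a sketch using \(f = R/2\) for the mirror and angular magnification \(M = -f_{o}/f_{e}\):

```python
# Problem 54 sketch: angular magnification of a reflecting telescope.
R = 10.0                 # radius of curvature of the objective mirror, m
f_objective = R / 2.0    # mirror focal length = R/2 = 5.00 m
f_eyepiece = 0.0300      # 3.00 cm eyepiece, in meters

M = -f_objective / f_eyepiece
print(f"M = {M:.0f}")    # expected: about -167
```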
55. A small telescope has a concave mirror with a 2.00 m radius of curvature for its objective. Its eyepiece is a 4.00 cm focal length lens.
(a) What is the telescope’s angular magnification?
(b) What angle is subtended by a 25,000 km diameter sunspot?
(c) What is the angle of its telescopic image?
56. A \(\displaystyle 7.5×\) binocular produces an angular magnification of \(\displaystyle −7.50\), acting like a telescope. (Mirrors are used to make the image upright.) If the binoculars have objective lenses with a 75.0 cm focal length, what is the focal length of the eyepiece lenses?
Solution
+10.0 cm
57. Construct Your Own Problem
Consider a telescope of the type used by Galileo, having a convex objective and a concave eyepiece as illustrated in Figure(a). Construct a problem in which you calculate the location and size of the image produced. Among the things to be considered are the focal lengths of the lenses and their relative placements as well as the size and location of the object. Verify that the angular magnification is greater than one. That is, the angle subtended at the eye by the image is greater than the angle subtended by the object.
26.6: Aberrations
58. Integrated Concepts
(a) During laser vision correction, a brief burst of 193 nm ultraviolet light is projected onto the cornea of the patient. It makes a spot 1.00 mm in diameter and deposits 0.500 mJ of energy. Calculate the depth of the layer ablated, assuming the corneal tissue has the same properties as water and is initially at \(\displaystyle 34.0ºC\). The tissue’s temperature is increased to \(\displaystyle 100ºC\) and evaporated without further temperature increase.
(b) Does your answer imply that the shape of the cornea can be finely controlled?
Solution
(a) \(\displaystyle 0.251μm\)
(b) Yes, this thickness implies that the shape of the cornea can be very finely controlled, producing normal distant vision in more than 90% of patients.
Contributors and Attributions
- Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
27: Wave Optics
- 27.2: Huygens's Principle - Diffraction
- An accurate technique for determining how and where waves propagate is given by Huygens’s principle: Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets. Diffraction is the bending of a wave around the edges of an opening or other obstacle.
- 27.3: Young’s Double Slit Experiment
- Young’s double slit experiment gave definitive proof of the wave character of light. An interference pattern is obtained by the superposition of light from two slits. There is constructive interference and destructive interference depending on the angle of probing.
- 27.4: Multiple Slit Diffraction
- A diffraction grating is a large collection of evenly spaced parallel slits that produces an interference pattern similar to but sharper than that of a double slit. There is constructive interference for a diffraction grating when \(d\sin{\theta} = m \lambda \left( for m = 0,1,-1,2,-2,...\right)\), where \(d\) is the distance between slits in the grating, \(\lambda\) is the wavelength of light, and \(m\) is the order of the maximum.
- 27.5: Single Slit Diffraction
- A single slit produces an interference pattern characterized by a broad central maximum with narrower and dimmer maxima to the sides. There is destructive interference for a single slit when \(D \sin{\theta} = m \lambda,~ \left(for~m = 1, -1, 2, -2, 3, ...\right)\) where \(D\) is the slit width, \(\lambda\) is the light's wavelength, \(\theta\) is the angle relative to the original direction of the light, and \(m\) is the order of the minimum. Note that there is no \(m = 0\) minimum.
- 27.6: Limits of Resolution- The Rayleigh Criterion
- Diffraction limits resolution. For a circular aperture, lens, or mirror, the Rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. This occurs for two point objects separated by the angle \(\theta = 1.22 \frac{\lambda}{D}\), where \(\lambda\) is the wavelength of light (or other electromagnetic radiation) and \(D\) is the diameter of the aperture, lens, mirror, etc.
- 27.7: Thin Film Interference
- The bright colors seen in an oil slick floating on water or in a sunlit soap bubble are caused by interference. The brightest colors are those that interfere constructively. This interference is between light reflected from different surfaces of a thin film; this effect is known as thin film interference. Interference effects are most prominent when light interacts with something having a size similar to its wavelength. A thin film is one having a thickness smaller than a few times the wavelength of light.
- 27.8: Polarization
- Polarization is the attribute that a wave’s oscillations have a definite direction relative to the direction of propagation of the wave. (This is not the same type of polarization as that discussed for the separation of charges.) Waves having such a direction are said to be polarized. For an EM wave, we define the direction of polarization to be the direction parallel to the electric field. Thus we can think of the electric field arrows as showing the direction of polarization.
- 27.9: Microscopy Enhanced by the Wave Characteristics of Light
- Physics research underpins the advancement of developments in microscopy. As we gain knowledge of the wave nature of electromagnetic waves and methods to analyze and interpret signals, new microscopes that enable us to “see” more are being developed. It is the evolution and newer generation of microscopes that are described in this section.
Thumbnail: Physical optics is used to explain effects such as diffraction; this photo shows diffraction from a single pinhole. (CC-SA-BY-3.0; Wisky).
27.0: Introduction to Wave Optics
Examine a compact disc under white light, noting the colors observed and locations of the colors. Determine if the spectra are formed by diffraction from circular lines centered at the middle of the disc and, if so, what is their spacing. If not, determine the type of spacing. Also with the CD, explore the spectra of a few light sources, such as a candle flame, incandescent bulb, halogen light, and fluorescent light. Knowing the spacing of the rows of pits in the compact disc, estimate the maximum spacing that will allow the given number of megabytes of information to be stored.
If you have ever looked at the reds, blues, and greens in a sunlit soap bubble and wondered how straw-colored soapy water could produce them, you have hit upon one of the many phenomena that can only be explained by the wave character of light (see Figure 2). The same is true for the colors seen in an oil slick or in the light reflected from a compact disc. These and other interesting phenomena, such as the dispersion of white light into a rainbow of colors when passed through a narrow slit, cannot be explained fully by geometric optics. In these cases, light interacts with small objects and exhibits its wave characteristics. The branch of optics that considers the behavior of light when it exhibits wave characteristics (particularly when it interacts with small objects) is called wave optics (sometimes called physical optics). It is the topic of this chapter.
27.1: The Wave Aspect of Light- Interference
Learning Objectives
By the end of this section, you will be able to:
- Discuss the wave character of light.
- Identify the changes when light enters a medium.
We know that visible light is the type of electromagnetic wave to which our eyes respond. Like all other electromagnetic waves, it obeys the equation
\[c = f \lambda \label{27.2.1}\]
where \(c = 3 \times 10^{8} m/s\) is the speed of light in vacuum, \(f\) is the frequency of the electromagnetic waves, and \(\lambda\) is its wavelength. The range of visible wavelengths is approximately 380 to 760 nm. As is true for all waves, light travels in straight lines and acts like a ray when it interacts with objects several times as large as its wavelength. However, when it interacts with smaller objects, it displays its wave characteristics prominently. Interference is the hallmark of a wave, and in Figure 1 both the ray and wave characteristics of light can be seen. The laser beam emitted by the observatory epitomizes a ray, traveling in a straight line. However, passing a pure-wavelength beam through vertical slits with a size close to the wavelength of the beam reveals the wave character of light, as the beam spreads out horizontally into a pattern of bright and dark regions caused by systematic constructive and destructive interference. Rather than spreading out, a ray would continue traveling straight ahead after passing through slits.
MAKING CONNECTIONS: WAVES
The most certain indication of a wave is interference. This wave characteristic is most prominent when the wave interacts with an object that is not large compared with the wavelength. Interference is observed for water waves, sound waves, light waves, and (as we will see in "Special Relativity") for matter waves, such as electrons scattered from a crystal.
Light has wave characteristics in various media as well as in a vacuum. When light goes from a vacuum to some medium, like water, its speed and wavelength change, but its frequency \(f\) remains the same. (We can think of light as a forced oscillation that must have the frequency of the original source.) The speed of light in a medium is \(v = c/n\), where \(n\) is its index of refraction. If we divide both sides of equation \(c = f \lambda\) by \(n\), we get \(c/n = v = f \lambda / n\). This implies that \(v = f \lambda_{n}\), where \(\lambda_{n}\) is the wavelength in a medium and that
\[\lambda_{n} = \frac{\lambda}{n}, \label{27.2.2}\]
where \(\lambda\) is the wavelength in vacuum and \(n\) is the medium’s index of refraction. Therefore, the wavelength of light is smaller in any medium than it is in vacuum. In water, for example, which has \(n = 1.333\), the range of visible wavelengths is \(\left( 380~nm \right) / 1.333\) to \(\left( 760~nm \right) / 1.333\), or \(\lambda_{n} = 285 ~ to ~ 570~nm\). Although wavelengths change while traveling from one medium to another, colors do not, since colors are associated with frequency.
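As a small illustration of \(\lambda_{n} = \lambda / n\), here is a sketch computing the visible range in water using the values above:

```python
# Wavelength of light in a medium: lambda_n = lambda / n (sketch).
n_water = 1.333
for lam_vacuum_nm in (380.0, 760.0):          # ends of the visible range in vacuum
    lam_medium_nm = lam_vacuum_nm / n_water   # frequency is unchanged; wavelength shrinks
    print(f"{lam_vacuum_nm:.0f} nm in vacuum -> {lam_medium_nm:.0f} nm in water")
# prints roughly 285 nm and 570 nm
```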
Summary
- Wave optics is the branch of optics that must be used when light interacts with small objects or whenever the wave characteristics of light are considered.
- Wave characteristics are those associated with interference and diffraction.
- Visible light is the type of electromagnetic wave to which our eyes respond and has a wavelength in the range of 380 to 760 nm.
- Like all EM waves, the following relationship is valid in vacuum: \(c = f \lambda\), where \(c = 3 \times 10^{8} m/s\) is the speed of light, \(f\) is the frequency of the electromagnetic wave, and \(\lambda\) is its wavelength in vacuum.
- The wavelength \(\lambda_{n}\) of light in a medium with index of refraction \(n\) is \(\lambda_{n} = \lambda / n\). Its frequency is the same as in vacuum.
Glossary
- wavelength in a medium
- \(\lambda_{n} = \lambda / n\), where \(\lambda\) is the wavelength in vacuum, and \(n\) is the index of refraction of the medium
27.2: Huygens's Principle - Diffraction
Learning Objectives
By the end of this section, you will be able to:
- Discuss the propagation of transverse waves.
- Discuss Huygens’s principle.
- Explain the bending of light.
Figure \(\PageIndex{1}\) shows how a transverse wave looks as viewed from above and from the side. A light wave can be imagined to propagate like this, although we do not actually see it wiggling through space. From above, we view the wavefronts (or wave crests) as we would by looking down on the ocean waves. The side view would be a graph of the electric or magnetic field. The view from above is perhaps the most useful in developing concepts about wave optics.
The Dutch scientist Christiaan Huygens (1629–1695) developed a useful technique for determining in detail how and where waves propagate, which is known as Huygens's principle:
Huygens's Principle
Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets.
Figure \(\PageIndex{2}\) shows how Huygens’s principle is applied. A wavefront is the long edge that moves, for example, the crest or the trough. Each point on the wavefront emits a semicircular wave that moves at the propagation speed \(v\). These are drawn at a time \(t\) later, so that they have moved a distance \( s= vt\). The new wavefront is a line tangent to the wavelets and is where we would expect the wave to be a time \(t\) later. Huygens’s principle works for all types of waves, including water waves, sound waves, and light waves. We will find it useful not only in describing how light waves propagate, but also in explaining the laws of reflection and refraction. In addition, we will see that Huygens’s principle tells us how and where light rays interfere.
Figure \(\PageIndex{3}\) shows how a mirror reflects an incoming wave at an angle equal to the incident angle, verifying the law of reflection. As the wavefront strikes the mirror, wavelets are first emitted from the left part of the mirror and then the right. The wavelets closer to the left have had time to travel farther, producing a wavefront traveling in the direction shown.
The law of refraction can be explained by applying Huygens’s principle to a wavefront passing from one medium to another (Figure \(\PageIndex{4}\)). Each wavelet in the figure was emitted when the wavefront crossed the interface between the media. Since the speed of light is smaller in the second medium, the waves do not travel as far in a given time, and the new wavefront changes direction as shown. This explains why a ray changes direction to become closer to the perpendicular when light slows down. Snell’s law can be derived from the geometry in Figure \(\PageIndex{4}\) but this is left as an exercise for ambitious readers.
What happens when a wave passes through an opening, such as light shining through an open door into a dark room? For light, we expect to see a sharp shadow of the doorway on the floor of the room, and we expect no light to bend around corners into other parts of the room. When sound passes through a door, we expect to hear it everywhere in the room and, thus, expect that sound spreads out when passing through such an opening (Figure \(\PageIndex{5}\)). What is the difference between the behavior of sound waves and light waves in this case? The answer is that light has very short wavelengths and acts like a ray. Sound has wavelengths on the order of the size of the door and bends around corners (for a frequency of 1000 Hz, \(\lambda = v/f = \left( 330~m/s \right) / \left( 1000~s^{-1} \right) = 0.33~m\), where \(v\) is the speed of sound -- about one-third the width of a typical doorway).
If we pass light through smaller openings, often called slits, we can use Huygens’s principle to see that light bends as sound does (Figure \(\PageIndex{6}\)). The bending of a wave around the edges of an opening or an obstacle is called diffraction . Diffraction is a wave characteristic and occurs for all types of waves. If diffraction is observed for some phenomenon, it is evidence that the phenomenon is a wave.
Summary
- An accurate technique for determining how and where waves propagate is given by Huygens’s principle: Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets.
- Diffraction is the bending of a wave around the edges of an opening or other obstacle.
Glossary
- diffraction
- the bending of a wave around the edges of an opening or an obstacle
- Huygens’s principle
- every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets
27.3: Young’s Double Slit Experiment
Learning Objectives
By the end of this section, you will be able to:
- Explain the phenomena of interference.
- Define constructive interference for a double slit and destructive interference for a double slit.
Although Christiaan Huygens thought that light was a wave, Isaac Newton did not. Newton felt that there were other explanations for color, and for the interference and diffraction effects that were observable at the time. Owing to Newton’s tremendous stature, his view generally prevailed. The fact that Huygens’s principle worked was not considered evidence that was direct enough to prove that light is a wave. The acceptance of the wave character of light came many years later when, in 1801, the English physicist and physician Thomas Young (1773–1829) did his now-classic double slit experiment (Figure \(\PageIndex{1}\)).
Why do we not ordinarily observe wave behavior for light, such as observed in Young’s double slit experiment? First, light must interact with something small, such as the closely spaced slits used by Young, to show pronounced wave effects. Furthermore, Young first passed light from a single source (the Sun) through a single slit to make the light somewhat coherent. By coherent , we mean waves are in phase or have a definite phase relationship. Incoherent means the waves have random phase relationships. Why did Young then pass the light through a double slit? The answer to this question is that two slits provide two coherent light sources that then interfere constructively or destructively. Young used sunlight, where each wavelength forms its own pattern, making the effect more difficult to see. We illustrate the double slit experiment with monochromatic (single \(\lambda\)) light to clarify the effect. Figure \(\PageIndex{2}\) shows the pure constructive and destructive interference of two waves having the same wavelength and amplitude.
When light passes through narrow slits, it is diffracted into semicircular waves, as shown in Figure \(\PageIndex{3a}\). Pure constructive interference occurs where the waves are crest to crest or trough to trough. Pure destructive interference occurs where they are crest to trough. The light must fall on a screen and be scattered into our eyes for us to see the pattern. An analogous pattern for water waves is shown in Figure \(\PageIndex{3b}\). Note that regions of constructive and destructive interference move out from the slits at well-defined angles to the original beam. These angles depend on wavelength and the distance between the slits, as we shall see below.
To understand the double slit interference pattern, we consider how two waves travel from the slits to the screen, as illustrated in Figure \(\PageIndex{4}\). Each slit is a different distance from a given point on the screen. Thus different numbers of wavelengths fit into each path. Waves start out from the slits in phase (crest to crest), but they may end up out of phase (crest to trough) at the screen if the paths differ in length by half a wavelength, interfering destructively as shown in Figure \(\PageIndex{4a}\). If the paths differ by a whole wavelength, then the waves arrive in phase (crest to crest) at the screen, interfering constructively as shown in Figure \(\PageIndex{4b}\). More generally, if the paths taken by the two waves differ by any half-integral number of wavelengths [ \( \left( 1/2 \right) \lambda , \left(3/2 \right) \lambda , \left( 5/2 \right) \lambda,\) etc.], then destructive interference occurs. Similarly, if the paths taken by the two waves differ by any integral number of wavelengths (\( \lambda , 2\lambda , 3\lambda \), etc.), then constructive interference occurs.
TAKE-HOME EXPERIMENT: USING FINGERS AS SLITS
Look at a light, such as a street lamp or incandescent bulb, through the narrow gap between two fingers held close together. What type of pattern do you see? How does it change when you allow the fingers to move a little farther apart? Is it more distinct for a monochromatic source, such as the yellow light from a sodium vapor lamp, than for an incandescent bulb?
Figure \(\PageIndex{5}\) shows how to determine the path length difference for waves traveling from two slits to a common point on a screen. If the screen is a large distance away compared with the distance between the slits, then the angle \(\theta\) between the path and a line from the slits to the screen is nearly the same for each path. The difference between the paths is shown in the figure; simple trigonometry shows it to be \(d \sin{\theta}\), where \(d\) is the distance between the slits. To obtain constructive interference for a double slit , the path length difference must be an integral multiple of the wavelength, or
\[d\sin{\theta} = m \lambda, ~for~ m=0,1,-1,2,-2 ... \left( constructive \right). \label{27.4.1}\]
Similarly, to obtain destructive interference for a double slit , the path length difference must be a half-integral multiple of the wavelength, or
\[d \sin{\theta} = \left( m + \frac{1}{2} \right) \lambda, ~for~ m=0,1,-1,2,-2 ... \left( destructive \right). \label{27.4.2}\]
where \(\lambda\) is the wavelength of the light, \(d\) is the distance between slits, and \(\theta\) is the angle from the original direction of the beam as discussed above. We call \(m\) the order of the interference. For example, \(m=4\) is fourth-order interference.
The equations for double slit interference imply that a series of bright and dark lines are formed. For vertical slits, the light spreads out horizontally on either side of the incident beam into a pattern called interference fringes, illustrated in Figure \(\PageIndex{6}\). The intensity of the bright fringes falls off on either side, being brightest at the center. The closer the slits are, the more the bright fringes spread apart. We can see this by examining Equation \ref{27.4.1}.
For fixed \(\lambda\) and \(m\), the smaller \(d\) is, the larger \(\theta\) must be, since \(\sin{\theta} = m \lambda / d\). This is consistent with our contention that wave effects are most noticeable when the object the wave encounters (here, slits a distance \(d\) apart) is small. Small \(d\) gives large \(\theta\), hence a large effect.
Example \(\PageIndex{1}\): Finding a Wavelength from an Interference Pattern
Suppose you pass light from a He-Ne laser through two slits separated by 0.0100 mm and find that the third bright line on a screen is formed at an angle of \(10.95^{\circ}\) relative to the incident beam. What is the wavelength of the light?
Strategy:
The third bright line is due to third-order constructive interference, which means that \(m=3\). We are given \(d = 0.0100 mm\) and \(\theta = 10.95^{\circ}\). The wavelength can thus be found using the equation \(d \sin{\theta} = m \lambda\) for constructive interference.
Solution
The equation is \(d \sin{\theta} = m \lambda\). Solving for the wavelength \(\lambda\) gives
\[\lambda = \frac{d \sin{\theta}}{m}. \nonumber\]
Substituting known values yields
\[\begin{align*} \lambda &= \frac{\left(0.0100 mm \right) \left(\sin{10.95^{\circ}} \right)}{3} \\[4pt] &= 6.33 \times 10^{-4} mm = 633\, nm. \end{align*}\]
Discussion:
To three digits, this is the wavelength of light emitted by the common He-Ne laser. Not by coincidence, this red color is similar to that emitted by neon lights. More important, however, is the fact that interference patterns can be used to measure wavelength. Young did this for visible wavelengths. This analytical technique is still widely used to measure electromagnetic spectra. For a given order, the angle for constructive interference increases with \(\lambda\), so that spectra (measurements of intensity versus wavelength) can be obtained.
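The same calculation as a short sketch (slit separation converted to nanometers so the result comes out directly in nm):

```python
import math

# Wavelength from a double-slit interference pattern (sketch of the example values).
d_nm = 0.0100e-3 / 1e-9   # 0.0100 mm expressed in nanometers = 1.00e4 nm
m = 3                     # third-order bright fringe
theta = math.radians(10.95)

lam_nm = d_nm * math.sin(theta) / m
print(f"wavelength ≈ {lam_nm:.0f} nm")   # expected: about 633 nm
```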
Example \(\PageIndex{2}\): Calculating Highest Order Possible
Interference patterns do not have an infinite number of lines, since there is a limit to how big \(m\) can be. What is the highest-order constructive interference possible with the system described in the preceding example?
Strategy and Concept:
The equation \(d \sin{\theta} = m \lambda \left( for m = 0,1,-1,2,-2,...\right)\) describes constructive interference. For fixed values of \(d\) and \(\lambda\), the larger \(m\) is, the larger \(\sin{\theta}\) is. However, the maximum value that \(\sin{\theta}\) can have is 1, for an angle of \(90^{\circ}\). (Larger angles imply that light goes backward and does not reach the screen at all.) Let us find which \(m\) corresponds to this maximum diffraction angle.
Solution
Solving the equation \(d\sin{\theta} = m \lambda\) for \(m\) gives:
\[m = \frac{d \sin{\theta}}{\lambda}. \nonumber \]
Taking \(\sin{\theta} = 1\) and substituting the values of \(d\) and \(\lambda\) from the preceding example gives
\[m = \frac{\left(0.0100 mm\right) \left(1\right)}{633 nm} \approx 15.8. \nonumber\]
Therefore, the largest integer \(m\) can be is 15, or
\[m = 15. \nonumber\]
Discussion:
The number of fringes depends on the wavelength and slit separation. The number of fringes will be very large for large slit separations. However, if the slit separation becomes much greater than the wavelength, the intensity of the interference pattern changes so that the screen has two bright lines cast by the slits, as expected when light behaves like a ray. We also note that the fringes get fainter further away from the center. Consequently, not all 15 fringes may be observable.
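A sketch of the same reasoning, taking \(\sin{\theta} \le 1\) and keeping the integer part:

```python
import math

# Highest order of constructive interference: m_max = floor(d / lambda) (sketch).
d = 0.0100e-3        # slit separation, m
lam = 633e-9         # wavelength, m

m_max = math.floor(d / lam)   # sin(theta) = m*lambda/d must not exceed 1
print(m_max)                  # expected: 15
```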
Summary
- Young’s double slit experiment gave definitive proof of the wave character of light.
- An interference pattern is obtained by the superposition of light from two slits.
- There is constructive interference when \(d\sin{\theta} = m \lambda \left(for m = 0,1,-1,2,-2,...\right)\), where \(d\) is the distance between the slits, \(\theta\) is the angle relative to the incident direction, and \(m\) is the order of the interference.
- There is destructive interference when \(d \sin{\theta} = \left( m + \frac{1}{2} \right) \lambda \left( for m = 0,1,-1,2,-2,...\right)\).
Glossary
- coherent
- waves are in phase or have a definite phase relationship
- constructive interference for a double slit
- the path length difference must be an integral multiple of the wavelength
- destructive interference for a double slit
- the path length difference must be a half-integral multiple of the wavelength
- incoherent
- waves have random phase relationships
- order
- the integer \(m\) used in the equations for constructive and destructive interference for a double slit
27.4: Multiple Slit Diffraction
Learning Objectives
By the end of this section, you will be able to:
- Discuss the pattern obtained from diffraction grating.
- Explain diffraction grating effects.
An interesting thing happens if you pass light through a large number of evenly spaced parallel slits, called a diffraction grating . An interference pattern is created that is very similar to the one formed by a double slit (see Figure 1). A diffraction grating can be manufactured by scratching glass with a sharp tool in a number of precisely positioned parallel lines, with the untouched regions acting like slits. These can be photographically mass produced rather cheaply. Diffraction gratings work both for transmission of light, as in Figure 1, and for reflection of light, as on butterfly wings and the Australian opal in Figure 2 or the CD pictured in the opening photograph of this chapter. In addition to their use as novelty items, diffraction gratings are commonly used for spectroscopic dispersion and analysis of light. What makes them particularly useful is the fact that they form a sharper pattern than double slits do. That is, their bright regions are narrower and brighter, while their dark regions are darker. Figure 3 shows idealized graphs demonstrating the sharper pattern. Natural diffraction gratings occur in the feathers of certain birds. Tiny, finger-like structures in regular patterns act as reflection gratings, producing constructive interference that gives the feathers colors not solely due to their pigmentation. This is called iridescence.
The analysis of a diffraction grating is very similar to that for a double slit (see Figure 4). As we know from our discussion of double slits in "Young's Double Slit Experiment," light is diffracted by each slit and spreads out after passing through. Rays traveling in the same direction (at an angle \(\theta\) relative to the incident direction) are shown in the figure. Each of these rays travels a different distance to a common point on a screen far away. The rays start in phase, and they can be in or out of phase when they reach a screen, depending on the difference in the path lengths traveled. As seen in the figure, each ray travels a distance \(d\sin{\theta}\) different from that of its neighbor, where \(d\) is the distance between slits. If this distance equals an integral number of wavelengths, the rays all arrive in phase, and constructive interference (a maximum) is obtained. Thus, the condition necessary to obtain constructive interference for a diffraction grating is \[d\sin{\theta} = m \lambda, for m=0,1,-1,2,-2,...\left(constructive\right)\label{27.5.1}\] where \(d\) is the distance between slits in the grating, \(\lambda\) is the wavelength of light, and \(m\) is the order of the maximum. Note that this is exactly the same equation as for double slits separated by \(d\). However, the slits are usually closer in diffraction gratings than in double slits, producing fewer maxima at larger angles.
Where are diffraction gratings used? Diffraction gratings are key components of monochromators used, for example, in optical imaging of particular wavelengths from biological or medical samples. A diffraction grating can be chosen to specifically analyze a wavelength emitted by molecules in diseased cells in a biopsy sample or to help excite strategic molecules in the sample with a selected frequency of light. Another vital use is in optical fiber technologies where fibers are designed to provide optimum performance at specific wavelengths. A range of diffraction gratings are available for selecting specific wavelengths for such use.
TAKE-HOME EXPERIMENT: RAINBOWS ON A CD
The spacing \(d\) of the grooves in a CD or DVD can be well determined by using a laser and the equation \(d\sin{\theta} = m \lambda,~for~m = 0,1,-1,2,-2,...\) However, we can still make a good estimate of this spacing by using white light and the rainbow of colors that comes from the interference. Reflect sunlight from a CD onto a wall and use your best judgment of the location of a strongly diffracted color to find the separation \(d\).
Example \(\PageIndex{1}\): Calculating Typical Diffraction Grating Effects
Diffraction gratings with 10,000 lines per centimeter are readily available. Suppose you have one, and you send a beam of white light through it to a screen 2.00 m away. (a) Find the angles for the first-order diffraction of the shortest and longest wavelengths of visible light (380 and 760 nm). (b) What is the distance between the ends of the rainbow of visible light produced on the screen for first-order interference? (See Figure.)
Strategy:
The angles can be found using the equation \[d\sin{\theta} = m \lambda, for m = 0,1,-1,2,-2,...\label{27.5.1}\] once a value for the slit spacing \(d\) has been determined. Since there are 10,000 lines per centimeter, each line is separated by \(1/10,000\) of a centimeter. Once the angles are found, the distances along the screen can be found using simple trigonometry.
Solution for (a):
The distance between slits is \(d = \left(1~cm\right) / 10,000 = 1.00 \times 10^{-4}~cm\) or \(1.00 \times 10^{-6}~m\). Let us call the two angles \(\theta_{V}\) for violet (380 nm) and \(\theta_{R}\) for red (760 nm). Solving the equation \(d\sin{\theta_{V}} = m \lambda_{V}\) for \(\sin{\theta_{V}}\), \[\sin{\theta_{V}} = \frac{m \lambda_{V}}{d}, \label{27.5.2}\] where \(m = 1\) for first order and \(\lambda_{V} = 380~nm = 3.80 \times 10^{-7}~m\). Substituting these values gives \[\sin{\theta_{V}} = \frac{3.80 \times 10^{-7}~m}{1.00 \times 10^{-6}~m} = 0.380.\] Thus the angle \(\theta_{V}\) is \[\theta_{V} = \sin^{-1}{0.380} = 22.33^{\circ}.\] Similarly, \[\sin{\theta_{R}} = \frac{7.60 \times 10^{-7}~m}{1.00 \times 10^{-6}~m} = 0.760.\] Thus the angle \(\theta_{R}\) is \[\theta_{R} = \sin^{-1}{0.760} = 49.46^{\circ}.\] Notice that in both equations, we reported the results of these intermediate calculations to four significant figures to use with the calculation in part (b).
Solution for (b):
The distances on the screen are labeled \(y_{V}\) and \(y_{R}\) in the figure. Noting that \(\tan{\theta} = y/x\), we can solve for \(y_{V}\) and \(y_{R}\). That is, \[y_{V} = x \tan{\theta_{V}} = \left( 2.00~m \right) \left(\tan{22.33^{\circ}}\right) = 0.822~m \label{27.5.3}\] and \[y_{R} = x \tan{\theta_{R}} = \left( 2.00~m \right) \left( \tan{49.46^{\circ}} \right) = 2.338~m. \label{27.5.4}\] The distance between them is therefore \[y_{R} - y_{V} = 1.52~m.\]
Discussion:
The large distance between the red and violet ends of the rainbow produced from the white light indicates the potential this diffraction grating has as a spectroscopic tool. The more it can spread out the wavelengths (greater dispersion), the more detail can be seen in a spectrum. This depends on the quality of the diffraction grating—it must be very precisely made in addition to having closely spaced lines.
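The angles and screen positions in this example can be reproduced with a short sketch (values as given; the screen distance is \(x = 2.00~m\)):

```python
import math

# First-order diffraction angles and screen positions for a 10,000 line/cm grating (sketch).
d = 1.0e-2 / 10_000      # slit spacing: (1 cm)/10,000 lines = 1.00e-6 m
x = 2.00                 # grating-to-screen distance, m

for name, lam in (("violet", 380e-9), ("red", 760e-9)):
    theta = math.asin(1 * lam / d)          # d sin(theta) = m*lambda with m = 1
    y = x * math.tan(theta)                 # position on the screen
    print(f"{name}: theta = {math.degrees(theta):.2f} deg, y = {y:.3f} m")
# the difference of the two y values is about 1.52 m
```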
Summary
- A diffraction grating is a large collection of evenly spaced parallel slits that produces an interference pattern similar to but sharper than that of a double slit.
- There is constructive interference for a diffraction grating when \(d\sin{\theta} = m \lambda \left( for m = 0,1,-1,2,-2,...\right)\), where \(d\) is the distance between slits in the grating, \(\lambda\) is the wavelength of light, and \(m\) is the order of the maximum.
Glossary
- constructive interference for a diffraction grating
- occurs when the condition \(d \sin{\theta} = m \lambda \left(for~m = 0,1,-1,2,-2,...\right)\) is satisfied, where \(d\) is the distance between slits in the grating, \(\lambda\) is the wavelength of light, and \(m\) is the order of the maximum
- diffraction grating
- a large number of evenly spaced parallel slits
27.5: Single Slit Diffraction
Learning Objectives
By the end of this section, you will be able to:
- Discuss the single slit diffraction pattern.
Light passing through a single slit forms a diffraction pattern somewhat different from those formed by double slits or diffraction gratings. Figure 1 shows a single slit diffraction pattern. Note that the central maximum is larger than those on either side, and that the intensity decreases rapidly on either side. In contrast, a diffraction grating produces evenly spaced lines that dim slowly on either side of center.
The analysis of single slit diffraction is illustrated in Figure 2. Here we consider light coming from different parts of the same slit. According to Huygens’s principle, every part of the wavefront in the slit emits wavelets. These are like rays that start out in phase and head in all directions. (Each ray is perpendicular to the wavefront of a wavelet.) Assuming the screen is very far away compared with the size of the slit, rays heading toward a common destination are nearly parallel. When they travel straight ahead, as in Figure 2a, they remain in phase, and a central maximum is obtained. However, when rays travel at an angle \(\theta\) relative to the original direction of the beam, each travels a different distance to a common location, and they can arrive in or out of phase. In Figure 2b, the ray from the bottom travels a distance of one wavelength \(\lambda\) farther than the ray from the top. Thus a ray from the center travels a distance \(\lambda / 2\) farther than the one on the left, arrives out of phase, and interferes destructively. A ray from slightly above the center and one from slightly above the bottom will also cancel one another. In fact, each ray from the slit will have another to interfere destructively, and a minimum in intensity will occur at this angle. There will be another minimum at the same angle to the right of the incident direction of the light.
At the larger angle shown in Figure 2c, the path lengths differ by \(3 \lambda / 2\) for rays from the top and bottom of the slit. One ray travels a distance \(\lambda\) different from the ray from the bottom and arrives in phase, interfering constructively. Two rays, each from slightly above those two, will also add constructively. Most rays from the slit will have another to interfere with constructively, and a maximum in intensity will occur at this angle. However, all rays do not interfere constructively for this situation, and so the maximum is not as intense as the central maximum. Finally, in Figure 2d, the angle shown is large enough to produce a second minimum. As seen in the figure, the difference in path length for rays from either side of the slit is \(D \sin{\theta}\), and we see that a destructive minimum is obtained when this distance is an integral multiple of the wavelength.
Thus, to obtain destructive interference for a single slit , \[D \sin{\theta} = m \lambda,~for~m = 1, -1, 2, -2, 3,... \left(destructive\right), \label{27.6.1}\] where \(D\) is the slit width, \(\lambda\) is the light's wavelength, \(\theta\) is the angle relative to the original direction of the light, and \(m\) is the order of the minimum. Figure 3 shows a graph of intensity for single slit interference, and it is apparent that the maxima on either side of the central maximum are much less intense and not as wide. This is consistent with the illustration in Figure 1b.
Example \(\PageIndex{1}\): Calculating Single Slit Diffraction
Visible light of wavelength 550 nm falls on a single slit and produces its second diffraction minimum at an angle of \(45.0^{\circ}\) relative to the incident direction of the light.
- What is the width of the slit?
- At what angle is the first minimum produced?
Strategy:
From the given information, and assuming the screen is far away from the slit, we can use the equation \(D \sin{\theta} = m \lambda\) first to find \(D\), and again to find the angle for the first minimum \(\theta_{1}\).
Solution (a):
We are given that \(\lambda = 550~nm\), \(m = 2\), and \(\theta_{2} = 45.0^{\circ}\). Solving the equation \(D \sin{\theta} = m \lambda\) for \(D\) and substituting known values gives \[D = \frac{m \lambda}{\sin{\theta_{2}}} = \frac{2\left(550~nm\right)}{\sin{45.0^{\circ}}} \label{27.6.2}\] \[= \frac{1100 \times 10^{-9}~m}{0.707}\] \[= 1.56 \times 10^{-6}~m.\]
Solution (b):
Solving the equation \(D \sin{\theta} = m \lambda\) for \(\sin{\theta_{1}}\) and substituting known values gives \[\sin{\theta_{1}} = \frac{m \lambda}{D} = \frac{1 \left(550 \times 10^{-9}~m \right)}{1.56 \times 10^{-6}~m} = 0.354. \label{27.6.3}\] Thus the angle \(\theta_{1}\) is \[\theta_{1} = \sin^{-1}{\left(0.354\right)} = 20.7^{\circ}. \label{27.6.4}\]
Discussion:
We see that the slit is narrow (it is only a few times greater than the wavelength of light). This is consistent with the fact that light must interact with an object comparable in size to its wavelength in order to exhibit significant wave effects such as this single slit diffraction pattern. We also see that the central maximum extends \(20.7^{\circ}\) on either side of the original beam, for a width of about \(41^{\circ}\). The angle between the first and second minima is only about \(24^{\circ} \left(45.0^{\circ} - 20.7^{\circ}\right)\). Thus the second maximum is only about half as wide as the central maximum.
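The two parts of this example as a sketch (same numbers; angles reported in degrees):

```python
import math

# Single-slit diffraction minima: D sin(theta) = m * lambda (sketch of the example).
lam = 550e-9                    # wavelength, m
theta2 = math.radians(45.0)     # second minimum (m = 2)

D = 2 * lam / math.sin(theta2)  # (a) slit width, ~1.56e-6 m
theta1 = math.degrees(math.asin(1 * lam / D))  # (b) first minimum, ~20.7 deg

print(f"D = {D:.3g} m, first minimum at {theta1:.1f} degrees")
```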
Summary
- A single slit produces an interference pattern characterized by a broad central maximum with narrower and dimmer maxima to the sides.
- There is destructive interference for a single slit when \(D \sin{\theta} = m \lambda,~ \left(for~m = 1, -1, 2, -2, 3, ...\right)\) where \(D\) is the slit width, \(\lambda\) is the light's wavelength, \(\theta\) is the angle relative to the original direction of the light, and \(m\) is the order of the minimum. Note that there is no \(m = 0\) minimum.
Glossary
- destructive interference for a single slit
- occurs when \(D \sin{\theta} = m \lambda, \left(for~m = 1, -1, 2, -2, 3, ...\right)\), where \(D\) is the slit width, \(\lambda\) is the light's wavelength, \(\theta\) is the angle relative to the original direction of the light, and \(m\) is the order of the minimum
27.6: Limits of Resolution- The Rayleigh Criterion
Learning Objectives
By the end of this section, you will be able to:
- Discuss the Rayleigh criterion.
Light diffracts as it moves through space, bending around obstacles, interfering constructively and destructively. While this can be used as a spectroscopic tool -- a diffraction grating disperses light according to wavelength, for example, and is used to produce spectra -- diffraction also limits the detail we can obtain in images. Figure \(\PageIndex{1a}\) shows the effect of passing light through a small circular aperture. Instead of a bright spot with sharp edges, a spot with a fuzzy edge surrounded by circles of light is obtained. This pattern is caused by diffraction similar to that produced by a single slit. Light from different parts of the circular aperture interferes constructively and destructively. The effect is most noticeable when the aperture is small, but the effect is there for large apertures, too.
How does diffraction affect the detail that can be observed when light passes through an aperture? Figure \(\PageIndex{1b}\) shows the diffraction pattern produced by two point light sources that are close to one another. The pattern is similar to that for a single point source, and it is just barely possible to tell that there are two light sources rather than one. If they were closer together, as in Figure \(\PageIndex{1c}\), we could not distinguish them, thus limiting the detail or resolution we can obtain. This limit is an inescapable consequence of the wave nature of light.
There are many situations in which diffraction limits the resolution. The acuity of our vision is limited because light passes through the pupil, the circular aperture of our eye. Be aware that the diffraction-like spreading of light is due to the limited diameter of a light beam, not the interaction with an aperture. Thus light passing through a lens with a diameter \(D\) shows this effect and spreads, blurring the image, just as light passing through an aperture of diameter \(D\) does. So diffraction limits the resolution of any system having a lens or mirror. Telescopes are also limited by diffraction, because of the finite diameter \(D\) of their primary mirror.
TAKE-HOME EXPERIMENT: Resolution of the Eye
Draw two lines on a white sheet of paper (several mm apart). How far away can you be and still distinguish the two lines? What does this tell you about the size of the eye’s pupil? Can you be quantitative? (The size of an adult’s pupil is discussed in "Physics of the Eye.")
Just what is the limit? To answer that question, consider the diffraction pattern for a circular aperture, which has a central maximum that is wider and brighter than the maxima surrounding it (similar to a slit) (Figure \(\PageIndex{2a}\)). It can be shown that, for a circular aperture of diameter \(D\), the first minimum in the diffraction pattern occurs at \(\theta = 1.22 \lambda / D\) (providing the aperture is large compared with the wavelength of light, which is the case for most optical instruments). The accepted criterion for determining the diffraction limit to resolution based on this angle was developed by Lord Rayleigh in the 19th century. The Rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. See Figure 2b. The first minimum is at an angle of \(\theta = 1.22 \lambda / D\), so that two point objects are just resolvable if they are separated by the angle
\[\theta = 1.22 \frac{\lambda}{D}, \label{27.7.1}\]
where \(\lambda\) is the wavelength of light (or other electromagnetic radiation) and \(D\) is the diameter of the aperture, lens, mirror, etc., with which the two objects are observed. In this expression, \(\theta\) has units of radians.
Limits to Knowledge
All attempts to observe the size and shape of objects are limited by the wavelength of the probe. Even the small wavelength of light prohibits exact precision. When probes with extremely small wavelengths are used, as with an electron microscope, the system is disturbed, still limiting our knowledge, much as making an electrical measurement alters a circuit. Heisenberg’s uncertainty principle asserts that this limit is fundamental and inescapable, as we shall see in quantum mechanics.
Example \(\PageIndex{1}\): Calculating Diffraction Limits of the Hubble Space Telescope
The primary mirror of the orbiting Hubble Space Telescope has a diameter of 2.40 m. Being in orbit, this telescope avoids the degrading effects of atmospheric distortion on its resolution.
- What is the angle between two just-resolvable point light sources (perhaps two stars)? Assume an average light wavelength of 550 nm.
- If these two stars are at the 2 million light year distance of the Andromeda galaxy, how close together can they be and still be resolved? (A light year, or ly, is the distance light travels in 1 year.)
Strategy:
The Rayleigh criterion stated in the equation \(\theta = 1.22 \frac{\lambda}{D}\) gives the smallest possible angle \(\theta\) between point sources, or the best obtainable resolution. Once this angle is found, the distance between stars can be calculated, since we are given how far away they are.
Solution (a):
The Rayleigh criterion for the minimum resolvable angle is given by Equation \ref{27.7.1}
\[\theta = 1.22 \frac{\lambda}{D}. \nonumber\]
Entering known values gives
\[\begin{align*}\theta &= 1.22 \frac{550 \times 10^{-9}\, m}{2.40\, m} \\[4pt] &= 2.80 \times 10^{-7}\, rad. \end{align*}\]
Solution (b):
The distance \(s\) between two objects a distance \(r\) away and separated by an angle \(\theta\) is
\[s = r\theta. \nonumber\]
Substituting known values gives
\[\begin{align*} s &= \left(2.0 \times 10^{6} ly \right) \left(2.80 \times 10^{-7} rad \right) \\[4pt] &= 0.56 \,ly. \end{align*}\]
Discussion:
The angle found in part (a) is extraordinarily small (less than 1/50,000 of a degree), because the primary mirror is so large compared with the wavelength of light. As noted, diffraction effects are most noticeable when light interacts with objects having sizes on the order of the wavelength of light. However, the effect is still there, and there is a diffraction limit to what is observable. The actual resolution of the Hubble Telescope is not quite as good as that found here. As with all instruments, there are other effects, such as non-uniformities in mirrors or aberrations in lenses, that further limit resolution. However, Figure \(\PageIndex{3}\) gives an indication of the extent of the detail observable with the Hubble because of its size and quality, and especially because it is above the Earth’s atmosphere.
The answer in part (b) indicates that two stars separated by about half a light year can be resolved. The average distance between stars in a galaxy is on the order of 5 light years in the outer parts and about 1 light year near the galactic center. Therefore, the Hubble can resolve most of the individual stars in the Andromeda galaxy, even though it lies at such a huge distance that its light takes 2 million years to reach us. Figure \(\PageIndex{4}\) shows another mirror used to observe radio waves from outer space.
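The arithmetic in this example is easy to check numerically. The short Python sketch below is illustrative only; the function and variable names are our own, and it simply evaluates the Rayleigh criterion with the numbers used above.

```python
def rayleigh_min_angle(wavelength_m, aperture_m):
    """Minimum resolvable angle (radians): theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

# Part (a): 550-nm light, 2.40-m primary mirror
theta = rayleigh_min_angle(550e-9, 2.40)
print(f"theta = {theta:.2e} rad")   # ~2.80e-07 rad

# Part (b): separation resolvable at the 2.0e6-ly distance of Andromeda.
# Small-angle relation s = r * theta; the result is in light years because r is.
s = 2.0e6 * theta
print(f"s = {s:.2f} ly")            # ~0.56 ly
```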
Diffraction is not only a problem for optical instruments but also for the electromagnetic radiation itself. Any beam of light having a finite diameter \(D\) and a wavelength \(\lambda\) exhibits diffraction spreading. The beam spreads out with an angle \(\theta\) given by the equation \(\theta = 1.22 \frac{\lambda}{D}\). For example, a laser beam made of rays as parallel as possible (angles between rays as close to \(\theta = 0^{\circ}\) as possible) instead spreads out at an angle \(\theta = 1.22 \lambda / D\), where \(D\) is the diameter of the beam and \(\lambda\) is its wavelength. This spreading is impossible to observe for a flashlight, because its beam is not very parallel to start with. However, for long-distance transmission of laser beams or microwave signals, diffraction spreading can be significant (Figure \(\PageIndex{5}\)). To avoid this, we can increase \(D\). This is done for laser light sent to the Moon to measure its distance from the Earth. The laser beam is expanded through a telescope to make \(D\) much larger and \(\theta\) smaller.
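To get a feel for why the beam is expanded, the following sketch compares the diffraction spreading for two beam diameters. The wavelength (532 nm), the Earth–Moon distance, and both beam diameters are illustrative assumptions, not values from the text.

```python
wavelength = 532e-9   # assumed green laser wavelength (m); illustrative only
distance = 3.84e8     # approximate Earth-Moon distance (m); illustrative only

for D in (0.01, 1.0):                 # hypothetical beam diameters: 1 cm raw, 1 m after telescope expansion
    theta = 1.22 * wavelength / D     # diffraction spreading angle
    spread = distance * theta         # order-of-magnitude spot size at the Moon (small-angle estimate)
    print(f"D = {D:4.2f} m -> theta = {theta:.2e} rad, spot ~ {spread / 1e3:.1f} km")
```

Expanding the beam from a centimeter to a meter shrinks the spot at the Moon from tens of kilometers to a few hundred meters, which is why the expansion is worth the trouble.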
In most biology laboratories, resolution is presented when the use of the microscope is introduced. The ability of a lens to produce sharp images of two closely spaced point objects is called resolution. The smaller the distance \(x\) by which two objects can be separated and still be seen as distinct, the greater the resolution. The resolving power of a lens is defined as that distance \(x\). An expression for resolving power is obtained from the Rayleigh criterion. In Figure \(\PageIndex{6a}\) we have two point objects separated by a distance \(x\). According to the Rayleigh criterion, resolution is possible when the minimum angular separation is
\[\theta = 1.22 \frac{\lambda}{D} = \frac{x}{d}, \label{27.7.3}\]
where \(d\) is the distance between the specimen and the objective lens, and we have used the small angle approximation (i.e., we have assumed that \(x\) is much smaller than \(d\)), so that \(\tan{\theta} \approx \sin{\theta} \approx \theta\).
Therefore, the resolving power is
\[x = 1.22 \frac{\lambda d}{D}. \label{27.7.4}\]
Another way to look at this is by re-examining the concept of Numerical Aperture (\(NA\)) discussed previously. There, \(NA\) is a measure of the maximum acceptance angle at which the fiber will take light and still contain it within the fiber. Figure \(\PageIndex{6b}\) shows a lens and an object at point P. The \(NA\) here is a measure of the ability of the lens to gather light and resolve fine detail. The angle subtended by the lens at its focus is defined to be \(\theta = 2\alpha\). From the figure and again using the small angle approximation, we can write
\[\sin{\alpha} = \frac{D/2}{d} = \frac{D}{2d}. \label{27.7.5}\]
The \(NA\) for a lens is \(NA = n \sin{\alpha}\), where \(n\) is the index of refraction of the medium between the objective lens and the object at point P.
From this definition for \(NA\), we can see that
\[x = 1.22\frac{\lambda d}{D} = 1.22 \frac{\lambda}{2 \sin{\alpha}} = 0.61 \frac{\lambda n}{NA}. \label{27.7.6}\]
In a microscope, \(NA\) is important because it relates to the resolving power of a lens. A lens with a large \(NA\) will be able to resolve finer details. Lenses with larger \(NA\) will also be able to collect more light and so give a brighter image. Another way to describe this situation is that the larger the \(NA\), the larger the cone of light that can be brought into the lens, and so more of the diffraction modes will be collected. Thus the microscope has more information to form a clear image, and so its resolving power will be higher.
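The expression \(x = 0.61 \lambda n / NA\) derived above is straightforward to evaluate. The minimal sketch below uses illustrative assumed values (550-nm light and a hypothetical dry objective with \(NA = 0.90\) in air); the function name is our own.

```python
def resolving_power(wavelength_m, NA, n=1.0):
    """Smallest resolvable separation, x = 0.61 * lambda * n / NA (expression derived above)."""
    return 0.61 * wavelength_m * n / NA

# Assumed example: 550-nm light, dry objective with NA = 0.90 in air (n = 1.00)
x = resolving_power(550e-9, NA=0.90, n=1.00)
print(f"x ~ {x * 1e9:.0f} nm")   # ~373 nm; details much finer than this are blurred together
```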
One of the consequences of diffraction is that the focal point of a beam has a finite width and intensity distribution. Consider focusing using geometric optics alone, as shown in Figure \(\PageIndex{7a}\). The focal point is infinitely small with a huge intensity and the capacity to incinerate most samples, irrespective of the \(NA\) of the objective lens. For wave optics, due to diffraction, the focal point spreads to become a focal spot (Figure \(\PageIndex{7b}\)), with the size of the spot decreasing with increasing \(NA\). Consequently, the intensity in the focal spot increases with increasing \(NA\). The higher the \(NA\), the greater the chances of photodegrading the specimen. However, the spot never becomes a true point.
Summary
- Diffraction limits resolution.
- For a circular aperture, lens, or mirror, the Rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other.
- This occurs for two point objects separated by the angle \(\theta = 1.22 \frac{\lambda}{D}\), where \(\lambda\) is the wavelength of light (or other electromagnetic radiation) and \(D\) is the diameter of the aperture, lens, mirror, etc. This equation also gives the angular spreading of a source of light having a diameter \(D\).
Glossary
- Rayleigh criterion
- two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other
27.7: Thin Film Interference
Learning Objectives
By the end of this section, you will be able to:
- Discuss the rainbow formation by thin films.
The bright colors seen in an oil slick floating on water or in a sunlit soap bubble are caused by interference. The brightest colors are those that interfere constructively. This interference is between light reflected from different surfaces of a thin film; thus, the effect is known as thin film interference. As noted before, interference effects are most prominent when light interacts with something having a size similar to its wavelength. A thin film is one having a thickness \(t\) smaller than a few times the wavelength of light, \(\lambda\). Since color is associated indirectly with \(\lambda\) and since all interference depends in some way on the ratio of \(\lambda\) to the size of the object involved, we should expect to see different colors for different thicknesses of a film, as in Figure \(\PageIndex{1}\).
What causes thin film interference? Figure \(\PageIndex{2}\) shows how light reflected from the top and bottom surfaces of a film can interfere. Incident light is only partially reflected from the top surface of the film (ray 1). The remainder enters the film and is itself partially reflected from the bottom surface. Part of the light reflected from the bottom surface can emerge from the top of the film (ray 2) and interfere with light reflected from the top (ray 1). Since the ray that enters the film travels a greater distance, it may be in or out of phase with the ray reflected from the top. However, consider for a moment, again, the bubbles in Figure \(\PageIndex{1}\). The bubbles are darkest where they are thinnest. Furthermore, if you observe a soap bubble carefully, you will note it gets dark at the point where it breaks. For very thin films, the difference in path lengths of ray 1 and ray 2 in Figure \(\PageIndex{2}\) is negligible; so why should they interfere destructively and not constructively? The answer is that a phase change can occur upon reflection. The rule is as follows:
When light reflects from a medium having an index of refraction greater than that of the medium in which it is traveling, a \(180^{\circ}\) phase change (or a \(\lambda /2\)) occurs.
If the film in Figure \(\PageIndex{2}\) is a soap bubble (essentially water with air on both sides), then there is a \(\lambda / 2\) shift for ray 1 and none for ray 2. Thus, when the film is very thin, the path length difference between the two rays is negligible, they are exactly out of phase, and destructive interference will occur at all wavelengths, so the soap bubble will be dark here.
The thickness of the film relative to the wavelength of light is the other crucial factor in thin film interference. Ray 2 in Figure \(\PageIndex{2}\) travels a greater distance than ray 1. For light incident perpendicular to the surface, ray 2 travels a distance approximately \(2t\) farther than ray 1. When this distance is an integral or half-integral multiple of the wavelength in the medium (\(\lambda_{n} = \lambda / n\), where \(\lambda\) is the wavelength in vacuum and \(n\) is the index of refraction), constructive or destructive interference occurs, depending also on whether there is a phase change in either ray.
Example \(\PageIndex{1}\): Calculating Non-reflective Lens Coating Using Thin Film Interference
Sophisticated cameras use a series of several lenses. Light can reflect from the surfaces of these various lenses and degrade image clarity. To limit these reflections, lenses are coated with a thin layer of magnesium fluoride that causes destructive thin film interference. What is the thinnest this film can be, if its index of refraction is 1.38 and it is designed to limit the reflection of 550-nm light, normally the most intense visible wavelength? The index of refraction of glass is 1.52.
Strategy
Refer to Figure \(\PageIndex{2}\) and use \(n_{1} = 1.00\) for air, \(n_{2} = 1.38\), and \(n_{3} = 1.52\). Both ray 1 and ray 2 will have a \(\lambda / 2\) shift upon reflection. Thus, to obtain destructive interference, ray 2 will need to travel a half wavelength farther than ray 1. For rays incident perpendicularly, the path length difference is \(2t\).
Solution
To obtain destructive interference here,
\[2t = \frac{\lambda_{n_{2}}}{2}, \nonumber\]
where \(\lambda_{n_{2}}\) is the wavelength in the film and is given by \(\lambda_{n_{2}} = \frac{\lambda}{n_{2}}\).
Thus,
\[2t = \frac{\lambda / n_{2}}{2}.\nonumber\]
Solving for \(t\) and entering known values yields
\[ \begin{align*} t &= \frac{\lambda / n_{2}}{4} \\[4pt] &= \frac{\left(550 nm \right) / 1.38}{4} \\[4pt] &= 99.6 nm. \end{align*}\]
Discussion
Films such as the one in this example are most effective in producing destructive interference when the thinnest layer is used, since light over a broader range of incident angles will be reduced in intensity. These films are called non-reflective coatings; this is only an approximately correct description, though, since other wavelengths will only be partially cancelled. Non-reflective coatings are used in car windows and sunglasses.
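The quarter-wavelength result of this example applies whenever the coating index lies between those of air and glass, since both reflections then pick up a \(\lambda/2\) shift and only the \(2t\) path difference matters. A minimal sketch (the function name is our own) that reproduces the 99.6-nm answer:

```python
def quarter_wave_coating(wavelength_nm, n_coating):
    """Minimum non-reflective coating thickness, t = lambda / (4 * n), valid when both
    reflections undergo a lambda/2 phase shift (n_air < n_coating < n_glass)."""
    return wavelength_nm / (4.0 * n_coating)

print(f"t = {quarter_wave_coating(550, 1.38):.1f} nm")   # ~99.6 nm, matching the example
```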
Thin film interference is most constructive or most destructive when the path length difference for the two rays is an integral or half-integral wavelength, respectively. That is, for rays incident perpendicularly, \(2t = \lambda_{n}, 2\lambda_{n}, 3\lambda_{n},...\) or \(2t = \lambda_{n}/2, 3\lambda_{n}/2, 5\lambda_{n}/2,...\) To know whether interference is constructive or destructive, you must also determine if there is a phase change upon reflection. Thin film interference thus depends on film thickness, the wavelength of light, and the refractive indices. For white light incident on a film that varies in thickness, you will observe rainbow colors of constructive interference for various wavelengths as the thickness varies.
Example \(\PageIndex{2}\): Soap Bubbles: More Than One Thickness can be Constructive
- What are the three smallest thicknesses of a soap bubble that produce constructive interference for red light with a wavelength of 650 nm? The index of refraction of soap is taken to be the same as that of water.
- What three smallest thicknesses will give destructive interference?
Strategy and Concept:
Use Figure \(\PageIndex{2}\) to visualize the bubble. Note that \(n_{1} = n_{3} = 1.00\) for air, and \(n_{2} = 1.333\) for soap (equivalent to water). There is a \(\lambda / 2\) shift for ray 1 reflected from the top surface of the bubble, and no shift for ray 2 reflected from the bottom surface. To get constructive interference, then, the path length difference (\(2t\)) must be a half-integral multiple of the wavelength -- the first three being \(\lambda_{n}/2, 3\lambda_{n}/2\), and \(5\lambda_{n}/2\). To get destructive interference, the path length difference must be an integral multiple of the wavelength -- the first three being \(0, \lambda_{n},\) and \(2\lambda_{n}\).
Solution (a):
Constructive interference occurs here when
\[2t_{c} = \frac{\lambda_{n}}{2}, \frac{3\lambda_{n}}{2}, \frac{5\lambda_{n}}{2},...\label{27.8.4}\]
The smallest constructive thickness \(t_{c}\) thus is
\[ \begin{align*} t_{c} &= \frac{\lambda_{n}}{4} = \frac{\lambda / n}{4} \\[4pt] &= \frac{\left(650 nm\right) / 1.333}{4} \label{27.8.5} \\[4pt] &= 122 nm. \end{align*}\]
The next thickness that gives constructive interference is \(t'_{c} = 3\lambda_{n}/4\), so that
\[t'_{c} = 366 nm. \label{27.8.6} \nonumber\]
Finally, the third thickness producing constructive interference is \(t''_{c} = 5\lambda_{n} / 4\), so that
\[t''_{c} = 610 nm. \label{27.8.7} \nonumber\]
Solution (b):
For destructive interference, the path length difference here is an integral multiple of the wavelength. The first occurs for zero thickness, since there is a phase change at the top surface. That is,
\[t_{d} = 0. \label{27.8.8}\]
The first non-zero thickness producing destructive interference is
\[2t'_{d} = \lambda_{n}. \label{27.8.9}\]
Substituting known values gives
\[t'_{d} = \frac{\lambda_{n}}{2} = \frac{\lambda / n}{2} = \frac{\left(650 nm \right) / 1.333}{2} = 244 nm. \label{27.8.10}\]
Finally, the third destructive thickness is \(2t''_{d} = 2\lambda_{n}\), so that
\[t''_{d} = \lambda_{n} = \frac{\lambda}{n} = \frac{650 nm}{1.333} = 488 nm. \label{27.8.11}\]
Discussion:
If the bubble was illuminated with pure red light, we would see bright and dark bands at very uniform increases in thickness. First would be a dark band at 0 thickness, then bright at 122 nm thickness, then dark at 244 nm, bright at 366 nm, dark at 488 nm, and bright at 610 nm. If the bubble varied smoothly in thickness, like a smooth wedge, then the bands would be evenly spaced.
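The thickness series found in this example is easy to generate programmatically. The sketch below assumes the soap-bubble geometry of Figure \(\PageIndex{2}\) (a \(\lambda/2\) shift at the top surface only); the function name is our own.

```python
def film_thicknesses(wavelength_nm, n_film, count=3, constructive=True):
    """Smallest thicknesses (nm) of a film with a lambda/2 shift at one surface only,
    such as a soap bubble in air.
    Constructive reflection: 2t = (m + 1/2) * lambda_n;  destructive: 2t = m * lambda_n."""
    lam_n = wavelength_nm / n_film                       # wavelength in the film
    if constructive:
        return [(m + 0.5) * lam_n / 2 for m in range(count)]
    return [m * lam_n / 2 for m in range(count)]

print([round(t) for t in film_thicknesses(650, 1.333, constructive=True)])    # [122, 366, 610]
print([round(t) for t in film_thicknesses(650, 1.333, constructive=False)])   # [0, 244, 488]
```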
Another example of thin film interference can be seen when microscope slides are separated (Figure \(\PageIndex{3}\)). The slides are very flat, so that the wedge of air between them increases in thickness very uniformly. A phase change occurs at the second surface but not the first, and so there is a dark band where the slides touch. The rainbow colors of constructive interference repeat, going from violet to red again and again as the distance between the slides increases. As the layer of air increases, the bands become more difficult to see, because slight changes in incident angle have greater effects on path length differences. If pure-wavelength light instead of white light is used, then bright and dark bands are obtained rather than repeating rainbow colors.
An important application of thin film interference is found in the manufacturing of optical instruments. A lens or mirror can be compared with a master as it is being ground, allowing it to be shaped to an accuracy of less than a wavelength over its entire surface. Figure \(\PageIndex{4}\) illustrates the phenomenon called Newton’s rings, which occurs when the plane surfaces of two lenses are placed together. (The circular bands are called Newton’s rings because Isaac Newton described them and their use in detail. Newton did not discover them; Robert Hooke did, and Newton did not believe they were due to the wave character of light.) Each successive ring of a given color indicates an increase of only one wavelength in the distance between the lens and the blank, so that great precision can be obtained. Once the lens is perfect, there will be no rings.
The wings of certain moths and butterflies have nearly iridescent colors due to thin film interference. In addition to pigmentation, the wing’s color is affected greatly by constructive interference of certain wavelengths reflected from its film-coated surface. Car manufacturers are offering special paint jobs that use thin film interference to produce colors that change with angle. This expensive option is based on variation of thin film path length differences with angle. Security features on credit cards, banknotes, driving licenses and similar items prone to forgery use thin film interference, diffraction gratings, or holograms. Australia led the way with dollar bills printed on polymer with a diffraction grating security feature making the currency difficult to forge. Other countries such as New Zealand and Taiwan are using similar technologies, while the United States currency includes a thin film interference effect.
MAKING CONNECTIONS: TAKE-HOME EXPERIMENT -- THIN FILM INTERFERENCE
One feature of thin film interference and diffraction gratings is that the pattern shifts as you change the angle at which you look or move your head. Find examples of thin film interference and gratings around you. Explain how the patterns change for each specific example. Find examples where the thickness changes giving rise to changing colors. If you can find two microscope slides, then try observing the effect shown in Figure \(\PageIndex{3}\). Try separating one end of the two slides with a hair or maybe a thin piece of paper and observe the effect.
Problem-Solving Strategies for Wave Optics
- Step 1. Examine the situation to determine that interference is involved. Identify whether slits or thin film interference are considered in the problem.
- Step 2. If slits are involved, note that diffraction gratings and double slits produce very similar interference patterns, but that gratings have narrower (sharper) maxima. Single slit patterns are characterized by a large central maximum and smaller maxima to the sides.
- Step 3. If thin film interference is involved, take note of the path length difference between the two rays that interfere. Be certain to use the wavelength in the medium involved, since it differs from the wavelength in vacuum. Note also that there is an additional \(\lambda / 2\) phase shift when light reflects from a medium with a greater index of refraction.
- Step 4. Identify exactly what needs to be determined in the problem (identify the unknowns). A written list is useful. Draw a diagram of the situation. Labeling the diagram is useful.
- Step 5. Make a list of what is given or can be inferred from the problem as stated (identify the knowns).
- Step 6. Solve the appropriate equation for the quantity to be determined (the unknown), and enter the knowns. Slits, gratings, and the Rayleigh limit involve equations.
- Step 7. For thin film interference, you will have constructive interference for a total shift that is an integral number of wavelengths. You will have destructive interference for a total shift of a half-integral number of wavelengths. Always keep in mind that crest to crest is constructive whereas crest to trough is destructive.
- Step 8. Check to see if the answer is reasonable: Does it make sense? Angles in interference patterns cannot be greater than \(90^{\circ}\), for example.
Summary
- Thin film interference occurs between the light reflected from the top and bottom surfaces of a film. In addition to the path length difference, there can be a phase change.
- When light reflects from a medium having an index of refraction greater than that of the medium in which it is traveling, a \(180^{\circ}\) phase change (or a \(\lambda /2\) shift) occurs.
Glossary
- thin film interference
- interference between light reflected from different surfaces of a thin film
27.8: Polarization
Learning Objectives
By the end of this section, you will be able to:
- Discuss the meaning of polarization.
- Discuss the property of optical activity of certain materials.
Polaroid sunglasses are familiar to most of us. They have a special ability to cut the glare of light reflected from water or glass (Figure \(\PageIndex{1}\)). Polaroids have this ability because of a wave characteristic of light called polarization. What is polarization? How is it produced? What are some of its uses? The answers to these questions are related to the wave character of light.
Light is one type of electromagnetic (EM) wave. As noted earlier, EM waves are transverse waves consisting of varying electric and magnetic fields that oscillate perpendicular to the direction of propagation (Figure \(\PageIndex{2}\)). There are specific directions for the oscillations of the electric and magnetic fields. Polarization is the attribute that a wave’s oscillations have a definite direction relative to the direction of propagation of the wave. (This is not the same type of polarization as that discussed for the separation of charges.) Waves having such a direction are said to be polarized . For an EM wave, we define the direction of polarization to be the direction parallel to the electric field. Thus we can think of the electric field arrows as showing the direction of polarization, as in Figure \(\PageIndex{2}\).
To examine this further, consider the transverse waves in the ropes shown in Figure \(\PageIndex{3}\). The oscillations in one rope are in a vertical plane and are said to be vertically polarized . Those in the other rope are in a horizontal plane and are horizontally polarized . If a vertical slit is placed on the first rope, the waves pass through. However, a vertical slit blocks the horizontally polarized waves. For EM waves, the direction of the electric field is analogous to the disturbances on the ropes.
The Sun and many other light sources produce waves that are randomly polarized (Figure \(\PageIndex{4}\)). Such light is said to be unpolarized because it is composed of many waves with all possible directions of polarization.
Polaroid materials, invented by the founder of Polaroid Corporation, Edwin Land, act as a polarizing slit for light, allowing only polarization in one direction to pass through. Polarizing filters are composed of long molecules aligned in one direction. Thinking of the molecules as many slits, analogous to those for the oscillating ropes, we can understand why only light with a specific polarization can get through. The axis of a polarizing filter is the direction along which the filter passes the electric field of an EM wave (Figure \(\PageIndex{5}\)).
Figure \(\PageIndex{6}\) shows the effect of two polarizing filters on originally unpolarized light. The first filter polarizes the light along its axis. When the axes of the first and second filters are aligned (parallel), then all of the polarized light passed by the first filter is also passed by the second. If the second polarizing filter is rotated, only the component of the light parallel to the second filter’s axis is passed. When the axes are perpendicular, no light is passed by the second.
Only the component of the EM wave parallel to the axis of a filter is passed. Let us call the angle between the direction of polarization and the axis of a filter \(\theta\). If the electric field has an amplitude \(E\), then the transmitted part of the wave has an amplitude \(E \cos{\theta}\) (Figure \(\PageIndex{7}\)). Since the intensity of a wave is proportional to its amplitude squared, the intensity \(I\) of the transmitted wave is related to the incident wave by
\[I = I_{0}\cos^{2}{\theta}, \label{27.9.1}\]
where \(I_{0}\) is the intensity of the polarized wave before passing through the filter. Equation \ref{27.9.1} is known as Malus’s law.
Example \(\PageIndex{1}\): Calculating Intensity Reduction by a Polarizing Filter
What angle is needed between the direction of polarized light and the axis of a polarizing filter to reduce its intensity by \(90.0 \% \)?
Strategy:
When the intensity is reduced by \(90.0 \%\), it is \(10.0 \%\) or 0.100 times its original value. That is, \(I = 0.100 I_{0}\). Using this information, the equation \(I = I_{0}\cos^{2}{\theta}\) can be used to solve for the needed angle.
Solution
Solving the equation \(I = I_{0} \cos^{2}{\theta}\) for \(\cos{\theta}\) and substituting with the relationship between \(I\) and \(I_{0}\) gives
\[\cos{\theta} = \sqrt{\frac{I}{I_{0}}} = \sqrt{\frac{0.100I_{0}}{I_{0}}} = 0.3162.\label{27.9.2}\]
Solving for \(\theta\) yields
\[\theta = \cos^{-1}{\left(0.3162\right)} = 71.6^{\circ}. \label{27.9.3}\]
Discussion:
A fairly large angle between the direction of polarization and the filter axis is needed to reduce the intensity to \(10.0 \%\) of its original value. This seems reasonable based on experimenting with polarizing films. It is interesting that, at an angle of \(45^{\circ}\), the intensity is reduced to \(50\%\) of its original value (as you will show in this section’s Problems & Exercises). Note that \(71.6^{\circ}\) is \(18.4^{\circ}\) from reducing the intensity to zero, and that at an angle of \(18.4^{\circ}\) the intensity is reduced to \(90.0\%\) of its original value (as you will also show in Problems & Exercises), giving evidence of symmetry.
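Malus's law calculations like this one are simple to check numerically. A minimal sketch (the function names are our own) that reproduces the \(71.6^{\circ}\), \(45^{\circ}\), and \(18.4^{\circ}\) results mentioned in the discussion:

```python
import math

def malus_fraction(theta_deg):
    """Transmitted fraction I/I0 = cos^2(theta) for polarized light through a filter."""
    return math.cos(math.radians(theta_deg)) ** 2

def angle_for_fraction(fraction):
    """Filter angle (degrees) that transmits the given fraction of the incident intensity."""
    return math.degrees(math.acos(math.sqrt(fraction)))

print(f"{angle_for_fraction(0.100):.1f} deg")   # 71.6 deg, as found in the example
print(f"{malus_fraction(45.0):.2f}")            # 0.50 -- half the intensity at 45 deg
print(f"{malus_fraction(18.4):.3f}")            # 0.900 -- 90.0% transmitted at 18.4 deg
```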
Polarization by Reflection
By now you can probably guess that Polaroid sunglasses cut the glare in reflected light because that light is polarized. You can check this for yourself by holding Polaroid sunglasses in front of you and rotating them while looking at light reflected from water or glass. As you rotate the sunglasses, you will notice the light gets bright and dim, but not completely black. This implies the reflected light is partially polarized and cannot be completely blocked by a polarizing filter.
Figure \(\PageIndex{8}\) illustrates what happens when unpolarized light is reflected from a surface. Vertically polarized light is preferentially refracted at the surface, so that the reflected light is left more horizontally polarized. The reasons for this phenomenon are beyond the scope of this text, but a convenient mnemonic for remembering this is to imagine the polarization direction to be like an arrow. Vertical polarization would be like an arrow perpendicular to the surface and would be more likely to stick and not be reflected. Horizontal polarization is like an arrow bouncing on its side and would be more likely to be reflected. Sunglasses with vertical axes would then block more reflected light than unpolarized light from other sources.
Since the part of the light that is not reflected is refracted, the amount of polarization depends on the indices of refraction of the media involved. It can be shown that reflected light is completely polarized at an angle of reflection \(\theta_{b}\), given by \[\tan{\theta_{b}} = \frac{n_{2}}{n_{1}}, \label{27.9.4}\] where \(n_{1}\) is the index of refraction of the medium in which the incident and reflected light travel and \(n_{2}\) is the index of refraction of the medium that forms the interface that reflects the light. This equation is known as Brewster's law, and \(\theta_{b}\) is known as Brewster's angle, named after the 19th-century Scottish physicist who discovered them.
THINGS GREAT AND SMALL: ATOMIC EXPLANATION OF POLARIZING FILTERS:
Polarizing filters have a polarization axis that acts as a slit. This slit passes electromagnetic waves (often visible light) that have an electric field parallel to the axis. This is accomplished with long molecules aligned perpendicular to the axis as shown in Figure \(\PageIndex{9}\).
Figure \(\PageIndex{10}\) illustrates how the component of the electric field parallel to the long molecules is absorbed. An electromagnetic wave is composed of oscillating electric and magnetic fields. The electric field is strong compared with the magnetic field and is more effective in exerting force on charges in the molecules. The most affected charged particles are the electrons in the molecules, since electron masses are small. If the electron is forced to oscillate, it can absorb energy from the EM wave. This reduces the fields in the wave and, hence, reduces its intensity. In long molecules, electrons can more easily oscillate parallel to the molecule than in the perpendicular direction. The electrons are bound to the molecule and are more restricted in their movement perpendicular to the molecule. Thus, the electrons can absorb EM waves that have a component of their electric field parallel to the molecule. The electrons are much less responsive to electric fields perpendicular to the molecule and will allow those fields to pass. Thus the axis of the polarizing filter is perpendicular to the length of the molecule.
Example \(\PageIndex{2}\): Calculating Polarization by Reflection
- At what angle will light traveling in air be completely polarized horizontally when reflected from water?
- From glass?
Strategy:
All we need to solve these problems are the indices of refraction. Air has \(n_{1} = 1.00\), water has \(n_{2} = 1.333\), and crown glass has \(n'_{2} = 1.520\). The equation \(\tan{\theta_{b}} = \frac{n_{2}}{n_{1}}\) can be directly applied to find \(\theta_{b}\) in each case.
Solution (a):
Putting the known quantities into the equation
\[\tan{\theta_{b}} = \frac{n_{2}}{n_{1}} \nonumber\]
gives
\[\tan{\theta_{b}} = \frac{n_{2}}{n_{1}} = \frac{1.333}{1.00} = 1.333.\]
Solving for the angle \(\theta_{b}\) yields
\[\theta_{b} = \tan^{-1}{\left(1.333\right)} = 53.1^{\circ}.\]
Solution (b):
Similarly, for crown glass and air, \[\tan{\theta_{b}'} = \frac{n'_{2}}{n_{1}} = \frac{1.520}{1.00} = 1.52.\] Thus, \[\theta_{b}' = \tan^{-1}{\left(1.52\right)} = 56.7^{\circ}.\]
Discussion:
Light reflected at these angles could be completely blocked by a good polarizing filter held with its axis vertical. Brewster’s angle for water and air are similar to those for glass and air, so that sunglasses are equally effective for light reflected from either water or glass under similar circumstances. Light not reflected is refracted into these media. So at an incident angle equal to Brewster’s angle, the refracted light will be slightly polarized vertically. It will not be completely polarized vertically, because only a small fraction of the incident light is reflected, and so a significant amount of horizontally polarized light is refracted.
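Brewster-angle calculations reduce to a one-line evaluation of \(\tan^{-1}(n_{2}/n_{1})\). A minimal sketch (the function name is our own) reproducing both parts of this example:

```python
import math

def brewster_angle(n2, n1=1.00):
    """Brewster's angle in degrees: tan(theta_b) = n2 / n1."""
    return math.degrees(math.atan(n2 / n1))

print(f"water:       {brewster_angle(1.333):.1f} deg")    # 53.1 deg
print(f"crown glass: {brewster_angle(1.520):.1f} deg")    # 56.7 deg
```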
Polarization by Scattering
If you hold your Polaroid sunglasses in front of you and rotate them while looking at blue sky, you will see the sky get bright and dim. This is a clear indication that light scattered by air is partially polarized. Figure \(\PageIndex{11}\) helps illustrate how this happens. Since light is a transverse EM wave, it vibrates the electrons of air molecules perpendicular to the direction it is traveling. The electrons then radiate like small antennae. Since they are oscillating perpendicular to the direction of the light ray, they produce EM radiation that is polarized perpendicular to the direction of the ray. When viewing the light along a line perpendicular to the original ray, as in Figure \(\PageIndex{11}\), there can be no polarization in the scattered light parallel to the original ray, because that would require the original ray to be a longitudinal wave. Along other directions, a component of the other polarization can be projected along the line of sight, and the scattered light will only be partially polarized. Furthermore, multiple scattering can bring light to your eyes from other directions and can contain different polarizations.
Photographs of the sky can be darkened by polarizing filters, a trick used by many photographers to make clouds brighter by contrast. Scattering from other particles, such as smoke or dust, can also polarize light. Detecting polarization in scattered EM waves can be a useful analytical tool in determining the scattering source.
There is a range of optical effects used in sunglasses. Besides being Polaroid, other sunglasses have colored pigments embedded in them, while others use non-reflective or even reflective coatings. A recent development is photochromic lenses, which darken in the sunlight and become clear indoors. Photochromic lenses are embedded with organic microcrystalline molecules that change their properties when exposed to UV in sunlight, but become clear in artificial lighting with no UV.
TAKE-HOME EXPERIMENT: POLARIZATION
Find Polaroid sunglasses and rotate one while holding the other still and look at different surfaces and objects. Explain your observations. What is the difference in angle from when you see a maximum intensity to when you see a minimum intensity? Find a reflective glass surface and do the same. At what angle does the glass need to be oriented to give minimum glare?
Liquid Crystals and Other Polarization Effects in Materials
While you are undoubtedly aware of liquid crystal displays (LCDs) found in watches, calculators, computer screens, cellphones, flat screen televisions, and other myriad places, you may not be aware that they are based on polarization. Liquid crystals are so named because their molecules can be aligned even though they are in a liquid. Liquid crystals have the property that they can rotate the polarization of light passing through them by \(90^{\circ}\). Furthermore, this property can be turned off by the application of a voltage, as illustrated in Figure \(\PageIndex{12}\). It is possible to manipulate this characteristic quickly and in small well-defined regions to create the contrast patterns we see in so many LCD devices.
In flat screen LCD televisions, there is a large light at the back of the TV. The light travels to the front screen through millions of tiny units called pixels (picture elements). One of these is shown in Figure \(\PageIndex{12a}\) and \(\PageIndex{12b}\). Each unit has three cells, with red, blue, or green filters, each controlled independently. When the voltage across a liquid crystal is switched off, the liquid crystal passes the light through the particular filter. One can vary the picture contrast by varying the strength of the voltage applied to the liquid crystal.
Many crystals and solutions rotate the plane of polarization of light passing through them. Such substances are said to be optically active . Examples include sugar water, insulin, and collagen (Figure \(\PageIndex{13}\)). In addition to depending on the type of substance, the amount and direction of rotation depends on a number of factors. Among these is the concentration of the substance, the distance the light travels through it, and the wavelength of light. Optical activity is due to the asymmetric shape of molecules in the substance, such as being helical. Measurements of the rotation of polarized light passing through substances can thus be used to measure concentrations, a standard technique for sugars. It can also give information on the shapes of molecules, such as proteins, and factors that affect their shapes, such as temperature and pH.
Glass and plastic become optically active when stressed; the greater the stress, the greater the effect. Optical stress analysis on complicated shapes can be performed by making plastic models of them and observing them through crossed filters, as seen in Figure \(\PageIndex{14}\). It is apparent that the effect depends on wavelength as well as stress. The wavelength dependence is sometimes also used for artistic purposes.
Another interesting phenomenon associated with polarized light is the ability of some crystals to split an unpolarized beam of light into two. Such crystals are said to be birefringent (Figure \(\PageIndex{15}\)). Each of the separated rays has a specific polarization. One behaves normally and is called the ordinary ray, whereas the other does not obey Snell’s law and is called the extraordinary ray. Birefringent crystals can be used to produce polarized beams from unpolarized light. Some birefringent materials preferentially absorb one of the polarizations. These materials are called dichroic and can produce polarization by this preferential absorption. This is fundamentally how polarizing filters and other polarizers work. The interested reader is invited to further pursue the numerous properties of materials related to polarization.
Summary
- Polarization is the attribute that wave oscillations have a definite direction relative to the direction of propagation of the wave.
- EM waves are transverse waves that may be polarized.
- The direction of polarization is defined to be the direction parallel to the electric field of the EM wave.
- Unpolarized light is composed of many rays having random polarization directions.
- Light can be polarized by passing it through a polarizing filter or other polarizing material. The intensity \(I\) of polarized light after passing through a polarizing filter is \(I = I_{0}\cos^{2}{\theta}\), where \(I_{0}\) is the original intensity and \(\theta\) is the angle between the direction of polarization and the axis of the filter.
- Polarization is also produced by reflection.
- Brewster’s law states that reflected light is completely polarized at the angle of reflection \(\theta_{b}\), known as Brewster’s angle, given by \(\tan{\theta_{b}} = \frac{n_{2}}{n_{1}}\), where \(n_{1}\) is the medium in which the incident and reflected light travel and \(n_{2}\) is the index of refraction of the medium that forms the interface that reflects the light.
- Polarization can also be produced by scattering.
- There are a number of types of optically active substances that rotate the direction of polarization of light passing through them.
Glossary
- axis of a polarizing filter
- the direction along which the filter passes the electric field of an EM wave
- birefringent
- crystals that split an unpolarized beam of light into two beams
- Brewster’s angle
- \(\theta_{b} = \tan^{-1}{\left(\frac{n_{2}}{n_{1}}\right)}\), where \(n_{2}\) is the index of refraction of the medium from which the light is reflected and \(n_{1}\) is the index of refraction of the medium in which the reflected light travels
- Brewster’s law
- \(\tan{\theta_{b}} = \frac{n_{2}}{n_{1}}\), where \(n_{1}\) is the medium in which the incident and reflected light travel and \(n_{2}\) is the index of refraction of the medium that forms the interface that reflects the light
- direction of polarization
- the direction parallel to the electric field for EM waves
- horizontally polarized
- the oscillations are in a horizontal plane
- optically active
- substances that rotate the plane of polarization of light passing through them
- polarization
- the attribute that wave oscillations have a definite direction relative to the direction of propagation of the wave
- polarized
- waves having the electric and magnetic field oscillations in a definite direction
- reflected light that is completely polarized
- light reflected at the angle of reflection \(\theta_{b}\), known as Brewster’s angle
- unpolarized
- waves that are randomly polarized
- vertically polarized
- the oscillations are in a vertical plane
27.9: Microscopy Enhanced by the Wave Characteristics of Light
Learning Objectives
By the end of this section, you will be able to:
- Discuss the different types of microscopes.
Physics research underpins the advancement of developments in microscopy. As we gain knowledge of the wave nature of electromagnetic waves and methods to analyze and interpret signals, new microscopes that enable us to “see” more are being developed. It is the evolution and newer generation of microscopes that are described in this section.
The use of microscopes (microscopy) to observe small details is limited by the wave nature of light. Because light diffracts significantly around small objects, it is impossible to observe details much smaller than the wavelength of light. One rule of thumb has it that all details smaller than about \(\lambda\) are difficult to observe. Radar, for example, can detect the size of an aircraft, but not its individual rivets, since the wavelength of most radar is several centimeters or greater. Similarly, visible light cannot detect individual atoms, since atoms are about 0.1 nm in size and visible wavelengths range from 380 to 760 nm. Ironically, special techniques used to obtain the best possible resolution with microscopes take advantage of the same wave characteristics of light that ultimately limit the detail.
MAKING CONNECTIONS: Waves
All attempts to observe the size and shape of objects are limited by the wavelength of the probe. Sonar and medical ultrasound are limited by the wavelength of sound they employ. We shall see that this is also true in electron microscopy, since electrons have a wavelength. Heisenberg’s uncertainty principle asserts that this limit is fundamental and inescapable, as we shall see in quantum mechanics.
The most obvious method of obtaining better detail is to utilize shorter wavelengths. Ultraviolet (UV) microscopes have been constructed with special lenses that transmit UV rays and utilize photographic or electronic techniques to record images. The shorter UV wavelengths allow somewhat greater detail to be observed, but drawbacks, such as the hazard of UV to living tissue and the need for special detection devices and lenses (which tend to be dispersive in the UV), severely limit the use of UV microscopes. Elsewhere, we will explore practical uses of very short wavelength EM waves, such as x rays, and other short-wavelength probes, such as electrons in electron microscopes, to detect small details.
Another difficulty in microscopy is the fact that many microscopic objects do not absorb much of the light passing through them. The lack of contrast makes image interpretation very difficult. Contrast is the difference in intensity between objects and the background on which they are observed. Stains (such as dyes, fluorophores, etc.) are commonly employed to enhance contrast, but these tend to be application specific. More general wave interference techniques can be used to produce contrast. Figure \(\PageIndex{1}\) shows the passage of light through a sample. Since the indices of refraction differ, the number of wavelengths in the paths differs. Light emerging from the object is thus out of phase with light from the background and will interfere differently, producing enhanced contrast, especially if the light is coherent and monochromatic -- as in laser light.
Interference microscopes enhance contrast between objects and background by superimposing a reference beam of light upon the light emerging from the sample. Since light from the background and objects differ in phase, there will be different amounts of constructive and destructive interference, producing the desired contrast in final intensity. Figure \(\PageIndex{2}\) shows schematically how this is done. Parallel rays of light from a source are split into two beams by a half-silvered mirror. These beams are called the object and reference beams. Each beam passes through identical optical elements, except that the object beam passes through the object we wish to observe microscopically. The light beams are recombined by another half-silvered mirror and interfere. Since the light rays passing through different parts of the object have different phases, the interference differs from point to point, producing greater contrast between them.
Another type of microscope utilizing wave interference and differences in phases to enhance contrast is called the phase-contrast microscope . While its principle is the same as the interference microscope, the phase-contrast microscope is simpler to use and construct. Its impact (and the principle upon which it is based) was so important that its developer, the Dutch physicist Frits Zernike (1888–1966), was awarded the Nobel Prize in 1953. Figure \(\PageIndex{3}\) shows the basic construction of a phase-contrast microscope. Phase differences between light passing through the object and background are produced by passing the rays through different parts of a phase plate (so called because it shifts the phase of the light passing through it). These two light rays are superimposed in the image plane, producing contrast due to their interference.
A polarization microscope also enhances contrast by utilizing a wave characteristic of light. Polarization microscopes are useful for objects that are optically active or birefringent, particularly if those characteristics vary from place to place in the object. Polarized light is sent through the object and then observed through a polarizing filter that is perpendicular to the original polarization direction. Nearly transparent objects can then appear with strong color and in high contrast. Many polarization effects are wavelength dependent, producing color in the processed image. Contrast results from the action of the polarizing filter in passing only components parallel to its axis.
Apart from the UV microscope, the variations of microscopy discussed so far in this section are available as attachments to fairly standard microscopes or as slight variations. The next level of sophistication is provided by commercial confocal microscopes , to obtain three-dimensional images rather than two-dimensional images. Here, only a single plane or region of focus is identified; out-of-focus regions above and below this plane are subtracted out by a computer so the image quality is much better. This type of microscope makes use of fluorescence, where a laser provides the excitation light. Laser light passing through a tiny aperture called a pinhole forms an extended focal region within the specimen. The reflected light passes through the objective lens to a second pinhole and the photomultiplier detector (Figure \(\PageIndex{4}\)). The second pinhole is the key here and serves to block much of the light from points that are not at the focal point of the objective lens. The pinhole is conjugate (coupled) to the focal point of the lens. The second pinhole and detector are scanned, allowing reflected light from a small region or section of the extended focal region to be imaged at any one time. The out-of-focus light is excluded. Each image is stored in a computer, and a full scanned image is generated in a short time. Live cell processes can also be imaged at adequate scanning speeds allowing the imaging of three-dimensional microscopic movement. Confocal microscopy enhances images over conventional optical microscopy, especially for thicker specimens, and so has become quite popular.
The next level of sophistication is provided by microscopes attached to instruments that isolate and detect only a small wavelength band of light -- monochromators and spectral analyzers. Here, the monochromatic light from a laser is scattered from the specimen. This scattered light shifts up or down as it excites particular energy levels in the sample. The uniqueness of the observed scattered light can give detailed information about the chemical composition of a given spot on the sample with high contrast -- like molecular fingerprints. Applications are in materials science, nanotechnology, and the biomedical field. Fine details in biochemical processes over time can even be detected. The ultimate in microscopy is the electron microscope -- to be discussed later. Research is being conducted into the development of new prototype microscopes that can become commercially available, providing better diagnostic and research capacities.
Summary
- To improve microscope images, various techniques utilizing the wave characteristics of light have been developed. Many of these enhance contrast with interference effects.
Glossary
- confocal microscopes
- microscopes that use the extended focal region to obtain three-dimensional images rather than two-dimensional images
- contrast
- the difference in intensity between objects and the background on which they are observed
- interference microscopes
- microscopes that enhance contrast between objects and background by superimposing a reference beam of light upon the light emerging from the sample
- phase-contrast microscope
- microscope utilizing wave interference and differences in phases to enhance contrast
- polarization microscope
- microscope that enhances contrast by utilizing a wave characteristic of light, useful for objects that are optically active
- ultraviolet (UV) microscopes
- microscopes constructed with special lenses that transmit UV rays and utilize photographic or electronic techniques to record images
27.E: Vision and Optical Instruments (Exercise)
Conceptual Questions
27.1: The Wave Aspect of Light: Interference
1. What type of experimental evidence indicates that light is a wave?
2. Give an example of a wave characteristic of light that is easily observed outside the laboratory.
27.2: Huygens's Principle: Diffraction
3. How do wave effects depend on the size of the object with which the wave interacts? For example, why does sound bend around the corner of a building while light does not?
4. Under what conditions can light be modeled like a ray? Like a wave?
5. Go outside in the sunlight and observe your shadow. It has fuzzy edges even if you do not. Is this a diffraction effect? Explain.
6. Why does the wavelength of light decrease when it passes from vacuum into a medium? State which attributes change and which stay the same and, thus, require the wavelength to decrease.
7. Does Huygens’s principle apply to all types of waves?
27.3: Young’s Double Slit Experiment
8. Young’s double slit experiment breaks a single light beam into two sources. Would the same pattern be obtained for two independent sources of light, such as the headlights of a distant car? Explain.
9. Suppose you use the same double slit to perform Young’s double slit experiment in air and then repeat the experiment in water. Do the angles to the same parts of the interference pattern get larger or smaller? Does the color of the light change? Explain.
10. Is it possible to create a situation in which there is only destructive interference? Explain.
11. The figure shows the central part of the interference pattern for a pure wavelength of red light projected onto a double slit. The pattern is actually a combination of single slit and double slit interference. Note that the bright spots are evenly spaced. Is this a double slit or single slit characteristic? Note that some of the bright spots are dim on either side of the center. Is this a single slit or double slit characteristic? Which is smaller, the slit width or the separation between slits? Explain your responses.
This double slit interference pattern also shows signs of single slit interference. (credit: PASCO)
27.4: Multiple Slit Diffraction
12. What is the advantage of a diffraction grating over a double slit in dispersing light into a spectrum?
13. What are the advantages of a diffraction grating over a prism in dispersing light for spectral analysis?
14. Can the lines in a diffraction grating be too close together to be useful as a spectroscopic tool for visible light? If so, what type of EM radiation would the grating be suitable for? Explain.
15. If a beam of white light passes through a diffraction grating with vertical lines, the light is dispersed into rainbow colors on the right and left. If a glass prism disperses white light to the right into a rainbow, how does the sequence of colors compare with that produced on the right by a diffraction grating?
16. Suppose pure-wavelength light falls on a diffraction grating. What happens to the interference pattern if the same light falls on a grating that has more lines per centimeter? What happens to the interference pattern if a longer-wavelength light falls on the same grating? Explain how these two effects are consistent in terms of the relationship of wavelength to the distance between slits.
17. Suppose a feather appears green but has no green pigment. Explain in terms of diffraction.
27.5: Single Slit Diffraction
19. As the width of the slit producing a single-slit diffraction pattern is reduced, how will the diffraction pattern produced change?
27.6: Limits of Resolution: The Rayleigh Criterion
20. A beam of light always spreads out. Why can a beam not be created with parallel rays to prevent spreading? Why can lenses, mirrors, or apertures not be used to correct the spreading?
27.7: Thin Film Interference
21. What effect does increasing the wedge angle have on the spacing of interference fringes? If the wedge angle is too large, fringes are not observed. Why?
22. How is the difference in paths taken by two originally in-phase light waves related to whether they interfere constructively or destructively? How can this be affected by reflection? By refraction?
23. Is there a phase change in the light reflected from either surface of a contact lens floating on a person’s tear layer? The index of refraction of the lens is about 1.5, and its top surface is dry.
24. In placing a sample on a microscope slide, a glass cover is placed over a water drop on the glass slide. Light incident from above can reflect from the top and bottom of the glass cover and from the glass slide below the water drop. At which surfaces will there be a phase change in the reflected light?
25. Answer the above question if the fluid between the two pieces of crown glass is carbon disulfide.
26. While contemplating the food value of a slice of ham, you notice a rainbow of color reflected from its moist surface. Explain its origin.
27. An inventor notices that a soap bubble is dark at its thinnest and realizes that destructive interference is taking place for all wavelengths. How could she use this knowledge to make a non-reflective coating for lenses that is effective at all wavelengths? That is, what limits would there be on the index of refraction and thickness of the coating? How might this be impractical?
28. A non-reflective coating like the one described in Example works ideally for a single wavelength and for perpendicular incidence. What happens for other wavelengths and other incident directions? Be specific.
29. Why is it much more difficult to see interference fringes for light reflected from a thick piece of glass than from a thin film? Would it be easier if monochromatic light were used?
27.8: Polarization
30. Under what circumstances is the phase of light changed by reflection? Is the phase related to polarization?
31. Can a sound wave in air be polarized? Explain.
32. No light passes through two perfect polarizing filters with perpendicular axes. However, if a third polarizing filter is placed between the original two, some light can pass. Why is this? Under what circumstances does most of the light pass?
33. Explain what happens to the energy carried by light when it is dimmed by passing it through two crossed polarizing filters.
34. When particles scattering light are much smaller than its wavelength, the amount of scattering is proportional to \(\displaystyle 1/λ^4\). Does this mean there is more scattering for small \(\displaystyle λ\) than large \(\displaystyle λ\)? How does this relate to the fact that the sky is blue?
35. Using the information given in the preceding question, explain why sunsets are red.
36. When light is reflected at Brewster’s angle from a smooth surface, it is \(\displaystyle 100%\) polarized parallel to the surface. Part of the light will be refracted into the surface. Describe how you would do an experiment to determine the polarization of the refracted light. What direction would you expect the polarization to have and would you expect it to be \(\displaystyle 100%\)?
27.9: *Extended Topic* Microscopy Enhanced by the Wave Characteristics of Light
37. Explain how microscopes can use wave optics to improve contrast and why this is important.
38. A bright white light under water is collimated and directed upon a prism. What range of colors does one see emerging?
Problems & Exercises
27.1: The Wave Aspect of Light: Interference
39. Show that when light passes from air to water, its wavelength decreases to 0.750 times its original value.
Solution
\(\displaystyle \lambda_n = \frac{\lambda}{n} = \frac{\lambda}{1.333} = 0.750\lambda\)
40. Find the range of visible wavelengths of light in crown glass.
41. What is the index of refraction of a material for which the wavelength of light is 0.671 times its value in a vacuum? Identify the likely substance.
Solution
1.49, Polystyrene
42. Analysis of an interference effect in a clear solid shows that the wavelength of light in the solid is 329 nm. Knowing this light comes from a He-Ne laser and has a wavelength of 633 nm in air, is the substance zircon or diamond?
43. What is the ratio of thicknesses of crown glass and water that would contain the same number of wavelengths of light?
Solution
0.877 glass to water
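These index-of-refraction answers are easy to spot-check. The short Python sketch below applies \(\lambda_n = \lambda / n\), assuming the tabulated indices \(n = 1.333\) for water, 1.49 for polystyrene, 1.92 for zircon, and 1.52 for crown glass:

```python
# Wavelength in a medium: lambda_n = lambda / n
# Assumed table values: water 1.333, polystyrene 1.49, zircon 1.92, crown glass 1.52
print(1 / 1.333)      # Problem 39: wavelength ratio, ~0.750
print(1 / 0.671)      # Problem 41: n ~ 1.49, consistent with polystyrene
print(633 / 329)      # Problem 42: n ~ 1.92, zircon rather than diamond
print(1.333 / 1.52)   # Problem 43: thickness ratio of glass to water, ~0.877
```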
27.3: Young’s Double Slit Experiment
44. At what angle is the first-order maximum for 450-nm wavelength blue light falling on double slits separated by 0.0500 mm?
Solution
\(\displaystyle 0.516^{\circ}\)
45. Calculate the angle for the third-order maximum of 580-nm wavelength yellow light falling on double slits separated by 0.100 mm.
46. What is the separation between two slits for which 610-nm orange light has its first maximum at an angle of \(\displaystyle 30.0^{\circ}\)?
Solution
\(\displaystyle 1.22 \times 10^{-6} m\)
47. Find the distance between two slits that produces the first minimum for 410-nm violet light at an angle of \(\displaystyle 45.0^{\circ}\).
48. Calculate the wavelength of light that has its third minimum at an angle of \(\displaystyle 30.0^{\circ}\) when falling on double slits separated by \(\displaystyle 3.00 \mu m\). Explicitly show how you follow the steps in "Problem-Solving Strategies for Wave Optics."
Solution
\(\displaystyle 600 nm\)
49. What is the wavelength of light falling on double slits separated by \(\displaystyle 2.00 \mu m\) if the third-order maximum is at an angle of \(\displaystyle 60.0^{\circ}\)?
50. At what angle is the fourth-order maximum for the situation in the first exercise?
Solution
\(\displaystyle 2.06^{\circ}\)
51. What is the highest-order maximum for 400-nm light falling on double slits separated by \(\displaystyle 25.0 \mu m\)?
52. Find the largest wavelength of light falling on double slits separated by \(\displaystyle 1.20 \mu m\) for which there is a first-order maximum. Is this in the visible part of the spectrum?
Solution
1200 nm (not visible)
53. What is the smallest separation between two slits that will produce a second-order maximum for 720-nm red light?
54. (a) What is the smallest separation between two slits that will produce a second-order maximum for any visible light?
(b) For all visible light?
Solution
(a) 760 nm
(b) 1520 nm
55. (a) If the first-order maximum for pure-wavelength light falling on a double slit is at an angle of \(\displaystyle 10.0^{\circ}\), at what angle is the second-order maximum?
(b) What is the angle of the first minimum?
(c) What is the highest-order maximum possible here?
56. The figure shows a double slit located a distance \(\displaystyle x\) from a screen, with the distance from the center of the screen given by \(\displaystyle y\). When the distance \(\displaystyle d\) between the slits is relatively large, there will be numerous bright spots, called fringes. Show that, for small angles (where \(\displaystyle \sin{\theta} \approx \theta\), with \(\displaystyle \theta\) in radians), the distance between fringes is given by \(\displaystyle \delta y = x \lambda / d\).
The distance between adjacent fringes is \(\displaystyle \delta y = x \lambda / d\), assuming the slit separation \(\displaystyle d\) is large compared with \(\displaystyle \lambda\).
Solution
For small angles, \(\displaystyle \sin{\theta} \approx \tan{\theta} \approx \theta\) (in radians).
For two adjacent fringes we have, \[d \sin{\theta_{m}} = m \lambda \tag{27.4.5}\] and \[d \sin{\theta_{m+1}} = \left( m + 1 \right) \lambda \tag{27.4.6}\] Subtracting these equations gives \[d \left( \sin{\theta_{m+1}} - \sin{\theta_{m}}\right) = \left[ \left( m+1 \right) - m \right] \lambda \tag{27.4.7}\] \[d \left( \theta_{m+1} - \theta_{m} \right) = \lambda \tag{27.4.8}\] \[\tan{\theta_{m}} = \frac{y_{m}}{x} \approx \theta_{m} \rightarrow d \left( \frac{y_{m+1}}{x} - \frac{y_{m}}{x} \right) = \lambda \tag{27.4.9}\] \[d \frac{\delta y}{x} = \lambda \rightarrow \delta y = \frac{x \lambda}{d} \tag{27.4.10}\]
57. Using the result of the problem above, calculate the distance between fringes for 633-nm light falling on double slits separated by 0.0800 mm, located 3.00 m from a screen as in Figure 8.
58. Using the result of the problem two problems prior, find the wavelength of light that produces fringes 7.50 mm apart on a screen 2.00 m from double slits separated by 0.120 mm (see Figure 8).
Solution
450 nm
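The answers in this section can be verified numerically from \(d \sin{\theta} = m \lambda\) and the fringe-spacing result \(\delta y = x \lambda / d\) derived above; the following is a minimal Python sketch using the numbers from Problems 44 and 58:

```python
from math import asin, degrees

# Problem 44: d*sin(theta) = m*lambda for the first-order maximum
wavelength, d = 450e-9, 0.0500e-3          # meters
print(degrees(asin(wavelength / d)))       # ~0.516 degrees

# Problem 58: fringe spacing delta_y = x*lambda/d, solved for lambda
delta_y, x, d58 = 7.50e-3, 2.00, 0.120e-3  # meters
print(delta_y * d58 / x)                   # ~4.5e-7 m, i.e. 450 nm
```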
27.4: Multiple Slit Diffraction
59. A diffraction grating has 2000 lines per centimeter. At what angle will the first-order maximum be for 520-nm-wavelength green light?
Solution
\(\displaystyle 5.97^{\circ}\)
60. Find the angle for the third-order maximum for 580-nm-wavelength yellow light falling on a diffraction grating having 1500 lines per centimeter.
61. How many lines per centimeter are there on a diffraction grating that gives a first-order maximum for 470-nm blue light at an angle of \(\displaystyle 25.0^{\circ}\)?
Solution
\(\displaystyle 8.99 \times 10^{3}\)
62. What is the distance between lines on a diffraction grating that produces a second-order maximum for 760-nm red light at an angle of \(\displaystyle 60.0^{\circ}\)?
63. Calculate the wavelength of light that has its second-order maximum at \(\displaystyle 45.0^{\circ}\) when falling on a diffraction grating that has 5000 lines per centimeter.
Solution
707 nm
64. An electric current through hydrogen gas produces several distinct wavelengths of visible light. What are the wavelengths of the hydrogen spectrum, if they form first-order maxima at angles of \(\displaystyle 24.2^{\circ}\), \(\displaystyle 25.7^{\circ}\), \(\displaystyle 29.1^{\circ}\), and \(\displaystyle 41.0^{\circ}\) when projected on a diffraction grating having 10,000 lines per centimeter? Explicitly show how you follow the steps in "Problem-Solving Strategies for Wave Optics."
65. (a) What do the four angles in the above problem become if a 5000-line-per-centimeter diffraction grating is used?
(b) Using this grating, what would the angles be for the second-order maxima?
(c) Discuss the relationship between integral reductions in lines per centimeter and the new angles of various order maxima.
Solution
(a) \(\displaystyle 11.8^{\circ}\), \(\displaystyle 12.5^{\circ}\), \(\displaystyle 14.1^{\circ}\), \(\displaystyle 19.2^{\circ}\)
(b) \(\displaystyle 24.2^{\circ}\), \(\displaystyle 25.7^{\circ}\), \(\displaystyle 29.1^{\circ}\), \(\displaystyle 41.0^{\circ}\)
(c) Decreasing the number of lines per centimeter by a factor of x means that the angle for the x-order maximum is the same as the original angle for the first-order maximum.
66. What is the maximum number of lines per centimeter a diffraction grating can have and produce a complete first-order spectrum for visible light?
67. The yellow light from a sodium vapor lamp seems to be of pure wavelength, but it produces two first-order maxima at \(\displaystyle 36.093^{\circ}\) and \(\displaystyle 36.129^{\circ}\) when projected on a 10,000 line per centimeter diffraction grating. What are the two wavelengths to an accuracy of 0.1 nm?
Solution
589.1 nm and 589.6 nm
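A quick Python check of this result, using \(d \sin{\theta} = m \lambda\) with the slit spacing implied by 10,000 lines per centimeter:

```python
from math import sin, radians

d = 1e-2 / 10_000                          # grating spacing in m for 10,000 lines per cm
for angle in (36.093, 36.129):
    print(d * sin(radians(angle)) * 1e9)   # ~589.1 nm and ~589.6 nm
```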
68. What is the spacing between structures in a feather that acts as a reflection grating, given that they produce a first-order maximum for 525-nm light at a \(\displaystyle 30.0^{\circ}\) angle?
69. Structures on a bird feather act like a reflection grating having 8000 lines per centimeter. What is the angle of the first-order maximum for 600-nm light?
Solution
\(\displaystyle 28.7^{\circ}\)
70. An opal such as that shown in Figure 2 acts like a reflection grating with rows separated by about \(\displaystyle 8 \mu m\). If the opal is illuminated normally,
(a) at what angle will red light be seen and
(b) at what angle will blue light be seen?
71. At what angle does a diffraction grating produce a second-order maximum for light having a first-order maximum at \(\displaystyle 20.0^{\circ}\)?
Solution
\(\displaystyle 43.2^{\circ}\)
72. Show that a diffraction grating cannot produce a second-order maximum for a given wavelength of light unless the first-order maximum is at an angle less than \(\displaystyle 30.0^{\circ}\).
73. If a diffraction grating produces a first-order maximum for the shortest wavelength of visible light at \(\displaystyle 30.0^{\circ}\), at what angle will the first-order maximum be for the longest wavelength of visible light?
Solution
\(\displaystyle 90.0^{\circ}\)
74. (a) Find the maximum number of lines per centimeter a diffraction grating can have and produce a maximum for the smallest wavelength of visible light.
(b) Would such a grating be useful for ultraviolet spectra?
(c) For infrared spectra?
75. (a) Show that a 30,000-line-per-centimeter grating will not produce a maximum for visible light.
(b) What is the longest wavelength for which it does produce a first-order maximum?
(c) What is the greatest number of lines per centimeter a diffraction grating can have and produce a complete second-order spectrum for visible light?
Solution
(a) The longest wavelength is 333.3 nm, which is not visible.
(b) 333 nm (UV)
(c) \(\displaystyle 6.58 \times 10^{3}\) lines per cm
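The reasoning behind these limits can be checked directly, assuming the visible range of 380–760 nm used elsewhere in these answers:

```python
# Problem 75: a 30,000 line/cm grating and the limit for a full second-order spectrum
d = 1e-2 / 30_000           # slit spacing in m
print(d * 1e9)              # ~333 nm: the longest first-order wavelength (UV, not visible)

d_min = 2 * 760e-9          # second order of 760-nm red light needs d >= 2*lambda
print(1e-2 / d_min)         # ~6.58e3 lines per cm
```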
76. A He–Ne laser beam is reflected from the surface of a CD onto a wall. The brightest spot is the reflected beam at an angle equal to the angle of incidence. However, fringes are also observed. If the wall is 1.50 m from the CD, and the first fringe is 0.600 m from the central maximum, what is the spacing of grooves on the CD?
77. The analysis shown in the figure below also applies to diffraction gratings with lines separated by a distance \(\displaystyle d\). What is the distance between fringes produced by a diffraction grating having 125 lines per centimeter for 600-nm light, if the screen is 1.50 m away?
The distance between adjacent fringes is \(\displaystyle \delta y = x \lambda / d\), assuming the slit separation \(\displaystyle d\) is large compared with \(\displaystyle \lambda\).
Solution
\(\displaystyle 1.13 \times 10^{-2} m\)
78. Unreasonable Results:
Red light of wavelength 700 nm falls on a double slit separated by 400 nm.
(a) At what angle is the first-order maximum in the diffraction pattern?
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
79. Unreasonable Results:
(a) What visible wavelength has its fourth-order maximum at an angle of \(\displaystyle 25.0^{\circ}\) when projected on a 25,000-line-per-centimeter diffraction grating?
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
Solution
(a) 42.3 nm
(b) Not a visible wavelength
(c) The number of slits in this diffraction grating is too large. Etching in integrated circuits can be done to a resolution of 50 nm, so slit separations of 400 nm are at the limit of what we can do today. This line spacing is too small to produce diffraction of light.
80. Construct Your Own Problem:
Consider a spectrometer based on a diffraction grating. Construct a problem in which you calculate the distance between two wavelengths of electromagnetic radiation in your spectrometer. Among the things to be considered are the wavelengths you wish to be able to distinguish, the number of lines per meter on the diffraction grating, and the distance from the grating to the screen or detector. Discuss the practicality of the device in terms of being able to discern between wavelengths of interest.
27.5: Single Slit Diffraction
81. (a) At what angle is the first minimum for 550-nm light falling on a single slit of width \(\displaystyle 1.00 \mu m\)?
(b) Will there be a second minimum?
Solution
(a) \(\displaystyle 33.4^{\circ}\)
(b) No
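A minimal Python check of this answer, using the single-slit minima condition \(D \sin{\theta} = m \lambda\):

```python
from math import asin, degrees

# Minima of a single slit of width D occur at D*sin(theta) = m*lambda
wavelength, D = 550e-9, 1.00e-6
sin_theta1 = wavelength / D
print(degrees(asin(sin_theta1)))   # ~33.4 degrees for the first minimum
print(2 * sin_theta1)              # 1.10 > 1, so no second minimum exists
```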
82. (a) Calculate the angle at which a \(\displaystyle 2.00 - \mu m\)-wide slit produces its first minimum for 410-nm violet light.
(b) Where is the first minimum for 700-nm red light?
83. (a) How wide is a single slit that produces its first minimum for 633-nm light at an angle of \(\displaystyle 28.0^{\circ}\)?
(b) At what angle will the second minimum be?
Solution
(a) \(\displaystyle 1.35 \times 10^{-6} m\)
(b) \(\displaystyle 69.9^{\circ}\)
84. (a) What is the width of a single slit that produces its first minimum at \(\displaystyle 60.0^{\circ}\) for 600-nm light?
(b) Find the wavelength of light that has its first minimum at \(\displaystyle 62.0^{\circ}\).
85. Find the wavelength of light that has its third minimum at an angle of \(\displaystyle 48.6^{\circ}\) when it falls on a single slit of width \(\displaystyle 3.00 \mu m\).
Solution
750 nm
86. Calculate the wavelength of light that produces its first minimum at an angle of \(\displaystyle 36.9^{\circ}\) when falling on a single slit of width \(\displaystyle 1.00 \mu m\).
87. (a) Sodium vapor light averaging 589 nm in wavelength falls on a single slit of width \(\displaystyle 7.50 \mu m\). At what angle does it produce its second minimum?
(b) What is the highest-order minimum produced?
Solution
(a) \(\displaystyle 9.04^{\circ}\)
(b) 12
88. (a) Find the angle of the third diffraction minimum for 633-nm light falling on a slit of width \(\displaystyle 20.0 \mu m\).
(b) What slit width would place this minimum at \(\displaystyle 85.0^{\circ}\)? Explicitly show how you follow the steps in "Problem-Solving Strategies for Wave Optics."
89. (a) Find the angle between the first minima for the two sodium vapor lines, which have wavelengths of 589.1 and 589.6 nm, when they fall upon a single slit of width \(\displaystyle 2.00 \mu m\).
(b) What is the distance between these minima if the diffraction pattern falls on a screen 1.00 m from the slit?
(c) Discuss the ease or difficulty of measuring such a distance.
Solution
(a) \(\displaystyle 0.0150^{\circ}\)
(b) 0.262 mm
(c) This distance is not easily measured by the human eye, but under a microscope or magnifying glass it is quite easily measurable.
90. (a) What is the minimum width of a single slit (in multiples of \(\displaystyle \lambda\)) that will produce a first minimum for a wavelength \(\displaystyle \lambda\)?
(b) What is its minimum width if it produces 50 minima?
(c) 1000 minima?
91. (a) If a single slit produces a first minimum at \(\displaystyle 14.5^{\circ}\), at what angle is the second-order minimum?
(b) What is the angle of the third-order minimum?
(c) Is there a fourth-order minimum?
(d) Use your answers to illustrate how the angular width of the central maximum is about twice the angular width of the next maximum (which is the angle between the first and second minima).
Solution
(a) \(\displaystyle 30.1^{\circ}\)
(b) \(\displaystyle 48.7^{\circ}\)
(c) No
(d) \(\displaystyle 2 \theta_{1} = \left(2\right)\left(14.5^{\circ}\right) = 29^{\circ}, \theta_{2} - \theta_{1} = 30.05^{\circ} - 14.5^{\circ} = 15.55^{\circ}.\) Thus, \(\displaystyle 29^{\circ} \approx \left(2\right)\left(15.55^{\circ}\right) = 31.1^{\circ}\).
92. A double slit produces a diffraction pattern that is a combination of single and double slit interference. Find the ratio of the width of the slits to the separation between them, if the first minimum of the single slit pattern falls on the fifth maximum of the double slit pattern. (This will greatly reduce the intensity of the fifth maximum.)
93. Integrated Concepts:
A water break at the entrance to a harbor consists of a rock barrier with a 50.0-m-wide opening. Ocean waves of 20.0-m wavelength approach the opening straight on. At what angle to the incident direction are the boats inside the harbor most protected against wave action?
Solution
\(\displaystyle 23.6^{\circ}, 53.1^{\circ}\)
94. Integrated Concepts:
An aircraft maintenance technician walks past a tall hangar door that acts like a single slit for sound entering the hangar. Outside the door, on a line perpendicular to the opening in the door, a jet engine makes a 600-Hz sound. At what angle with the door will the technician observe the first minimum in sound intensity if the vertical opening is 0.800 m wide and the speed of sound is 340 m/s?
27.6: Limits of Resolution: The Rayleigh Criterion
95. The 300-m-diameter Arecibo radio telescope pictured in Figure 4 detects radio waves with a 4.00 cm average wavelength.
(a) What is the angle between two just-resolvable point sources for this telescope?
(b) How close together could these point sources be at the 2 million light year distance of the Andromeda galaxy?
Solution
(a) \(\displaystyle 1.63 \times 10^{-4} rad\)
(b) 326 ly
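This and the other resolution answers follow from the Rayleigh criterion \(\theta = 1.22 \lambda / D\); a short Python check for the Arecibo numbers:

```python
# Rayleigh criterion: theta = 1.22 * lambda / D
wavelength, D = 4.00e-2, 300.0     # meters (radio wavelength and dish diameter)
theta = 1.22 * wavelength / D
print(theta)                       # ~1.63e-4 rad
print(theta * 2.0e6)               # ~326 ly separation at 2 million light years
```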
96. Assuming the angular resolution found for the Hubble Telescope in the "Calculating Diffraction Limits of the Hubble Space Telescope" example, what is the smallest detail that could be observed on the Moon?
97. Diffraction spreading for a flashlight is insignificant compared with other limitations in its optics, such as spherical aberrations in its mirror. To show this, calculate the minimum angular spreading of a flashlight beam that is originally 5.00 cm in diameter with an average wavelength of 600 nm.
Solution
\(\displaystyle 1.46 \times 10^{-5} rad\)
98. (a) What is the minimum angular spread of a 633-nm wavelength He-Ne laser beam that is originally 1.00 mm in diameter?
(b) If this laser is aimed at a mountain cliff 15.0 km away, how big will the illuminated spot be?
(c) How big a spot would be illuminated on the Moon, neglecting atmospheric effects? (This might be done to hit a corner reflector to measure the round-trip time and, hence, distance.) Explicitly show how you follow the steps in "Problem-Solving Strategies for Wave Optics."
99. A telescope can be used to enlarge the diameter of a laser beam and limit diffraction spreading. The laser beam is sent through the telescope in the direction opposite to normal use and can then be projected onto a satellite or the Moon.
(a) If this is done with the Mount Wilson telescope, producing a 2.54-m-diameter beam of 633-nm light, what is the minimum angular spread of the beam?
(b) Neglecting atmospheric effects, what is the size of the spot this beam would make on the Moon, assuming a lunar distance of \(\displaystyle 3.84 \times 10^{8} m\)?
Solution
(a) \(\displaystyle 3.04 \times 10^{-7} rad\)
(b) diameter of 235 m
100. The limit to the eye’s acuity is actually related to diffraction by the pupil.
(a) What is the angle between two just-resolvable points of light for a 3.00-mm-diameter pupil, assuming an average wavelength of 550 nm?
(b) Take your result to be the practical limit for the eye. What is the greatest possible distance a car can be from you if you can resolve its two headlights, given they are 1.30 m apart?
(c) What is the distance between two just-resolvable points held at an arm’s length (0.800 m) from your eye?
(d) How does your answer to (c) compare to details you normally observe in everyday circumstances?
101. What is the minimum diameter mirror on a telescope that would allow you to see details as small as 5.00 km on the Moon some 384,000 km away? Assume an average wavelength of 550 nm for the light received.
Solution
5.15 cm
102. You are told not to shoot until you see the whites of their eyes. If the eyes are separated by 6.5 cm and the diameter of your pupil is 5.0 mm, at what distance can you resolve the two eyes using light of wavelength 555 nm?
103. (a) The planet Pluto and its Moon Charon are separated by 19,600 km. Neglecting atmospheric effects, should the 5.08-m-diameter Mount Palomar telescope be able to resolve these bodies when they are \(\displaystyle 4.50 \times 10^{9} km\) from Earth? Assume an average wavelength of 550 nm.
(b) In actuality, it is just barely possible to discern that Pluto and Charon are separate bodies using an Earth-based telescope. What are the reasons for this?
Solution
(a) Yes; the telescope should easily be able to resolve them.
(b) The fact that it is just barely possible to discern that these are separate bodies indicates the severity of atmospheric aberrations.
104. The headlights of a car are 1.3 m apart. What is the maximum distance at which the eye can resolve these two headlights? Take the pupil diameter to be 0.40 cm.
105. When dots are placed on a page from a laser printer, they must be close enough so that you do not see the individual dots of ink. To do this, the separation of the dots must be less than Rayleigh’s criterion. Take the pupil of the eye to be 3.0 mm and the distance from the paper to the eye of 35 cm; find the minimum separation of two dots such that they cannot be resolved. How many dots per inch (dpi) does this correspond to?
106. Unreasonable Results:
An amateur astronomer wants to build a telescope with a diffraction limit that will allow him to see if there are people on the moons of Jupiter.
(a) What diameter mirror is needed to be able to see 1.00 m detail on a Jovian Moon at a distance of \(\displaystyle 7.50 \times 10^{8} km\) from Earth? The wavelength of light averages 600 nm.
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
107. Construct Your Own Problem:
Consider diffraction limits for an electromagnetic wave interacting with a circular object. Construct a problem in which you calculate the limit of angular resolution with a device, using this circular object (such as a lens, mirror, or antenna) to make observations. Also calculate the limit to spatial resolution (such as the size of features observable on the Moon) for observations at a specific distance from the device. Among the things to be considered are the wavelength of electromagnetic radiation used, the size of the circular object, and the distance to the system or phenomenon being observed.
27.7: Thin Film Interference
108. A soap bubble is 100 nm thick and illuminated by white light incident perpendicular to its surface. What wavelength and color of visible light is most constructively reflected, assuming the same index of refraction as water?
Solution
532 nm (green)
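A quick check of this answer, using the constructive-reflection condition \(2nt = \left(m + \frac{1}{2}\right)\lambda\) for a film with a phase shift at its top surface only, and assuming \(n = 1.33\) for the soap film:

```python
# Constructive reflection from a soap film (one phase shift, at the top surface):
# 2*n*t = (m + 1/2)*lambda, so lambda = 4*n*t / (2*m + 1)
n, t = 1.33, 100e-9
for m in range(3):
    print(4 * n * t / (2 * m + 1) * 1e9)   # m = 0 gives ~532 nm (green); larger m fall in the UV
```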
109. An oil slick on water is 120 nm thick and illuminated by white light incident perpendicular to its surface. What color does the oil appear (what is the most constructively reflected wavelength), given its index of refraction is 1.40?
110. Calculate the minimum thickness of an oil slick on water that appears blue when illuminated by white light perpendicular to its surface. Take the blue wavelength to be 470 nm and the index of refraction of oil to be 1.40.
Solution
83.9 nm
111. Find the minimum thickness of a soap bubble that appears red when illuminated by white light perpendicular to its surface. Take the wavelength to be 680 nm, and assume the same index of refraction as water.
112. A film of soapy water (\(\displaystyle n=1.33\)) on top of a plastic cutting board has a thickness of 233 nm. What color is most strongly reflected if it is illuminated perpendicular to its surface?
Solution
620 nm (orange)
113. What are the three smallest non-zero thicknesses of soapy water (\(\displaystyle n=1.33\)) on Plexiglas if it appears green (constructively reflecting 520-nm light) when illuminated perpendicularly by white light? Explicitly show how you follow the steps in Problem Solving Strategies for Wave Optics.
114. Suppose you have a lens system that is to be used primarily for 700-nm red light. What is the second thinnest coating of fluorite (magnesium fluoride) that would be non-reflective for this wavelength?
Solution
380 nm
115. (a) As a soap bubble thins it becomes dark, because the path length difference becomes small compared with the wavelength of light and there is a phase shift at the top surface. If it becomes dark when the path length difference is less than one-fourth the wavelength, what is the thickest the bubble can be and appear dark at all visible wavelengths? Assume the same index of refraction as water.
(b) Discuss the fragility of the film considering the thickness found.
116. A film of oil on water will appear dark when it is very thin, because the path length difference becomes small compared with the wavelength of light and there is a phase shift at the top surface. If it becomes dark when the path length difference is less than one-fourth the wavelength, what is the thickest the oil can be and appear dark at all visible wavelengths? Oil has an index of refraction of 1.40.
Solution
33.9 nm
117. The figure shows two glass slides illuminated by pure-wavelength light incident perpendicularly. The top slide touches the bottom slide at one end and rests on a 0.100-mm-diameter hair at the other end, forming a wedge of air.
(a) How far apart are the dark bands, if the slides are 7.50 cm long and 589-nm light is used?
(b) Is there any difference if the slides are made from crown or flint glass? Explain.
118. The figure in Exercise 117 shows two 7.50-cm-long glass slides illuminated by pure 589-nm wavelength light incident perpendicularly. The top slide touches the bottom slide at one end and rests on some debris at the other end, forming a wedge of air. How thick is the debris, if the dark bands are 1.00 mm apart?
Solution
\(\displaystyle 4.42×10^{−5}m\)
119. Repeat Exercise, but take the light to be incident at a 45º angle.
120. Repeat Exercise, but take the light to be incident at a 45º angle.
Solution
The oil film will appear black, since the reflected light is not in the visible part of the spectrum.
121. Unreasonable Results
To save money on making military aircraft invisible to radar, an inventor decides to coat them with a non-reflective material having an index of refraction of 1.20, which is between that of air and the surface of the plane. This, he reasons, should be much cheaper than designing Stealth bombers.
(a) What thickness should the coating be to inhibit the reflection of 4.00-cm wavelength radar?
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
27.8: Polarization
122. What angle is needed between the direction of polarized light and the axis of a polarizing filter to cut its intensity in half?
Solution
45.0º
123. The angle between the axes of two polarizing filters is 45.0º . By how much does the second filter reduce the intensity of the light coming through the first?
124. If you have completely polarized light of intensity \(\displaystyle 150 W/m^2\), what will its intensity be after passing through a polarizing filter with its axis at an \(\displaystyle 89.0º\) angle to the light’s polarization direction?
Solution
\(\displaystyle 45.7mW/m^2\)
125. What angle would the axis of a polarizing filter need to make with the direction of polarized light of intensity \(\displaystyle 1.00kW/m^2\) to reduce the intensity to \(\displaystyle 10.0W/m^2\)?
126. At the end of Example, it was stated that the intensity of polarized light is reduced to 90.0% of its original value by passing through a polarizing filter with its axis at an angle of 18.4º to the direction of polarization. Verify this statement.
Solution
90.0%
127. Show that if you have three polarizing filters, with the second at an angle of 45º to the first and the third at an angle of 90.0º to the first, the intensity of light passed by the first will be reduced to 25.0% of its value. (This is in contrast to having only the first and third, which reduces the intensity to zero, so that placing the second between them increases the intensity of the transmitted light.)
128. Prove that, if \(\displaystyle I\) is the intensity of light transmitted by two polarizing filters with axes at an angle \(\displaystyle θ\) and \(\displaystyle I'\) is the intensity when the axes are at an angle \(\displaystyle 90.0º−θ\), then \(\displaystyle I+I'=I_0\), the original intensity. (Hint: Use the trigonometric identities \(\displaystyle cos(90.0º−θ)=sinθ \) and \(\displaystyle cos^2θ+sin^2θ=1\).)
Solution
\(\displaystyle I_0\)
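Malus’s law, \(I = I_0 \cos^{2}{\theta}\), covers all of the filter problems above; the short Python sketch below spot-checks several of the stated answers:

```python
from math import cos, radians

def malus(I0, theta_deg):
    """Intensity of polarized light transmitted by a filter at angle theta (Malus's law)."""
    return I0 * cos(radians(theta_deg)) ** 2

print(malus(1.0, 45.0))               # Problem 122: intensity is cut in half at 45.0 degrees
print(malus(150.0, 89.0))             # Problem 124: ~0.0457 W/m^2, i.e. 45.7 mW/m^2
print(malus(1.0, 18.4))               # Problem 126: ~0.900 of the original intensity
print(malus(malus(1.0, 45.0), 45.0))  # Problem 127: ~0.250 after the second and third filters
```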
129. At what angle will light reflected from diamond be completely polarized?
130. What is Brewster’s angle for light traveling in water that is reflected from crown glass?
Solution
\(\displaystyle 48.8º\)
131. A scuba diver sees light reflected from the water’s surface. At what angle will this light be completely polarized?
132. At what angle is light inside crown glass completely polarized when reflected from water, as in a fish tank?
Solution
\(\displaystyle 41.2º\)
133. Light reflected at \(\displaystyle 55.6º\) from a window is completely polarized. What is the window’s index of refraction and the likely substance of which it is made?
134. (a) Light reflected at \(\displaystyle 62.5º\) from a gemstone in a ring is completely polarized. Can the gem be a diamond?
(b) At what angle would the light be completely polarized if the gem was in water?
Solution
(a) 1.92, not diamond (Zircon)
(b) \(\displaystyle 55.2º\)
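These polarization-by-reflection answers follow from Brewster’s law, \(\tan{\theta_b} = n_2 / n_1\); a minimal Python check, assuming \(n = 1.333\) for water and 1.52 for crown glass:

```python
from math import atan, degrees, radians, tan

def brewster(n1, n2):
    """Brewster's angle for light in medium n1 reflecting from medium n2."""
    return degrees(atan(n2 / n1))

print(brewster(1.333, 1.52))   # Problem 130: ~48.8 degrees, water to crown glass
print(brewster(1.52, 1.333))   # Problem 132: ~41.2 degrees, crown glass to water
print(tan(radians(62.5)))      # Problem 134(a): n ~1.92, zircon rather than diamond
print(brewster(1.333, 1.92))   # Problem 134(b): ~55.2 degrees with the gem in water
```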
135. If \(\displaystyle θ_b\) is Brewster’s angle for light reflected from the top of an interface between two substances, and \(\displaystyle θ'_b\) is Brewster’s angle for light reflected from below, prove that \(\displaystyle θ_b+θ'_b=90.0º\).
136. Integrated Concepts
If a polarizing filter reduces the intensity of polarized light to \(\displaystyle 50.0%\) of its original value, by how much are the electric and magnetic fields reduced?
Solution
\(\displaystyle E_2=0.707E_1\) and \(\displaystyle B_2=0.707B_1\)
137. Integrated Concepts
Suppose you put on two pairs of Polaroid sunglasses with their axes at an angle of \(\displaystyle 15.0º\). How much longer will it take the light to deposit a given amount of energy in your eye compared with a single pair of sunglasses? Assume the lenses are clear except for their polarizing characteristics.
138. Integrated Concepts
(a) On a day when the intensity of sunlight is \(\displaystyle 1.00kW/m^2\), a circular lens 0.200 m in diameter focuses light onto water in a black beaker. Two polarizing sheets of plastic are placed in front of the lens with their axes at an angle of \(\displaystyle 20.0º\). Assuming the sunlight is unpolarized and the polarizers are \(\displaystyle 100%\) efficient, what is the initial rate of heating of the water in \(\displaystyle ºC/s\), assuming it is \(\displaystyle 80.0%\) absorbed? The aluminum beaker has a mass of 30.0 grams and contains 250 grams of water.
(b) Do the polarizing filters get hot? Explain.
Solution
(a) \(\displaystyle 2.07×10^{-2} °C/s\)
(b) Yes, the polarizing filters get hot because they absorb some of the lost energy from the sunlight.
Contributors and Attributions
- Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
28: Special Relativity
Modern relativity is divided into two parts. Special relativity deals with observers who are moving at constant velocity. General relativity deals with observers who are undergoing acceleration. Einstein is famous because his theories of relativity made revolutionary predictions. Most importantly, his theories have been verified to great precision in a vast range of experiments, altering forever our concept of space and time.
- 28.0: Prelude to Special Relativity
- It is important to note that although classical mechanics, in general, and classical relativity, in particular, are limited, they are extremely good approximations for large, slow-moving objects. Otherwise, we could not use classical physics to launch satellites or build bridges. In the classical limit (objects larger than submicroscopic and moving slower than about 1% of the speed of light), relativistic mechanics becomes the same as classical mechanics.
- 28.1: Einstein’s Postulates
- Relativity is the study of how different observers measure the same event. Modern relativity is correct in all circumstances and, in the limit of low velocity and weak gravitation, gives the same predictions as classical relativity. An inertial frame of reference is a reference frame in which a body at rest remains at rest and a body in motion moves at a constant speed in a straight line unless acted on by an outside force. Modern relativity is based on Einstein’s two postulates.
- 28.2: Simultaneity and Time Dilation
- Two simultaneous events are not necessarily simultaneous to all observers—simultaneity is not absolute. Time dilation is the phenomenon of time passing slower for an observer who is moving relative to another observer. Observers moving at a relative velocity do not measure the same elapsed time for an event. Proper time is measured by an observer at rest relative to the event being observed and implies that relative velocity cannot exceed the speed of light.
- 28.3: Length Contraction
- All observers agree upon relative speed. Distance depends on an observer’s motion. Proper length is the distance between two points measured by an observer who is at rest relative to both of the points. Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth. Length contraction is the shortening of the measured length of an object moving relative to the observer’s frame.
- 28.4: Relativistic Addition of Velocities
- With classical velocity addition, velocities add vectorially. Relativistic velocity addition describes the velocities of an object moving at a relativistic speed. An observer of electromagnetic radiation sees relativistic Doppler effects if the source of the radiation is moving relative to the observer. The wavelength of the radiation is longer than that emitted by the source when the source moves away from the observer and shorter when the source moves toward the observer.
- 28.5: Relativistic Momentum
- The law of conservation of momentum is valid whenever the net external force is zero and for relativistic momentum. Relativistic momentum is classical momentum multiplied by the relativistic factor. At low velocities, relativistic momentum is equivalent to classical momentum. Relativistic momentum approaches infinity as \(u\) approaches \(c\). This implies that an object with mass cannot reach the speed of light. Relativistic momentum is conserved, just as classical momentum is conserved.
- 28.6: Relativistic Energy
- Conservation of energy is one of the most important laws in physics. Not only does energy have many important forms, but each form can be converted to any other. We know that classically the total amount of energy in a system remains constant. Relativistically, energy is still conserved, provided its definition is altered to include the possibility of mass changing to energy, as in the reactions that occur within a nuclear reactor.
Thumbnail: A diagrammatic representation of spacetime. Image used with permission (CC-BY-SA 3.0; Stib).
28.0: Prelude to Special Relativity
Have you ever looked up at the night sky and dreamed of traveling to other planets in faraway star systems? Would there be other life forms? What would other worlds look like? You might imagine that such an amazing trip would be possible if we could just travel fast enough, but you will read in this chapter why this is not true. In 1905 Albert Einstein developed the theory of special relativity. This theory explains the limit on an object’s speed and describes the consequences.
Relativity. The word relativity might conjure an image of Einstein, but the idea did not begin with him. People have been exploring relativity for many centuries. Relativity is the study of how different observers measure the same event. Galileo and Newton developed the first correct version of classical relativity. Einstein developed the modern theory of relativity. Modern relativity is divided into two parts. Special relativity deals with observers who are moving at constant velocity. General relativity deals with observers who are undergoing acceleration. Einstein is famous because his theories of relativity made revolutionary predictions. Most importantly, his theories have been verified to great precision in a vast range of experiments, altering forever our concept of space and time.
It is important to note that although classical mechanics, in general, and classical relativity, in particular, are limited, they are extremely good approximations for large, slow-moving objects. Otherwise, we could not use classical physics to launch satellites or build bridges. In the classical limit (objects larger than submicroscopic and moving slower than about 1% of the speed of light), relativistic mechanics becomes the same as classical mechanics. This fact will be noted at appropriate places throughout this chapter.
28.1: Einstein’s Postulates
Learning Objectives
By the end of this section, you will be able to:
- State and explain both of Einstein’s postulates.
- Explain what an inertial frame of reference is.
- Describe one way the speed of light can be changed.
Have you ever used the Pythagorean Theorem and gotten a wrong answer? Probably not, unless you made a mistake in either your algebra or your arithmetic. Each time you perform the same calculation, you know that the answer will be the same. Trigonometry is reliable because of the certainty that one part always flows from another in a logical way. Each part is based on a set of postulates, and you can always connect the parts by applying those postulates. Physics is the same way with the exception that all parts must describe nature. If we are careful to choose the correct postulates, then our theory will follow and will be verified by experiment.
Einstein essentially did the theoretical aspect of this method for relativity . With two deceptively simple postulates and a careful consideration of how measurements are made, he produced the theory of special relativity.
Einstein’s First Postulate
The first postulate upon which Einstein based the theory of special relativity relates to reference frames. All velocities are measured relative to some frame of reference. For example, a car’s motion is measured relative to its starting point or the road it is moving over, a projectile’s motion is measured relative to the surface it was launched from, and a planet’s orbit is measured relative to the star it is orbiting around. The simplest frames of reference are those that are not accelerated and are not rotating. Newton’s first law, the law of inertia, holds exactly in such a frame.
Definition: Inertial Reference Frame
An inertial frame of reference is a reference frame in which a body at rest remains at rest and a body in motion moves at a constant speed in a straight line unless acted on by an outside force.
The laws of physics seem to be simplest in inertial frames. For example, when you are in a plane flying at a constant altitude and speed, physics seems to work exactly the same as if you were standing on the surface of the Earth. However, in a plane that is taking off, matters are somewhat more complicated. In these cases, the net force on an object, \(F\), is not equal to the product of mass and acceleration, \(ma\). Instead, \(F\) is equal to \(ma\) plus a fictitious force. This situation is not as simple as in an inertial frame. Not only are laws of physics simplest in inertial frames, but they should be the same in all inertial frames, since there is no preferred frame and no absolute motion. Einstein incorporated these ideas into his first postulate of special relativity .
First Postulate of Special Relativity
The laws of physics are the same and can be stated in their simplest form in all inertial frames of reference.
As with many fundamental statements, there is more to this postulate than meets the eye. The laws of physics include only those that satisfy this postulate. We shall find that the definitions of relativistic momentum and energy must be altered to fit. Another outcome of this postulate is the famous equation \(E = mc^{2}\).
Einstein’s Second Postulate
The second postulate upon which Einstein based his theory of special relativity deals with the speed of light. Late in the 19th century, the major tenets of classical physics were well established. Two of the most important were the laws of electricity and magnetism and Newton’s laws. In particular, the laws of electricity and magnetism predict that light travels at \(c = 3.00 \times 10^{8} m/s\) in a vacuum, but they do not specify the frame of reference in which light has this speed.
There was a contradiction between this prediction and Newton’s laws, in which velocities add like simple vectors. If the latter were true, then two observers moving at different speeds would see light traveling at different speeds. Imagine what a light wave would look like to a person traveling along with it at a speed \(c\). If such a motion were possible then the wave would be stationary relative to the observer. It would have electric and magnetic fields that varied in strength at various distances from the observer but were constant in time. This is not allowed by Maxwell’s equations. So either Maxwell’s equations are wrong, or an object with mass cannot travel at speed \(c\). Einstein concluded that the latter is true. An object with mass cannot travel at speed \(c\). This conclusion implies that light in a vacuum must always travel at speed \(c\) relative to any observer. Maxwell’s equations are correct, and Newton’s addition of velocities is not correct for light.
Investigations such as Young’s double slit experiment in the early 1800s had convincingly demonstrated that light is a wave. Many types of waves were known, and all traveled in some medium. Scientists therefore assumed that a medium carried light, even in a vacuum, and light traveled at a speed \(c\) relative to that medium. Starting in the mid-1880s, the American physicist A. A. Michelson, later aided by E. W. Morley, made a series of direct measurements of the speed of light. The results of their measurements were startling.
Michelson-Morley Experiment
The Michelson-Morley experiment demonstrated that the speed of light in a vacuum is independent of the motion of the Earth about the Sun.
The eventual conclusion derived from this result is that light, unlike mechanical waves such as sound, does not need a medium to carry it. Furthermore, the Michelson-Morley results implied that the speed of light \(c\) is independent of the motion of the source relative to the observer. That is, everyone observes light to move at speed \(c\) regardless of how they move relative to the source or one another. For a number of years, many scientists tried unsuccessfully to explain these results and still retain the general applicability of Newton’s laws.
It was not until 1905, when Einstein published his first paper on special relativity, that the currently accepted conclusion was reached. Based mostly on his analysis that the laws of electricity and magnetism would not allow another speed for light, and only slightly aware of the Michelson-Morley experiment, Einstein detailed his second postulate of special relativity .
Second Postulate of Special Relativity
The speed of light \(c\) is a constant, independent of the relative motion of the source.
Deceptively simple and counterintuitive, this and the first postulate leave all else open for change. Some fundamental concepts do change. Among the changes are the loss of agreement on the elapsed time for an event, the variation of distance with speed, and the realization that matter and energy can be converted into one another. You will read about these concepts in the following sections.
Misconception Alert: Constancy of the Speed of Light
The speed of light is a constant \(c = 3.00 \times 10^{8} m/s\) in a vacuum. If you remember the effect of the index of refraction from The Law of Refraction, you know that the speed of light is lower in matter.
Exercise \(\PageIndex{1}\)
Explain how special relativity differs from general relativity.
- Answer
-
Special relativity applies only to unaccelerated motion, but general relativity applies to accelerated motion.
Summary
- Relativity is the study of how different observers measure the same event.
- Modern relativity is divided into two parts. Special relativity deals with observers who are in uniform (unaccelerated) motion, whereas general relativity includes accelerated relative motion and gravity. Modern relativity is correct in all circumstances and, in the limit of low velocity and weak gravitation, gives the same predictions as classical relativity.
- An inertial frame of reference is a reference frame in which a body at rest remains at rest and a body in motion moves at a constant speed in a straight line unless acted on by an outside force.
- Modern relativity is based on Einstein’s two postulates. The first postulate of special relativity is the idea that the laws of physics are the same and can be stated in their simplest form in all inertial frames of reference. The second postulate of special relativity is the idea that the speed of light \(c\) is a constant, independent of the relative motion of the source.
- The Michelson-Morley experiment demonstrated that the speed of light in a vacuum is independent of the motion of the Earth about the Sun.
Glossary
- relativity
- the study of how different observers measure the same event
- special relativity
- the theory that, in an inertial frame of reference, the motion of an object is relative to the frame from which it is viewed or measured
- inertial frame of reference
- a reference frame in which a body at rest remains at rest and a body in motion moves at a constant speed in a straight line unless acted on by an outside force
- first postulate of special relativity
- the idea that the laws of physics are the same and can be stated in their simplest form in all inertial frames of reference
- second postulate of special relativity
- the idea that the speed of light \(c\) is a constant, independent of the source
- Michelson-Morley experiment
- an investigation performed in 1887 that proved that the speed of light in a vacuum is the same in all frames of reference from which it is viewed
28.2: Simultaneity and Time Dilation
Learning Objectives
By the end of this section, you will be able to:
- Describe simultaneity.
- Describe time dilation.
- Calculate γ.
- Compare proper time and the observer’s measured time.
- Explain why the twin paradox is a false paradox.
Do time intervals depend on who observes them? Intuitively, we expect the time for a process, such as the elapsed time for a foot race, to be the same for all observers. Our experience has been that disagreements over elapsed time have to do with the accuracy of measuring time. When we carefully consider just how time is measured, however, we will find that elapsed time depends on the relative motion of an observer with respect to the process being measured.
Simultaneity
Consider how we measure elapsed time. If we use a stopwatch, for example, how do we know when to start and stop the watch? One method is to use the arrival of light from the event, such as observing a light turning green to start a drag race. The timing will be more accurate if some sort of electronic detection is used, avoiding human reaction times and other complications.
Now suppose we use this method to measure the time interval between two flashes of light produced by flash lamps (Figure \(\PageIndex{2}\)). Two flash lamps with observer A midway between them are on a rail car that moves to the right relative to observer B. Observer B arranges for the light flashes to be emitted just as A passes B, so that both A and B are equidistant from the lamps when the light is emitted. Observer B measures the time interval between the arrival of the light flashes. According to postulate 2, the speed of light is not affected by the motion of the lamps relative to B. Therefore, light travels equal distances to him at equal speeds. Thus observer B measures the flashes to be simultaneous.
Now consider what observer B sees happen to observer A. Observer B views light from the right reaching observer A before light from the left, because she has moved toward that flash lamp, lessening the distance the light must travel and reducing the time it takes to get to her. Light travels at speed \(c\) relative to both observers, but observer B remains equidistant between the points where the flashes were emitted, while A gets closer to the emission point on the right. From observer B’s point of view, then, there is a time interval between the arrival of the flashes to observer A. Observer B measures the flashes to arrive simultaneously relative to him but not relative to A.
Now consider what observer A sees happening. She sees the light from the right before she sees the light from the left. Since both lamps are the same distance from her in her reference frame, and light travels at speed \(c\) relative to her as well, she concludes that the right flash occurred before the left flash. Here a relative velocity between observers affects whether two events are observed to be simultaneous.
Simultaneity is not absolute.
This illustrates the power of clear thinking. We might have guessed incorrectly that if light is emitted simultaneously, then two observers halfway between the sources would see the flashes simultaneously. But careful analysis shows this not to be the case. Einstein was brilliant at this type of thought experiment (in German, “Gedankenexperiment”). He very carefully considered how an observation is made and disregarded what might seem obvious. The validity of thought experiments, of course, is determined by actual observation. The genius of Einstein is evidenced by the fact that experiments have repeatedly confirmed his theory of relativity.
In summary: Two events are defined to be simultaneous if an observer measures them as occurring at the same time (such as by receiving light from the events). Two events are not necessarily simultaneous to all observers.
Time Dilation
The consideration of the measurement of elapsed time and simultaneity leads to an important relativistic effect.
Definition: Time Dilation
Time dilation is the phenomenon of time passing slower for an observer who is moving relative to another observer.
Suppose, for example, an astronaut measures the time it takes for light to cross her ship, bounce off a mirror, and return (Figure \(\PageIndex{3}\)). How does the elapsed time the astronaut measures compare with the elapsed time measured for the same event by a person on the Earth? Asking this question (another thought experiment) produces a profound result. We find that the elapsed time for a process depends on who is measuring it. In this case, the time measured by the astronaut is smaller than the time measured by the Earth-bound observer. The passage of time is different for the observers because the distance the light travels in the astronaut’s frame is smaller than in the Earth-bound frame. Light travels at the same speed in each frame, and so it will take longer to travel the greater distance in the Earth-bound frame.
To quantitatively verify that time depends on the observer, consider the paths followed by light as seen by each observer (Figure \(\PageIndex{3c}\)). The astronaut sees the light travel straight across and back for a total distance of \(2D\), twice the width of her ship. The Earth-bound observer sees the light travel a total distance \(2s\). Since the ship is moving at speed \(v\) to the right relative to the Earth, light moving to the right hits the mirror in this frame. Light travels at a speed \(c\) in both frames, and because time is the distance divided by speed, the time measured by the astronaut is \[\Delta t_{0} = \frac{2D}{c}.\label{28.3.1}\] This time has a separate name to distinguish it from the time measured by the Earth-bound observer.
Definition: Proper Time
Proper time \(\Delta t_{0}\) is the time measured by an observer at rest relative to the event being observed.
In the case of the astronaut observing the reflecting light, the astronaut measures proper time. The time measured by the Earth-bound observer is
\[\Delta t = \frac{2s}{c}\label{28.3.2}. \nonumber\]
To find the relationship between \(\Delta t_{0}\) and \(\Delta t\), consider the triangles formed by \(D\) and \(s\) (Figure \(\PageIndex{3c}\).) The third side of these similar triangles is \(L\), the distance the astronaut moves as the light goes across her ship. In the frame of the Earth-bound observer,
\[L = \frac{v\Delta t}{2}\label{28.3.3}. \nonumber\]
Using the Pythagorean Theorem , the distance \(s\) is found to be
\[s = \sqrt{D^{2} + \left(\dfrac{v\Delta t}{2} \right) ^{2}}.\label{28.3.4} \nonumber\]
Substituting \(s\) into the expression for the time interval \(\Delta t\) gives
\[\Delta t = \dfrac{2s}{c} = \dfrac{2 \sqrt{D^{2} + \left( \dfrac{v\Delta t}{2} \right) ^{2}}}{c}.\label{28.3.5} \nonumber\]
We square this equation, which yields
\[\begin{align*} (\Delta t)^2 &= \dfrac{4\left(D^2 + \dfrac{v^2(\Delta t)^2}{4}\right)}{c^2} \\[4pt] &= \dfrac{4D^2}{c^2} + \dfrac{v^2}{c^2}(\Delta t)^2. \end{align*}\]
Note that if we square the first expression we had for \(\Delta t_0\) we get \((\Delta t_0)^2 = \frac{4D^2}{c^2}\). This term appears in the preceding equation, giving us a means to relate the two time intervals. Thus,
\[(\Delta t)^2 = (\Delta t_0)^2 + \dfrac{v^2}{c^2}(\Delta t)^2. \nonumber\]
Gathering terms, we solve for \(\Delta t\):
\[(\Delta t)^2 \left(1 - \dfrac{v^2}{c^2}\right) = (\Delta t_0)^2. \nonumber\]
Thus,
\[(\Delta t)^2 = \dfrac{(\Delta t_0)^2}{1 - \dfrac{v^2}{c^2}}. \nonumber\]
Taking the square root yields an important relationship between elapsed times:
\[\begin{align*} \Delta t &= \dfrac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}} \\[4pt] &= \gamma \Delta t_0, \end{align*}\]
where
\[\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}}. \nonumber\]
This equation for \(\Delta t\) is truly remarkable. First, as contended, elapsed time is not the same for different observers moving relative to one another, even though both are in inertial frames. Proper time \(\Delta t_0\) measured by an observer, like the astronaut moving with the apparatus, is smaller than time measured by other observers. Since those other observers measure a longer time \(\Delta t\), the effect is called time dilation. The Earth-bound observer sees time dilate (get longer) for a system moving relative to the Earth. Alternatively, according to the Earth-bound observer, time slows in the moving frame, since less time passes there. All clocks moving relative to an observer, including biological clocks such as aging, are observed to run slow compared with a clock stationary relative to the observer.
Note that if the relative velocity is much less than the speed of light \((v \ll c)\), then \(\frac{v^2}{c^2}\) is extremely small, and the elapsed times \(\Delta t\) and \(\Delta t_0\) are nearly equal. At low velocities, modern relativity approaches classical physics—our everyday experiences have very small relativistic effects.
The equation \(\Delta t = \gamma \Delta t_0\) also implies that relative velocity cannot exceed the speed of light. As \(v\) approaches \(c\), \(\Delta t\) approaches infinity. If \(v\) exceeded \(c\), then we would be taking the square root of a negative number, producing an imaginary value for \(\Delta t\).
There is considerable experimental evidence that the equation \(\Delta t = \gamma \Delta t_0\) is correct. One example is found in cosmic ray particles that continuously rain down on the Earth from deep space. Some collisions of these particles with nuclei in the upper atmosphere result in short-lived particles called muons. The half-life (amount of time for half of a material to decay) of a muon is \(1.52 \, \mu s\) when it is at rest relative to the observer who measures the half-life. This is the proper time \(\Delta t_0\). Muons produced by cosmic ray particles have a range of velocities, with some moving near the speed of light. It has been found that the muon’s half-life as measured by an Earth-bound observer \((\Delta t)\) varies with velocity exactly as predicted by the equation \(\Delta t = \gamma \Delta t_0\). The faster the muon moves, the longer it lives. We on the Earth see the muon’s half-life time dilated—as viewed from our frame, the muon decays more slowly than it does when at rest relative to us.
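As a numerical illustration (not part of the original text; the constant and function names below are our own choices), the short Python sketch evaluates \(\gamma\) and the dilated muon half-life for several speeds. At \(v = 0.950c\) it reproduces the value worked out in the example that follows.

```python
import math

C = 3.00e8   # speed of light in m/s, to three significant figures as in this text

def gamma(v):
    """Relativistic factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time, v):
    """Elapsed time measured by an observer who sees the clock move at speed v."""
    return gamma(v) * proper_time

half_life_rest = 1.52e-6   # muon half-life at rest, in seconds

for beta in (0.100, 0.500, 0.950, 0.999):
    v = beta * C
    print(f"v = {beta:.3f}c   gamma = {gamma(v):6.2f}   "
          f"half-life = {dilated_time(half_life_rest, v) * 1e6:5.2f} microseconds")
```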
Example \(\PageIndex{1}\): Calculating \(\Delta t\) for a Relativistic Event: How Long Does a Speedy Muon Live?
Suppose a cosmic ray colliding with a nucleus in the Earth’s upper atmosphere produces a muon that has a velocity \(v = 0.950 \, c\). The muon then travels at constant velocity and lives \(1.52 \, \mu s\) as measured in the muon’s frame of reference. (You can imagine this as the muon’s internal clock.) How long does the muon live as measured by an Earth-bound observer (Figure \(\PageIndex{4}\))?
Strategy
A clock moving with the system being measured observes the proper time, so the time we are given is \(\Delta t_0 = 1.52 \, \mu s\). The Earth-bound observer measures \(\Delta t\) as given by the equation \(\Delta t = \gamma \Delta t_0\).
Since we know the velocity, the calculation is straightforward.
Solution
- Identify the knowns. \(v = 0.950c\), \(\Delta t_{0} = 1.52 \mu s\)
- Identify the unknown. \(\Delta t\)
- Choose the appropriate equation. Use \[\Delta t = \gamma \Delta t_{0},\label{28.3.6} \nonumber\] where \[\gamma = \frac{1}{\sqrt{1 - \frac{v^{2}}{c^{2}}}}.\nonumber\]
- Plug the knowns into the equation.
First find \(\gamma\). \[\begin{align*} \gamma &= \frac{1}{\sqrt{1 - \frac{v^{2}}{c^{2}}}} \\[4pt] &= \frac{1}{\sqrt{1 - \frac{\left(0.950c\right)^{2}}{c^{2}}}} \\[4pt] &= \frac{1}{\sqrt{1 - \left(0.950\right)^{2}}} \\[4pt] &= 3.20. \end{align*}\]
Use the calculated value of \(\gamma\) to determine \(\Delta t\).
\[\begin{align*} \Delta t &= \gamma \Delta t_{0} \\[4pt] &= \left(3.20\right)\left(1.52 \mu s\right) \\[4pt] &= 4.87 \mu s \end{align*}\]
Discussion
One implication of this example is that since \(\gamma = 3.20\) at \(95.0\%\) of the speed of light (\(v = 0.950c\)), the relativistic effects are significant. The two time intervals differ by this factor of 3.20, where classically they would be the same. Something moving at \(0.950 c\) is said to be highly relativistic.
Another implication of the preceding example is that everything an astronaut does when moving at \(95.0\%\) of the speed of light relative to the Earth takes 3.20 times longer when observed from the Earth. Does the astronaut sense this? Only if she looks outside her spaceship. All methods of measuring time in her frame will be affected by the same factor of 3.20. This includes her wristwatch, heart rate, cell metabolism rate, nerve impulse rate, and so on. She will have no way of telling, since all of her clocks will agree with one another because their relative velocities are zero. Motion is relative, not absolute. But what if she does look out the window?
Real World Connections
It may seem that special relativity has little effect on your life, but it is probably more important than you realize. One of the most common effects is through the Global Positioning System (GPS). Emergency vehicles, package delivery services, electronic maps, and communications devices are just a few of the common uses of GPS, and the GPS system could not work without taking into account relativistic effects. GPS satellites rely on precise time measurements to communicate. The signals travel at relativistic speeds. Without corrections for time dilation, the satellites could not communicate, and the GPS system would fail within minutes.
The Twin Paradox
An intriguing consequence of time dilation is that a space traveler moving at a high velocity relative to the Earth would age less than her Earth-bound twin. Imagine the astronaut moving at such a velocity that \(\gamma = 30.0\) as in Figure \(\PageIndex{5}\). A trip that takes 2.00 years in her frame would take 60.0 years in her Earth-bound twin’s frame. Suppose the astronaut traveled 1.00 year to another star system. She briefly explored the area, and then traveled 1.00 year back. If the astronaut was 40 years old when she left, she would be 42 upon her return. Everything on the Earth, however, would have aged 60.0 years. Her twin, if still alive, would be 100 years old.
The situation would seem different to the astronaut. Because motion is relative, the spaceship would seem to be stationary and the Earth would appear to move. (This is the sensation you have when flying in a jet.) If the astronaut looks out the window of the spaceship, she will see time slow down on the Earth by a factor of \(\gamma = 30.0\). To her, the Earth-bound sister will have aged only 2/30 (1/15) of a year, while she aged 2.00 years. The two sisters cannot both be correct.
As with all paradoxes, the premise is faulty and leads to contradictory conclusions. In fact, the astronaut’s motion is significantly different from that of the Earth-bound twin. The astronaut accelerates to a high velocity and then decelerates to view the star system. To return to the Earth, she again accelerates and decelerates. The Earth-bound twin does not experience these accelerations. So the situation is not symmetric, and it is not correct to claim that the astronaut will observe the same effects as her Earth-bound twin. If you use special relativity to examine the twin paradox, you must keep in mind that the theory is expressly based on inertial frames, which by definition are not accelerated or rotating. Einstein developed general relativity to deal with accelerated frames and with gravity, a prime source of acceleration. You can also use general relativity to address the twin paradox and, according to general relativity, the astronaut will age less. Some important conceptual aspects of general relativity are discussed in the Section on General Relativity and Quantum Gravity of this course.
In 1971, American physicists Joseph Hafele and Richard Keating verified time dilation at low relative velocities by flying extremely accurate atomic clocks around the Earth on commercial aircraft. They measured elapsed time to an accuracy of a few nanoseconds and compared it with the time measured by clocks left behind. Hafele and Keating’s results were within experimental uncertainties of the predictions of relativity. Both special and general relativity had to be taken into account, since gravity and accelerations were involved as well as relative motion.
Exercise \(\PageIndex{1}\)
- What is \(\gamma\) if \(v = 0.650 c\)?
- A particle travels at \(1.90 \times 10^8 \, m/s\) and lives \(2.10 \times 10^{-8} s\) when at rest relative to an observer. How long does the particle live as viewed in the laboratory?
- Answer
-
- \(\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}} = \dfrac{1}{\sqrt{1 - \frac{(0.650 c)^2}{c^2}}} = 1.32\)
- \(\Delta t = \dfrac{\Delta t_0}{\sqrt{1-\frac{v^2}{c^2}}} = \dfrac{2.10 \times 10^{-8} s}{\sqrt{1 - \frac{(1.90 \times 10^8 \, m/s)^2}{(3.00 \times 10^8 \, m/s)^2}}} = 2.71 \times 10^{-8} \, s\)
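The two answers can be checked with a few lines of Python (an illustrative sketch; the variable names are ours):

```python
import math

c = 3.00e8  # speed of light in m/s

# Part 1: gamma at v = 0.650c
beta = 0.650
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
print(f"gamma = {gamma:.2f}")                 # prints about 1.32

# Part 2: lifetime of a particle moving at 1.90e8 m/s, as seen in the lab
v = 1.90e8                                    # particle speed in m/s
t0 = 2.10e-8                                  # proper lifetime in s
t_lab = t0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"lab-frame lifetime = {t_lab:.2e} s")  # prints about 2.71e-08 s
```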
Summary
- Two events are defined to be simultaneous if an observer measures them as occurring at the same time. They are not necessarily simultaneous to all observers—simultaneity is not absolute.
- Time dilation is the phenomenon of time passing slower for an observer who is moving relative to another observer.
- Observers moving at a relative velocity \(v\) do not measure the same elapsed time for an event. Proper time \(\Delta t_0\) is the time measured by an observer at rest relative to the event being observed. Proper time is related to the time \(\Delta t\) measured by an Earth-bound observer by the equation \[\Delta t = \dfrac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}} = \gamma \Delta t_0, \nonumber \] where \[\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}}. \nonumber \]
- The equation relating proper time and time measured by an Earth-bound observer implies that relative velocity cannot exceed the speed of light.
- The twin paradox asks why a twin traveling at a relativistic speed away and then back towards the Earth ages less than the Earth-bound twin. The premise to the paradox is faulty because the traveling twin is accelerating. Special relativity does not apply to accelerating frames of reference.
- Time dilation is usually negligible at low relative velocities, but it does occur, and it has been verified by experiment.
Glossary
- time dilation
- the phenomenon of time passing slower to an observer who is moving relative to another observer
- proper time
- \(\Delta t_0\) the time measured by an observer at rest relative to the event being observed: \(\Delta t = \dfrac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}} = \gamma \Delta t_0,\) where \(\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\)
- twin paradox
- this asks why a twin traveling at a relativistic speed away and then back towards the Earth ages less than the Earth-bound twin. The premise to the paradox is faulty because the traveling twin is accelerating, and special relativity does not apply to accelerating frames of reference
28.3: Length Contraction
Learning Objectives
By the end of this section, you will be able to:
- Describe proper length.
- Calculate length contraction.
- Explain why we don’t notice these effects at everyday scales.
Have you ever driven on a road that seems like it goes on forever? If you look ahead, you might say you have about 10 km left to go. Another traveler might say the road ahead looks like it’s about 15 km long. If you both measured the road, however, you would agree. Traveling at everyday speeds, the distance you both measure would be the same. You will read in this section, however, that this is not true at relativistic speeds. Close to the speed of light, distances measured are not the same when measured by different observers.
Proper Length
One thing all observers agree upon is relative speed. Even though clocks measure different elapsed times for the same process, they still agree that relative speed, which is distance divided by elapsed time, is the same. This implies that distance, too, depends on the observer’s relative motion. If two observers see different times, then they must also see different distances for relative speed to be the same to each of them.
The muon illustrates this concept. To an observer on the Earth, the muon travels at \(0.950c\) for \(7.05 \mu s\) from the time it is produced until it decays. Thus it travels a distance \[L_{0} = v \Delta t = \left(0.950\right)\left(3.00 \times 10^{8} m/s\right)\left(7.05 \times 10^{-6} s\right) = 2.01 km \label{28.4.1}\] relative to the Earth. In the muon’s frame of reference, its lifetime is only \(2.20 \mu s\). It has enough time to travel only \[L = v \Delta t_{0} = \left(0.950\right)\left(3.00 \times 10^{8} m/s\right)\left(2.20 \times 10^{-6} s\right) = 0.627 km. \label{28.4.2}\] The distance between the same two events (production and decay of a muon) depends on who measures it and how they are moving relative to it.
PROPER LENGTH
Proper length \(L_{0}\) is the distance between two points measured by an observer who is at rest relative to both of the points.
The Earth-bound observer measures the proper length \(L_{0}\), because the points at which the muon is produced and decays are stationary relative to the Earth. To the muon, the Earth, air, and clouds are moving, and so the distance \(L\) it sees is not the proper length.
Length Contraction
To develop an equation relating distances measured by different observers, we note that the velocity relative to the Earth-bound observer in our muon example is given by \[v = \frac{L_{0}}{\Delta t}.\label{28.4.3}\] The time relative to the Earth-bound observer is \(\Delta t\), since the object being timed is moving relative to this observer. The velocity relative to the moving observer is given by \[v = \frac{L}{\Delta t_{0}}.\label{28.4.4}\] The moving observer travels with the muon and therefore observes the proper time \(\Delta t_{0}\). The two velocities are identical; thus, \[\frac{L_{0}}{\Delta t} = \frac{L}{\Delta t_{0}}.\label{28.4.5}\] We know that \(\Delta t = \gamma \Delta t_{0}\). Substituting this equation into the relationship above gives \[L = \frac{L_{0}}{\gamma}.\label{28.4.6}\] Substituting for \(\gamma\) gives an equation relating the distances measured by different observers.
LENGTH CONTRACTION
Length contraction \(L\) is the shortening of the measured length of an object moving relative to the observer’s frame. \[L = L_{0} \sqrt{1-\frac{v^{2}}{c^{2}}}.\label{28.4.7}\]
If we measure the length of anything moving relative to our frame, we find its length \(L\) to be smaller than the proper length \(L_{0}\) that would be measured if the object were stationary. For example, in the muon’s reference frame, the distance between the points where it was produced and where it decayed is shorter. Those points are fixed relative to the Earth but moving relative to the muon. Clouds and other objects are also contracted along the direction of motion in the muon’s reference frame.
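The muon distances quoted above follow directly from \(L = L_0/\gamma\). The sketch below (illustrative only; the names are our own) checks that dividing the Earth-frame distance by \(\gamma\) agrees with the distance computed from the muon's own lifetime.

```python
import math

c = 3.00e8       # speed of light in m/s
beta = 0.950     # muon speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)

L0 = beta * c * 7.05e-6        # proper length: Earth-frame distance, in m
L_contracted = L0 / gamma      # the same distance measured in the muon frame, in m
L_direct = beta * c * 2.20e-6  # distance from the muon's speed and proper lifetime, in m

print(f"L0 = {L0 / 1000:.2f} km, L = {L_contracted / 1000:.3f} km, "
      f"direct check = {L_direct / 1000:.3f} km")
```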
Example \(\PageIndex{1}\): Calculating Length Contraction: The Distance between Stars Contracts when You Travel at High Velocity:
Suppose an astronaut, such as the twin discussed in "Simultaneity and Time Dilation," travels so fast that \(\gamma = 30.00\). (a) She travels from the Earth to the nearest star system, Alpha Centauri, 4.300 light years (ly) away as measured by an Earth-bound observer. How far apart are the Earth and Alpha Centauri as measured by the astronaut? (b) In terms of \(c\), what is her velocity relative to the Earth? You may neglect the motion of the Earth relative to the Sun. (See Figure 3.)
Strategy
First note that a light year (ly) is a convenient unit of distance on an astronomical scale—it is the distance light travels in a year. For part (a), note that the 4.300 ly distance between Alpha Centauri and the Earth is the proper distance \(L_0\), because it is measured by an Earth-bound observer to whom both stars are (approximately) stationary. To the astronaut, the Earth and Alpha Centauri are moving by at the same velocity, and so the distance between them is the contracted length \(L\). In part (b), we are given \(\gamma\), and so we can find \(v\) by rearranging the definition of \(\gamma\) to express \(v\) in terms of \(c\).
Solution for (a)
- Identify the knowns: \(L_0 = 4.300 \, ly; \, \gamma = 30.00\)
- Identify the unknown: \(L\)
- Choose the appropriate equation: \(L = \frac{L_0}{\gamma}\)
- Rearrange the equation to solve for the unknown; \[L = \dfrac{L_0}{\gamma}\] \[= \dfrac{4.300 \, ly}{30.00}\] \[= 0.1433 \, ly\]
Solution for (b)
- Identify the known: \(\gamma = 30.00\)
- Identify the unknown: \(v\) in terms of \(c\)
- Choose the appropriate equation \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\)
- Rearrange the equation to solve for the unknown: \[\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\] \[ 30.00 = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\]
Squaring both sides of the equation and rearranging terms gives: \[900.0 = \dfrac{1}{1 - \frac{v^2}{c^2}}\]
so that \[1 - \dfrac{v^2}{c^2} = \dfrac{1}{900.0}\] and \[\dfrac{v^2}{c^2} = 1 - \dfrac{1}{900.0} = 0.99888....\]
Taking the square root, we find \[\dfrac{v}{c} = 0.99944,\] which is rearranged to produce a value for the velocity \[v = 0.9994c.\]
Discussion
First, remember that you should not round off calculations until the final result is obtained, or you could get erroneous results. This is especially true for special relativity calculations, where the differences might only be revealed after several decimal places. The relativistic effect is large here (\(\gamma = 30.00\)), and we see that \(v\) is approaching (not equaling) the speed of light. Since the distance as measured by the astronaut is so much smaller, the astronaut can travel it in much less time in her frame.
People could be sent very large distances (thousands or even millions of light years) and age only a few years on the way if they traveled at extremely high velocities. But, like emigrants of centuries past, they would leave the Earth they know forever. Even if they returned, thousands to millions of years would have passed on the Earth, obliterating most of what now exists. There is also a more serious practical obstacle to traveling at such velocities; immensely greater energies than classical physics predicts would be needed to achieve such high velocities. This will be discussed in Relativistic Energy.
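The numbers in this example are easy to reproduce. The following sketch (our own illustration, not part of the original solution) computes the contracted distance from \(L = L_0/\gamma\) and recovers \(v\) by inverting the definition of \(\gamma\).

```python
import math

gamma = 30.00
L0 = 4.300    # proper distance between the Earth and Alpha Centauri, in light years

L = L0 / gamma                             # contracted distance seen by the astronaut
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)   # v/c obtained by inverting the definition of gamma

print(f"L = {L:.4f} ly, v = {beta:.4f}c")  # about 0.1433 ly and 0.9994c
```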
Why don’t we notice length contraction in everyday life? The distance to the grocery shop does not seem to depend on whether we are moving or not. Examining the equation \(L = L_0\sqrt{1 - \frac{v^2}{c^2}}\), we see that at low velocities \((v \ll c)\) the lengths are nearly equal, the classical expectation. But length contraction is real, if not commonly experienced. For example, a charged particle, like an electron, traveling at relativistic velocity has electric field lines that are compressed along the direction of motion as seen by a stationary observer. As the electron passes a detector, such as a coil of wire, its field interacts much more briefly, an effect observed at particle accelerators such as the 3 km long Stanford Linear Accelerator (SLAC). In fact, to an electron traveling down the beam pipe at SLAC, the accelerator and the Earth are all moving by and are length contracted. The relativistic effect is so great that the accelerator is only 0.5 m long to the electron. It is actually easier to get the electron beam down the pipe, since the beam does not have to be as precisely aimed to get down a short pipe as it would down one 3 km long. This, again, is an experimental verification of the Special Theory of Relativity.
Exercise \(\PageIndex{1}\)
A particle is traveling through the Earth’s atmosphere at a speed of \(0.750c\). To an Earth-bound observer, the distance it travels is 2.50 km. How far does the particle travel in the particle’s frame of reference?
- Answer
-
\[L = L_0\sqrt{1 - \dfrac{v^2}{c^2}} = (2.50 \, km)\sqrt{1 - \dfrac{(0.750c)^2}{c^2}} = 1.65 \, km\]
Summary
- All observers agree upon relative speed.
- Distance depends on an observer’s motion. Proper length \(L_0\) is the distance between two points measured by an observer who is at rest relative to both of the points. Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth.
- Length contraction \(L\) is the shortening of the measured length of an object moving relative to the observer’s frame: \[L = L_0 \sqrt{1 - \dfrac{v^2}{c^2}} = \dfrac{L_0}{\gamma}.\]
Glossary
- proper length
- \(L_0\) the distance between two points measured by an observer who is at rest relative to both of the points; Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth
- length contraction
- \(L\) the shortening of the measured length of an object moving relative to the observer’s frame: \(L = L_0\sqrt{1 - \frac{v^2}{c^2}} = \frac{L_0}{\gamma}\)
Contributor
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
28.4: Relativistic Addition of Velocities
Learning Objectives
By the end of this section, you will be able to:
- Calculate relativistic velocity addition.
- Explain when relativistic velocity addition should be used instead of classical addition of velocities.
- Calculate relativistic Doppler shift.
If you’ve ever seen a kayak move down a fast-moving river, you know that remaining in the same place would be hard. The river current pulls the kayak along. Pushing the oars back against the water can move the kayak forward in the water, but that only accounts for part of the velocity. The kayak’s motion is an example of classical addition of velocities. In classical physics, velocities add as vectors. The kayak’s velocity is the vector sum of its velocity relative to the water and the water’s velocity relative to the riverbank.
Classical Velocity Addition
For simplicity, we restrict our consideration of velocity addition to one-dimensional motion. Classically, velocities add like regular numbers in one-dimensional motion (Figure \(\PageIndex{1}\)). Suppose, for example, a girl is riding in a sled at a speed 1.0 m/s relative to an observer. She throws a snowball first forward, then backward at a speed of 1.5 m/s relative to the sled. We denote direction with plus and minus signs in one dimension; in this example, forward is positive. Let \(v\) be the velocity of the sled relative to the Earth, \(u\) the velocity of the snowball relative to the Earth-bound observer, and \(u'\) the velocity of the snowball relative to the sled.
CLASSICAL VELOCITY ADDITION
\[u = v + u'\]
Thus, when the girl throws the snowball forward, \(u = 1.0 \, m/s + 1.5 \, m/s = 2.5 \, m/s\). It makes good intuitive sense that the snowball will head towards the Earth-bound observer faster, because it is thrown forward from a moving vehicle. When the girl throws the snowball backward, \(u = 1.0 \, m/s + (-1.5 \, m/s) = -0.5 \, m/s\). The minus sign means the snowball moves away from the Earth-bound observer.
Relativistic Velocity Addition
The second postulate of relativity (verified by extensive experimental observation) says that classical velocity addition does not apply to light. Imagine a car traveling at night along a straight road, as in Figure \(\PageIndex{3}\). If classical velocity addition applied to light, then the light from the car’s headlights would approach the observer on the sidewalk at a speed \(u = v + c\). But we know that light will move away from the car at speed \(c\) relative to the driver of the car, and light will move towards the observer on the sidewalk at speed \(c\), too.
RELATIVISTIC VELOCITY ADDITION
Either light is an exception, or the classical velocity addition formula only works at low velocities. The latter is the case. The correct formula for one-dimensional relativistic velocity addition is
\[u = \dfrac{v + u'}{1 + \frac{vu'}{c^2}},\]
where \(v\) is the relative velocity between two observers, \(u\) is the velocity of an object relative to one observer, and \(u'\) is the velocity relative to the other observer. (For ease of visualization, we often choose to measure \(u\) in our reference frame, while someone moving at \(v\) relative to us measures \(u'\).) Note that the term \(\frac{vu'}{c^2}\) becomes very small at low velocities, and \(u = \frac{v + u'}{1 + \frac{vu'}{c^2}}\) gives a result very close to classical velocity addition. As before, we see that classical velocity addition is an excellent approximation to the correct relativistic formula for small velocities. No wonder that it seems correct in our experience.
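The behavior of the relativistic formula is easy to explore numerically. The sketch below (an illustration with function names of our own choosing) compares relativistic and classical addition for an everyday case and for light emitted from a moving source; at low speeds the two results are indistinguishable, while for light the relativistic sum is exactly \(c\).

```python
def add_velocities(v, u_prime, c=3.00e8):
    """One-dimensional relativistic velocity addition: u = (v + u') / (1 + v*u'/c^2)."""
    return (v + u_prime) / (1.0 + v * u_prime / c ** 2)

c = 3.00e8   # speed of light in m/s

cases = [
    ("snowball thrown forward from the sled", 1.0, 1.5),        # everyday speeds, in m/s
    ("light from a ship approaching at 0.500c", 0.500 * c, c),  # relativistic case
]

for label, v, u_prime in cases:
    u_rel = add_velocities(v, u_prime)
    u_cls = v + u_prime
    print(f"{label}: relativistic u = {u_rel:.4g} m/s, classical u = {u_cls:.4g} m/s")
```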
Example \(\PageIndex{1}\): Showing that the Speed of Light towards an Observer is Constant (in a Vacuum): The Speed of Light is the Speed of Light
Suppose a spaceship heading directly towards the Earth at half the speed of light sends a signal to us on a laser-produced beam of light. Given that the light leaves the ship at speed \(c\) as observed from the ship, calculate the speed at which it approaches the Earth.
Strategy
Because the light and the spaceship are moving at relativistic speeds, we cannot use simple velocity addition. Instead, we can determine the speed at which the light approaches the Earth using relativistic velocity addition.
Solution
- Identify the knowns: \(v = 0.500 c\); \(u' = c\)
- Identify the unknown: \(u\)
- Choose the appropriate equation: \(u = \frac{v + u'}{1 + \frac{vu'}{c^2}}\).
- Plug the knowns into the equation. \[u = \dfrac{v + u'}{1 + \frac{vu'}{c^2}} = \dfrac{0.500 c + c}{1 + \frac{(0.500c)(c)}{c^2}} = \dfrac{(0.500 + 1)c}{1 + \frac{0.500c^2}{c^2}} = \dfrac{1.500 c}{1 + 0.500} = \dfrac{1.500c}{1.500} = c\]
Discussion
Relativistic velocity addition gives the correct result. Light leaves the ship at speed \(c\) and approaches the Earth at speed \(c\). The speed of light is independent of the relative motion of source and observer, whether the observer is on the ship or Earth-bound.
Velocities cannot add to greater than the speed of light, provided that \(v\) is less than \(c\) and \(u'\) does not exceed \(c\). The following example illustrates that relativistic velocity addition is not as symmetric as classical velocity addition.
Example \(\PageIndex{2}\): Comparing the Speed of Light towards and away from an Observer: Relativistic Package Delivery
Suppose the spaceship in the previous example is approaching the Earth at half the speed of light and shoots a canister at a speed of \(0.750 c\).
- At what velocity will an Earth-bound observer see the canister if it is shot directly towards the Earth?
- If it is shot directly away from the Earth? (Figure \(\PageIndex{5}\)).
Strategy
Because the canister and the spaceship are moving at relativistic speeds, we must determine the speed of the canister by an Earth-bound observer using relativistic velocity addition instead of simple velocity addition.
Solution for (a)
- Identify the knowns: \(v = 0.500 c\); \(u' = 0.750 c\)
- Identify the unknown: \(u\)
- Choose the appropriate equation: \(u = \frac{v + u'}{1 + \frac{vu'}{c^2}}\)
- Plug the knowns into the equation: \[ u = \dfrac{v + u'}{1 + \frac{vu'}{c^2}} = \dfrac{0.500 c + 0.750 c}{1 + \frac{(0.500 c)(0.750 c)}{c^2}} = \dfrac{1.250 c}{1 + 0.375} = 0.909 c\]
Solution for (b)
- Identify the knowns: \(v = 0.500 c\); \(u' = -0.750 c\)
- Identify the unknown: \(u\)
- Choose the appropriate equation: \(u = \frac{v + u'}{1 + \frac{vu'}{c^2}}\)
- Plug the knowns into the equation: \[u = \dfrac{v + u'}{1 + \frac{vu'}{c^2}} = \dfrac{0.500 c + (-0.750 c)}{1 + \frac{(0.500 c)(-0.750 c)}{c^2}} = \dfrac{-0.250 c}{1 - 0.375} = -0.400 c\]
Discussion
The minus sign indicates velocity away from the Earth (in the opposite direction from \(v\)), which means the canister is heading towards the Earth in part (a) and away in part (b), as expected. But relativistic velocities do not add as simply as they do classically. In part (a), the canister does approach the Earth faster, but not at the simple sum of \(1.250 c\). The total velocity is less than you would get classically. And in part (b), the canister moves away from the Earth at a velocity of \(-0.400c\), which is faster than the \(-0.250 c\) you would expect classically. The velocities are not even symmetric. In part (a) the canister moves \(0.409 c\) faster than the ship relative to the Earth, whereas in part (b) it moves \(0.900 c\) slower than the ship.
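The asymmetry described in this discussion can be verified directly. In the sketch below (illustrative only; speeds are expressed as fractions of \(c\) for convenience), the same addition formula handles both directions simply by changing the sign of \(u'\).

```python
def add_velocities(v, u_prime):
    """Relativistic velocity addition with all speeds expressed as fractions of c."""
    return (v + u_prime) / (1.0 + v * u_prime)

v = 0.500                              # ship speed toward the Earth, in units of c
u_toward = add_velocities(v, 0.750)    # canister shot toward the Earth
u_away = add_velocities(v, -0.750)     # canister shot away from the Earth

print(f"toward: u = {u_toward:.3f}c, {u_toward - v:.3f}c faster than the ship")
print(f"away:   u = {u_away:.3f}c, {v - u_away:.3f}c slower than the ship")
```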
Doppler Shift
Although the speed of light does not change with relative velocity, the frequencies and wavelengths of light do. First discussed for sound waves, a Doppler shift occurs in any wave when there is relative motion between source and observer.
RELATIVISTIC DOPPLER EFFECTS
The observed wavelength of electromagnetic radiation is longer (called a red shift) than that emitted by the source when the source moves away from the observer and shorter (called a blue shift) when the source moves towards the observer.
\[\lambda_{obs} = \lambda_s \sqrt{\dfrac{1 + \frac{u}{c}}{1 - \frac{u}{c}}}\]
In the Doppler equation \(\lambda_{obs}\) is the observed wavelength, \(\lambda_s\) is the source wavelength, and \(u\) is the relative velocity of the source to the observer. The velocity \(u\) is positive for motion away from an observer and negative for motion toward an observer. In terms of source frequency and observed frequency, this equation can be written \[f_{obs} = f_s \sqrt{\dfrac{1 - \frac{u}{c}}{1 + \frac{u}{c}}}.\] Notice that the – and + signs are different than in the wavelength equation.
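Both forms of the relativistic Doppler formula are straightforward to evaluate. The following sketch (our own illustration; the function names are assumptions, not standard library calls) reproduces the numbers that appear in the example and the exercise later in this section.

```python
import math

def doppler_wavelength(lam_source, u_over_c):
    """Observed wavelength; u_over_c > 0 means the source recedes from the observer."""
    return lam_source * math.sqrt((1.0 + u_over_c) / (1.0 - u_over_c))

def doppler_frequency(f_source, u_over_c):
    """Observed frequency, with the same sign convention for u_over_c."""
    return f_source * math.sqrt((1.0 - u_over_c) / (1.0 + u_over_c))

# Receding galaxy: 0.525 m radio waves emitted while moving away at 0.825c
print(f"observed wavelength = {doppler_wavelength(0.525, 0.825):.2f} m")   # about 1.70 m

# Receding space probe: a 1.50 GHz signal sent while moving away at 0.350c
print(f"observed frequency  = {doppler_frequency(1.50, 0.350):.2f} GHz")   # about 1.04 GHz
```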
CAREER CONNECTION: ASTRONOMER
If you are interested in a career that requires a knowledge of special relativity, there’s probably no better connection than astronomy. Astronomers must take into account relativistic effects when they calculate distances, times, and speeds of black holes, galaxies, quasars, and all other astronomical objects. To have a career in astronomy, you need at least an undergraduate degree in either physics or astronomy, but a Master’s or doctoral degree is often required. You also need a good background in high-level mathematics.
Example \(\PageIndex{3}\): Calculating a Doppler Shift: Radio Waves from a Receding Galaxy
Suppose a galaxy is moving away from the Earth at a speed \(0.825 c\). It emits radio waves with a wavelength of \(0.525 \, m\).
What wavelength would we detect on the Earth?
Strategy
Because the galaxy is moving at a relativistic speed, we must determine the Doppler shift of the radio waves using the relativistic Doppler shift instead of the classical Doppler shift.
Solution
- Identify the knowns: \(u = 0.825 c\); \(\lambda_s = 0.525 \, m\)
- Identify the unknown: \(\lambda_{obs}\)
- Choose the appropriate equation: \(\lambda_{obs} = \lambda_s \sqrt{\dfrac{1 + \frac{u}{c}}{1 - \frac{u}{c}}}\)
- Plug the knowns into the equation \[\lambda_{obs} = \lambda_s \sqrt{\dfrac{1 + \frac{u}{c}}{1 - \frac{u}{c}}} = (0.525 \, m)\sqrt{\dfrac{1 + \frac{0.825 c}{c}}{1 - \frac{0.825 c}{c}}} = 1.70 \, m.\]
Discussion
Because the galaxy is moving away from the Earth, we expect the wavelengths of radiation it emits to be redshifted. The wavelength we calculated is 1.70 m, which is redshifted from the original wavelength of 0.525 m.
The relativistic Doppler shift is easy to observe. This equation has everyday applications ranging from Doppler-shifted radar velocity measurements of transportation to Doppler-radar storm monitoring. In astronomical observations, the relativistic Doppler shift provides velocity information such as the motion and distance of stars.
Exercise \(\PageIndex{1}\)
Suppose a space probe moves away from the Earth at a speed \(0.350 c\). It sends a radio wave message back to the Earth at a frequency of 1.50 GHz. At what frequency is the message received on the Earth?
- Answer
-
\[f_{obs} = f_s \sqrt{\dfrac{1 - \frac{u}{c}}{1 + \frac{u}{c}}} = (1.50 \, GHz) \sqrt{\dfrac{1 - \frac{0.350 c}{c}}{1 + \frac{0.350 c}{c}}} = 1.04 \, GHz \nonumber\]
Summary
- With classical velocity addition, velocities add like regular numbers in one-dimensional motion: \(u = v + u'\), where \(v\) is the velocity between two observers, \(u\) is the velocity of an object relative to one observer, and \(u'\) is the velocity relative to the other observer.
- Velocities cannot add to be greater than the speed of light. Relativistic velocity addition describes the velocities of an object moving at a relativistic speed: \[u = \dfrac{v + u'}{1 + \frac{vu'}{c^2}} \nonumber\]
- An observer of electromagnetic radiation sees relativistic Doppler effects if the source of the radiation is moving relative to the observer. The wavelength of the radiation is longer (called a red shift) than that emitted by the source when the source moves away from the observer and shorter (called a blue shift) when the source moves toward the observer. The shifted wavelength is described by the equation \[\lambda_{obs} = \lambda_s \sqrt{\dfrac{1 + \frac{u}{c}}{1 - \frac{u}{c}}} \nonumber\] \(\lambda_{obs}\) is the observed wavelength, \(\lambda_s\) is the source wavelength, and \(u\) is the relative velocity of the source to the observer.
Glossary
- classical velocity addition
- the method of adding velocities when \(v \ll c\); velocities add like regular numbers in one-dimensional motion: \(u = v + u'\), where \(v\) is the velocity between two observers, \(u\) is the velocity of an object relative to one observer, and \(u'\) is the velocity relative to the other observer.
- relativistic velocity addition
- the method of adding velocities of an object moving at a relativistic speed: \(u = \frac{v + u'}{1 + \frac{vu'}{c^2}}\), where \(v\) is the relative velocity between two observers, \(u\) is the velocity of an object relative to one observer, and \(u'\) is the velocity relative to the other observer
- relativistic Doppler effects
- a change in wavelength of radiation that is moving relative to the observer; the wavelength of the radiation is longer (called a red shift) than that emitted by the source when the source moves away from the observer and shorter (called a blue shift) when the source moves toward the observer; the shifted wavelength is described by the equation \[\lambda_{obs} = \lambda_s \sqrt{\dfrac{1 + \frac{u}{c}}{1 - \frac{u}{c}}}\] where \(\lambda_{obs}\) is the observed wavelength, \(\lambda_s\) is the source wavelength, and \(u\) is the velocity of the source to the observer
28.5: Relativistic Momentum
Learning Objectives
By the end of this section, you will be able to:
- Calculate relativistic momentum.
- Explain why the only mass it makes sense to talk about is rest mass.
In classical physics, momentum is a simple product of mass and velocity. However, we saw in the last section that when special relativity is taken into account, massive objects have a speed limit. What effect do you think mass and velocity have on the momentum of objects moving at relativistic speeds?
Momentum is one of the most important concepts in physics. The broadest form of Newton’s second law is stated in terms of momentum. Momentum is conserved whenever the net external force on a system is zero. This makes momentum conservation a fundamental tool for analyzing collisions. All of Linear Momentum and Collisions is devoted to momentum, and momentum has been important for many other topics as well, particularly where collisions were involved. We will see that momentum has the same importance in modern physics. Relativistic momentum is conserved, and much of what we know about subatomic structure comes from the analysis of collisions of accelerator-produced relativistic particles.
The first postulate of relativity states that the laws of physics are the same in all inertial frames. Does the law of conservation of momentum survive this requirement at high velocities? The answer is yes, provided that the momentum is defined as follows.
Definition: Relativistic Momentum
Relativistic momentum \(p\) is classical momentum multiplied by the relativistic factor \(\gamma\)
\[p = \gamma mu,\]
where \(m\) is the rest mass of the object, \(u\) is its velocity relative to an observer, and the relativistic factor
\[\gamma = \dfrac{1}{\sqrt{1 - \dfrac{u^2}{c^2}}}.\]
Note that we use \(u\) for velocity here to distinguish it from relative velocity \(v\) between observers. Only one observer is being considered here. With \(p\) defined in this way, total momentum \(p_{tot}\) is conserved whenever the net external force is zero, just as in classical physics. Again we see that the relativistic quantity becomes virtually the same as the classical at low velocities. That is, relativistic momentum \(\gamma mu\) becomes the classical \(mu\) at low velocities, because \(\gamma\) is very nearly equal to 1 at low velocities.
Relativistic momentum has the same intuitive feel as classical momentum. It is greatest for large masses moving at high velocities, but, because of the factor \(\gamma\), relativistic momentum approaches infinity as \(u\) approaches \(c\) (Figure \(\PageIndex{2}\)). This is another indication that an object with mass cannot reach the speed of light. If it did, its momentum would become infinite, an unreasonable value.
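Because \(p = \gamma m u\), the ratio of relativistic to classical momentum is just \(\gamma\), which grows without bound as \(u\) approaches \(c\). The short sketch below (illustrative; constants and names are our own) makes this growth explicit for an electron.

```python
import math

c = 3.00e8        # speed of light in m/s
m_e = 9.11e-31    # electron rest mass in kg

def relativistic_momentum(m, u):
    """p = gamma * m * u for an object of rest mass m moving at speed u."""
    gamma = 1.0 / math.sqrt(1.0 - (u / c) ** 2)
    return gamma * m * u

for beta in (0.010, 0.500, 0.900, 0.990, 0.999):
    u = beta * c
    ratio = relativistic_momentum(m_e, u) / (m_e * u)   # this ratio is just gamma
    print(f"u = {beta:.3f}c   p_relativistic / p_classical = {ratio:6.2f}")
```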
MISCONCEPTION ALERT: RELATIVISTIC MASS AND MOMENTUM
The relativistically correct definition of momentum as \(p = \gamma mu\), is sometimes taken to imply that mass varies with velocity: \(m_{var} = \gamma m\), particularly in older textbooks. However, note that \(m\) is the mass of the object as measured by a person at rest relative to the object. Thus, \(m\) is defined to be the rest mass, which could be measured at rest, perhaps using gravity. When a mass is moving relative to an observer, the only way that its mass can be determined is through collisions or other means in which momentum is involved. Since the mass of a moving object cannot be determined independently of momentum, the only meaningful mass is rest mass. Thus, when we use the term mass, assume it to be identical to rest mass.
Relativistic momentum is defined in such a way that the conservation of momentum will hold in all inertial frames. Whenever the net external force on a system is zero, relativistic momentum is conserved, just as is the case for classical momentum. This has been verified in numerous experiments.
In Section on Relativistic Energy , the relationship of relativistic momentum to energy is explored. That subject will produce our first inkling that objects without mass may also have momentum.
Exercise \(\PageIndex{1}\)
What is the momentum of an electron traveling at a speed \(0.985 c\)? The rest mass of the electron is \(9.11 \times 10^{-31} \, kg\).
- Answer
-
\[ \begin{align*} p &= \gamma mu \\[5pt] &= \dfrac{mu}{\sqrt{1 - \frac{u^2}{c^2}}} \\[5pt] &= \dfrac{(9.11 \times 10^{-31} \, kg)(0.985)(3.00 \times 10^8 \, m/s)}{\sqrt{1 - \frac{(0.985c)^2}{c^2}}} \\[5pt] &= 1.56 \times 10^{-21} \, kg \cdot m/s \end{align*} \]
Summary
- The law of conservation of momentum is valid whenever the net external force is zero and for relativistic momentum. Relativistic momentum \(p\) is classical momentum multiplied by the relativistic factor \(\gamma\)
- \(p = \gamma mu\), where \(m\) is the rest mass of the object, \(u\) is its velocity relative to an observer, and the relativistic factor \(\gamma = \frac{1}{\sqrt{1 - \frac{u^2}{c^2}}}.\)
- At low velocities, relativistic momentum is equivalent to classical momentum.
- Relativistic momentum approaches infinity as \(u\) approaches \(c\). This implies that an object with mass cannot reach the speed of light.
- Relativistic momentum is conserved, just as classical momentum is conserved.
Glossary
- relativistic momentum
- \(p\), the momentum of an object moving at relativistic velocity; \(p = \gamma mu\), where \(m\) is the rest mass of the object, \(u\) is its velocity relative to an observer, and the relativistic factor \(\gamma = \frac{1}{\sqrt{1 - \frac{u^2}{c^2}}}\)
- rest mass
- the mass of an object as measured by a person at rest relative to the object
28.6: Relativistic Energy
Learning Objectives
By the end of this section, you will be able to:
- Compute total energy of a relativistic object.
- Compute the kinetic energy of a relativistic object.
- Describe rest energy, and explain how it can be converted to other forms.
- Explain why massive particles cannot reach \(c\).
A tokamak is a form of experimental fusion reactor, which can change mass to energy. Accomplishing this requires an understanding of relativistic energy. Nuclear reactors are proof of the conservation of relativistic energy.
Conservation of energy is one of the most important laws in physics. Not only does energy have many important forms, but each form can be converted to any other. We know that classically the total amount of energy in a system remains constant. Relativistically, energy is still conserved, provided its definition is altered to include the possibility of mass changing to energy, as in the reactions that occur within a nuclear reactor. Relativistic energy is intentionally defined so that it will be conserved in all inertial frames, just as is the case for relativistic momentum. As a consequence, we learn that several fundamental quantities are related in ways not known in classical physics. All of these relationships are verified by experiment and have fundamental consequences. The altered definition of energy contains some of the most fundamental and spectacular new insights into nature found in recent history.
Total Energy and Rest Energy
The first postulate of relativity states that the laws of physics are the same in all inertial frames. Einstein showed that the law of conservation of energy is valid relativistically, if we define energy to include a relativistic factor.
Definition: Total Energy
Total energy \(E\) is defined to be
\[E = \gamma mc^2,\]
where \(m\) is mass, \(c\) is the speed of light, \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\), and \(v\) is the velocity of the mass relative to an observer.
There are many aspects of the total energy \(E\) that we will discuss—among them are how kinetic and potential energies are included in \(E\), and how \(E\) is related to relativistic momentum. But first, note that at rest, total energy is not zero. Rather, when \(v = 0\), we have \(\gamma = 1\), and an object has rest energy.
Definition: Rest Energy
Rest energy is
\[E_0 = mc^2.\]
This is the correct form of Einstein’s most famous equation, which for the first time showed that energy is related to the mass of an object at rest. For example, if energy is stored in the object, its rest mass increases. This also implies that mass can be destroyed to release energy. The implications of these first two equations regarding relativistic energy are so broad that they were not completely recognized for some years after Einstein published them in 1907, nor was the experimental proof that they are correct widely recognized at first. Einstein, it should be noted, did understand and describe the meanings and implications of his theory.
Example \(\PageIndex{1}\): Calculating Rest Energy: Rest Energy is Very Large
Calculate the rest energy of a 1.00-g mass.
Strategy
One gram is a small mass—less than half the mass of a penny. We can multiply this mass, in SI units, by the speed of light squared to find the equivalent rest energy.
Solution
- Identify the knowns: \(m = 1.00 \times 10^{-3} \, kg\); \(c = 3.00 \times 10^8 \, m/s\)
- Identify the unknown: \(E_0\)
- Choose the appropriate equation: \(E_0 = mc^2\)
- Plug the knowns into the equation: \[ \begin{align*} E_0 &= mc^2 \\[4pt] &= (1.00 \times 10^{-3} \, kg)(3.00 \times 10^8 \, m/s)^2 \\[4pt] &= 9.00 \times 10^{13} \, kg \cdot m^2/s^2 \end{align*}\]
- Convert units.
Noting that \(1 \, kg \cdot m^2/s^2 = 1 \, J\), we see the rest mass energy is \[E_0 = 9.00 \times 10^{13} \, J.\]
Discussion
This is an enormous amount of energy for a 1.00-g mass. We do not notice this energy, because it is generally not available. Rest energy is large because the speed of light \(c\) is a very large number, so that \(mc^2\) is huge for any macroscopic mass. The \(9.00 \times 10^{13} \, J\) rest mass energy for 1.00 g is about twice the energy released by the Hiroshima atomic bomb and about 10,000 times the kinetic energy of a large aircraft carrier. If a way can be found to convert rest mass energy into some other form (and all forms of energy can be converted into one another), then huge amounts of energy can be obtained from the destruction of mass.
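The arithmetic of this example is a one-liner in Python (shown here only as an illustration of the formula \(E_0 = mc^2\)):

```python
m = 1.00e-3    # mass in kg (1.00 g)
c = 3.00e8     # speed of light in m/s

E0 = m * c ** 2               # rest energy in joules
print(f"E0 = {E0:.2e} J")     # prints about 9.00e+13 J
```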
Today, the practical applications of the conversion of mass into another form of energy , such as in nuclear weapons and nuclear power plants, are well known. But examples also existed when Einstein first proposed the correct form of relativistic energy, and he did describe some of them. Nuclear radiation had been discovered in the previous decade, and it had been a mystery as to where its energy originated. The explanation was that, in certain nuclear processes, a small amount of mass is destroyed and energy is released and carried by nuclear radiation. But the amount of mass destroyed is so small that it is difficult to detect that any is missing. Although Einstein proposed this as the source of energy in the radioactive salts then being studied, it was many years before there was broad recognition that mass could be and, in fact, commonly is converted to energy (Figure \(\PageIndex{1}\)).
Because of the relationship of rest energy to mass, we now consider mass to be a form of energy rather than something separate. There had not even been a hint of this prior to Einstein’s work. Such conversion is now known to be the source of the Sun’s energy, the energy of nuclear decay, and even the source of energy keeping Earth’s interior hot.
Stored Energy and Potential Energy
What happens to energy stored in an object at rest, such as the energy put into a battery by charging it, or the energy stored in a toy gun’s compressed spring? The energy input becomes part of the total energy of the object and, thus, increases its rest mass. All stored and potential energy becomes mass in a system. Why is it we don’t ordinarily notice this? In fact, conservation of mass (meaning total mass is constant) was one of the great laws verified by 19th-century science. Why was it not noticed to be incorrect? The following example helps answer these questions.
Example \(\PageIndex{2}\): Calculating Rest Mass: A Small Mass Increase due to Energy Input
A car battery is rated to be able to move 600 ampere-hours \((A \cdot h)\) of charge at 12.0 V.
- Calculate the increase in rest mass of such a battery when it is taken from being fully depleted to being fully charged.
- What percent increase is this, given the battery’s mass is 20.0 kg?
Strategy
In part (a), we first must find the energy stored in the battery, which equals what the battery can supply in the form of electrical potential energy. Since \(PE_{elec} = qV\), we have to calculate the charge \(q\) in \(600 \, A \cdot h\), which is the product of the current \(I\) and the time \(t\). We then multiply the result by 12.0 V. We can then calculate the battery’s increase in mass using \(\Delta E = PE_{elec} = (\Delta m)c^2\).
Part (b) is a simple ratio converted to a percentage.
Solution for (a)
- Identify the knowns: \(I \cdot t = 600 \, A \cdot h\); \(V = 12.0 \, V\); \(c = 3.00 \times 10^8 \, m/s\)
- Identify the unknown: \(\Delta m\)
- Choose the appropriate equation: \(PE_{elec} = (\Delta m)c^2\)
- Rearrange the equation to solve for the unknown: \(\Delta m = \frac{PE_{elec}}{c^2}\)
- Plug the knowns into the equation: \[ \Delta m = \dfrac{PE_{elec}}{c^2} = \dfrac{qV}{c^2} = \dfrac{(It)V}{c^2} = \dfrac{(600 \, A \cdot h)(12.0 \, V)}{(3.00 \times 10^8 \, m/s)^2}.\] Write amperes A as coulombs per second (C/s), and convert hours to seconds. \[\Delta m = \dfrac{(600 \, C/s \cdot h)\left(\frac{3600 \, s}{1 \, h}\right)(12.0 \, J/C)}{(3.00 \times 10^8 \, m/s)^2} = \dfrac{(2.16 \times 10^6 \, C)(12.0 \, J/C)}{(3.00 \times 10^8 \, m/s)^2}\] Using the conversion \(1 \, kg \cdot m^2/s^2 = 1 \, J\), we can write the mass as \(\Delta m = 2.88 \times 10^{-10} \, kg\).
Solution for (b)
- Identify the knowns: \(\Delta m = 2.88 \times 10^{-10} \, kg\); \(m = 20.0 \, kg\)
- Identify the unknown: % change
- Choose the appropriate equation: \(\% \, increase = \frac{\Delta m}{m} \times 100\%\)
- Plug the knowns into the equation: \[\% \, increase = \dfrac{\Delta m}{m} \times 100\% = \dfrac{2.88 \times 10^{-10} \, kg}{20.0 \, kg} \times 100\% = 1.44 \times 10^{-9}\%\]
Discussion
Both the actual increase in mass and the percent increase are very small, since energy is divided by \(c^2\), a very large number. We would have to be able to measure the mass of the battery to a precision of a billionth of a percent, or 1 part in \(10^{11}\), to notice this increase. It is no wonder that the mass variation is not readily observed. In fact, this change in mass is so small that we may question how you could verify it is real. The answer is found in nuclear processes in which the percentage of mass destroyed is large enough to be measured. The mass of the fuel of a nuclear reactor, for example, is measurably smaller when its energy has been used. In that case, stored energy has been released (converted mostly to heat and electricity) and the rest mass has decreased. This is also the case when you use the energy stored in a battery, except that the stored energy is much greater in nuclear processes, making the change in mass measurable in practice as well as in theory.
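The chain of conversions in this example (ampere-hours to coulombs, charge times voltage to energy, energy to mass) can be reproduced with a short sketch (illustrative only; variable names are ours):

```python
charge = 600 * 3600      # 600 A·h expressed in coulombs
voltage = 12.0           # battery voltage in volts
c = 3.00e8               # speed of light in m/s
battery_mass = 20.0      # kg

energy = charge * voltage             # stored electrical energy in joules
delta_m = energy / c ** 2             # equivalent increase in rest mass, in kg
percent = delta_m / battery_mass * 100.0

print(f"stored energy  = {energy:.2e} J")
print(f"mass increase  = {delta_m:.2e} kg ({percent:.2e} percent)")
```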
Kinetic Energy and the Ultimate Speed Limit
Kinetic energy is energy of motion. Classically, kinetic energy has the familiar expression \(\frac{1}{2} mv^2\). The relativistic expression for kinetic energy is obtained from the work-energy theorem. This theorem states that the net work on a system goes into kinetic energy. If our system starts from rest, then the work-energy theorem is
\[W_{net} = KE.\]
Relativistically, at rest we have rest energy \(E_0 = mc^2\). The work increases this to the total energy \(E = \gamma mc^2\). Thus,
\[W_{net} = E - E_0 = \gamma mc^2 - mc^2 = (\gamma - 1)mc^2.\]
Relativistically, we have \(W_{net} = KE_{rel}.\)
Definition: Relativistic Kinetic Energy
Relativistic kinetic energy is
\[KE_{rel} = (\gamma - 1)mc^2.\]
When motionless, we have \(v = 0\) and
\[\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}} = 1,\]
so that \(KE_{rel} = 0\) at rest, as expected. But the expression for relativistic kinetic energy (like those for total energy and rest energy) does not look much like the classical \(\frac{1}{2}mv^2\). To show that the classical expression for kinetic energy is obtained at low velocities, we note that the binomial expansion for \(\gamma\) at low velocities gives
\[\gamma = 1 + \dfrac{1}{2} \dfrac{v^2}{c^2}.\]
Entering this into the expression for relativistic kinetic energy gives
\[KE_{rel} = \left[\dfrac{1}{2} \dfrac{v^2}{c^2} \right] mc^2 = \dfrac{1}{2}mv^2 = KE_{class}.\]
So, in fact, relativistic kinetic energy does become the same as classical kinetic energy when \(v \ll c\).
It is even more interesting to investigate what happens to kinetic energy when the velocity of an object approaches the speed of light. We know that \(\gamma\) becomes infinite as \(v\) approaches \(c\), so that \(KE_{rel}\) also becomes infinite as the velocity approaches the speed of light (Figure \(\PageIndex{1}\)). An infinite amount of work (and, hence, an infinite amount of energy input) is required to accelerate a mass to the speed of light.
Definition: Speed of Light
No object with mass can attain the speed of light.
So the speed of light is the ultimate speed limit for any particle having mass. All of this is consistent with the fact that velocities less than \(c\) always add to less than \(c\). Both the relativistic form for kinetic energy and the ultimate speed limit being \(c\) have been confirmed in detail in numerous experiments. No matter how much energy is put into accelerating a mass, its velocity can only approach—not reach—the speed of light.
Example \(\PageIndex{3}\): Comparing Kinetic Energy: Relativistic Energy Versus Classical Kinetic Energy
An electron has a velocity \(v = 0.990 c\).
- Calculate the kinetic energy in MeV of the electron.
- Compare this with the classical value for kinetic energy at this velocity. (The mass of an electron is \(9.11 \times 10^{-31} \, kg\).)
Strategy
The expression for relativistic kinetic energy is always correct, but for (a) it must be used since the velocity is highly relativistic (close to \(c\)). First, we will calculate the relativistic factor \(\gamma\), and then use it to determine the relativistic kinetic energy. For (b), we will calculate the classical kinetic energy (which would be close to the relativistic value if \(v\) were less than a few percent of \(c\)) and see that it is not the same.
Solution for (a)
- Identify the knowns: \(v = 0.990 c\); \(m = 9.11 \times 10^{-31} \, kg\)
- Identify the unknown: \(KE_{rel}\)
- Choose the appropriate equation \(KE_{rel} = (\gamma - 1) mc^2\)
- Plug the knowns into the equation:
First calculate \(\gamma\). We will carry extra digits because this is an intermediate calculation.
\[\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}} = \dfrac{1}{\sqrt{1 - \frac{(0.990 c)^2}{c^2}}} = \dfrac{1}{\sqrt{1 - (0.990)^2}} = 7.0888\]
Next, we use this value to calculate the kinetic energy.
\[KE_{rel} = (\gamma - 1)mc^2 = (7.0888 -1)(9.11 \times 10^{-31} \, kg)(3.00 \times 10^8 \, m/s)^2 = 4.99 \times 10^{-13} \, J\]
- Convert units:
\[KE_{rel} = (4.99 \times 10^{-13} \, J)\left( \dfrac{1 \, MeV}{1.60 \times 10^{-13} \, J} \right) = 3.12 \, MeV\]
Solution for (b)
- List the knowns: \(v = 0.990 c\); \(m = 9.11 \times 10^{-31} \, kg\)
- List the unknown: \(KE_{class}\)
- Choose the appropriate equation: \(KE_{class} = \frac{1}{2} mv^2\)
- Plug the knowns into the equation: \[KE_{class} = \dfrac{1}{2} mv^2\] \[ = \dfrac{1}{2}(9.11 \times 10^{-31} \, kg)(0.990)^2(3.00 \times 10^8 \, m/s)^2\]\[= 4.02 \times 10^{-14} \, J\]
- Convert units: \[KE_{class} = (4.02 \times 10^{-14} \, J) \left(\dfrac{1 \, MeV}{1.60 \times 10^{-13} \, J}\right) = 0.251 \, MeV\]
Discussion
As might be expected, since the velocity is 99.0% of the speed of light, the classical kinetic energy is significantly off from the correct relativistic value. Note also that the classical value is much smaller than the relativistic value. In fact, \(KE_{rel}/KE_{class} = 12.4\) here. This is some indication of how difficult it is to get a mass moving close to the speed of light. Much more energy is required than predicted classically. Some people interpret this extra energy as going into increasing the mass of the system, but, as discussed in Relativistic Momentum , this cannot be verified unambiguously. What is certain is that ever-increasing amounts of energy are needed to get the velocity of a mass a little closer to that of light. An energy of 3 MeV is a very small amount for an electron, and it can be achieved with present-day particle accelerators. SLAC, for example, can accelerate electrons to over \(50 \times 10^9 \, eV = 50,000 MeV\).
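The numbers in this example can be cross-checked with a few lines of Python, using the same rounded constants quoted above (a sketch for verification only):

```python
import math

c = 3.00e8              # speed of light, m/s
m_e = 9.11e-31          # electron mass, kg
J_per_MeV = 1.60e-13    # conversion factor used in the text

v = 0.990 * c
gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # ~7.0888

ke_rel = (gamma - 1) * m_e * c ** 2       # ~4.99e-13 J
ke_class = 0.5 * m_e * v ** 2             # ~4.02e-14 J

print(ke_rel / J_per_MeV)    # ~3.12 MeV
print(ke_class / J_per_MeV)  # ~0.251 MeV
print(ke_rel / ke_class)     # ~12.4
```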
Is there any point in getting \(v\) a little closer to c than 99.0% or 99.9%? The answer is yes. We learn a great deal by doing this. The energy that goes into a high-velocity mass can be converted to any other form, including into entirely new masses. (See Figure .) Most of what we know about the substructure of matter and the collection of exotic short-lived particles in nature has been learned this way. Particles are accelerated to extremely relativistic energies and made to collide with other particles, producing totally new species of particles. Patterns in the characteristics of these previously unknown particles hint at a basic substructure for all matter. These particles and some of their characteristics will be covered in Particle Physics .
Relativistic Energy and Momentum
We know classically that kinetic energy and momentum are related to each other, since \[KE_{class} = \dfrac{p^2}{2m} = \dfrac{(mv)^2}{2m} = \dfrac{1}{2} mv^2.\]
Relativistically, we can obtain a relationship between energy and momentum by algebraically manipulating their definitions. This produces
\[E^2 = (pc)^2 + (mc^2)^2,\]
where \(E\) is the relativistic total energy and \(p\) is the relativistic momentum. This relationship between relativistic energy and relativistic momentum is more complicated than the classical one, but we can gain some interesting new insights by examining it. First, total energy is related to momentum and rest mass. At rest, momentum is zero, and the equation gives the total energy to be the rest energy \(mc^2\) (so this equation is consistent with the discussion of rest energy above). However, as the mass is accelerated, its momentum \(p\) increases, thus increasing the total energy. At sufficiently high velocities, the rest energy term \((mc^2)^2\) becomes negligible compared with the momentum term \((pc)^2\); thus, \(E = pc\) at extremely relativistic velocities.
If we consider momentum \(p\) to be distinct from mass, we can determine the implications of the equation \(E^2 = (pc)^2 + (mc^2)^2\) for a particle that has no mass. If we take \(m\) to be zero in this equation, then \(E = pc\), or \(p = E/c\). Massless particles have this momentum. There are several massless particles found in nature, including photons (these are quanta of electromagnetic radiation). Another implication is that a massless particle must travel at speed \(c\) and only at speed \(c\). While it is beyond the scope of this text to examine the relationship in the equation \(E^2 = (pc)^2 + (mc^2)^2\) in detail, we can see that the relationship has important implications in special relativity.
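The identity \(E^2 = (pc)^2 + (mc^2)^2\) and the extreme-relativistic limit \(E \approx pc\) are easy to illustrate numerically. The sketch below uses the electron values quoted earlier in this section; the particular speeds chosen are arbitrary:

```python
import math

c = 3.00e8     # speed of light, m/s
m = 9.11e-31   # electron mass, kg

for beta in (0.5, 0.9, 0.99, 0.9999):
    gamma = 1 / math.sqrt(1 - beta ** 2)
    E = gamma * m * c ** 2            # total energy, J
    p = gamma * m * beta * c          # relativistic momentum, kg·m/s
    identity_holds = math.isclose(E ** 2, (p * c) ** 2 + (m * c ** 2) ** 2)
    print(beta, E / (p * c), identity_holds)   # E/(pc) -> 1 as beta -> 1
```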
PROBLEM-SOLVING STRATEGIES FOR RELATIVITY
- Examine the situation to determine that it is necessary to use relativity . Relativistic effects are related to \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\), the quantitative relativistic factor. If \(\gamma\) is very close to 1, then relativistic effects are small and differ very little from the usually easier classical calculations.
- Identify exactly what needs to be determined in the problem (identify the unknowns).
- Make a list of what is given or can be inferred from the problem as stated (identify the knowns). Look in particular for information on relative velocity \(v\).
- Make certain you understand the conceptual aspects of the problem before making any calculations. Decide, for example, which observer sees time dilated or length contracted before plugging into equations. If you have thought about who sees what, who is moving with the event being observed, who sees proper time, and so on, you will find it much easier to determine if your calculation is reasonable.
- Determine the primary type of calculation to be done to find the unknowns identified above. You will find the section summary helpful in determining whether a length contraction, relativistic kinetic energy, or some other concept is involved.
- Do not round off during the calculation. As noted in the text, you must often perform your calculations to many digits to see the desired effect. You may round off at the very end of the problem, but do not use a rounded number in a subsequent calculation.
- Check the answer to see if it is reasonable: Does it make sense? This may be more difficult for relativity, since we do not encounter it directly. But you can look for velocities greater than \(c\) or relativistic effects that are in the wrong direction (such as a time contraction where a dilation was expected).
Exercise \(\PageIndex{1}\)
A photon decays into an electron-positron pair. What is the kinetic energy of the electron if its speed is \(0.992 c\)?
- Answer
\[\begin{align*} KE_{rel} &= (\gamma -1)mc^2 \\[5pt] &= \left(\dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}} - 1\right) mc^2 \\[5pt] &= \left( \dfrac{1}{\sqrt{1 - \frac{(0.992 c)^2}{c^2}}} - 1\right) (9.11 \times 10^{-31} \, kg)(3.00 \times 10^8 \, m/s)^2 \\[5pt] &= 5.67 \times 10^{-13} \, J \end{align*}\]
Summary
- Relativistic energy is conserved as long as we define it to include the possibility of mass changing to energy.
- Total Energy is defined as: \(E = \gamma mc^2\), where \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\)
- Rest energy is \(E_0 = mc^2\), meaning that mass is a form of energy. If energy is stored in an object, its mass increases. Mass can be destroyed to release energy.
- We do not ordinarily notice the increase or decrease in mass of an object because the change in mass is so small for a large increase in energy.
- The relativistic work-energy theorem is \(W_{net} = E - E_0 = \gamma mc^2 - mc^2 = (\gamma - 1) mc^2\).
- Relativistically, \(W_{net} = KE_{rel}\), where \(KE_{rel}\) is the relativistic kinetic energy.
- Relativistic kinetic energy is \(KE_{rel} = (\gamma - 1) mc^2\), where \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\). At low velocities, relativistic kinetic energy reduces to classical kinetic energy.
- No object with mass can attain the speed of light because an infinite amount of work and an infinite amount of energy input is required to accelerate a mass to the speed of light.
- The equation \(E^2 = (pc)^2 + (mc^2)^2\) relates the relativistic total energy \(E\) and the relativistic momentum \(p\). At extremely high velocities, the rest energy \(mc^2\) becomes negligible, and \(E = pc\).
Glossary
- total energy
- defined as \(E = \gamma mc^2\), where \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\)
- rest energy
- the energy stored in an object at rest: \(E_0 = mc^2\)
- relativistic kinetic energy
- the kinetic energy of an object moving at relativistic speeds: \(KE_{rel} = (\gamma -1) mc^2\), where \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\)
28.E: Special Relativity (Exercise)
Conceptual Questions
28.1: Einstein’s Postulates
1. Which of Einstein’s postulates of special relativity includes a concept that does not fit with the ideas of classical physics? Explain.
2. Is Earth an inertial frame of reference? Is the Sun? Justify your response.
3. When you are flying in a commercial jet, it may appear to you that the airplane is stationary and the Earth is moving beneath you. Is this point of view valid? Discuss briefly.
28.2: Simultaneity and Time Dilation
4. Does motion affect the rate of a clock as measured by an observer moving with it? Does motion affect how an observer moving relative to a clock measures its rate?
5. To whom does the elapsed time for a process seem to be longer, an observer moving relative to the process or an observer moving with the process? Which observer measures proper time?
6. How could you travel far into the future without aging significantly? Could this method also allow you to travel into the past?
28.3: Length Contraction
7. To whom does an object seem greater in length, an observer moving with the object or an observer moving relative to the object? Which observer measures the object’s proper length?
8. Relativistic effects such as time dilation and length contraction are present for cars and airplanes. Why do these effects seem strange to us?
9. Suppose an astronaut is moving relative to the Earth at a significant fraction of the speed of light.
(a) Does he observe the rate of his clocks to have slowed?
(b) What change in the rate of Earth-bound clocks does he see?
(c) Does his ship seem to him to shorten?
(d) What about the distance between stars that lie on lines parallel to his motion?
(e) Do he and an Earth-bound observer agree on his velocity relative to the Earth?
28.4: Relativistic Addition of Velocities
10. Explain the meaning of the terms “red shift” and “blue shift” as they relate to the relativistic Doppler effect.
11. What happens to the relativistic Doppler effect when relative velocity is zero? Is this the expected result?
12. Is the relativistic Doppler effect consistent with the classical Doppler effect in the respect that \(\displaystyle λ_{obs}\) is larger for motion away?
13. All galaxies farther away than about \(\displaystyle 50×10^6ly\) exhibit a red shift in their emitted light that is proportional to distance, with those farther and farther away having progressively greater red shifts. What does this imply, assuming that the only source of red shift is relative motion? (Hint: At these large distances, it is space itself that is expanding, but the effect on light is the same.)
28.5: Relativistic Momentum
14. How does modern relativity modify the law of conservation of momentum?
15. Is it possible for an external force to be acting on a system and relativistic momentum to be conserved? Explain.
28.6: Relativistic Energy
16. How are the classical laws of conservation of energy and conservation of mass modified by modern relativity?
17. What happens to the mass of water in a pot when it cools, assuming no molecules escape or are added? Is this observable in practice? Explain.
18. Consider a thought experiment. You place an expanded balloon of air on weighing scales outside in the early morning. The balloon stays on the scales and you are able to measure changes in its mass. Does the mass of the balloon change as the day progresses? Discuss the difficulties in carrying out this experiment.
19. The mass of the fuel in a nuclear reactor decreases by an observable amount as it puts out energy. Is the same true for the coal and oxygen combined in a conventional power plant? If so, is this observable in practice for the coal and oxygen? Explain.
20. We know that the velocity of an object with mass has an upper limit of c. Is there an upper limit on its momentum? Its energy? Explain.
21. Given the fact that light travels at c, can it have mass? Explain.
22. If you use an Earth-based telescope to project a laser beam onto the Moon, you can move the spot across the Moon’s surface at a velocity greater than the speed of light. Does this violate modern relativity? (Note that light is being sent from the Earth to the Moon, not across the surface of the Moon.)
Problems & Exercises
28.2: Simultaneity and Time Dilation
23. (a) What is \(\displaystyle γ\) if \(\displaystyle v=0.250c\)?
(b) If \(\displaystyle v=0.500c\)?
Solution
(a) 1.0328
(b) 1.15
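Many of the problems in this set only require evaluating \(\gamma\) for a given speed. A minimal helper (an illustration, not part of the original exercises) reproduces the answers above:

```python
import math

def gamma(beta):
    """Relativistic factor for a speed given as a fraction of c."""
    return 1 / math.sqrt(1 - beta ** 2)

print(gamma(0.250))  # ~1.0328
print(gamma(0.500))  # ~1.15
```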
24. (a) What is \(\displaystyle γ\) if \(\displaystyle v=0.100c\)?
(b) If \(\displaystyle v=0.900c\)?
25. Particles called \(\displaystyle π\)-mesons are produced by accelerator beams. If these particles travel at \(\displaystyle 2.70×10^8m/s\) and live \(\displaystyle 2.60×10^{−8}s\) when at rest relative to an observer, how long do they live as viewed in the laboratory?
Solution
\(\displaystyle 5.96×10^{−8}s\)
26. Suppose a particle called a kaon is created by cosmic radiation striking the atmosphere. It moves by you at \(\displaystyle 0.980c\), and it lives \(\displaystyle 1.24×10^{−8}s\) when at rest relative to an observer. How long does it live as you observe it?
27. A neutral \(\displaystyle π\)-meson is a particle that can be created by accelerator beams. If one such particle lives \(\displaystyle 1.40×10^{−16}s\) as measured in the laboratory, and \(\displaystyle 0.840×10^{−16}s\) when at rest relative to an observer, what is its velocity relative to the laboratory?
Solution
0.800c
28. A neutron lives 900 s when at rest relative to an observer. How fast is the neutron moving relative to an observer who measures its life span to be 2065 s?
29. If relativistic effects are to be less than 1%, then \(\displaystyle γ\) must be less than 1.01. At what relative velocity is \(\displaystyle γ=1.01\)?
Solution
\(\displaystyle 0.140c\)
30. If relativistic effects are to be less than 3%, then \(\displaystyle γ\) must be less than 1.03. At what relative velocity is \(\displaystyle γ=1.03\)?
31. (a) At what relative velocity is \(\displaystyle γ=1.50\)?
(b) At what relative velocity is \(\displaystyle γ=100\)?
Solution
(a) \(\displaystyle 0.745c\)
(b) \(\displaystyle 0.99995c\) (to five digits to show effect)
32. (a) At what relative velocity is \(\displaystyle γ=2.00\)?
(b) At what relative velocity is \(\displaystyle γ=10.0\)?
33. Unreasonable Results
(a) Find the value of \(\displaystyle γ\) for the following situation. An Earth-bound observer measures 23.9 h to have passed while signals from a high-velocity space probe indicate that \(\displaystyle 24.0 h\) have passed on board.
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
Solution
(a) 0.996
(b) \(\displaystyle γ\) cannot be less than 1.
(c) Assumption that time is longer in moving ship is unreasonable.
28.3: Length Contraction
34. A spaceship, 200 m long as seen on board, moves by the Earth at \(\displaystyle 0.970c\). What is its length as measured by an Earth-bound observer?
Solution
48.6 m
35. How fast would a 6.0 m-long sports car have to be going past you in order for it to appear only 5.5 m long?
36. (a) How far does the muon in [link] travel according to the Earth-bound observer?
(b) How far does it travel as viewed by an observer moving with it? Base your calculation on its velocity relative to the Earth and the time it lives (proper time).
(c) Verify that these two distances are related through length contraction \(\displaystyle γ=3.20\).
Solution
(a) 1.387 km = 1.39 km
(b) 0.433 km
(c) \(\displaystyle L=\frac{L_0}{γ}=\frac{1.387×10^3m}{3.20}=433.4 m=0.433 km\)
Thus, the distances in parts (a) and (b) are related when \(\displaystyle γ=3.20\).
37. (a) How long would the muon in [link] have lived as observed on the Earth if its velocity was \(\displaystyle 0.0500c\)?
(b) How far would it have traveled as observed on the Earth? (c) What distance is this in the muon’s frame?
38. (a) How long does it take the astronaut in Example to travel 4.30 ly at \(\displaystyle 0.99944c\) (as measured by the Earth-bound observer)?
(b) How long does it take according to the astronaut?
(c) Verify that these two times are related through time dilation with \(\displaystyle γ=30.00\) as given.
Solution
(a) 4.303 y (to four digits to show any effect)
(b) 0.1434 y
(c) \(\displaystyle Δt=γΔt_0⇒γ=\frac{Δt}{Δt_0}=\frac{4.303 y}{0.1434 y}=30.0\)
Thus, the two times are related when \(\displaystyle γ=30.00\).
39. (a) How fast would an athlete need to be running for a 100-m race to look 100 yd long?
(b) Is the answer consistent with the fact that relativistic effects are difficult to observe in ordinary circumstances? Explain.
40. Unreasonable Results
(a) Find the value of γ for the following situation. An astronaut measures the length of her spaceship to be 25.0 m, while an Earth-bound observer measures it to be 100 m.
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
Solution
(a) 0.250
(b) \(\displaystyle γ\) must be ≥1
(c) The Earth-bound observer must measure a shorter length, so it is unreasonable to assume a longer length.
41. Unreasonable Results
A spaceship is heading directly toward the Earth at a velocity of \(\displaystyle 0.800c\). The astronaut on board claims that he can send a canister toward the Earth at \(\displaystyle 1.20c\) relative to the Earth.
(a) Calculate the velocity the canister must have relative to the spaceship.
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
28.4: Relativistic Addition of Velocities
42. Suppose a spaceship heading straight towards the Earth at \(\displaystyle 0.750c\) can shoot a canister at \(\displaystyle 0.500c\) relative to the ship.
(a) What is the velocity of the canister relative to the Earth, if it is shot directly at the Earth?
(b) If it is shot directly away from the Earth?
Solution
(a) \(\displaystyle 0.909c\)
(b) \(\displaystyle 0.400c\)
43. Repeat the previous problem with the ship heading directly away from the Earth.
44. If a spaceship is approaching the Earth at \(\displaystyle 0.100c\) and a message capsule is sent toward it at \(\displaystyle 0.100c\) relative to the Earth, what is the speed of the capsule relative to the ship?
Solution
0.198c
45. (a) Suppose the speed of light were only \(\displaystyle 3000 m/s\). A jet fighter moving toward a target on the ground at \(\displaystyle 800 m/s\) shoots bullets, each having a muzzle velocity of \(\displaystyle 1000 m/s\). What is the velocity of the bullets relative to the target?
(b) If the speed of light was this small, would you observe relativistic effects in everyday life? Discuss.
46. A galaxy moving away from the Earth has a speed of \(\displaystyle 1000 km/s\) and emits \(\displaystyle 656 nm\) light characteristic of hydrogen (the most common element in the universe). (a) What wavelength would we observe on the Earth?
(b) What type of electromagnetic radiation is this?
(c) Why is the speed of the Earth in its orbit negligible here?
Solution
a) \(\displaystyle 658 nm\)
b) red
c) \(\displaystyle v/c=9.92×10^{−5}\) (negligible)
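A sketch of the calculation behind this answer, assuming the standard relativistic Doppler expression for a receding source, \(\lambda_{obs} = \lambda_s \sqrt{(1 + v/c)/(1 - v/c)}\) (an illustration only; the formula itself is not restated in this answer key):

```python
import math

beta = 1000e3 / 3.00e8   # 1000 km/s as a fraction of c
lam_source = 656e-9      # emitted hydrogen line, m

lam_obs = lam_source * math.sqrt((1 + beta) / (1 - beta))
print(lam_obs * 1e9)     # ~658 nm, still red visible light
```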
47. A space probe speeding towards the nearest star moves at \(\displaystyle 0.250c\) and sends radio information at a broadcast frequency of 1.00 GHz. What frequency is received on the Earth?
48. If two spaceships are heading directly towards each other at \(\displaystyle 0.800c\), at what speed must a canister be shot from the first ship to approach the other at \(\displaystyle 0.999c\) as seen by the second ship?
Solution
\(\displaystyle 0.991c\)
49. Two planets are on a collision course, heading directly towards each other at \(\displaystyle 0.250c\). A spaceship sent from one planet approaches the second at \(\displaystyle 0.750c\) as seen by the second planet. What is the velocity of the ship relative to the first planet?
50. When a missile is shot from one spaceship towards another, it leaves the first at \(\displaystyle 0.950c\) and approaches the other at \(\displaystyle 0.750c\). What is the relative velocity of the two ships?
Solution
\(\displaystyle −0.696c\)
51. What is the relative velocity of two spaceships if one fires a missile at the other at \(\displaystyle 0.750c\) and the other observes it to approach at \(\displaystyle 0.950c\)?
52. Near the center of our galaxy, hydrogen gas is moving directly away from us in its orbit about a black hole. We receive 1900 nm electromagnetic radiation and know that it was 1875 nm when emitted by the hydrogen gas. What is the speed of the gas?
Solution
\(\displaystyle 0.01324c\)
53. A highway patrol officer uses a device that measures the speed of vehicles by bouncing radar off them and measuring the Doppler shift. The outgoing radar has a frequency of 100 GHz and the returning echo has a frequency 15.0 kHz higher. What is the velocity of the vehicle? Note that there are two Doppler shifts in echoes. Be certain not to round off until the end of the problem, because the effect is small.
54. Prove that for any relative velocity v between two observers, a beam of light sent from one to the other will approach at speed c (provided that v is less than c, of course).
Solution
\(\displaystyle u'=c\), so
\(\displaystyle u=\frac{v+u′}{1+(vu′/c^2)}=\frac{v+c}{1+(vc/c^2)}=\frac{v+c}{1+(v/c)}=\frac{c(v+c)}{c+v}=c\)
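The algebra above can also be checked numerically. In units where \(c = 1\), the relativistic addition formula returns exactly 1 whenever \(u' = c\), for any sub-light \(v\) (a quick illustrative check):

```python
def add_velocities(v, u_prime):
    """Relativistic velocity addition u = (v + u') / (1 + v u' / c^2), with c = 1."""
    return (v + u_prime) / (1 + v * u_prime)

for v in (0.1, 0.5, 0.9, 0.999):
    print(v, add_velocities(v, 1.0))  # always 1.0, i.e., the beam approaches at c
```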
55. Show that for any relative velocity v between two observers, a beam of light projected by one directly away from the other will move away at the speed of light (provided that v is less than c, of course).
56. (a) All but the closest galaxies are receding from our own Milky Way Galaxy. If a galaxy \(\displaystyle 12.0×10^9 ly\) away is receding from us at \(\displaystyle 0.900c\), at what velocity relative to us must we send an exploratory probe to approach the other galaxy at \(\displaystyle 0.990c\), as measured from that galaxy?
(b) How long will it take the probe to reach the other galaxy as measured from the Earth? You may assume that the velocity of the other galaxy remains constant.
(c) How long will it then take for a radio signal to be beamed back? (All of this is possible in principle, but not practical.)
Solution
a) \(\displaystyle 0.99947c\)
b) \(\displaystyle 1.2064×10^{11}y\)
c) \(\displaystyle 1.2058×10^{12}y\) (all to sufficient digits to show effects)
28.5: Relativistic Momentum
57. Find the momentum of a helium nucleus having a mass of \(\displaystyle 6.68×10^{–27}kg\) that is moving at \(\displaystyle 0.200c\).
Solution
\(\displaystyle 4.09×10^{–19}kg⋅m/s\)
58. What is the momentum of an electron traveling at \(\displaystyle 0.980c\)?
59. (a) Find the momentum of a \(\displaystyle 1.00×10^9kg\) asteroid heading towards the Earth at 30.0 km/s.
(b) Find the ratio of this momentum to the classical momentum. (Hint: Use the approximation that \(\displaystyle γ=1+(1/2)v^2/c^2\) at low velocities.)
Solution
(a) \(\displaystyle 3.000000015×10^{13}kg⋅m/s\).
(b) Ratio of relativistic to classical momenta equals 1.000000005 (extra digits to show small effects)
60. (a) What is the momentum of a 2000 kg satellite orbiting at 4.00 km/s?
(b) Find the ratio of this momentum to the classical momentum. (Hint: Use the approximation that \(\displaystyle γ=1+(1/2)v^2/c^2\) at low velocities.)
61. What is the velocity of an electron that has a momentum of \(\displaystyle 3.04×10^{–21}kg⋅m/s\)? Note that you must calculate the velocity to at least four digits to see the difference from c.
Solution
\(\displaystyle 2.9957×10^8m/s\)
62. Find the velocity of a proton that has a momentum of \(\displaystyle 4.48×10^{−19}kg⋅m/s\).
63. (a) Calculate the speed of a \(\displaystyle 1.00-μg\) particle of dust that has the same momentum as a proton moving at \(\displaystyle 0.999c\).
(b) What does the small speed tell us about the mass of a proton compared to even a tiny amount of macroscopic matter?
Solution
(a) \(\displaystyle 1.121×10^{–8}m/s\)
(b) The small speed tells us that the mass of a proton is substantially smaller than that of even a tiny amount of macroscopic matter!
64. (a) Calculate γ for a proton that has a momentum of \(\displaystyle 1.00 kg⋅m/s\).
(b) What is its speed? Such protons form a rare component of cosmic radiation with uncertain origins.
28.6: Relativistic Energy
65. What is the rest energy of an electron, given its mass is \(\displaystyle 9.11×10^{−31}kg\)? Give your answer in joules and MeV.
Solution
\(\displaystyle 8.20×10^{−14}J\)
0.512 MeV
66. Find the rest energy in joules and MeV of a proton, given its mass is \(\displaystyle 1.67×10^{−27}kg\).
67. If the rest energies of a proton and a neutron (the two constituents of nuclei) are 938.3 and 939.6 MeV respectively, what is the difference in their masses in kilograms?
Solution
\(\displaystyle 2.3×10^{−30}kg\)
68. The Big Bang that began the universe is estimated to have released \(\displaystyle 10^{68}J\) of energy. How many stars could half this energy create, assuming the average star’s mass is \(\displaystyle 4.00×10^{30}kg\)?
69. A supernova explosion of a \(\displaystyle 2.00×10^{31}kg\) star produces \(\displaystyle 1.00×10^{44}J\) of energy.
(a) How many kilograms of mass are converted to energy in the explosion?
(b) What is the ratio \(\displaystyle Δm/m\) of mass destroyed to the original mass of the star?
Solution
(a) \(\displaystyle 1.11×10^{27}kg\)
(b) \(\displaystyle 5.56×10^{−5}\)
70. (a) Using data from [link], calculate the mass converted to energy by the fission of 1.00 kg of uranium.
(b) What is the ratio of mass destroyed to the original mass, \(\displaystyle Δm/m\)?
71. (a) Using data from [link], calculate the amount of mass converted to energy by the fusion of 1.00 kg of hydrogen.
(b) What is the ratio of mass destroyed to the original mass, \(\displaystyle Δm/m\)?
(c) How does this compare with \(\displaystyle Δm/m\) for the fission of 1.00 kg of uranium?
Solution
\(\displaystyle 7.1×10^{−3}kg\)
\(\displaystyle 7.1×10^{−3}\)
The ratio is greater for hydrogen.
72. There is approximately \(\displaystyle 10^{34}J\) of energy available from fusion of hydrogen in the world’s oceans.
(a) If \(\displaystyle 10^{33}J\) of this energy were utilized, what would be the decrease in mass of the oceans? Assume that 0.08% of the mass of a water molecule is converted to energy during the fusion of hydrogen.
(b) How great a volume of water does this correspond to?
(c) Comment on whether this is a significant fraction of the total mass of the oceans.
73. A muon has a rest mass energy of 105.7 MeV, and it decays into an electron and a massless particle.
(a) If all the lost mass is converted into the electron’s kinetic energy, find \(\displaystyle γ\) for the electron.
(b) What is the electron’s velocity?
Solution
208
\(\displaystyle 0.999988c\)
74. A \(\displaystyle π\)-meson is a particle that decays into a muon and a massless particle. The \(\displaystyle π\)-meson has a rest mass energy of 139.6 MeV, and the muon has a rest mass energy of 105.7 MeV. Suppose the π-meson is at rest and all of the missing mass goes into the muon’s kinetic energy. How fast will the muon move?
75. (a) Calculate the relativistic kinetic energy of a 1000-kg car moving at 30.0 m/s if the speed of light were only 45.0 m/s.
(b) Find the ratio of the relativistic kinetic energy to classical.
Solution
\(\displaystyle 6.92×10^5J\)
1.54
76. Alpha decay is nuclear decay in which a helium nucleus is emitted. If the helium nucleus has a mass of \(\displaystyle 6.80×10^{−27}kg\) and is given 5.00 MeV of kinetic energy, what is its velocity?
77. (a) Beta decay is nuclear decay in which an electron is emitted. If the electron is given 0.750 MeV of kinetic energy, what is its velocity?
(b) Comment on how the high velocity is consistent with the kinetic energy as it compares to the rest mass energy of the electron.
Solution
(a) 0.914c
(b) The rest mass energy of an electron is 0.511 MeV, so the kinetic energy is approximately 150% of the rest mass energy. The electron should be traveling close to the speed of light.
78. A positron is an antimatter version of the electron, having exactly the same mass. When a positron and an electron meet, they annihilate, converting all of their mass into energy.
(a) Find the energy released, assuming negligible kinetic energy before the annihilation.
(b) If this energy is given to a proton in the form of kinetic energy, what is its velocity?
(c) If this energy is given to another electron in the form of kinetic energy, what is its velocity?
79. What is the kinetic energy in MeV of a π-meson that lives \(\displaystyle 1.40×10^{−16}s\) as measured in the laboratory, and \(\displaystyle 0.840×10^{−16}s\) when at rest relative to an observer, given that its rest energy is 135 MeV?
Solution
90.0 MeV
80. Find the kinetic energy in MeV of a neutron with a measured life span of 2065 s, given its rest energy is 939.6 MeV, and rest life span is 900s.
81. (a) Show that \(\displaystyle (pc)^2/(mc^2)^2=γ^2−1\). This means that at large velocities \(\displaystyle pc>>mc^2\).
(b) Is \(\displaystyle E≈pc\) when \(\displaystyle γ=30.0\), as for the astronaut discussed in the twin paradox?
Solution
(a) \(\displaystyle E^2=p^2c^2+m^2c^4=γ^2m^2c^4\), so that \(\displaystyle p^2c^2=(γ^2−1)m^2c^4\), and therefore \(\displaystyle \frac{(pc)^2}{(mc^2)^2}=γ^2−1\)
(b) yes
82. One cosmic ray neutron has a velocity of \(\displaystyle 0.250c\) relative to the Earth.
(a) What is the neutron’s total energy in MeV?
(b) Find its momentum.
(c) Is \(\displaystyle E≈pc\) in this situation? Discuss in terms of the equation given in part (a) of the previous problem.
83. What is \(\displaystyle γ\) for a proton having a mass energy of 938.3 MeV accelerated through an effective potential of 1.0 TV (teravolt) at Fermilab outside Chicago?
Solution
\(\displaystyle 1.07×10^3\)
84. (a) What is the effective accelerating potential for electrons at the Stanford Linear Accelerator, if \(\displaystyle γ=1.00×10^5\) for them?
(b) What is their total energy (nearly the same as kinetic in this case) in GeV?
85. (a) Using data from [link], find the mass destroyed when the energy in a barrel of crude oil is released.
(b) Given these barrels contain 200 liters and assuming the density of crude oil is \(\displaystyle 750 kg/m^3\), what is the ratio of mass destroyed to original mass, \(\displaystyle Δm/m\)?
Solution
\(\displaystyle 6.56×10^{−8}kg\)
\(\displaystyle 4.37×10^{−10}\)
86. (a) Calculate the energy released by the destruction of 1.00 kg of mass.
(b) How many kilograms could be lifted to a 10.0 km height by this amount of energy?
87. A Van de Graaff accelerator utilizes a 50.0 MV potential difference to accelerate charged particles such as protons.
(a) What is the velocity of a proton accelerated by such a potential?
(b) An electron?
Solution
\(\displaystyle 0.314c\)
\(\displaystyle 0.99995c\)
88. Suppose you use an average of \(\displaystyle 500 kW⋅h\) of electric energy per month in your home.
(a) How long would 1.00 g of mass converted to electric energy with an efficiency of 38.0% last you?
(b) How many homes could be supplied at the \(\displaystyle 500 kW⋅h\) per month rate for one year by the energy from the described mass conversion?
89. (a) A nuclear power plant converts energy from nuclear fission into electricity with an efficiency of 35.0%. How much mass is destroyed in one year to produce a continuous 1000 MW of electric power?
(b) Do you think it would be possible to observe this mass loss if the total mass of the fuel is \(\displaystyle 10^4kg\)?
Solution
(a) 1.00 kg
(b) This much mass would be measurable, but probably not observable just by looking because it is 0.01% of the total mass.
90. Nuclear-powered rockets were researched for some years before safety concerns became paramount.
(a) What fraction of a rocket’s mass would have to be destroyed to get it into a low Earth orbit, neglecting the decrease in gravity? (Assume an orbital altitude of 250 km, and calculate both the kinetic energy (classical) and the gravitational potential energy needed.)
(b) If the ship has a mass of \(\displaystyle 1.00×10^5kg\) (100 tons), what total yield nuclear explosion in tons of TNT is needed?
91. The Sun produces energy at a rate of \(\displaystyle 4.00×10^{26}\) W by the fusion of hydrogen.
(a) How many kilograms of hydrogen undergo fusion each second?
(b) If the Sun is 90.0% hydrogen and half of this can undergo fusion before the Sun changes character, how long could it produce energy at its current rate?
(c) How many kilograms of mass is the Sun losing per second?
(d) What fraction of its mass will it have lost in the time found in part (b)?
Solution
(a) \(\displaystyle 6.3×10^{11}kg/s\)
(b) \(\displaystyle 4.5×10^{10}y\)
(c) \(\displaystyle 4.44×10^9kg\)
(d) 0.32%
92. Unreasonable Results
A proton has a mass of \(\displaystyle 1.67×10^{−27}kg\). A physicist measures the proton’s total energy to be 50.0 MeV.
(a) What is the proton’s kinetic energy?
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
93. Construct Your Own Problem
Consider a highly relativistic particle. Discuss what is meant by the term “highly relativistic.” (Note that, in part, it means that the particle cannot be massless.) Construct a problem in which you calculate the wavelength of such a particle and show that it is very nearly the same as the wavelength of a massless particle, such as a photon, with the same energy. Among the things to be considered are the rest energy of the particle (it should be a known particle) and its total energy, which should be large compared to its rest energy.
94. Construct Your Own Problem
Consider an astronaut traveling to another star at a relativistic velocity. Construct a problem in which you calculate the time for the trip as observed on the Earth and as observed by the astronaut. Also calculate the amount of mass that must be converted to energy to get the astronaut and ship to the velocity travelled. Among the things to be considered are the distance to the star, the velocity, and the mass of the astronaut and ship. Unless your instructor directs you otherwise, do not include any energy given to other masses, such as rocket propellants.
Contributors and Attributions
-
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
29: Introduction to Quantum Physics
-
- 29.1: Quantization of Energy
- Energy is quantized in some systems, meaning that the system can have only certain energies and not a continuum of energies, unlike the classical case. This would be like having only certain speeds at which a car can travel because its kinetic energy can have only certain values. We also find that some forms of energy transfer take place with discrete lumps of energy.
-
- 29.2: The Photoelectric Effect
- When light strikes materials, it can eject electrons from them. This is called the photoelectric effect, meaning that light (photo) produces electricity. One common use of the photoelectric effect is in light meters, such as those that adjust the automatic iris on various types of cameras. In a similar way, another use is in solar cells, as you probably have in your calculator or have seen on a roof top or a roadside sign.
-
- 29.5: The Particle-Wave Duality of Light
- We have long known that EM radiation is a wave, capable of interference and diffraction. We now see that light can be modeled as photons, which are massless particles. This may seem contradictory, since we ordinarily deal with large objects that never act like both wave and particle. An ocean wave, for example, looks nothing like a rock. To understand small-scale phenomena, we make analogies with the large-scale phenomena we observe directly.
-
- 29.7: Probability and The Heisenberg Uncertainty Principle
- Experiments show that you will find the electron at some definite location, unlike a wave. But if you set up exactly the same situation and measure it again, you will find the electron in a different location, often far outside any experimental uncertainty in your measurement. Repeated measurements will display a statistical distribution of locations that appears wavelike.
Thumbnail: Sometimes matter behaves as a particle and sometimes as a wave. Quantum physics is the study of these phenomena. Image used with permission (Public domain; Maschen).
29.0: Prelude to Quantum Physics
Quantum mechanics is the branch of physics needed to deal with submicroscopic objects. Because these objects are smaller than we can observe directly with our senses and generally must be observed with the aid of instruments, parts of quantum mechanics seem as foreign and bizarre as parts of relativity. But, like relativity, quantum mechanics has been shown to be valid—truth is often stranger than fiction.
Certain aspects of quantum mechanics are familiar to us. We accept as fact that matter is composed of atoms, the smallest unit of an element, and that these atoms combine to form molecules, the smallest unit of a compound (Figure \(\PageIndex{2}\)). While we cannot see the individual water molecules in a stream, for example, we are aware that this is because molecules are so small and so numerous in that stream. When introducing atoms, we commonly say that electrons orbit atoms in discrete shells around a tiny nucleus, itself composed of smaller particles called protons and neutrons. We are also aware that electric charge comes in tiny units carried almost entirely by electrons and protons. As with water molecules in a stream, we do not notice individual charges in the current through a lightbulb, because the charges are so small and so numerous in the macroscopic situations we sense directly.
MAKING CONNECTIONS: REALMS OF PHYSICS
Classical physics is a good approximation of modern physics under conditions first discussed in the The Nature of Science and Physics. Quantum mechanics is valid in general, and it must be used rather than classical physics to describe small objects, such as atoms.
Atoms, molecules, and fundamental electron and proton charges are all examples of physical entities that are quantized —that is, they appear only in certain discrete values and do not have every conceivable value. Quantized is the opposite of continuous. We cannot have a fraction of an atom, or part of an electron’s charge, or 14-1/3 cents, for example. Rather, everything is built of integral multiples of these substructures. Quantum physics is the branch of physics that deals with small objects and the quantization of various entities, including energy and angular momentum. Just as with classical physics, quantum physics has several subfields, such as mechanics and the study of electromagnetic forces. The correspondence principle states that in the classical limit (large, slow-moving objects), quantum mechanics becomes the same as classical physics. In this chapter, we begin the development of quantum mechanics and its description of the strange submicroscopic world. In later chapters, we will examine many areas, such as atomic and nuclear physics, in which quantum mechanics is crucial.
Glossary
- quantized
- the fact that certain physical entities exist only with particular discrete values and not every conceivable value
- correspondence principle
- in the classical limit (large, slow-moving objects), quantum mechanics becomes the same as classical physics
- quantum mechanics
- the branch of physics that deals with small objects and with the quantization of various entities, especially energy
29.1: Quantization of Energy
Learning Objectives
By the end of this section, you will be able to:
- Explain Max Planck’s contribution to the development of quantum mechanics.
- Explain why atomic spectra indicate quantization.
Planck’s Contribution
Energy is quantized in some systems, meaning that the system can have only certain energies and not a continuum of energies, unlike the classical case. This would be like having only certain speeds at which a car can travel because its kinetic energy can have only certain values. We also find that some forms of energy transfer take place with discrete lumps of energy. While most of us are familiar with the quantization of matter into lumps called atoms, molecules, and the like, we are less aware that energy, too, can be quantized. Some of the earliest clues about the necessity of quantum mechanics over classical physics came from the quantization of energy.
Where is the quantization of energy observed? Let us begin by considering the emission and absorption of electromagnetic (EM) radiation. The EM spectrum radiated by a hot solid is linked directly to the solid’s temperature (Figure \(\PageIndex{1}\)). An ideal radiator is one that has an emissivity of 1 at all wavelengths and, thus, is jet black. Ideal radiators are therefore called blackbodies , and their EM radiation is called blackbody radiation . It was discussed that the total intensity of the radiation varies as \(T^4\), the fourth power of the absolute temperature of the body, and that the peak of the spectrum shifts to shorter wavelengths at higher temperatures. All of this seems quite continuous, but it was the curve of the spectrum of intensity versus wavelength that gave a clue that the energies of the atoms in the solid are quantized. In fact, providing a theoretical explanation for the experimentally measured shape of the spectrum was a mystery at the turn of the century. When this “ultraviolet catastrophe” was eventually solved, the answers led to new technologies such as computers and the sophisticated imaging techniques described in earlier chapters. Once again, physics as an enabling science changed the way we live.
The German physicist Max Planck (1858–1947) used the idea that atoms and molecules in a body act like oscillators to absorb and emit radiation. The energies of the oscillating atoms and molecules had to be quantized to correctly describe the shape of the blackbody spectrum. Planck deduced that the energy of an oscillator having a frequency \(f\) is given by
\[E = \left(n + \dfrac{1}{2} \right) hf.\]
Here \(n\) is any nonnegative integer (0, 1, 2, 3, …). The symbol \(h\) stands for Planck’s constant , given by
\[h = 6.626 \times 10^{-34} \, J \cdot s.\] The equation \(E = (n + \frac{1}{2}) hf\) means that an oscillator having a frequency \(f\) (emitting and absorbing EM radiation of frequency \(f\)) can have its energy increase or decrease only in discrete steps of size
\[\Delta E = hf.\]
It might be helpful to mention some macroscopic analogies of this quantization of energy phenomena. This is like a pendulum that has a characteristic oscillation frequency but can swing with only certain amplitudes. Quantization of energy also resembles a standing wave on a string that allows only particular harmonics described by integers. It is also similar to going up and down a hill using discrete stair steps rather than being able to move up and down a continuous slope. Your potential energy takes on discrete values as you move from step to step.
Using the quantization of oscillators, Planck was able to correctly describe the experimentally known shape of the blackbody spectrum. This was the first indication that energy is sometimes quantized on a small scale and earned him the Nobel Prize in Physics in 1918. Although Planck’s theory comes from observations of a macroscopic object, its analysis is based on atoms and molecules. It was such a revolutionary departure from classical physics that Planck himself was reluctant to accept his own idea that energy states are not continuous. The general acceptance of Planck’s energy quantization was greatly enhanced by Einstein’s explanation of the photoelectric effect (discussed in the next section), which took energy quantization a step further. Planck was fully involved in the development of both early quantum mechanics and relativity. He quickly embraced Einstein’s special relativity, published in 1905, and in 1906 Planck was the first to suggest the correct formula for relativistic momentum, \(p = \gamma mu\).
Note that Planck’s constant \(h\) is a very small number. So for an infrared frequency of \(10^{14} \, Hz\) being emitted by a blackbody, for example, the difference between energy levels is only \(\Delta E = hf = (6.63 \times 10^{-34} \, J \cdot s)(10^{14} \, Hz) = 6.63 \times 10^{-20} \, J\), or about 0.4 eV. This 0.4 eV of energy is significant compared with typical atomic energies, which are on the order of an electron volt, or thermal energies, which are typically fractions of an electron volt. But on a macroscopic or classical scale, energies are typically on the order of joules. Even if macroscopic energies are quantized, the quantum steps are too small to be noticed. This is an example of the correspondence principle. For a large object, quantum mechanics produces results indistinguishable from those of classical physics.
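The order-of-magnitude estimate above is easy to reproduce (a short check using the constants quoted in this section):

```python
h = 6.626e-34       # Planck's constant, J·s
f = 1.0e14          # infrared frequency, Hz
J_per_eV = 1.60e-19 # joules per electron volt

delta_E = h * f
print(delta_E)              # ~6.6e-20 J
print(delta_E / J_per_eV)   # ~0.41 eV
```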
Atomic Spectra
Now let us turn our attention to the emission and absorption of EM radiation by gases . The Sun is the most common example of a body containing gases emitting an EM spectrum that includes visible light. We also see examples in neon signs and candle flames. Studies of emissions of hot gases began more than two centuries ago, and it was soon recognized that these emission spectra contained huge amounts of information. The type of gas and its temperature, for example, could be determined. We now know that these EM emissions come from electrons transitioning between energy levels in individual atoms and molecules; thus, they are called atomic spectra . Atomic spectra remain an important analytical tool today. Figure \(\PageIndex{3}\) shows an example of an emission spectrum obtained by passing an electric discharge through a material. One of the most important characteristics of these spectra is that they are discrete. By this we mean that only certain wavelengths, and hence frequencies, are emitted. This is called a line spectrum. If frequency and energy are associated as \(\Delta E = hf\), the energies of the electrons in the emitting atoms and molecules are quantized. This is discussed in more detail later in this chapter.
It was a major puzzle that atomic spectra are quantized. Some of the best minds of 19th-century science failed to explain why this might be. Not until the second decade of the 20th century did an answer based on quantum mechanics begin to emerge. Again a macroscopic or classical body of gas was involved in the studies, but the effect, as we shall see, is due to individual atoms and molecules.
PHET EXPLORATIONS: MODELS OF THE HYDROGEN ATOM
How did scientists figure out the structure of atoms without looking at them? Try out different models by shooting light at the atom. Check how the prediction of the model matches the experimental results.
Summary
- The first indication that energy is sometimes quantized came from blackbody radiation, which is the emission of EM radiation by an object with an emissivity of 1.
- Planck recognized that the energy levels of the emitting atoms and molecules were quantized, with only the allowed values of \(E = (n + \frac{1}{2})hf\), where \(n\) is any non-negative integer (0, 1, 2, 3, …).
- \(h\) is Planck’s constant, whose value is \(h = 6.626 \times 10^{-34} \, J \cdot s\).
- Thus, the oscillatory absorption and emission energies of atoms and molecules in a blackbody could increase or decrease only in steps of size \(\Delta E = hf\) where \(f\) is the frequency of the oscillatory nature of the absorption and emission of EM radiation.
- Another indication of energy levels being quantized in atoms and molecules comes from the lines in atomic spectra, which are the EM emissions of individual atoms and molecules.
Glossary
- blackbody
- an ideal radiator, which can radiate equally well at all wavelengths
- blackbody radiation
- the electromagnetic radiation from a blackbody
- Planck’s constant
- \(6.626 \times 10^{-34} \, J \cdot s\)
- atomic spectra
- the electromagnetic emission from atoms and molecules
29.2: The Photoelectric Effect
Learning Objectives
By the end of this section, you will be able to:
- Describe a typical photoelectric-effect experiment.
- Determine the maximum kinetic energy of photoelectrons ejected by photons of one energy or wavelength, when given the maximum kinetic energy of photoelectrons for a different photon energy or wavelength.
When light strikes materials, it can eject electrons from them. This is called the photoelectric effect , meaning that light ( photo ) produces electricity. One common use of the photoelectric effect is in light meters, such as those that adjust the automatic iris on various types of cameras. In a similar way, another use is in solar cells, as you probably have in your calculator or have seen on a roof top or a roadside sign. These make use of the photoelectric effect to convert light into electricity for running different devices.
This effect has been known for more than a century and can be studied using a device such as that shown in Figure \(\PageIndex{1}\). This figure shows an evacuated tube with a metal plate and a collector wire that are connected by a variable voltage source, with the collector more negative than the plate. When light (or other EM radiation) strikes the plate in the evacuated tube, it may eject electrons. If the electrons have energy in electron volts (eV) greater than the potential difference between the plate and the wire in volts, some electrons will be collected on the wire. Since the electron energy in eV is \(qV\), where \(q\) is the electron charge and \(V\) is the potential difference, the electron energy can be measured by adjusting the retarding voltage between the wire and the plate. The voltage that stops the electrons from reaching the wire equals the energy in eV. For example, if \(-3.00 \, V\) barely stops the electrons, their energy is 3.00 eV. The number of electrons ejected can be determined by measuring the current between the wire and plate. The more light, the more electrons; a little circuitry allows this device to be used as a light meter.
What is really important about the photoelectric effect is what Albert Einstein deduced from it. Einstein realized that there were several characteristics of the photoelectric effect that could be explained only if EM radiation is itself quantized : the apparently continuous stream of energy in an EM wave is actually composed of energy quanta called photons. In his explanation of the photoelectric effect, Einstein defined a quantized unit or quantum of EM energy, which we now call a photon , with an energy proportional to the frequency of EM radiation. In equation form, the photon energy is \[E = hf,\] where \(E\) is the energy of a photon of frequency \(f\) and \(h\) is Planck’s constant. This revolutionary idea looks similar to Planck’s quantization of energy states in blackbody oscillators, but it is quite different. It is the quantization of EM radiation itself. EM waves are composed of photons and are not continuous smooth waves as described in previous chapters on optics. Their energy is absorbed and emitted in lumps, not continuously. This is exactly consistent with Planck’s quantization of energy levels in blackbody oscillators, since these oscillators increase and decrease their energy in steps of \(hf\) by absorbing and emitting photons having \(E = hf\). We do not observe this with our eyes, because there are so many photons in common light sources that individual photons go unnoticed (Figure \(\PageIndex{2}\)). The next section of the text ( Photon Energies and the Electromagnetic Spectrum ) is devoted to a discussion of photons and some of their characteristics and implications. For now, we will use the photon concept to explain the photoelectric effect, much as Einstein did.
The photoelectric effect has the properties discussed below. All these properties are consistent with the idea that individual photons of EM radiation are absorbed by individual electrons in a material, with the electron gaining the photon’s energy. Some of these properties are inconsistent with the idea that EM radiation is a simple wave. For simplicity, let us consider what happens with monochromatic EM radiation in which all photons have the same energy \(hf\).
- If we vary the frequency of the EM radiation falling on a material, we find the following: For a given material, there is a threshold frequency \(f_0\) for the EM radiation below which no electrons are ejected, regardless of intensity. Individual photons interact with individual electrons. Thus if the photon energy is too small to break an electron away, no electrons will be ejected. If EM radiation were a simple wave, sufficient energy could be obtained by increasing the intensity.
- Once EM radiation falls on a material, electrons are ejected without delay . As soon as an individual photon of a sufficiently high frequency is absorbed by an individual electron, the electron is ejected. If the EM radiation were a simple wave, several minutes would be required for sufficient energy to be deposited to the metal surface to eject an electron.
- The number of electrons ejected per unit time is proportional to the intensity of the EM radiation and to no other characteristic. High-intensity EM radiation consists of large numbers of photons per unit area, with all photons having the same characteristic energy \(hf\).
- If we vary the intensity of the EM radiation and measure the energy of ejected electrons, we find the following: The maximum kinetic energy of ejected electrons is independent of the intensity of the EM radiation . Since there are so many electrons in a material, it is extremely unlikely that two photons will interact with the same electron at the same time, thereby increasing the energy given it. Instead (as noted in 3 above), increased intensity results in more electrons of the same energy being ejected. If EM radiation were a simple wave, a higher intensity could give more energy, and higher-energy electrons would be ejected.
- The kinetic energy of an ejected electron equals the photon energy minus the binding energy of the electron in the specific material. An individual photon can give all of its energy to an electron. The photon’s energy is partly used to break the electron away from the material. The remainder goes into the ejected electron’s kinetic energy. In equation form, this is given by \[KE_e = hf - BE,\] where \(KE_e\) is the maximum kinetic energy of the ejected electron, \(hf\) is the photon’s energy, and BE is the binding energy of the electron to the particular material. (BE is sometimes called the work function of the material.) This equation, due to Einstein in 1905, explains the properties of the photoelectric effect quantitatively. An individual photon of EM radiation (it does not come any other way) interacts with an individual electron, supplying enough energy, BE, to break it away, with the remainder going to kinetic energy. The binding energy is \(BE = hf_0\), where \(f_0\) is the threshold frequency for the particular material. Figure \(\PageIndex{3}\) shows a graph of maximum \(KE_e\) versus the frequency of incident EM radiation falling on a particular material.
Einstein’s idea that EM radiation is quantized was crucial to the beginnings of quantum mechanics. It is a far more general concept than its explanation of the photoelectric effect might imply. All EM radiation can also be modeled in the form of photons, and the characteristics of EM radiation are entirely consistent with this fact. (As we will see in the next section, many aspects of EM radiation, such as the hazards of ultraviolet (UV) radiation, can be explained only by photon properties.) More famous for modern relativity, Einstein planted an important seed for quantum mechanics in 1905, the same year he published his first paper on special relativity. His explanation of the photoelectric effect was the basis for the Nobel Prize awarded to him in 1921. Although his other contributions to theoretical physics were also noted in that award, special and general relativity were not fully recognized in spite of having been partially verified by experiment by 1921. Although hero-worshipped, this great man never received Nobel recognition for his most famous work—relativity.
Example \(\PageIndex{1}\): Calculating Photon Energy and the Photoelectric Effect: A Violet Light
- What is the energy in joules and electron volts of a photon of 420-nm violet light?
- What is the maximum kinetic energy of electrons ejected from calcium by 420-nm violet light, given that the binding energy (or work function) of electrons for calcium metal is 2.71 eV?
Strategy
To solve part (a), note that the energy of a photon is given by \(E = hf\). For part (b), once the energy of the photon is calculated, it is a straightforward application of \(KE_e = hf - BE\)
to find the ejected electron’s maximum kinetic energy, since BE is given.
Solution for (a)
Photon energy is given by \[E = hf\] Since we are given the wavelength rather than the frequency, we solve the familiar relationship \(c = f \lambda\) for the frequency, yielding \[f = \dfrac{c}{\lambda}.\]
Now substituting known values yields
\[E = \dfrac{(6.63 \times 10^{-34} \, J \cdot s)(3.00 \times 10^8 \, m/s)}{420 \times 10^{-9} \, m} = 4.74 \times 10^{-19} \, J. \nonumber\]
Converting to eV, the energy of the photon is
\[E = (4.74 \times 10^{-19} \, J) \dfrac{1 \, eV}{1.6 \times 10^{-19} \, J} = 2.96 \, eV. \nonumber\]
Solution for (b)
Finding the kinetic energy of the ejected electron is now a simple application of the equation \(KE_e = hf - BE\). Substituting the photon energy and binding energy yields
\[KE_e = hf - BE = 2.96 \, eV - 2.71 \, eV = 0.246 \, eV. \nonumber\]
Discussion
The energy of this 420-nm photon of violet light is a tiny fraction of a joule, and so it is no wonder that a single photon would be difficult for us to sense directly—humans are more attuned to energies on the order of joules. But looking at the energy in electron volts, we can see that this photon has enough energy to affect atoms and molecules. A DNA molecule can be broken with about 1 eV of energy, for example, and typical atomic and molecular energies are on the order of eV, so that the violet photon in this example could have biological effects. The ejected electron (called a photoelectron) has a rather low energy, and it would not travel far, except in a vacuum. The electron would be stopped by a retarding potential of but 0.26 V. In fact, if the photon wavelength were longer and its energy less than 2.71 eV, then the formula would give a negative kinetic energy, an impossibility. This simply means that the 420-nm photons with their 2.96-eV energy are not much above the frequency threshold. You can show for yourself that the threshold wavelength is 459 nm (blue light). This means that if calcium metal is used in a light meter, the meter will be insensitive to wavelengths longer than those of blue light. Such a light meter would be completely insensitive to red light, for example.
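The arithmetic in this example is easy to reproduce. The following Python sketch (an illustration only, using the rounded constants quoted above) recomputes the photon energy, the maximum kinetic energy for calcium, and the threshold wavelength; with these rounded constants the threshold comes out near 458 nm, consistent with the value quoted above.

```python
# Reproduce the 420-nm photoelectric-effect example for calcium (BE = 2.71 eV).
h = 6.63e-34        # Planck's constant, J*s
c = 3.00e8          # speed of light, m/s
eV = 1.602e-19      # joules per electron volt

wavelength = 420e-9     # photon wavelength, m
BE = 2.71               # binding energy (work function) of calcium, eV

E_J = h * c / wavelength              # photon energy, J
E_eV = E_J / eV                       # photon energy, eV
KE_max = E_eV - BE                    # maximum photoelectron kinetic energy, eV
lambda_threshold = h * c / (BE * eV)  # threshold wavelength, m

print(f"Photon energy:          {E_J:.2e} J = {E_eV:.2f} eV")
print(f"Maximum kinetic energy: {KE_max:.3f} eV")
print(f"Threshold wavelength:   {lambda_threshold * 1e9:.0f} nm")
```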
PHET EXPLORATIONS: PHOTOELECTRIC EFFECT
See how light knocks electrons off a metal target, and recreate the experiment that spawned the field of quantum mechanics.
Summary
- The photoelectric effect is the process in which EM radiation ejects electrons from a material.
- Einstein proposed photons to be quanta of EM radiation having energy \(E = hf\), where \(f\) is the frequency of the radiation.
- All EM radiation is composed of photons. As Einstein explained, all characteristics of the photoelectric effect are due to the interaction of individual photons with individual electrons.
- The maximum kinetic energy \(KE_e\) of ejected electrons (photoelectrons) is given by \(KE_e = hf - BE\), where \(hf\) is the photon energy and BE is the binding energy (or work function) of the electron to the particular material.
Glossary
- photoelectric effect
- the phenomenon whereby some materials eject electrons when light is shined on them
- photon
- a quantum, or particle, of electromagnetic radiation
- photon energy
- the amount of energy a photon has; \(E = hf\)
- binding energy
- also called the work function ; the amount of energy necessary to eject an electron from a material
29.3: Photon Energies and the Electromagnetic Spectrum
Learning Objectives
By the end of this section, you will be able to:
- Explain the relationship between the energy of a photon in joules or electron volts and its wavelength or frequency.
- Calculate the number of photons per second emitted by a monochromatic source of specific wavelength and power.
Ionizing Radiation
A photon is a quantum of EM radiation. Its energy is given by \(E = hf\) and is related to the frequency \(f\) and wavelength \(\lambda\) of the radiation by
\[\begin{align*} E &= hf \\[4pt] &= \dfrac{hc}{\lambda} \quad \text{(energy of a photon)} \end{align*}\]
where \(E\) is the energy of a single photon and \(c\) is the speed of light. When working with small systems, energy in eV is often useful. Note that Planck’s constant in these units is
\[h = 4.14 \times 10^{-15} \, eV \cdot s.\]
Since many wavelengths are stated in nanometers (nm), it is also useful to know that
\[hc = 1240 \, eV \cdot nm.\]
These will make many calculations a little easier.
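As a quick illustration of how convenient these forms are, the Python sketch below (wavelengths chosen for illustration only) converts wavelengths in nanometers directly into photon energies in electron volts using \(hc = 1240 \, eV \cdot nm\).

```python
# Photon energy in eV directly from wavelength in nm, using hc = 1240 eV*nm.
hc_eV_nm = 1240.0   # eV*nm

examples = [                 # (label, wavelength in nm) -- illustrative values
    ("gamma ray", 1.24e-3),
    ("x ray", 0.124),
    ("ultraviolet", 100.0),
    ("red light", 620.0),
    ("microwave", 1.24e7),
]

for label, wavelength_nm in examples:
    E_eV = hc_eV_nm / wavelength_nm
    print(f"{label:12s} lambda = {wavelength_nm:10.3g} nm -> E = {E_eV:10.3g} eV")
```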
All EM radiation is composed of photons. Figure \(\PageIndex{1}\) shows various divisions of the EM spectrum plotted against wavelength, frequency, and photon energy. Previously in this book, photon characteristics were alluded to in the discussion of some of the characteristics of UV, x rays, and \(\gamma\) rays, the first of which start with frequencies just above violet in the visible spectrum. It was noted that these types of EM radiation have characteristics much different than visible light. We can now see that such properties arise because photon energy is larger at high frequencies.
Photons act as individual quanta and interact with individual electrons, atoms, molecules, and so on. The energy a photon carries is, thus, crucial to the effects it has. Table \(\PageIndex{1}\) lists representative submicroscopic energies in eV. When we compare photon energies from the EM spectrum in Figure \(\PageIndex{1}\) with energies in the table, we can see how effects vary with the type of EM radiation.
| Representative submicroscopic energy | Approximate value |
|---|---|
| Rotational energies of molecules | \(10^{-5} \, eV\) |
| Vibrational energies of molecules | 0.1 eV |
| Energy between outer electron shells in atoms | 1 eV |
| Binding energy of a weakly bound molecule | 1 eV |
| Energy of red light | 2 eV |
| Binding energy of a tightly bound molecule | 10 eV |
| Energy to ionize atom or molecule | 10 to 1000 eV |
Gamma rays , a form of nuclear and cosmic EM radiation, can have the highest frequencies and, hence, the highest photon energies in the EM spectrum. For example, a \(\gamma\)-ray photon with \(f = 10^{21} \, Hz\) has an energy \(E = hf = 6.63 \times 10^{-13} \, J = 4.14 \, MeV\). This is sufficient energy to ionize thousands of atoms and molecules, since only 10 to 1000 eV are needed per ionization. In fact, \(\gamma\)-rays are one type of ionizing radiation , as are x rays and UV, because they produce ionization in materials that absorb them. Because so much ionization can be produced, a single \(\gamma\)-ray photon can cause significant damage to biological tissue, killing cells or damaging their ability to properly reproduce. When cell reproduction is disrupted, the result can be cancer, one of the known effects of exposure to ionizing radiation. Since cancer cells are rapidly reproducing, they are exceptionally sensitive to the disruption produced by ionizing radiation. This means that ionizing radiation has positive uses in cancer treatment as well as risks in producing cancer.
High photon energy also enables \(\gamma\) rays to penetrate materials, since a collision with a single atom or molecule is unlikely to absorb all the \(\gamma\) ray’s energy. This can make \(\gamma\) rays useful as a probe, and they are sometimes used in medical imaging. X rays, as you can see in Figure \(\PageIndex{1}\), overlap with the low-frequency end of the \(\gamma\) ray range. Since x rays have energies of keV and up, individual x-ray photons also can produce large amounts of ionization. At lower photon energies, x rays are not as penetrating as \(\gamma\) rays and are slightly less hazardous. X rays are ideal for medical imaging, their most common use, a fact that was recognized immediately upon their discovery in 1895 by the German physicist W. C. Roentgen (1845–1923). (See Figure \(\PageIndex{2}\).) Within one year of their discovery, x rays (for a time called Roentgen rays) were used for medical diagnostics. Roentgen received the 1901 Nobel Prize for the discovery of x rays.
CONNECTIONS: CONSERVATION OF ENERGY
Once again, we find that conservation of energy allows us to consider the initial and final forms that energy takes, without having to make detailed calculations of the intermediate steps. Example \(\PageIndex{1}\) is solved by considering only the initial and final forms of energy.
While \(\gamma\) rays originate in nuclear decay, x rays are produced by the process shown in Figure \(\PageIndex{3}\). Electrons ejected by thermal agitation from a hot filament in a vacuum tube are accelerated through a high voltage, gaining kinetic energy from the electrical potential energy. When they strike the anode, the electrons convert their kinetic energy to a variety of forms, including thermal energy. But since an accelerated charge radiates EM waves, and since the electrons act individually, photons are also produced. Some of these x-ray photons carry away energies up to the full kinetic energy of the electron. The accelerated electrons originate at the cathode, so such a tube is called a cathode ray tube (CRT), and various versions of them are found in older TV and computer screens as well as in x-ray machines.
Example \(\PageIndex{1}\): X-ray Photon Energy and X-ray Tube Voltage
Find the maximum energy in eV of an x-ray photon produced by electrons accelerated through a potential difference of 50.0 kV in a CRT like the one in Figure \(\PageIndex{3}\).
Strategy
Electrons can give all of their kinetic energy to a single photon when they strike the anode of a CRT. (This is something like the photoelectric effect in reverse.) The kinetic energy of the electron comes from electrical potential energy. Thus we can simply equate the maximum photon energy to the electrical potential energy—that is, \(hf = qV\). (We do not have to calculate each step from beginning to end if we know that all of the starting energy \(qV\) is converted to the final form \(hf.\))
Solution
The maximum photon energy is \(hf = qV\), where \(q\) is the charge of the electron and \(V\) is the accelerating voltage. Thus,
\[hf = (1.60 \times 10^{-19} \, C)(50.0 \times 10^3 \, V). \nonumber\]
From the definition of the electron volt, we know \(1 \, eV = 1.60 \times 10^{-19} \, J\), where \(1 \, J = 1 \, C \cdot V\). Gathering factors and converting energy to eV yields
\[hf = (50.0 \times 10^3)(1.60 \times 10^{-19} \, C \cdot V)\left(\dfrac{1 \, eV}{1.60 \times 10^{-19} \, C \cdot V} \right) = 50.0 \, keV. \nonumber\]
Discussion
This example produces a result that can be applied to many similar situations. If you accelerate a single elementary charge, like that of an electron, through a potential given in volts, then its energy in eV has the same numerical value. Thus a 50.0-kV potential generates 50.0 keV electrons, which in turn can produce photons with a maximum energy of 50 keV. Similarly, a 100-kV potential in an x-ray tube can generate up to 100-keV x-ray photons. Many x-ray tubes have adjustable voltages so that x rays with differing energies, and therefore differing abilities to penetrate, can be generated.
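The same energy bookkeeping is easy to generalize. The sketch below (a minimal illustration, assuming all of the electron's kinetic energy \(qV\) can go into a single photon) gives the maximum photon energy and the corresponding minimum x-ray wavelength for any tube voltage.

```python
# Maximum x-ray photon energy and minimum wavelength for a given tube voltage.
hc_eV_nm = 1240.0   # eV*nm

def xray_limits(V_volts):
    """Return (maximum photon energy in keV, minimum wavelength in nm)."""
    E_max_eV = V_volts                 # an electron accelerated through V volts gains V eV
    lambda_min_nm = hc_eV_nm / E_max_eV
    return E_max_eV / 1e3, lambda_min_nm

for V in (50.0e3, 100.0e3):            # tube voltages mentioned in the discussion
    E_keV, lam_nm = xray_limits(V)
    print(f"V = {V / 1e3:5.1f} kV -> E_max = {E_keV:5.1f} keV, lambda_min = {lam_nm:.4f} nm")
```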
Figure \(\PageIndex{4}\) shows the spectrum of x rays obtained from an x-ray tube. There are two distinct features to the spectrum. First, the smooth distribution results from electrons being decelerated in the anode material. A curve like this is obtained by detecting many photons, and it is apparent that the maximum energy is unlikely. This decelerating process produces radiation that is called bremsstrahlung (German for braking radiation ). The second feature is the existence of sharp peaks in the spectrum; these are called characteristic x rays , since they are characteristic of the anode material. Characteristic x rays come from atomic excitations unique to a given type of anode material. They are akin to lines in atomic spectra, implying the energy levels of atoms are quantized. Phenomena such as discrete atomic spectra and characteristic x rays are explored further in Atomic Physics.
Ultraviolet radiation (approximately 4 eV to 300 eV) overlaps with the low end of the energy range of x rays, but UV is typically lower in energy. UV comes from the de-excitation of atoms that may be part of a hot solid or gas. These atoms can be given energy that they later release as UV by numerous processes, including electric discharge, nuclear explosion, thermal agitation, and exposure to x rays. A UV photon has sufficient energy to ionize atoms and molecules, which makes its effects different from those of visible light. UV thus has some of the same biological effects as \(\gamma\) rays and x rays. For example, it can cause skin cancer and is used as a sterilizer. The major difference is that several UV photons are required to disrupt cell reproduction or kill a bacterium, whereas single \(\gamma\)-ray and x-ray photons can do the same damage. But since UV does have the energy to alter molecules, it can do what visible light cannot. One of the beneficial aspects of UV is that it triggers the production of vitamin D in the skin, whereas visible light has insufficient energy per photon to alter the molecules that trigger this production. Infantile jaundice is treated by exposing the baby to UV (with eye protection), called phototherapy, the beneficial effects of which are thought to be related to its ability to help prevent the buildup of potentially toxic bilirubin in the blood.
Example \(\PageIndex{2}\): Photon Energy and Effects for UV
Short-wavelength UV is sometimes called vacuum UV, because it is strongly absorbed by air and must be studied in a vacuum. Calculate the photon energy in eV for 100-nm vacuum UV, and estimate the number of molecules it could ionize or break apart.
Strategy
Using the equation \(E = hf\)
and appropriate constants, we can find the photon energy and compare it with energy information in Table \(\PageIndex{1}\).
Solution
The energy of a photon is given by
\[E = hf = \dfrac{hc}{\lambda}. \nonumber\]
Using \(hc = 1240 \, eV \cdot nm\), we find that
\[E = \dfrac{hc}{\lambda} = \dfrac{1240 \, eV \cdot nm}{100 \, nm} = 12.4 \, eV. \nonumber\]
Discussion
According to Table \(\PageIndex{1}\), this photon energy might be able to ionize an atom or molecule, and it is about what is needed to break up a tightly bound molecule, since they are bound by approximately 10 eV. This photon energy could destroy about a dozen weakly bound molecules. Because of its high photon energy, UV disrupts atoms and molecules it interacts with. One good consequence is that all but the longest-wavelength UV is strongly absorbed and is easily blocked by sunglasses. In fact, most of the Sun’s UV is absorbed by a thin layer of ozone in the upper atmosphere, protecting sensitive organisms on Earth. Damage to our ozone layer by the addition of such chemicals as CFC’s has reduced this protection for us.
Visible Light
The range of photon energies for visible light from red to violet is 1.63 to 3.26 eV, respectively (left for this chapter’s Problems and Exercises to verify). These energies are on the order of those between outer electron shells in atoms and molecules. This means that these photons can be absorbed by atoms and molecules. A single photon can actually stimulate the retina, for example, by altering a receptor molecule that then triggers a nerve impulse. Photons can be absorbed or emitted only by atoms and molecules that have precisely the correct quantized energy step to do so. For example, if a red photon of frequency \(f\) encounters a molecule that has an energy step, \(\Delta E\), equal to \(hf\), then the photon can be absorbed. Violet flowers absorb red and reflect violet; this implies there is no energy step between levels in the receptor molecule equal to the violet photon’s energy, but there is an energy step for the red.
There are some noticeable differences in the characteristics of light between the two ends of the visible spectrum that are due to photon energies. Red light has insufficient photon energy to expose most black-and-white film, and it is thus used to illuminate darkrooms where such film is developed. Since violet light has a higher photon energy, dyes that absorb violet tend to fade more quickly than those that do not. (See Figure \(\PageIndex{5}\).) Take a look at some faded color posters in a storefront some time, and you will notice that the blues and violets are the last to fade. This is because other dyes, such as red and green dyes, absorb blue and violet photons, the higher energies of which break up their weakly bound molecules. (Complex molecules such as those in dyes and DNA tend to be weakly bound.) Blue and violet dyes reflect those colors and, therefore, do not absorb these more energetic photons, thus suffering less molecular damage.
Transparent materials, such as some glasses, do not absorb any visible light, because there is no energy step in the atoms or molecules that could absorb the light. Since individual photons interact with individual atoms, it is nearly impossible to have two photons absorbed simultaneously to reach a large energy step. Because of its lower photon energy, visible light can sometimes pass through many kilometers of a substance, while higher frequencies like UV, x ray, and \(\gamma\) rays are absorbed, because they have sufficient photon energy to ionize the material.
Example \(\PageIndex{3}\): How Many Photons per Second Does a Typical Light Bulb Produce?
Assuming that 10.0% of a 100-W light bulb’s energy output is in the visible range (typical for incandescent bulbs) with an average wavelength of 580 nm, calculate the number of visible photons emitted per second.
Strategy
Power is energy per unit time, and so if we can find the energy per photon, we can determine the number of photons per second. This will best be done in joules, since power is given in watts, which are joules per second.
Solution
The power in visible light production is 10.0% of 100 W, or 10.0 J/s. The energy of the average visible photon is found by substituting the given average wavelength into the formula
\[E = \dfrac{hc}{\lambda}. \nonumber\]
This produces
\[E = \dfrac{(6.63 \times 10^{-34} \, J \cdot s)(3.00 \times 10^8 \, m/s)}{580 \times 10^{-9} \, m} = 3.43 \times 10^{-19} \, J. \nonumber\]
The number of visible photons per second is thus
\[\text{photons/s} = \dfrac{10.0 \, J/s}{3.43 \times 10^{-19} \, J/photon} = 2.92 \times 10^{19} \, \text{photons/s}. \nonumber\]
Discussion
This incredible number of photons per second is verification that individual photons are insignificant in ordinary human experience. It is also a verification of the correspondence principle—on the macroscopic scale, quantization becomes essentially continuous or classical. Finally, there are so many photons emitted by a 100-W lightbulb that it can be seen by the unaided eye many kilometers away.
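The estimate generalizes to any source; the sketch below (illustrative only, with the 10% visible fraction and 580-nm average wavelength taken from the example) reproduces the photon emission rate.

```python
# Number of visible photons emitted per second by a light source.
h = 6.63e-34       # Planck's constant, J*s
c = 3.00e8         # speed of light, m/s

power_total = 100.0        # total electrical power, W (from the example)
visible_fraction = 0.10    # fraction of power emitted as visible light (from the example)
wavelength = 580e-9        # average visible wavelength, m

E_photon = h * c / wavelength                       # energy per photon, J
rate = power_total * visible_fraction / E_photon    # photons emitted per second

print(f"Energy per photon:          {E_photon:.2e} J")
print(f"Visible photons per second: {rate:.2e}")
```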
Lower-Energy Photons
Infrared radiation (IR) has even lower photon energies than visible light and cannot significantly alter atoms and molecules. IR can be absorbed and emitted by atoms and molecules, particularly between closely spaced states. IR is extremely strongly absorbed by water, for example, because water molecules have many states separated by energies on the order of \(10^{-5} \, eV\) to \(10^{-2} \, eV\) well within the IR and microwave energy ranges. This is why in the IR range, skin is almost jet black, with an emissivity near 1—there are many states in water molecules in the skin that can absorb a large range of IR photon energies. Not all molecules have this property. Air, for example, is nearly transparent to many IR frequencies.
Microwaves are the highest frequencies that can be produced by electronic circuits, although they are also produced naturally. Thus microwaves are similar to IR but do not extend to as high frequencies. There are states in water and other molecules that have the same frequency and energy as microwaves, typically about \(10^{-5} \, eV\). This is one reason why food absorbs microwaves more strongly than many other materials, making microwave ovens an efficient way of putting energy directly into food.
Photon energies for both IR and microwaves are so low that huge numbers of photons are involved in any significant energy transfer by IR or microwaves (such as warming yourself with a heat lamp or cooking pizza in the microwave). Visible light, IR, microwaves, and all lower frequencies cannot produce ionization with single photons and do not ordinarily have the hazards of higher frequencies. When visible, IR, or microwave radiation is hazardous, such as the inducement of cataracts by microwaves, the hazard is due to huge numbers of photons acting together (not to an accumulation of photons, such as sterilization by weak UV). The negative effects of visible, IR, or microwave radiation can be thermal effects, which could be produced by any heat source. But one difference is that at very high intensity, strong electric and magnetic fields can be produced by photons acting together. Such electromagnetic fields (EMF) can actually ionize materials.
MISCONCEPTION ALERT: HIGH VOLTAGE POWER LINES
- Although some people think that living near high-voltage power lines is hazardous to one’s health, ongoing studies of the transient field effects produced by these lines show their strengths to be insufficient to cause damage. Demographic studies also fail to show significant correlation of ill effects with high-voltage power lines. The American Physical Society issued a report over 10 years ago on power-line fields, which concluded that the scientific literature and reviews of panels show no consistent, significant link between cancer and power-line fields. They also felt that the “diversion of resources to eliminate a threat which has no persuasive scientific basis is disturbing.”
It is virtually impossible to detect individual photons having frequencies below microwave frequencies, because of their low photon energy. But the photons are there. A continuous EM wave can be modeled as photons. At low frequencies, EM waves are generally treated as time- and position-varying electric and magnetic fields with no discernible quantization. This is another example of the correspondence principle in situations involving huge numbers of photons.
PHET EXPLORATIONS: COLOR VISION
Make a whole rainbow by mixing red, green, and blue light. Change the wavelength of a monochromatic beam or filter white light. View the light as a solid beam, or see the individual photons.
Summary
- Photon energy is responsible for many characteristics of EM radiation, being particularly noticeable at high frequencies.
- Photons have both wave and particle characteristics.
Glossary
- gamma ray
- also \(\gamma\)-ray; highest-energy photon in the EM spectrum
- ionizing radiation
- radiation that ionizes materials that absorb it
- x ray
- EM photon between \(\gamma\)-ray and UV in energy
- bremsstrahlung
- German for braking radiation ; produced when electrons are decelerated
- characteristic x rays
- x rays whose energy depends on the material they were produced in
- ultraviolet radiation
- UV; ionizing photons slightly more energetic than violet light
- visible light
- the range of photon energies the human eye can detect
- infrared radiation
- photons with energies slightly less than red light
- microwaves
- photons with wavelengths on the order of a micron \((\mu m)\)
29.4: Photon Momentum
Learning Objectives
By the end of this section, you will be able to:
- Relate the linear momentum of a photon to its energy or wavelength, and apply linear momentum conservation to simple processes involving the emission, absorption, or reflection of photons.
- Account qualitatively for the increase of photon wavelength that is observed, and explain the significance of the Compton wavelength.
Measuring Photon Momentum
The quantum of EM radiation we call a photon has properties analogous to those of particles we can see, such as grains of sand. A photon interacts as a unit in collisions or when absorbed, rather than as an extensive wave. Massive quanta, like electrons, also act like macroscopic particles—something we expect, because they are the smallest units of matter. Particles carry momentum as well as energy. Despite photons having no mass, there has long been evidence that EM radiation carries momentum. (Maxwell and others who studied EM waves predicted that they would carry momentum.) It is now a well-established fact that photons do have momentum. In fact, photon momentum is suggested by the photoelectric effect, where photons knock electrons out of a substance. Figure \(\PageIndex{1}\) shows macroscopic evidence of photon momentum.
Figure \(\PageIndex{1}\) shows a comet with two prominent tails. What most people do not know about the tails is that they always point away from the Sun rather than trailing behind the comet (like the tail of Bo Peep’s sheep). Comet tails are composed of gases and dust evaporated from the body of the comet and ionized gas. The dust particles recoil away from the Sun when photons scatter from them. Evidently, photons carry momentum in the direction of their motion (away from the Sun), and some of this momentum is transferred to dust particles in collisions. Gas atoms and molecules in the blue tail are most affected by other particles of radiation, such as protons and electrons emanating from the Sun, rather than by the momentum of photons.
Connections: Conservation of Momentum
Not only is momentum conserved in all realms of physics, but all types of particles are found to have momentum. We expect particles with mass to have momentum, but now we see that massless particles including photons also carry momentum.
Momentum is conserved in quantum mechanics just as it is in relativity and classical physics. Some of the earliest direct experimental evidence of this came from scattering of x-ray photons by electrons in substances, named Compton scattering after the American physicist Arthur H. Compton (1892–1962). Around 1923, Compton observed that x rays scattered from materials had a decreased energy and correctly analyzed this as being due to the scattering of photons from electrons. This phenomenon could be handled as a collision between two particles—a photon and an electron at rest in the material. Energy and momentum are conserved in the collision (Figure \(\PageIndex{2}\)). He won a Nobel Prize in 1929 for the discovery of this scattering, now called the Compton effect, because it helped prove that photon momentum is given by
\[p = \dfrac{h}{\lambda},\]
where \(h\) is Planck's constant and \(\lambda\) is the photon wavelength. (Note that relativistic momentum given as \(p = \gamma mu\) is valid only for particles having mass.)
We can see that photon momentum is small, since \(p = h/\lambda\) and \(h\) is very small. It is for this reason that we do not ordinarily observe photon momentum. Our mirrors do not recoil when light reflects from them (except perhaps in cartoons). Compton saw the effects of photon momentum because he was observing x rays, which have a small wavelength and a relatively large momentum, interacting with the lightest of particles, the electron.
Example \(\PageIndex{1}\): Electron and Photon Momentum Compared
- Calculate the momentum of a visible photon that has a wavelength of 500 nm.
- Find the velocity of an electron having the same momentum.
- What is the energy of the electron, and how does it compare with the energy of the photon?
Strategy
Finding the photon momentum is a straightforward application of its definition: \(p = \frac{h}{\lambda}\).
If we find the photon momentum is small, then we can assume that an electron with the same momentum will be nonrelativistic, making it easy to find its velocity and kinetic energy from the classical formulas.
Solution for (a)
Photon momentum is given by the equation: \[p = \dfrac{h}{\lambda}. \nonumber\]
Entering the given photon wavelength yields
\[p = \dfrac{6.63 \times 10^{-34} \, J \cdot s}{500 \times 10^{-9} \, m} = 1.33 \times 10^{-27} \, kg \cdot m/s. \nonumber\]
Solution for (b)
Since this momentum is indeed small, we will use the classical expression \(p = mv\) to find the velocity of an electron with this momentum. Solving for \(v\) and using the known value for the mass of an electron gives
\[v = \dfrac{p}{m} = \dfrac{1.33 \times 10^{-27} \, kg \cdot m/s}{9.11 \times 10^{-31} \, kg} = 1455 \, m/s \approx 1460 \, m/s. \nonumber\]
Solution for (c)
The electron has kinetic energy, which is classically given by
\[KE_e = \dfrac{1}{2} mv^2. \nonumber\]
Thus,
\[KE_e = \dfrac{1}{2} (9.11 \times 10^{-31} \, kg)(1455 \, m/s)^2 = 9.64 \times 10^{-25} \, J. \nonumber\]
Converting this to eV by multiplying by \((1 \, eV)/(1.602 \times 10^{-19} \, J)\) yields
\[KE_e = 6.02 \times 10^{-6} \, eV. \nonumber\]
The photon energy \(E\) is
\[E = \dfrac{hc}{\lambda} = \dfrac{1240 \, eV \cdot nm}{500 \, nm} = 2.48 \, eV, \nonumber\]
which is about five orders of magnitude greater.
Discussion
Photon momentum is indeed small. Even if we have huge numbers of them, the total momentum they carry is small. An electron with the same momentum has a 1460 m/s velocity, which is clearly nonrelativistic. A more massive particle with the same momentum would have an even smaller velocity. This is borne out by the fact that it takes far less energy to give an electron the same momentum as a photon. But on a quantum-mechanical scale, especially for high-energy photons interacting with small masses, photon momentum is significant. Even on a large scale, photon momentum can have an effect if there are enough of them and if there is nothing to prevent the slow recoil of matter. Comet tails are one example, but there are also proposals to build space sails that use huge low-mass mirrors (made of aluminized Mylar) to reflect sunlight. In the vacuum of space, the mirrors would gradually recoil and could actually take spacecraft from place to place in the solar system (Figure \(\PageIndex{3}\)).
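A short numerical sketch (assuming, as in the example, a nonrelativistic electron) makes the comparison explicit; it reproduces the photon momentum, the matching electron velocity, and the two energies.

```python
# Compare a 500-nm photon with an electron carrying the same momentum.
h = 6.63e-34        # Planck's constant, J*s
c = 3.00e8          # speed of light, m/s
m_e = 9.11e-31      # electron mass, kg
eV = 1.602e-19      # joules per electron volt

wavelength = 500e-9                # photon wavelength, m
p = h / wavelength                 # photon momentum, kg*m/s
v_e = p / m_e                      # nonrelativistic electron speed with the same momentum
KE_e = 0.5 * m_e * v_e ** 2        # electron kinetic energy, J
E_photon = h * c / wavelength      # photon energy, J

print(f"Photon momentum:   {p:.2e} kg*m/s")
print(f"Electron speed:    {v_e:.0f} m/s")
print(f"Electron KE:       {KE_e / eV:.2e} eV")
print(f"Photon energy:     {E_photon / eV:.2f} eV")
```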
Relativistic Photon Momentum
There is a relationship between photon momentum \(p\) and photon energy \(E\) that is consistent with the relation given previously for the relativistic total energy of a particle as
\[E^2 = (pc)^2 + (mc^2)^2. \label{photon1}\]
We know \(m\) is zero for a photon, but \(p\) is not, so that Equation \ref{photon1} becomes
\[E = pc,\] or \[p = \dfrac{E}{c} \text{(for photons)}. \nonumber\]
To check the validity of this relation, note that \(E = hc/\lambda\) for a photon. Substituting this into \(p = E/c\) yields
\[p = (hc/\lambda) / c = \dfrac{h}{\lambda},\]
as determined experimentally and discussed above. Thus, \(p = E/c\) is equivalent to Compton’s result \(p = h/\lambda.\) For a further verification of the relationship between photon energy and momentum, see Example \(\PageIndex{2}\).
Photon Detectors
Almost all detection systems talked about thus far—eyes, photographic plates, photomultiplier tubes in microscopes, and CCD cameras—rely on particle-like properties of photons interacting with a sensitive area. A change is caused and either the change is cascaded or zillions of points are recorded to form an image we detect. These detectors are used in biomedical imaging systems, and there is ongoing research into improving the efficiency of receiving photons, particularly by cooling detection systems and reducing thermal effects.
Example \(\PageIndex{2}\): Photon Energy and Momentum
Show that \(p = E/c\) for the photon considered in Example \(\PageIndex{1}\).
Strategy
We will take the energy \(E\) found in Example \(\PageIndex{1}\), divide it by the speed of light, and see if the same momentum is obtained as before.
Solution
Given that the energy of the photon is 2.48 eV and converting this to joules, we get
\[p = \dfrac{E}{c} = \dfrac{(2.48 \, eV)(1.60 \times 10^{-19} \, J/eV)}{3.00 \times 10^8 \, m/s} = 1.33 \times 10^{-27} \, kg \cdot m/s. \nonumber\]
Discussion
This value for momentum is the same as found before (note that unrounded values are used in all calculations to avoid even small rounding errors), an expected verification of the relationship \(p = E/c\). This also means the relationship between energy, momentum, and mass given by \(E^2 = (pc)^2 + (mc^2)^2\) applies to both matter and photons. Once again, note that \(p\) is not zero, even when \(m\) is.
PROBLEM-SOLVING SUGGESTIONS
Note that the forms of the constants \(h = 4.14 \times 10^{-15} \, eV \cdot s\) and \(hc = 1240 \, eV \cdot nm\) may be particularly useful for this section’s Problems and Exercises.
Summary
- Photons have momentum, given by \(p = \frac{h}{\lambda}\), where \(\lambda\) is the photon wavelength.
- Photon energy and momentum are related by \(p = \frac{E}{c}\), where \(E = hf = hc/\lambda\) for a photon.
Glossary
- photon momentum
- the amount of momentum a photon has, calculated by \(p = \frac{h}{\lambda} = \frac{E}{c}\)
- Compton effect
- the phenomenon whereby x rays scattered from materials have decreased energy
29.5: The Particle-Wave Duality of Light
Learning Objectives
By the end of this section, you will be able to:
- Explain what the term particle-wave duality means, and why it is applied to EM radiation.
We have long known that EM radiation is a wave, capable of interference and diffraction. We now see that light can be modeled as photons, which are massless particles. This may seem contradictory, since we ordinarily deal with large objects that never act like both wave and particle. An ocean wave, for example, looks nothing like a rock. To understand small-scale phenomena, we make analogies with the large-scale phenomena we observe directly. When we say something behaves like a wave, we mean it shows interference effects analogous to those seen in overlapping water waves (Figure \(\PageIndex{1}\)). Two examples of waves are sound and EM radiation. When we say something behaves like a particle, we mean that it interacts as a discrete unit with no interference effects. Examples of particles include electrons, atoms, and photons of EM radiation. How do we talk about a phenomenon that acts like both a particle and a wave?
There is no doubt that EM radiation interferes and has the properties of wavelength and frequency. There is also no doubt that it behaves as particles—photons with discrete energy. We call this twofold nature the particle-wave duality , meaning that EM radiation has both particle and wave properties. This so-called duality is simply a term for properties of the photon analogous to phenomena we can observe directly, on a macroscopic scale. If this term seems strange, it is because we do not ordinarily observe details on the quantum level directly, and our observations yield either particle or wavelike properties, but never both simultaneously.
Since we have a particle-wave duality for photons, and since we have seen connections between photons and matter in that both have momentum, it is reasonable to ask whether there is a particle-wave duality for matter as well. If the EM radiation we once thought to be a pure wave has particle properties, is it possible that matter has wave properties? The answer is yes. The consequences are tremendous, as we will begin to see in the next section.
PHET: EXPLORATIONS: QUANTUM WAVE INTERFERENCE
When do photons, electrons, and atoms behave like particles and when do they behave like waves? Watch waves spread out and interfere as they pass through a double slit, then get detected on a screen as tiny dots. Use quantum detectors to explore how measurements change the waves and the patterns they produce on the screen.
Summary
- EM radiation can behave like either a particle or a wave.
- This is termed particle-wave duality.
Glossary
- particle-wave duality
- the property of behaving like either a particle or a wave; the term for the phenomenon that all particles have wave characteristics
29.6: The Wave Nature of Matter
Learning Objectives
By the end of this section, you will be able to:
- Describe the Davisson-Germer experiment, and explain how it provides evidence for the wave nature of electrons.
In 1923 a French physics graduate student named Prince Louis-Victor de Broglie (1892–1987) made a radical proposal based on the hope that nature is symmetric. If EM radiation has both particle and wave properties, then nature would be symmetric if matter also had both particle and wave properties. If what we once thought of as an unequivocal wave (EM radiation) is also a particle, then what we think of as an unequivocal particle (matter) may also be a wave. De Broglie’s suggestion, made as part of his doctoral thesis, was so radical that it was greeted with some skepticism. A copy of his thesis was sent to Einstein, who said it was not only probably correct, but that it might be of fundamental importance. With the support of Einstein and a few other prominent physicists, de Broglie was awarded his doctorate.
De Broglie took both relativity and quantum mechanics into account to develop the proposal that all particles have a wavelength , given by
\[\lambda = \dfrac{h}{p} \, (\text{matter and photons}),\]
where \(h\) is Planck’s constant and \(p\) is momentum. This is defined to be the de Broglie wavelength . (Note that we already have this for photons, from the equation \(p = h/\lambda\).) The hallmark of a wave is interference. If matter is a wave, then it must exhibit constructive and destructive interference. Why isn’t this ordinarily observed? The answer is that in order to see significant interference effects, a wave must interact with an object about the same size as its wavelength. Since \(h\) is very small, \(\lambda\) is also small, especially for macroscopic objects. A 3-kg bowling ball moving at 10 m/s, for example, has
\[\begin{align*} \lambda &= h/p \\[4pt] &= (6.63 \times 10^{-34} \, J \cdot s)/[(3 \, kg)(10 \, m/s)] \\[4pt] &= 2 \times 10^{-35} \, m. \end{align*}\]
This means that to see its wave characteristics, the bowling ball would have to interact with something about \(10^{-35} \, m\) in size—far smaller than anything known. When waves interact with objects much larger than their wavelength, they show negligible interference effects and move in straight lines (such as light rays in geometric optics). To get easily observed interference effects from particles of matter, the longest wavelength and hence smallest mass possible would be useful. Therefore, this effect was first observed with electrons.
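To make the size comparison concrete, the sketch below (the bowling-ball values are those used above; the electron speed is an assumed, illustrative value) evaluates \(\lambda = h/(mv)\) for both objects.

```python
# de Broglie wavelength lambda = h / (m v) for a bowling ball and an electron.
h = 6.63e-34     # Planck's constant, J*s

objects = [
    ("bowling ball", 3.0, 10.0),       # mass in kg, speed in m/s (values from the text)
    ("electron", 9.11e-31, 1.0e6),     # assumed electron speed for illustration
]

for name, mass, speed in objects:
    wavelength = h / (mass * speed)
    print(f"{name:12s}: lambda = {wavelength:.2e} m")
```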
American physicists Clinton J. Davisson and Lester H. Germer in 1925 and, independently, British physicist G. P. Thomson (son of J. J. Thomson, discoverer of the electron) in 1926 scattered electrons from crystals and found diffraction patterns. These patterns are exactly consistent with interference of electrons having the de Broglie wavelength and are somewhat analogous to light interacting with a diffraction grating (Figure \(\PageIndex{1}\))
Connections: Waves
All microscopic particles, whether massless, like photons, or having mass, like electrons, have wave properties. The relationship between momentum and wavelength is fundamental for all particles.
De Broglie’s proposal of a wave nature for all particles initiated a remarkably productive era in which the foundations for quantum mechanics were laid. In 1926, the Austrian physicist Erwin Schrödinger (1887–1961) published four papers in which the wave nature of particles was treated explicitly with wave equations. At the same time, many others began important work. Among them was German physicist Werner Heisenberg (1901–1976) who, among many other contributions to quantum mechanics, formulated a mathematical treatment of the wave nature of matter that used matrices rather than wave equations. We will deal with some specifics in later sections, but it is worth noting that de Broglie’s work was a watershed for the development of quantum mechanics. De Broglie was awarded the Nobel Prize in 1929 for his vision, as were Davisson and G. P. Thomson in 1937 for their experimental verification of de Broglie’s hypothesis.
Example \(\PageIndex{1}\): Electron Wavelength versus Velocity and Energy
For an electron having a de Broglie wavelength of 0.167 nm (appropriate for interacting with crystal lattice structures that are about this size):
- Calculate the electron’s velocity, assuming it is nonrelativistic.
- Calculate the electron’s kinetic energy in eV.
Strategy
For part (a), since the de Broglie wavelength is given, the electron’s velocity can be obtained from \(\lambda = h/p\) by using the nonrelativistic formula for momentum, \(p = mv\). For part (b), once \(v\) is obtained (and it has been verified that \(v\) is nonrelativistic), the classical kinetic energy is simply \((1/2)mv^2\).
Solution for (a)
Substituting the nonrelativistic formula for momentum \((p = mv)\) into the de Broglie wavelength gives
\[\begin{align*} \lambda &= \dfrac{h}{p} \\[4pt] &= \dfrac{h}{mv}. \end{align*}\]
Solving for \(v\) gives
\[v =\dfrac{h}{m\lambda}.\nonumber\]
Substituting known values yields
\[\begin{align*} v &= \dfrac{6.63 \times 10^{-34} \, J \cdot s}{(9.11 \times 10^{-31} \, kg)(0.167 \times 10^{-9} \, m)} \\[4pt] &= 4.36 \times 10^6 \, m/s.\end{align*}\]
Solution for (b)
While fast compared with a car, this electron’s speed is not highly relativistic, and so we can comfortably use the classical formula to find the electron’s kinetic energy and convert it to eV as requested.
\[\begin{align*} KE &= \dfrac{1}{2} mv^2 \\[4pt] &= \dfrac{1}{2}(9.11 \times 10^{-31} \, kg)(4.36 \times 10^6 \, m/s)^2 \\[4pt] &= (8.66 \times 10^{-18} \, J)\left(\dfrac{1 \, eV}{1.602 \times 10^{-19} \, J}\right) \\[4pt] &= 54.0 \, eV \end{align*} \]
Discussion
This low energy means that these 0.167-nm electrons could be obtained by accelerating them through a 54.0-V electrostatic potential, an easy task. The results also confirm the assumption that the electrons are nonrelativistic, since their velocity is just over 1% of the speed of light and the kinetic energy is about 0.01% of the rest energy of an electron (0.511 MeV). If the electrons had turned out to be relativistic, we would have had to use more involved calculations employing relativistic formulas.
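The link between accelerating voltage and electron wavelength can be written in one step: a nonrelativistic electron accelerated through a potential \(V\) has momentum \(p = \sqrt{2m_e qV}\), so \(\lambda = h/\sqrt{2m_e qV}\). A minimal sketch, assuming nonrelativistic electrons:

```python
# de Broglie wavelength of an electron accelerated through V volts (nonrelativistic).
import math

h = 6.63e-34      # Planck's constant, J*s
m_e = 9.11e-31    # electron mass, kg
q = 1.602e-19     # elementary charge, C

def electron_wavelength_nm(V_volts):
    """Wavelength in nm of an electron accelerated through V_volts (nonrelativistic)."""
    p = math.sqrt(2.0 * m_e * q * V_volts)   # momentum from KE = qV = p^2 / (2 m)
    return h / p * 1e9

for V in (54.0, 100.0, 1.0e4):               # illustrative accelerating voltages
    print(f"V = {V:8.1f} V -> lambda = {electron_wavelength_nm(V):.4f} nm")
```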
Electron Microscopes
One consequence or use of the wave nature of matter is found in the electron microscope. As we have discussed, there is a limit to the detail observed with any probe having a wavelength. Resolution, or observable detail, is limited to about one wavelength. Since a potential of only 54 V can produce electrons with sub-nanometer wavelengths, it is easy to get electrons with much smaller wavelengths than those of visible light (hundreds of nanometers). Electron microscopes can, thus, be constructed to detect much smaller details than optical microscopes (Figure \(\PageIndex{2}\)).
There are basically two types of electron microscopes. The transmission electron microscope (TEM) accelerates electrons that are emitted from a hot filament (the cathode). The beam is broadened and then passes through the sample. A magnetic lens focuses the beam image onto a fluorescent screen, a photographic plate, or (most probably) a CCD (light-sensitive camera), from which it is transferred to a computer. The TEM is similar to the optical microscope, but it requires a thin sample examined in a vacuum. However, it can resolve details as small as 0.1 nm (\(10^{-10} \, m\)), providing magnifications of 100 million times the size of the original object. The TEM has allowed us to see individual atoms and the structure of cell nuclei.
The scanning electron microscope (SEM) provides images by using secondary electrons produced by the primary beam interacting with the surface of the sample (Figure \(\PageIndex{2}\)). The SEM also uses magnetic lenses to focus the beam onto the sample. However, it moves the beam around electrically to “scan” the sample in the x and y directions. A CCD detector is used to process the data for each electron position, producing images like the one at the beginning of this chapter. The SEM has the advantage of not requiring a thin sample and of providing a 3-D view. However, its resolution is about ten times lower than that of a TEM.
Electrons were the first particles with mass to be directly confirmed to have the wavelength proposed by de Broglie. Subsequently, protons, helium nuclei, neutrons, and many others have been observed to exhibit interference when they interact with objects having sizes similar to their de Broglie wavelength. The de Broglie wavelength for massless particles was well established in the 1920s for photons, and it has since been observed that all massless particles have a de Broglie wavelength \(\lambda = h/p\). The wave nature of all particles is a universal characteristic of nature. We shall see in following sections that implications of the de Broglie wavelength include the quantization of energy in atoms and molecules, and an alteration of our basic view of nature on the microscopic scale. The next section, for example, shows that there are limits to the precision with which we may make predictions, regardless of how hard we try. There are even limits to the precision with which we may measure an object’s location or energy.
MAKING CONNECTIONS
The wave nature of matter allows it to exhibit all the characteristics of other, more familiar, waves. Diffraction gratings, for example, produce diffraction patterns for light that depend on grating spacing and the wavelength of the light. This effect, as with most wave phenomena, is most pronounced when the wave interacts with objects having a size similar to its wavelength. (For gratings, this is the spacing between multiple slits.) When electrons interact with a system having a spacing similar to the electron wavelength, they show the same types of interference patterns as light does for diffraction gratings, as shown at top left in Figure \(\PageIndex{3}\).
Atoms are spaced at regular intervals in a crystal as parallel planes, as shown in the bottom part of Figure \(\PageIndex{3}\). The spacings between these planes act like the openings in a diffraction grating. At certain incident angles, the paths of electrons scattering from successive planes differ by one wavelength and, thus, interfere constructively. At other angles, the path length differences are not an integral wavelength, and there is partial to total destructive interference. This type of scattering from a large crystal with well-defined lattice planes can produce dramatic interference patterns. It is called Bragg reflection , for the father-and-son team who first explored and analyzed it in some detail. The expanded view also shows the path-length differences and indicates how these depend on incident angle \(\theta\) in a manner similar to the diffraction patterns for x rays reflecting from a crystal.
Let us take the spacing between parallel planes of atoms in the crystal to be \(d\). As mentioned, if the path length difference (PLD) for the electrons is a whole number of wavelengths, there will be constructive interference —that is, \(PLD = n\lambda \, (n = 1, \, 2, \, 3, . . .)\). Because \(AB = BC = d \, \sin \, \theta\), we have constructive interference when
\[n\lambda = 2d \, \sin \, \theta.\]
This relationship is called the Bragg equation and applies not only to electrons but also to x rays.
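To see which angles the Bragg condition singles out, here is a brief sketch; the plane spacing \(d\) is an assumed, illustrative value, and the wavelength is the 0.167-nm electron wavelength from the example above.

```python
# Angles satisfying the Bragg condition n * lambda = 2 * d * sin(theta).
import math

d = 0.215e-9           # assumed spacing between atomic planes, m (illustrative value)
wavelength = 0.167e-9  # electron de Broglie wavelength, m (from the example above)

for n in range(1, 4):
    s = n * wavelength / (2.0 * d)
    if s <= 1.0:                       # a diffraction maximum exists only if sin(theta) <= 1
        theta = math.degrees(math.asin(s))
        print(f"n = {n}: theta = {theta:.1f} degrees")
    else:
        print(f"n = {n}: no constructive-interference angle (sin(theta) would exceed 1)")
```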
The wavelength of matter is a submicroscopic characteristic that explains a macroscopic phenomenon such as Bragg reflection. Similarly, the wavelength of light is a submicroscopic characteristic that explains the macroscopic phenomenon of diffraction patterns.
Summary
- Particles of matter also have a wavelength, called the de Broglie wavelength, given by \(\lambda = \frac{h}{p}\), where \(p\) is momentum.
- Matter is found to have the same interference characteristics as any other wave.
Glossary
- de Broglie wavelength
- the wavelength possessed by a particle of matter, calculated by \(\lambda = h/p\)
29.7: Probability and The Heisenberg Uncertainty Principle
Learning Objectives
By the end of this section, you will be able to:
- Use both versions of Heisenberg’s uncertainty principle in calculations.
- Explain the implications of Heisenberg’s uncertainty principle for measurements.
Matter and photons are waves, implying they are spread out over some distance. What is the position of a particle, such as an electron? Is it at the center of the wave? The answer lies in how you measure the position of an electron. Experiments show that you will find the electron at some definite location, unlike a wave. But if you set up exactly the same situation and measure it again, you will find the electron in a different location, often far outside any experimental uncertainty in your measurement. Repeated measurements will display a statistical distribution of locations that appears wavelike (Figure \(\PageIndex{1}\)).
After de Broglie proposed the wave nature of matter, many physicists, including Schrödinger and Heisenberg, explored the consequences. The idea quickly emerged that, because of its wave character, a particle’s trajectory and destination cannot be precisely predicted for each particle individually . However, each particle goes to a definite place (as illustrated in Figure \(\PageIndex{2}\)). After compiling enough data, you get a distribution related to the particle’s wavelength and diffraction pattern. There is a certain probability of finding the particle at a given location, and the overall pattern is called a probability distribution . Those who developed quantum mechanics devised equations that predicted the probability distribution in various circumstances.
It is somewhat disquieting to think that you cannot predict exactly where an individual particle will go, or even follow it to its destination. Let us explore what happens if we try to follow a particle. Consider the double-slit patterns obtained for electrons and photons in Figure \(\PageIndex{2}\). First, we note that these patterns are identical, following \(d \, \sin \, \theta = m\lambda\), the equation for double-slit constructive interference developed in Wave Optics, where \(d\) is the slit separation and \(\lambda\) is the electron or photon wavelength.
Both patterns build up statistically as individual particles fall on the detector. This can be observed for photons or electrons—for now, let us concentrate on electrons. You might imagine that the electrons are interfering with one another as any waves do. To test this, you can lower the intensity until there is never more than one electron between the slits and the screen. The same interference pattern builds up! This implies that a particle’s probability distribution spans both slits, and the particles actually interfere with themselves. Does this also mean that the electron goes through both slits? An electron is a basic unit of matter that is not divisible. But it is a fair question, and so we should look to see if the electron traverses one slit or the other, or both. One possibility is to have coils around the slits that detect charges moving through them. What is observed is that an electron always goes through one slit or the other; it does not split to go through both. But there is a catch. If you determine that the electron went through one of the slits, you no longer get a double slit pattern—instead, you get single slit interference. There is no escape by using another method of determining which slit the electron went through. Knowing the particle went through one slit forces a single-slit pattern. If you do not observe which slit the electron goes through, you obtain a double-slit pattern.
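The statistical build-up of the pattern can be mimicked with a short simulation. The sketch below is only an illustration under assumed parameters (the wavelength, slit separation, screen distance, and detector width are invented for the example): each "particle" lands at one position drawn from the two-slit probability distribution, and the fringes emerge only after many detections.

```python
import math
import random

# Purely illustrative parameters (assumptions, not values from the text)
wavelength = 50e-12   # particle wavelength, m
d = 1.0e-6            # slit separation, m
L = 1.0               # slit-to-screen distance, m
y_max = 2.0e-4        # half-width of the detector, m

def intensity(y):
    # Two-slit interference pattern (single-slit envelope ignored), normalized to 1
    return math.cos(math.pi * d * y / (wavelength * L)) ** 2

def detect_one():
    # One particle lands at a single position drawn from the probability distribution
    while True:
        y = random.uniform(-y_max, y_max)
        if random.random() < intensity(y):
            return y

hits = [detect_one() for _ in range(5000)]

# Crude text histogram: the fringe pattern appears only after many individual detections
bins = 41
counts = [0] * bins
for y in hits:
    idx = min(bins - 1, int((y + y_max) / (2 * y_max) * bins))
    counts[idx] += 1
for c in counts:
    print("#" * (c // 10))
```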
Heisenberg Uncertainty
How does knowing which slit the electron passed through change the pattern? The answer is fundamentally important— measurement affects the system being observed . Information can be lost, and in some cases it is impossible to measure two physical quantities simultaneously to exact precision. For example, you can measure the position of a moving electron by scattering light or other electrons from it. Those probes have momentum themselves, and by scattering from the electron, they change its momentum in a manner that loses information . There is a limit to absolute knowledge, even in principle.
It was Werner Heisenberg who first stated this limit to knowledge in 1927 as a result of his work on quantum mechanics and the wave characteristics of all particles (Figure \(\PageIndex{3}\)). Specifically, consider simultaneously measuring the position and momentum of an electron (it could be any particle). There is an uncertainty in position \(\Delta x\) that is approximately equal to the wavelength of the particle. That is,
\[\Delta x \approx \lambda.\]
As discussed above, a wave is not located at one point in space. If the electron’s position is measured repeatedly, a spread in locations will be observed, implying an uncertainty in position \(\Delta x\). To detect the position of the particle, we must interact with it, such as having it collide with a detector. In the collision, the particle will lose momentum. This change in momentum could be anywhere from close to zero to the total momentum of the particle, \(p = h/\lambda\). It is not possible to tell how much momentum will be transferred to a detector, and so there is an uncertainty in momentum \(\Delta p\), too. In fact, the uncertainty in momentum may be as large as the momentum itself, which in equation form means that
\[\Delta p \approx \dfrac{h}{\lambda}.\]
The uncertainty in position can be reduced by using a shorter-wavelength electron, since \(\Delta x \approx \lambda\). But shortening the wavelength increases the uncertainty in momentum, since \(\Delta p \approx h/\lambda\). Conversely, the uncertainty in momentum can be reduced by using a longer-wavelength electron, but this increases the uncertainty in position. Mathematically, you can express this trade-off by multiplying the uncertainties. The wavelength cancels, leaving
\[\Delta x \Delta p \approx h.\]
So if one uncertainty is reduced, the other must increase so that their product is \(\approx h\).
With the use of advanced mathematics, Heisenberg showed that the best that can be done in a simultaneous measurement of position and momentum is
\[\Delta x \Delta p \geq \dfrac{h}{4\pi}.\]
This is known as the Heisenberg uncertainty principle . It is impossible to measure position \(x\) and momentum \(p\) simultaneously with uncertainties \(\Delta x\) and \(\Delta p\) that multiply to be less than \(h/4\pi\). Neither uncertainty can be zero. Neither uncertainty can become small without the other becoming large. A small wavelength allows accurate position measurement, but it increases the momentum of the probe to the point that it further disturbs the momentum of a system being measured. For example, if an electron is scattered from an atom and has a wavelength small enough to detect the position of electrons in the atom, its momentum can knock the electrons from their orbits in a manner that loses information about their original motion. It is therefore impossible to follow an electron in its orbit around an atom. If you measure the electron’s position, you will find it in a definite location, but the atom will be disrupted. Repeated measurements on identical atoms will produce interesting probability distributions for electrons around the atom, but they will not produce motion information. The probability distributions are referred to as electron clouds or orbitals. The shapes of these orbitals are often shown in general chemistry texts and are discussed in The Wave Nature of Matter Causes Quantization .
Example \(\PageIndex{1}\): Heisenberg Uncertainty Principle in Position and Momentum for an Atom
- If the position of an electron in an atom is measured to an accuracy of 0.0100 nm, what is the electron’s uncertainty in velocity?
- If the electron has this velocity, what is its kinetic energy in eV?
Strategy
The uncertainty in position is the accuracy of the measurement, or \(\Delta x = 0.0100 \, nm\). Thus the smallest uncertainty in momentum \(\Delta p\) can be calculated using \(\Delta x \Delta p \geq h/4\pi\). Once the uncertainty in momentum \(\Delta p\) is found, the uncertainty in velocity can be found from \(\Delta p = m\Delta v\).
Solution for (a)
Using the equals sign in the uncertainty principle to express the minimum uncertainty, we have
\[\Delta x \Delta p = \dfrac{h}{4\pi}.\]
Solving for \(\Delta p\) and substituting known values gives
\[\Delta p = \dfrac{h}{4\pi \Delta x} = \dfrac{6.63 \times 10^{-34} \, J \cdot s}{4 \pi (1.00 \times 10^{-11} \, m)} = 5.28 \times 10^{-24} \, kg \cdot m/s\]
Thus,
\[\Delta p = 5.28 \times 10^{-24} \, kg \cdot m/s = m\Delta v.\]
Solving for \(\Delta v\) and substituting the mass of an electron gives
\[\Delta v = \dfrac{\Delta p}{m} = \dfrac{5.28 \times 10^{-24} \, kg \cdot m/s}{9.11 \times 10^{-31} \, kg} = 5.79 \times 10^6 \, m/s.\]
Solution for (b)
Although large, this velocity is not highly relativistic, and so the electron’s kinetic energy is
\[KE_e = \dfrac{1}{2} mv^2\]
\[= \dfrac{1}{2} (9.11 \times 10^{-31} \, kg)(5.79 \times 10^6 \, m/s)^2\]
\[= (1.53 \times 10^{-17} \, J)\left(\dfrac{1 \, eV}{1.60 \times 10^{-19} \, J}\right) = 95.5 \, eV.\]
Discussion
Since atoms are roughly 0.1 nm in size, knowing the position of an electron to 0.0100 nm localizes it reasonably well inside the atom. This would be like being able to see details one-tenth the size of the atom. But the consequent uncertainty in velocity is large. You certainly could not follow it very well if its velocity is so uncertain. To get a further idea of how large the uncertainty in velocity is, we assumed the velocity of the electron was equal to its uncertainty and found this gave a kinetic energy of 95.5 eV. This is significantly greater than the typical energy difference between levels in atoms, so that it is impossible to get a meaningful energy for the electron if we know its position even moderately well.
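The arithmetic in this example is easy to check; the sketch below simply re-evaluates the same formulas for \(\Delta x = 0.0100 \, nm\).

```python
import math

h = 6.63e-34        # Planck's constant, J*s
m_e = 9.11e-31      # electron mass, kg

dx = 0.0100e-9      # uncertainty in position, m

dp = h / (4 * math.pi * dx)      # minimum uncertainty in momentum, kg*m/s
dv = dp / m_e                    # corresponding uncertainty in velocity, m/s
ke_joules = 0.5 * m_e * dv**2    # kinetic energy if the speed equals dv
ke_ev = ke_joules / 1.60e-19     # convert J to eV

print(f"dp = {dp:.3g} kg*m/s")   # ~5.28e-24
print(f"dv = {dv:.3g} m/s")      # ~5.79e6
print(f"KE = {ke_ev:.3g} eV")    # ~95.5
```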
Why don’t we notice Heisenberg’s uncertainty principle in everyday life? The answer is that Planck’s constant is very small. Thus the lower limit in the uncertainty of measuring the position and momentum of large objects is negligible. We can detect sunlight reflected from Jupiter and follow the planet in its orbit around the Sun. The reflected sunlight alters the momentum of Jupiter and creates an uncertainty in its momentum, but this is totally negligible compared with Jupiter’s huge momentum. The correspondence principle tells us that the predictions of quantum mechanics become indistinguishable from classical physics for large objects, which is the case here.
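To see how negligible the limit is for everyday objects, the same calculation can be repeated for a macroscopic mass; the 1.0-kg object located to within 1.0 μm below is an assumed example, not a value from the text.

```python
import math

h = 6.63e-34      # Planck's constant, J*s

def min_velocity_uncertainty(mass, dx):
    """Minimum velocity uncertainty from dx * dp >= h / (4*pi)."""
    return h / (4 * math.pi * dx * mass)

# Electron localized to 0.01 nm vs. a 1.0-kg object localized to 1.0 micrometer
print(min_velocity_uncertainty(9.11e-31, 1.0e-11))  # ~5.8e6 m/s
print(min_velocity_uncertainty(1.0, 1.0e-6))        # ~5.3e-29 m/s, utterly negligible
```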
Heisenberg Uncertainty for Energy and Time
There is another form of Heisenberg’s uncertainty principle for simultaneous measurements of energy and time . In equation form,
\[\Delta E \Delta t \geq \dfrac{h}{4\pi},\]
where \(\Delta E\) is the uncertainty in energy and \(\Delta t\) is the uncertainty in time. This means that within a time interval \(\Delta t\), it is not possible to measure energy precisely—there will be an uncertainty \(\Delta E\) in the measurement. In order to measure energy more precisely (to make \(\Delta E\) smaller), we must increase \(\Delta t\). This time interval may be the amount of time we take to make the measurement, or it could be the amount of time a particular state exists, as in the next Example .
Example \(\PageIndex{2}\): Heisenberg Uncertainty Principle for Energy and Time for an Atom
An atom in an excited state temporarily stores energy. If the lifetime of this excited state is measured to be \(1.0 \times 10^{-10} \, s\),
what is the minimum uncertainty in the energy of the state in eV?
Strategy
The minimum uncertainty in energy \(\Delta E\) is found by using the equals sign in \(\Delta E\Delta t \geq h/4\pi\) and corresponds to a reasonable choice for the uncertainty in time. The largest the uncertainty in time can be is the full lifetime of the excited state, or \(\Delta t = 1.0 \times 10^{-10} \, s\).
Solution
Solving the uncertainty principle for \(\Delta E\) and substituting known values gives
\[\Delta E = \dfrac{h}{4\pi \Delta t} = \dfrac{6.63 \times 10^{-34} \, J \cdot s}{4\pi (1.0 \times 10^{-10} \, s)} = 5.3 \times 10^{-25} \, J.\]
Now converting to eV yields
\[\Delta E = (5.3 \times 10^{-25} \, J) \left( \dfrac{1 \, eV}{1.6 \times 10^{-19} \, J} \right) = 3.3 \times 10^{-6} \, eV.\]
Discussion
The lifetime of \(10^{-10} \, s\) is typical of excited states in atoms—on human time scales, they quickly emit their stored energy. An uncertainty in energy of only a few millionths of an eV results. This uncertainty is small compared with typical excitation energies in atoms, which are on the order of 1 eV. So here the uncertainty principle limits the accuracy with which we can measure the lifetime and energy of such states, but not very significantly.
The uncertainty principle for energy and time can be of great significance if the lifetime of a system is very short. Then \(\Delta t\) is very small, and \(\Delta E\) is consequently very large. Some nuclei and exotic particles have extremely short lifetimes (as small as \(10^{-25} \, s\)), causing uncertainties in energy as great as many GeV (\(1 \, GeV = 10^9 \, eV\)). Stored energy appears as increased rest mass, and so this means that there is significant uncertainty in the rest mass of short-lived particles. When measured repeatedly, a spread of masses or decay energies is obtained. The spread is \(\Delta E\). You might ask whether this uncertainty in energy could be avoided by not measuring the lifetime. The answer is no. Nature knows the lifetime, and so its brevity affects the energy of the particle. This is so well established experimentally that the uncertainty in decay energy is used to calculate the lifetime of short-lived states. Some nuclei and particles are so short-lived that it is difficult to measure their lifetime. But if their decay energy can be measured, its spread is \(\Delta E\) and this is used in the uncertainty principle \((\Delta E \Delta t \geq h/4\pi)\) to calculate the lifetime \(\Delta t\).
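Both directions of the energy-time relation are straightforward to evaluate numerically. The sketch below recomputes the excited-state example above and, as a separate illustration with an assumed 1.0-MeV decay-energy spread, the corresponding minimum lifetime.

```python
import math

h = 6.63e-34          # Planck's constant, J*s
eV = 1.60e-19         # joules per electron volt

def min_energy_uncertainty(dt):
    """Minimum energy uncertainty (in joules) for a state of duration dt."""
    return h / (4 * math.pi * dt)

def min_lifetime(dE):
    """Minimum lifetime (in seconds) consistent with an energy spread dE (in joules)."""
    return h / (4 * math.pi * dE)

# Excited atomic state with a 1.0e-10 s lifetime (Example 2 above)
print(min_energy_uncertainty(1.0e-10) / eV)   # ~3.3e-6 eV

# Hypothetical short-lived particle with an assumed 1.0-MeV decay-energy spread
print(min_lifetime(1.0e6 * eV))               # ~3.3e-22 s
```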
There is another consequence of the uncertainty principle for energy and time. If energy is uncertain by \(\Delta E\), then conservation of energy can be violated by \(\Delta E\) for a time \(\Delta t\). Neither the physicist nor nature can tell that conservation of energy has been violated, if the violation is temporary and smaller than the uncertainty in energy. While this sounds innocuous enough, we shall see in later chapters that it allows the temporary creation of matter from nothing and has implications for how nature transmits forces over very small distances.
Finally, note that in the discussion of particles and waves, we have stated that individual measurements produce precise or particle-like results. A definite position is determined each time we observe an electron, for example. But repeated measurements produce a spread in values consistent with wave characteristics. The great theoretical physicist Richard Feynman (1918–1988) commented, “What there are, are particles.” When you observe enough of them, they distribute themselves as you would expect for a wave phenomenon. However, what there are as they travel we cannot tell because, when we do try to measure, we affect the traveling.
Summary
- Matter is found to have the same interference characteristics as any other wave.
- There is now a probability distribution for the location of a particle rather than a definite position.
- Another consequence of the wave character of all particles is the Heisenberg uncertainty principle, which limits the precision with which certain physical quantities can be known simultaneously. For position and momentum, the uncertainty principle is \(\Delta x \Delta p \geq \frac{h}{4 \pi}\), where \(\Delta x\) is the uncertainty in position and \(\Delta p\) is the uncertainty in momentum.
- For energy and time, the uncertainty principle is \(\Delta E \Delta t \geq \frac{h}{4 \pi}\), where \(\Delta E\) is the uncertainty in energy and \(\Delta t\) is the uncertainty in time.
- These small limits are fundamentally important on the quantum-mechanical scale.
Glossary
- Heisenberg’s uncertainty principle
- a fundamental limit to the precision with which pairs of quantities (momentum and position, and energy and time) can be measured
- uncertainty in energy
- lack of precision or lack of knowledge of precise results in measurements of energy
- uncertainty in time
- lack of precision or lack of knowledge of precise results in measurements of time
- uncertainty in momentum
- lack of precision or lack of knowledge of precise results in measurements of momentum
- uncertainty in position
- lack of precision or lack of knowledge of precise results in measurements of position
- probability distribution
- the overall spatial distribution of probabilities to find a particle at a given location
29.8: The Particle-Wave Duality Reviewed
Learning Objectives
By the end of this section, you will be able to:
- Explain the concept of particle-wave duality, and its scope.
Particle-wave duality -- the fact that all particles have wave properties -- is one of the cornerstones of quantum mechanics. We first came across it in the treatment of photons, those particles of EM radiation that exhibit both particle and wave properties, but not at the same time. Later it was noted that particles of matter have wave properties as well. The dual properties of particles and waves are found for all particles, whether massless like photons, or having a mass like electrons. (See Figure \(\PageIndex{1}\).)
There are many submicroscopic particles in nature. Most have mass and are expected to act as particles, or the smallest units of matter. All these masses have wave properties, with wavelengths given by the de Broglie relationship \(\lambda = h/p\). So, too, do combinations of these particles, such as nuclei, atoms, and molecules. As a combination of masses becomes large, particularly if it is large enough to be called macroscopic, its wave nature becomes difficult to observe. This is consistent with our common experience with matter.
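The contrast between microscopic and macroscopic de Broglie wavelengths can be made concrete with a two-line calculation; the masses and speeds below (an electron at 1% of the speed of light and a thrown baseball) are chosen only for illustration.

```python
h = 6.63e-34   # Planck's constant, J*s

def de_broglie_wavelength(mass, speed):
    # lambda = h / p, with p = m * v (nonrelativistic)
    return h / (mass * speed)

# Electron at 1% of the speed of light vs. a 0.145-kg baseball at 40 m/s
print(de_broglie_wavelength(9.11e-31, 3.0e6))  # ~2.4e-10 m, comparable to atomic spacings
print(de_broglie_wavelength(0.145, 40.0))      # ~1.1e-34 m, far too small to ever observe
```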
Some particles in nature are massless. We have only treated the photon so far, but all massless entities travel at the speed of light, have a wavelength, and exhibit particle and wave behaviors. They have momentum given by a rearrangement of the de Broglie relationship, \(p = h/\lambda\). In large combinations of these massless particles (such large combinations are common only for photons or EM waves), there is mostly wave behavior upon detection, and the particle nature becomes difficult to observe. This is also consistent with experience. (See Figure \(\PageIndex{2}\).)
The particle-wave duality is a universal attribute. It is another connection between matter and energy. Not only has modern physics been able to describe nature at high speeds and small sizes, it has also discovered new connections and symmetries. There is greater unity and symmetry in nature than was known in the classical era, though such connections had long been dreamt of. A beautiful poem written by the English poet William Blake some two centuries ago contains the following four lines:
To see the World in a Grain of Sand
And a Heaven in a Wild Flower
Hold Infinity in the palm of your hand
And Eternity in an hour
Integrated Concepts
The problem set for this section involves concepts from this chapter and several others. Physics is most interesting when applied to general situations involving more than a narrow set of physical principles. For example, photons have momentum, hence the relevance of "Linear Momentum and Collisions." The following topics are involved in some or all of the problems in this section:
- Dynamics: Newton's Laws of Motion
- Work, Energy, and Energy Resources
- Linear Momentum and Collisions
- Heat and Heat Transfer Methods
- Electrical Potential and Electric Field
- Electric Current, Resistance, and Ohm's Law
- Wave Optics
- Special Relativity
PROBLEM-SOLVING STRATEGY
- Identify which physical principles are involved.
- Solve the problem using strategies outlined in the text.
The following example illustrates how these strategies are applied to an integrated-concept problem.
Example \(\PageIndex{1}\): Recoil of a Dust Particle after Absorbing a Photon
The following topics are involved in this integrated-concepts worked example:
- Photons (quantum mechanics)
- Linear Momentum
A 550-nm photon (visible light) is absorbed by a \(1.00-\mu g\) particle of dust in outer space. (a) Find the momentum of such a photon. (b) What is the recoil velocity of the particle of dust, assuming it is initially at rest?
Strategy Step 1:
To solve an integrated-concept problem , such as those following this example, we must first identify the physical principles involved and identify the chapters in which they are found. Part (a) of this example asks for the momentum of a photon , a topic of the present chapter. Part (b) considers recoil following a collision, a topic of "Linear Momentum and Collisions."
Strategy Step 2:
The following solutions to each part of the example illustrate how specific problem-solving strategies are applied. These involve identifying knowns and unknowns, checking to see if the answer is reasonable, and so on.
Solution for (a):
The momentum of a photon is related to its wavelength by the equation:
\[p = \frac{h}{\lambda}.\label{29.9.1}\]
Entering the known value for Planck’s constant \(h\) and given the wavelength \(\lambda\), we obtain
\[p = \frac{6.63 \times 10^{-34} J \cdot s}{550 \times 10^{-9} m}\] \[= 1.21 \times 10^{-27} kg \cdot m/s.\]
Discussion for (a):
This momentum is small, as expected from discussions in the text and the fact that photons of visible light carry small amounts of energy and momentum compared with those carried by macroscopic objects.
Solution for (b):
Conservation of momentum in the absorption of this photon by a grain of dust can be analyzed using the equation:
\[p_{1} + p_{2} = p'_{1} + p'_{2} \left(F_{net} = 0 \right).\label{29.9.2}\]
The net external force is zero, since the dust is in outer space. Let 1 represent the photon and 2 the dust particle. Before the collision, the dust is at rest (relative to some observer); after the collision, there is no photon (it is absorbed). So conservation of momentum can be written \[p_{1} = p'_{2} = mv, \label{29.9.3}\] where \(p_{1}\) is the photon momentum before the collision and \(p'_{2}\) is the dust momentum after the collision. The mass and recoil velocity of the dust are \(m\) and \(v\), respectively. Solving this for \(v\), the requested quantity, yields \[v = \frac{p}{m},\label{29.9.4}\] where \(p\) is the photon momentum found in part (a). Entering known values (noting that a microgram is \(10^{-9} \, kg\)) gives \[v = \frac{1.21 \times 10^{-27} \, kg\cdot m/s}{1.00 \times 10^{-9} \, kg}\] \[= 1.21 \times 10^{-18} \, m/s.\]
Discussion:
The recoil velocity of the particle of dust is extremely small. As we have noted, however, there are immense numbers of photons in sunlight and other macroscopic sources. In time, collisions and absorption of many photons could cause a significant recoil of the dust, as observed in comet tails.
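The two numerical steps in this worked example can be verified directly; the sketch below recomputes the photon momentum and the recoil speed of the 1.00-μg dust grain.

```python
h = 6.63e-34          # Planck's constant, J*s

wavelength = 550e-9   # photon wavelength, m
m_dust = 1.00e-9      # 1.00 microgram expressed in kg

p_photon = h / wavelength      # photon momentum, kg*m/s
v_recoil = p_photon / m_dust   # recoil speed from momentum conservation, m/s

print(f"photon momentum = {p_photon:.3g} kg*m/s")  # ~1.21e-27
print(f"recoil speed    = {v_recoil:.3g} m/s")     # ~1.21e-18
```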
Summary
- The particle-wave duality refers to the fact that all particles -- those with mass and those without mass -- have wave characteristics.
- This is a further connection between mass and energy.
29.E: Introduction to Quantum Physics (Exercises)
Conceptual Questions
29.1: Quantization of Energy
1. Give an example of a physical entity that is quantized. State specifically what the entity is and what the limits are on its values.
2. Give an example of a physical entity that is not quantized, in that it is continuous and may have a continuous range of values.
3. What aspect of the blackbody spectrum forced Planck to propose quantization of energy levels in its atoms and molecules?
4. If Planck’s constant were large, say \(\displaystyle 10^{34}\) times greater than it is, we would observe macroscopic entities to be quantized. Describe the motions of a child’s swing under such circumstances.
5. Why don’t we notice quantization in everyday events?
29.2: The Photoelectric Effect
6. Is visible light the only type of EM radiation that can cause the photoelectric effect?
7. Which aspects of the photoelectric effect cannot be explained without photons? Which can be explained without photons? Are the latter inconsistent with the existence of photons?
8. Is the photoelectric effect a direct consequence of the wave character of EM radiation or of the particle character of EM radiation? Explain briefly.
9. Insulators (nonmetals) have a higher BE than metals, and it is more difficult for photons to eject electrons from insulators. Discuss how this relates to the free charges in metals that make them good conductors.
10. If you pick up and shake a piece of metal that has electrons in it free to move as a current, no electrons fall out. Yet if you heat the metal, electrons can be boiled off. Explain both of these facts as they relate to the amount and distribution of energy involved with shaking the object as compared with heating it.
29.3: Photon Energies and the Electromagnetic Spectrum
11. Why are UV, x rays, and \(\displaystyle γ\) rays called ionizing radiation?
12. How can treating food with ionizing radiation help keep it from spoiling? UV is not very penetrating. What else could be used?
13. Some television tubes are CRTs. They use an approximately 30-kV accelerating potential to send electrons to the screen, where the electrons stimulate phosphors to emit the light that forms the pictures we watch. Would you expect x rays also to be created?
14. Tanning salons use “safe” UV with a longer wavelength than some of the UV in sunlight. This “safe” UV has enough photon energy to trigger the tanning mechanism. Is it likely to be able to cause cell damage and induce cancer with prolonged exposure?
15. Your pupils dilate when visible light intensity is reduced. Does wearing sunglasses that lack UV blockers increase or decrease the UV hazard to your eyes? Explain.
16. One could feel heat transfer in the form of infrared radiation from a large nuclear bomb detonated in the atmosphere 75 km from you. However, none of the profusely emitted x rays or γ rays reaches you. Explain.
17. Can a single microwave photon cause cell damage? Explain.
18. In an x-ray tube, the maximum photon energy is given by \(\displaystyle hf=qV\). Would it be technically more correct to say \(\displaystyle hf=qV+BE\), where BE is the binding energy of electrons in the target anode? Why isn’t the energy stated the latter way?
29.4: Photon Momentum
19. Which formula may be used for the momentum of all particles, with or without mass?
20. Is there any measurable difference between the momentum of a photon and the momentum of matter?
21. Why don’t we feel the momentum of sunlight when we are on the beach?
29.6: The Wave Nature of Matter
22. How does the interference of water waves differ from the interference of electrons? How are they analogous?
23. Describe one type of evidence for the wave nature of matter.
24. Describe one type of evidence for the particle nature of EM radiation.
29.7: Probability and The Heisenberg Uncertainty Principle
25. What is the Heisenberg uncertainty principle? Does it place limits on what can be known?
29.8: The Particle-Wave Duality Reviewed
26. In what ways are matter and energy related that were not known before the development of relativity and quantum mechanics?
Problems & Exercises
29.1: Quantization of Energy
27. A LiBr molecule oscillates with a frequency of \(\displaystyle 1.7×10^{13}Hz\).
(a) What is the difference in energy in eV between allowed oscillator states?
(b) What is the approximate value of \(\displaystyle n\) for a state having an energy of 1.0 eV?
Solution
(a) 0.070 eV
(b) 14
28. The difference in energy between allowed oscillator states in HBr molecules is 0.330 eV. What is the oscillation frequency of this molecule?
29. A physicist is watching a 15-kg orangutan at a zoo swing lazily in a tire at the end of a rope. He (the physicist) notices that each oscillation takes 3.00 s and hypothesizes that the energy is quantized.
(a) What is the difference in energy in joules between allowed oscillator states?
(b) What is the value of n for a state where the energy is 5.00 J?
(c) Can the quantization be observed?
Solution
(a) \(\displaystyle 2.21×10^{−34}J\)
(b) \(\displaystyle 2.26×10^{34}\)
(c) No
29.2: The Photoelectric Effect
30. What is the longest-wavelength EM radiation that can eject a photoelectron from silver, given that the binding energy is 4.73 eV? Is this in the visible range?
Solution
263 nm
31. Find the longest-wavelength photon that can eject an electron from potassium, given that the binding energy is 2.24 eV. Is this visible EM radiation?
32. What is the binding energy in eV of electrons in magnesium, if the longest-wavelength photon that can eject electrons is 337 nm?
Solution
3.69 eV
33. Calculate the binding energy in eV of electrons in aluminum, if the longest-wavelength photon that can eject them is 304 nm.
34. What is the maximum kinetic energy in eV of electrons ejected from sodium metal by 450-nm EM radiation, given that the binding energy is 2.28 eV?
Solution
0.483 eV
35. UV radiation having a wavelength of 120 nm falls on gold metal, to which electrons are bound by 4.82 eV. What is the maximum kinetic energy of the ejected photoelectrons?
36. Violet light of wavelength 400 nm ejects electrons with a maximum kinetic energy of 0.860 eV from sodium metal. What is the binding energy of electrons to sodium metal?
Solution
2.25 eV
37. UV radiation having a 300-nm wavelength falls on uranium metal, ejecting 0.500-eV electrons. What is the binding energy of electrons to uranium metal?
38. What is the wavelength of EM radiation that ejects 2.00-eV electrons from calcium metal, given that the binding energy is 2.71 eV? What type of EM radiation is this?
Solution
(a) 264 nm
(b) Ultraviolet
39. Find the wavelength of photons that eject 0.100-eV electrons from potassium, given that the binding energy is 2.24 eV. Are these photons visible?
40. What is the maximum velocity of electrons ejected from a material by 80-nm photons, if they are bound to the material by 4.73 eV?
Solution
\(\displaystyle 1.95×10^6m/s\)
41. Photoelectrons from a material with a binding energy of 2.71 eV are ejected by 420-nm photons. Once ejected, how long does it take these electrons to travel 2.50 cm to a detection device?
42. A laser with a power output of 2.00 mW at a wavelength of 400 nm is projected onto calcium metal.
(a) How many electrons per second are ejected?
(b) What power is carried away by the electrons, given that the binding energy is 2.71 eV?
Solution
(a) \(\displaystyle 4.02×10^{15}/s\)
(b) 0.256 mW
43. (a) Calculate the number of photoelectrons per second ejected from a \(\displaystyle 1.00-mm^2\) area of sodium metal by 500-nm EM radiation having an intensity of \(\displaystyle 1.30 kW/m^2\) (the intensity of sunlight above the Earth’s atmosphere).
(b) Given that the binding energy is 2.28 eV, what power is carried away by the electrons?
(c) The electrons carry away less power than brought in by the photons. Where does the other power go? How can it be recovered?
44. Unreasonable Results
Red light having a wavelength of 700 nm is projected onto magnesium metal to which electrons are bound by 3.68 eV.
(a) Use \(\displaystyle KE_e=hf–BE\) to calculate the kinetic energy of the ejected electrons.
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
Solution
(a) \(\displaystyle –1.90 eV\)
(b) Negative kinetic energy
(c) That the electrons would be knocked free.
45. Unreasonable Results
(a) What is the binding energy of electrons to a material from which 4.00-eV electrons are ejected by 400-nm EM radiation?
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
29.3: Photon Energies and the Electromagnetic Spectrum
46. What is the energy in joules and eV of a photon in a radio wave from an AM station that has a 1530-kHz broadcast frequency?
Solution
\(\displaystyle 6.34×10^{−9}eV, 1.01×10^{−27}J\)
47. (a) Find the energy in joules and eV of photons in radio waves from an FM station that has a 90.0-MHz broadcast frequency.
(b) What does this imply about the number of photons per second that the radio station must broadcast?
48. Calculate the frequency in hertz of a 1.00-MeV γ-ray photon.
Solution
\(\displaystyle 2.42×10^{20}Hz\)
49. (a) What is the wavelength of a 1.00-eV photon?
(b) Find its frequency in hertz.
(c) Identify the type of EM radiation.
50. Do the unit conversions necessary to show that \(\displaystyle hc=1240 eV⋅nm\), as stated in the text.
Solution
\(\displaystyle hc=(6.62607×10^{−34}J⋅s)(2.99792×10^8m/s)(\frac{10^9nm}{1 m})(\frac{1.00000 eV}{1.60218×10^{−19}J})=1239.84 eV⋅nm≈1240 eV⋅nm\)
51. Confirm the statement in the text that the range of photon energies for visible light is 1.63 to 3.26 eV, given that the range of visible wavelengths is 380 to 760 nm.
52. (a) Calculate the energy in eV of an IR photon of frequency \(\displaystyle 2.00×10^{13}Hz\).
(b) How many of these photons would need to be absorbed simultaneously by a tightly bound molecule to break it apart?
(c) What is the energy in eV of a γ ray of frequency \(\displaystyle 3.00×10^{20}Hz\)?
(d) How many tightly bound molecules could a single such γ ray break apart?
Solution
(a) 0.0829 eV
(b) 121
(c) 1.24 MeV
(d) \(\displaystyle 1.24×10^5\)
53. Prove that, to three-digit accuracy, \(\displaystyle h=4.14×10^{−15}eV⋅s\), as stated in the text.
54. (a) What is the maximum energy in eV of photons produced in a CRT using a 25.0-kV accelerating potential, such as a color TV?
(b) What is their frequency?
Solution
(a) \(\displaystyle 25.0×10^3eV\)
(b) \(\displaystyle 6.04×10^{18}Hz\)
55. What is the accelerating voltage of an x-ray tube that produces x rays with a shortest wavelength of 0.0103 nm?
56. (a) What is the ratio of power outputs by two microwave ovens having frequencies of 950 and 2560 MHz, if they emit the same number of photons per second?
(b) What is the ratio of photons per second if they have the same power output?
Solution
(a) 2.69
(b) 0.371
57. How many photons per second are emitted by the antenna of a microwave oven, if its power output is 1.00 kW at a frequency of 2560 MHz?
58. Some satellites use nuclear power.
(a) If such a satellite emits a 1.00-W flux of \(\displaystyle γ\) rays having an average energy of 0.500 MeV, how many are emitted per second?
(b) These \(\displaystyle γ\) rays affect other satellites. How far away must another satellite be to only receive one γ ray per second per square meter?
Solution
(a) \(\displaystyle 1.25×10^{13}photons/s\)
(b) 997 km
59. (a) If the power output of a 650-kHz radio station is 50.0 kW, how many photons per second are produced?
(b) If the radio waves are broadcast uniformly in all directions, find the number of photons per second per square meter at a distance of 100 km. Assume no reflection from the ground or absorption by the air.
60. How many x-ray photons per second are created by an x-ray tube that produces a flux of x rays having a power of 1.00 W? Assume the average energy per photon is 75.0 keV.
Solution
\(\displaystyle 8.33×10^{13}photons/s\)
61. (a) How far away must you be from a 650-kHz radio station with power 50.0 kW for there to be only one photon per second per square meter? Assume no reflections or absorption, as if you were in deep outer space.
(b) Discuss the implications for detecting intelligent life in other solar systems by detecting their radio broadcasts.
62. Assuming that 10.0% of a 100-W light bulb’s energy output is in the visible range (typical for incandescent bulbs) with an average wavelength of 580 nm, and that the photons spread out uniformly and are not absorbed by the atmosphere, how far away would you be if 500 photons per second enter the 3.00-mm diameter pupil of your eye? (This number easily stimulates the retina.)
Solution
181 km
63. Construct Your Own Problem
Consider a laser pen. Construct a problem in which you calculate the number of photons per second emitted by the pen. Among the things to be considered are the laser pen’s wavelength and power output. Your instructor may also wish for you to determine the minimum diffraction spreading in the beam and the number of photons per square centimeter the pen can project at some large distance. In this latter case, you will also need to consider the output size of the laser beam, the distance to the object being illuminated, and any absorption or scattering along the way.
29.4: Photon Momentum
64. (a) Find the momentum of a 4.00-cm-wavelength microwave photon.
(b) Discuss why you expect the answer to (a) to be very small.
Solution
(a) \(\displaystyle 1.66×10^{−32}kg⋅m/s\)
(b) The wavelength of microwave photons is large, so the momentum they carry is very small.
65. (a) What is the momentum of a 0.0100-nm-wavelength photon that could detect details of an atom?
(b) What is its energy in MeV?
66. (a) What is the wavelength of a photon that has a momentum of \(\displaystyle 5.00×10^{−29}kg⋅m/s\)?
(b) Find its energy in eV.
Solution
(a) 13.3 μm
(b) \(\displaystyle 9.38×10^{-2} eV\)
67. (a) A γ-ray photon has a momentum of \(\displaystyle 8.00×10^{−21}kg⋅m/s\). What is its wavelength?
(b) Calculate its energy in MeV.
68. (a) Calculate the momentum of a photon having a wavelength of \(\displaystyle 2.50 μm\).
(b) Find the velocity of an electron having the same momentum. (c) What is the kinetic energy of the electron, and how does it compare with that of the photon?
Solution
(a) \(\displaystyle 2.65×10^{−28}kg⋅m/s\)
(b) 291 m/s
(c) electron \(\displaystyle 3.86×10^{−26}J\), photon \(\displaystyle 7.96×10^{−20}J\), ratio \(\displaystyle 2.06×10^6\)
69. Repeat the previous problem for a 10.0-nm-wavelength photon.
70. (a) Calculate the wavelength of a photon that has the same momentum as a proton moving at 1.00% of the speed of light.
(b) What is the energy of the photon in MeV?
(c) What is the kinetic energy of the proton in MeV?
Solution
(a) \(\displaystyle 1.32×10^{−13}m\)
(b) 9.39 MeV
(c) \(\displaystyle 4.70×10^{−2}MeV\)
71. (a) Find the momentum of a 100-keV x-ray photon.
(b) Find the equivalent velocity of a neutron with the same momentum.
(c) What is the neutron’s kinetic energy in keV?
72. Take the ratio of relativistic total energy, \(\displaystyle E=γmc^2\), to relativistic momentum, \(\displaystyle p=γmu\), and show that in the limit that mass approaches zero, you find \(\displaystyle E/p=c\).
Solution
\(\displaystyle E=γmc^2\) and \(\displaystyle P=γmu\), so
\(\displaystyle \frac{E}{P}=\frac{γmc^2}{γmu}=\frac{c^2}{u}\).
As the mass of the particle approaches zero, its velocity \(\displaystyle u\) will approach \(\displaystyle c\), so that the ratio of energy to momentum in this limit is
\(\displaystyle \lim_{m→0}\frac{E}{P}=\frac{c^2}{c}=c\)
which is consistent with the relationship \(\displaystyle E=pc\) for a photon.
73. Construct Your Own Problem
Consider a space sail such as mentioned in Example. Construct a problem in which you calculate the light pressure on the sail in \(\displaystyle N/m^2\) produced by reflecting sunlight. Also calculate the force that could be produced and how much effect that would have on a spacecraft. Among the things to be considered are the intensity of sunlight, its average wavelength, the number of photons per square meter this implies, the area of the space sail, and the mass of the system being accelerated.
74. Unreasonable Results
A car feels a small force due to the light it sends out from its headlights, equal to the momentum of the light divided by the time in which it is emitted.
(a) Calculate the power of each headlight, if they exert a total force of \(\displaystyle 2.00×10^{−2}N\) backward on the car.
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
Solution
(a) \(\displaystyle 3.00×10^6\)W
(b) Headlights are way too bright.
(c) Force is too large.
29.6: The Wave Nature of Matter
75. At what velocity will an electron have a wavelength of 1.00 m?
Solution
\(\displaystyle 7.28×10^{–4}m/s\)
76. What is the wavelength of an electron moving at 3.00% of the speed of light?
77. At what velocity does a proton have a 6.00-fm wavelength (about the size of a nucleus)? Assume the proton is nonrelativistic. (1 femtometer = \(\displaystyle 10^{−15}m\).)
Solution
\(\displaystyle 6.62×10^7m/s\)
78. What is the velocity of a 0.400-kg billiard ball if its wavelength is 7.50 cm (large enough for it to interfere with other billiard balls)?
79. Find the wavelength of a proton moving at 1.00% of the speed of light.
Solution
\(\displaystyle 1.32×10^{–13}m\)
80. Experiments are performed with ultracold neutrons having velocities as small as 1.00 m/s.
(a) What is the wavelength of such a neutron?
(b) What is its kinetic energy in eV?
81. (a) Find the velocity of a neutron that has a 6.00-fm wavelength (about the size of a nucleus). Assume the neutron is nonrelativistic.
(b) What is the neutron’s kinetic energy in MeV?
Solution
(a) \(\displaystyle 6.62×10^7m/s\)
(b) \(\displaystyle 22.9 MeV\)
82. What is the wavelength of an electron accelerated through a 30.0-kV potential, as in a TV tube?
83. What is the kinetic energy of an electron in a TEM having a 0.0100-nm wavelength?
Solution
15.1 keV
84. (a) Calculate the velocity of an electron that has a wavelength of \(\displaystyle 1.00 μm\).
(b) Through what voltage must the electron be accelerated to have this velocity?
85. The velocity of a proton emerging from a Van de Graaff accelerator is 25.0% of the speed of light.
(a) What is the proton’s wavelength?
(b) What is its kinetic energy, assuming it is nonrelativistic?
(c) What was the equivalent voltage through which it was accelerated?
Solution
(a) 5.29 fm
(b) \(\displaystyle 4.70×10^{−12}J\)
(c) 29.4 MV
86. The kinetic energy of an electron accelerated in an x-ray tube is 100 keV. Assuming it is nonrelativistic, what is its wavelength?
87. Unreasonable Results
(a) Assuming it is nonrelativistic, calculate the velocity of an electron with a 0.100-fm wavelength (small enough to detect details of a nucleus).
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
Solution
(a) \(\displaystyle 7.28×10^{12}m/s\)
(b) This is thousands of times the speed of light (an impossibility).
(c) The assumption that the electron is non-relativistic is unreasonable at this wavelength.
29.7: Probability and The Heisenberg Uncertainty Principle
88. (a) If the position of an electron in a membrane is measured to an accuracy of \(\displaystyle 1.00 μm\), what is the electron’s minimum uncertainty in velocity?
(b) If the electron has this velocity, what is its kinetic energy in eV?
(c) What are the implications of this energy, comparing it to typical molecular binding energies?
Solution
(a) 57.9 m/s
(b) \(\displaystyle 9.55×10^{−9}eV\)
(c) Typical molecular binding energies range from about 1 eV to 10 eV, so the result in part (b) is approximately 9 orders of magnitude smaller than typical molecular binding energies.
89. (a) If the position of a chlorine ion in a membrane is measured to an accuracy of \(\displaystyle 1.00 μm\), what is its minimum uncertainty in velocity, given its mass is \(\displaystyle 5.86×10^{−26}kg\)?
(b) If the ion has this velocity, what is its kinetic energy in eV, and how does this compare with typical molecular binding energies?
90. Suppose the velocity of an electron in an atom is known to an accuracy of \(\displaystyle 2.0×10^3m/s\) (reasonably accurate compared with orbital velocities). What is the electron’s minimum uncertainty in position, and how does this compare with the approximate 0.1-nm size of the atom?
Solution
29 nm; this is about 290 times greater than the approximately 0.1-nm size of the atom.
91. The velocity of a proton in an accelerator is known to an accuracy of 0.250% of the speed of light. (This could be small compared with its velocity.) What is the smallest possible uncertainty in its position?
92. A relatively long-lived excited state of an atom has a lifetime of 3.00 ms. What is the minimum uncertainty in its energy?
Solution
\(\displaystyle 1.10×10^{−13}eV\)
93. (a) The lifetime of a highly unstable nucleus is \(\displaystyle 10^{−20}s\). What is the smallest uncertainty in its decay energy?
(b) Compare this with the rest energy of an electron.
94. The decay energy of a short-lived particle has an uncertainty of 1.0 MeV due to its short lifetime. What is the smallest lifetime it can have?
Solution
\(\displaystyle 3.3×10^{−22}s\)
95. The decay energy of a short-lived nuclear excited state has an uncertainty of 2.0 eV due to its short lifetime. What is the smallest lifetime it can have?
96. What is the approximate uncertainty in the mass of a muon, as determined from its decay lifetime?
Solution
\(\displaystyle 2.66×10^{−46}kg\)
97. Derive the approximate form of Heisenberg’s uncertainty principle for energy and time, \(\displaystyle ΔEΔt≈h\), using the following arguments: Since the position of a particle is uncertain by \(\displaystyle Δx≈λ\), where \(\displaystyle λ\) is the wavelength of the photon used to examine it, there is an uncertainty in the time the photon takes to traverse \(\displaystyle Δx\). Furthermore, the photon has an energy related to its wavelength, and it can transfer some or all of this energy to the object being examined. Thus the uncertainty in the energy of the object is also related to \(\displaystyle λ\). Find \(\displaystyle Δt\) and \(\displaystyle ΔE\); then multiply them to give the approximate uncertainty principle.
29.8: The Particle-Wave Duality Reviewed
98. Integrated Concepts
A 54.0-eV electron has a 0.167-nm wavelength. If such electrons are passed through a double slit and have their first maximum at an angle of \(\displaystyle 25.0º\), what is the slit separation d?
Solution
0.395 nm
99. Integrated Concepts
An electron microscope produces electrons with a 2.00-pm wavelength. If these are passed through a 1.00-nm single slit, at what angle will the first diffraction minimum be found?
100. Integrated Concepts
A certain heat lamp emits 200 W of mostly IR radiation averaging 1500 nm in wavelength.
(a) What is the average photon energy in joules?
(b) How many of these photons are required to increase the temperature of a person’s shoulder by \(\displaystyle 2.0ºC\), assuming the affected mass is 4.0 kg with a specific heat of \(\displaystyle 0.83 kcal/kg⋅ºC\). Also assume no other significant heat transfer.
(c) How long does this take?
Solution
(a) \(\displaystyle 1.3×10^{−19}J\)
(b) \(\displaystyle 2.1×10^{23}\)
(c) \(\displaystyle 1.4×10^2s\)
101. Integrated Concepts
On its high power setting, a microwave oven produces 900 W of 2560 MHz microwaves.
(a) How many photons per second is this?
(b) How many photons are required to increase the temperature of a 0.500-kg mass of pasta by \(\displaystyle 45.0ºC\), assuming a specific heat of \(\displaystyle 0.900 kcal/kg⋅ºC\)? Neglect all other heat transfer.
(c) How long must the microwave operator wait for their pasta to be ready?
102. Integrated Concepts
(a) Calculate the amount of microwave energy in joules needed to raise the temperature of 1.00 kg of soup from \(\displaystyle 20.0ºC\) to \(\displaystyle 100ºC\).
(b) What is the total momentum of all the microwave photons it takes to do this?
(c) Calculate the velocity of a 1.00-kg mass with the same momentum. (d) What is the kinetic energy of this mass?
Solution
(a) \(\displaystyle 3.35×10^5J\)
(b) \(\displaystyle 1.12×10^{–3}kg⋅m/s\)
(c) \(\displaystyle 1.12×10^{–3}m/s\)
(d) \(\displaystyle 6.23×10^{–7}J\)
103. Integrated Concepts
(a) What is \(\displaystyle γ\) for an electron emerging from the Stanford Linear Accelerator with a total energy of 50.0 GeV?
(b) Find its momentum.
(c) What is the electron’s wavelength?
104. Integrated Concepts
(a) What is \(\displaystyle γ\) for a proton having an energy of 1.00 TeV, produced by the Fermilab accelerator?
(b) Find its momentum.
(c) What is the proton’s wavelength?
Solution
(a) \(\displaystyle 1.06×10^3\)
(b) \(\displaystyle 5.33×10^{−16}kg⋅m/s\)
(c) \(\displaystyle 1.24×10^{−18}m\)
105. Integrated Concepts
An electron microscope passes 1.00-pm-wavelength electrons through a circular aperture \(\displaystyle 2.00 μm\) in diameter. What is the angle between two just- resolvable point sources for this microscope?
106. Integrated Concepts
(a) Calculate the velocity of electrons that form the same pattern as 450-nm light when passed through a double slit.
(b) Calculate the kinetic energy of each and compare them.
(c) Would either be easier to generate than the other? Explain.
Solution
(a) \(\displaystyle 1.62×10^3m/s\)
(b) \(\displaystyle 4.42×10^{−19}J\) for photon, \(\displaystyle 1.19×10^{−24}J\) for electron, photon energy is \(\displaystyle 3.71×10^5\) times greater
(c) The light is easier to produce, since 450-nm blue light is readily generated by common sources. Creating electrons with only \(\displaystyle 7.43 μeV\) of kinetic energy would not be difficult, but it would require a vacuum.
107. Integrated Concepts
(a) What is the separation between double slits that produces a second-order minimum at \(\displaystyle 45.0º\) for 650-nm light?
(b) What slit separation is needed to produce the same pattern for 1.00-keV protons?
Solution
(a) \(\displaystyle 2.30×10^{−6}m\)
(b) \(\displaystyle 3.20×10^{−12}m\)
108. Integrated Concepts
A laser with a power output of 2.00 mW at a wavelength of 400 nm is projected onto calcium metal.
(a) How many electrons per second are ejected?
(b) What power is carried away by the electrons, given that the binding energy is 2.71 eV?
(c) Calculate the current of ejected electrons.
(d) If the photoelectric material is electrically insulated and acts like a 2.00-pF capacitor, how long will current flow before the capacitor voltage stops it?
109. Integrated Concepts
One problem with x rays is that they are not sensed. Calculate the temperature increase of a researcher exposed in a few seconds to a nearly fatal accidental dose of x rays under the following conditions. The energy of the x-ray photons is 200 keV, and \(\displaystyle 4.00×10^{13}\) of them are absorbed per kilogram of tissue, the specific heat of which is \(\displaystyle 0.830 kcal/kg⋅ºC\). (Note that medical diagnostic x-ray machines cannot produce an intensity this great.)
Solution
\(\displaystyle 3.69×10^{−4}ºC\)
110. Integrated Concepts
A 1.00-fm photon has a wavelength short enough to detect some information about nuclei.
(a) What is the photon momentum?
(b) What is its energy in joules and MeV?
(c) What is the (relativistic) velocity of an electron with the same momentum?
(d) Calculate the electron’s kinetic energy.
111. Integrated Concepts
The momentum of light is exactly reversed when reflected straight back from a mirror, assuming negligible recoil of the mirror. Thus the change in momentum is twice the photon momentum. Suppose light of intensity \(\displaystyle 1.00 kW/m^2\) reflects from a mirror of area \(\displaystyle 2.00 m^2\).
(a) Calculate the energy reflected in 1.00 s.
(b) What is the momentum imparted to the mirror?
(c) Using the most general form of Newton’s second law, what is the force on the mirror?
(d) Does the assumption of no mirror recoil seem reasonable?
Solution
(a) 2.00 kJ
(b) \(\displaystyle 1.33×10^{−5}kg⋅m/s\)
(c) \(\displaystyle 1.33×10^{−5}N\)
(d) yes
112. Integrated Concepts
Sunlight above the Earth’s atmosphere has an intensity of \(\displaystyle 1.30kW/m^2\). If this is reflected straight back from a mirror that has only a small recoil, the light’s momentum is exactly reversed, giving the mirror twice the incident momentum.
(a) Calculate the force per square meter of mirror.
(b) Very low mass mirrors can be constructed in the near weightlessness of space, and attached to a spaceship to sail it. Once done, the average mass per square meter of the spaceship is 0.100 kg. Find the acceleration of the spaceship if all other forces are balanced.
(c) How fast is it moving 24 hours later?
Contributors and Attributions
-
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
30: Atomic Physics
Atomic physics studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. The field includes ions as well as neutral atoms and, unless otherwise stated, the term atom can be taken to include ions.
-
- 30.0: Prelude to Atomic Physics
- From childhood on, we learn that atoms are a substructure of all things around us, from the air we breathe to the autumn leaves that blanket a forest trail. Invisible to the eye, the existence and properties of atoms are used to explain many phenomena—a theme found throughout this text.
-
- 30.2: Discovery of the Parts of the Atom - Electrons and Nuclei
- Just as atoms are a substructure of matter, electrons and nuclei are substructures of the atom. The experiments that were used to discover electrons and nuclei reveal some of the basic properties of atoms and can be readily understood using ideas such as electrostatic and magnetic force, already covered in previous chapters.
-
- 30.3: Bohr’s Theory of the Hydrogen Atom
- The planetary model of the atom pictures electrons orbiting the nucleus in the way that planets orbit the sun. Bohr used the planetary model to develop the first reasonable theory of hydrogen, the simplest atom. Atomic and molecular spectra are quantized, as exemplified by the discrete wavelengths of the hydrogen spectrum.
-
- 30.4: X Rays - Atomic Origins and Applications
- Each type of atom (or element) has its own characteristic electromagnetic spectrum. X rays lie at the high-frequency end of an atom’s spectrum and are characteristic of the atom as well. In this section, we explore characteristic x rays and some of their important applications.
-
- 30.5: Applications of Atomic Excitations and De-Excitations
- Many properties of matter and phenomena in nature are directly related to atomic energy levels and their associated excitations and de-excitations. The color of a rose, the output of a laser, and the transparency of air are but a few examples. While it may not appear that glow-in-the-dark pajamas and lasers have much in common, they are in fact different applications of similar atomic de-excitations.
-
- 30.6: The Wave Nature of Matter Causes Quantization
- Why is angular momentum quantized? You already know the answer. Electrons have wave-like properties, as de Broglie later proposed. They can exist only where they interfere constructively, and only certain orbits meet proper conditions, as we shall see in the next module. Following Bohr’s initial work on the hydrogen atom, a decade was to pass before de Broglie proposed that matter has wave properties.
-
- 30.7: Patterns in Spectra Reveal More Quantization
- High-resolution measurements of atomic and molecular spectra show that the spectral lines are even more complex than they first appear. In this section, we will see that this complexity has yielded important new information about electrons and their orbits in atoms.
-
- 30.8: Quantum Numbers and Rules
- Physical characteristics that are quantized -- such as energy, charge, and angular momentum -- are of such importance that names and symbols are given to them. The values of quantized entities are expressed in terms of quantum numbers , and the rules governing them are of the utmost importance in determining what nature is and does. This section covers some of the more important quantum numbers and rules.
-
- 30.9: The Pauli Exclusion Principle
- The state of a system is completely described by a complete set of quantum numbers. This set is written as (n, l, ml, ms). The Pauli exclusion principle says that no two electrons can have the same set of quantum numbers; that is, no two electrons can be in the same state. This exclusion limits the number of electrons in atomic shells and subshells. Each value of n corresponds to a shell, and each value of l corresponds to a subshell.
Thumbnail: In the Bohr model, the transition of an electron with n=3 to the shell n=2 is shown, where a photon is emitted. An electron from shell (n=2) must have been removed beforehand by ionization. (CC-SA-BY-3.0; JabberWok).
30.0: Prelude to Atomic Physics
From childhood on, we learn that atoms are a substructure of all things around us, from the air we breathe to the autumn leaves that blanket a forest trail. Invisible to the eye, the existence and properties of atoms are used to explain many phenomena—a theme found throughout this text. In this chapter, we discuss the discovery of atoms and their own substructures; we then apply quantum mechanics to the description of atoms, and their properties and interactions. Along the way, we will find, much like the scientists who made the original discoveries, that new concepts emerge with applications far beyond the boundaries of atomic physics.
30.1: Discovery of the Atom
Learning Objectives
By the end of this section, you will be able to:
- Describe the basic structure of the atom, the substructure of all matter.
How do we know that atoms are really there if we cannot see them with our eyes? A brief account of the progression from the proposal of atoms by the Greeks to the first direct evidence of their existence follows.
People have long speculated about the structure of matter and the existence of atoms. The earliest significant ideas to survive are due to the ancient Greeks in the fifth century BCE, especially those of the philosophers Leucippus and Democritus. (There is some evidence that philosophers in both India and China made similar speculations, at about the same time.) They considered the question of whether a substance can be divided without limit into ever smaller pieces. There are only a few possible answers to this question. One is that infinitesimally small subdivision is possible. Another is what Democritus in particular believed—that there is a smallest unit that cannot be further subdivided. Democritus called this the atom. We now know that atoms themselves can be subdivided, but their identity is destroyed in the process, so the Greeks were correct in one respect. The Greeks also felt that atoms were in constant motion, another correct notion.
The Greeks and others speculated about the properties of atoms, proposing that only a few types existed and that all matter was formed as various combinations of these types. The famous proposal that the basic elements were earth, air, fire, and water was brilliant, but incorrect. The Greeks had identified the most common examples of the four states of matter (solid, gas, plasma, and liquid), rather than the basic elements. More than 2000 years passed before observations could be made with equipment capable of revealing the true nature of atoms.
Over the centuries, discoveries were made regarding the properties of substances and their chemical reactions. Certain systematic features were recognized, but similarities between common and rare elements resulted in efforts to transmute them (lead into gold, in particular) for financial gain. Secrecy was endemic. Alchemists discovered and rediscovered many facts but did not make them broadly available. As the Middle Ages ended, alchemy gradually faded, and the science of chemistry arose. It was no longer possible, nor considered desirable, to keep discoveries secret. Collective knowledge grew, and by the beginning of the 19th century, an important fact was well established—the masses of reactants in specific chemical reactions always have a particular mass ratio. This is very strong indirect evidence that there are basic units (atoms and molecules) that have these same mass ratios. The English chemist John Dalton (1766–1844) did much of this work, with significant contributions by the Italian physicist Amedeo Avogadro (1776–1856). It was Avogadro who developed the idea of a fixed number of atoms and molecules in a mole, and this special number is called Avogadro’s number in his honor. The Austrian physicist Johann Josef Loschmidt was the first to measure the value of the constant in 1865 using the kinetic theory of gases.
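For example, water always contains the same proportion of oxygen and hydrogen by mass. Using approximate atomic masses of 16 u for oxygen and 1 u for hydrogen, the ratio for \(H_2O\) is \[\frac{m_O}{m_H} = \frac{16}{2 \times 1} = 8,\] so about 8 g of oxygen combine with every 1 g of hydrogen, no matter how or where the water is formed.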
PATTERNS AND SYSTEMATICS
The recognition and appreciation of patterns has enabled us to make many discoveries. The periodic table of elements was proposed as an organized summary of the known elements long before all elements had been discovered, and it led to many other discoveries. We shall see in later chapters that patterns in the properties of subatomic particles led to the proposal of quarks as their underlying structure, an idea that is still bearing fruit.
Knowledge of the properties of elements and compounds grew, culminating in the mid-19th-century development of the periodic table of the elements by Dmitri Mendeleev (1834–1907), the great Russian chemist. Mendeleev proposed an ingenious array that highlighted the periodic nature of the properties of elements. Believing in the systematics of the periodic table, he also predicted the existence of then-unknown elements to complete it. Once these elements were discovered and determined to have properties predicted by Mendeleev, his periodic table became universally accepted.
Also during the 19th century, the kinetic theory of gases was developed. Kinetic theory is based on the existence of atoms and molecules in random thermal motion and provides a microscopic explanation of the gas laws, heat transfer, and thermodynamics. Kinetic theory works so well that it is another strong indication of the existence of atoms. But it is still indirect evidence—individual atoms and molecules had not been observed. There were heated debates about the validity of kinetic theory until direct evidence of atoms was obtained.
The first truly direct evidence of atoms is credited to Robert Brown, a Scottish botanist. In 1827, he noticed that tiny pollen grains suspended in still water moved about in complex paths. This can be observed with a microscope for any small particles in a fluid. The motion is caused by the random thermal motions of fluid molecules colliding with particles in the fluid, and it is now called Brownian motion (Figure \(\PageIndex{1}\)). Statistical fluctuations in the numbers of molecules striking the sides of a visible particle cause it to move first this way, then that. Although the molecules cannot be directly observed, their effects on the particle can be. By examining Brownian motion, the size of molecules can be calculated. The smaller and more numerous they are, the smaller the fluctuations in the numbers striking different sides.
It was Albert Einstein who, starting in his epochal year of 1905, published several papers that explained precisely how Brownian motion could be used to measure the size of atoms and molecules. (In 1905 Einstein created special relativity, proposed photons as quanta of EM radiation, and produced a theory of Brownian motion that allowed the size of atoms to be determined. All of this was done in his spare time, since he worked days as a patent examiner. Any one of these very basic works could have been the crowning achievement of an entire career—yet Einstein did even more in later years.) Their sizes were only approximately known to be \(10^{-10} \, m\), based on a comparison of latent heat of vaporization and surface tension made in about 1805 by Thomas Young of double-slit fame and the famous astronomer and mathematician Pierre-Simon Laplace.
Using Einstein’s ideas, the French physicist Jean-Baptiste Perrin (1870–1942) carefully observed Brownian motion; not only did he confirm Einstein’s theory, he also produced accurate sizes for atoms and molecules. Since molecular weights and densities of materials were well established, knowing atomic and molecular sizes allowed a precise value for Avogadro’s number to be obtained. (If we know how big an atom is, we know how many fit into a certain volume.) Perrin also used these ideas to explain atomic and molecular agitation effects in sedimentation, and he received the 1926 Nobel Prize for his achievements. Most scientists were already convinced of the existence of atoms, but the accurate observation and analysis of Brownian motion was conclusive—it was the first truly direct evidence.
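A rough estimate shows how knowing atomic size pins down Avogadro’s number. Taking an atomic diameter of roughly \(2 \times 10^{-10} \, m\) and a typical solid molar volume of roughly \(10 \, cm^3\) (both only order-of-magnitude figures), the number of atoms in a mole is about \[N \approx \frac{1 \times 10^{-5} \, m^3}{\left(2 \times 10^{-10} \, m\right)^3} \approx 1 \times 10^{24},\] the same order of magnitude as the accepted value \(N_A = 6.02 \times 10^{23}\).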
A huge array of direct and indirect evidence for the existence of atoms now exists. For example, it has become possible to accelerate ions (much as electrons are accelerated in cathode-ray tubes) and to detect them individually as well as measure their masses (see More Applications of Magnetism for a discussion of mass spectrometers). Other devices that observe individual atoms, such as the scanning tunneling microscope, will be discussed elsewhere (Figure \(\PageIndex{2}\)). All of our understanding of the properties of matter is based on and consistent with the atom. The atom’s substructures, such as electron shells and the nucleus, are both interesting and important. The nucleus in turn has a substructure, as do the particles of which it is composed. These topics, and the question of whether there is a smallest basic structure to matter, will be explored in later parts of the text.
Summary
- Atoms are the smallest unit of elements; atoms combine to form molecules, the smallest unit of compounds.
- The first direct observation of atoms was in Brownian motion.
- Analysis of Brownian motion gave accurate sizes for atoms (\(10^{-10} \, m\) on average) and a precise value for Avogadro’s number.
Glossary
- atom
- basic unit of matter, which consists of a central, positively charged nucleus surrounded by negatively charged electrons
- Brownian motion
- the continuous random movement of particles of matter suspended in a liquid or gas
30.2: Discovery of the Parts of the Atom - Electrons and Nuclei
Learning Objectives
By the end of this section, you will be able to:
- Describe how electrons were discovered.
- Explain the Millikan oil drop experiment.
- Describe Rutherford’s gold foil experiment.
- Describe Rutherford’s planetary model of the atom.
Just as atoms are a substructure of matter, electrons and nuclei are substructures of the atom. The experiments that were used to discover electrons and nuclei reveal some of the basic properties of atoms and can be readily understood using ideas such as electrostatic and magnetic force, already covered in previous chapters.
CHARGES AND ELECTROMAGNETIC FORCES
In previous discussions, we have noted that positive charge is associated with nuclei and negative charge with electrons. We have also covered many aspects of the electric and magnetic forces that affect charges. We will now explore the discovery of the electron and nucleus as substructures of the atom and examine their contributions to the properties of atoms.
The Electron
Gas discharge tubes, such as that shown in Figure \(\PageIndex{1}\), consist of an evacuated glass tube containing two metal electrodes and a rarefied gas. When a high voltage is applied to the electrodes, the gas glows. These tubes were the precursors to today’s neon lights. They were first studied seriously by Heinrich Geissler, a German inventor and glassblower, starting in the 1860s. The English scientist William Crookes, among others, continued to study what for some time were called Crookes tubes, wherein electrons are freed from atoms and molecules in the rarefied gas inside the tube and are accelerated from the cathode (negative) to the anode (positive) by the high potential. These "cathode rays" collide with the gas atoms and molecules and excite them, resulting in the emission of electromagnetic (EM) radiation that makes the electrons’ path visible as a ray that spreads and fades as it moves away from the cathode.
Gas discharge tubes today are most commonly called cathode-ray tubes , because the rays originate at the cathode. Crookes showed that the electrons carry momentum (they can make a small paddle wheel rotate). He also found that their normally straight path is bent by a magnet in the direction expected for a negative charge moving away from the cathode. These were the first direct indications of electrons and their charge.
The English physicist J. J. Thomson (1856–1940) improved and expanded the scope of experiments with gas discharge tubes. (Figures \(\PageIndex{2}\) and \(\PageIndex{3}\)) He verified the negative charge of the cathode rays with both magnetic and electric fields. Additionally, he collected the rays in a metal cup and found an excess of negative charge. Thomson was also able to measure the ratio of the charge of the electron to its mass, \(q_{e}/m_{e}\) - an important step to finding the actual values of both \(q_{e}\) and \(m_{e}\).
Figure \(\PageIndex{4}\) shows a cathode-ray tube, which produces a narrow beam of electrons that passes through charging plates connected to a high-voltage power supply. An electric field \(E\) is produced between the charging plates, and the cathode-ray tube is placed between the poles of a magnet so that the electric field \(E\) is perpendicular to the magnetic field \(B\) of the magnet. These fields, being perpendicular to each other, produce opposing forces on the electrons. As discussed for mass spectrometers in " More Applications of Magnetism ," if the net force due to the fields vanishes, then the velocity of the charged particle is \(v = E/B\). In this manner, Thomson determined the velocity of the electrons and then moved the beam up and down by adjusting the electric field.
To see how the amount of deflection is used to calculate \(q_{e}/m_{e}\), note that the deflection is proportional to the electric force on the electron: \[F = q_{e}E.\label{30.3.1}\] But the vertical deflection is also related to the electron’s mass, since the electron’s acceleration is
\[a = \frac{F}{m_{e}}.\label{30.3.2}\]
The value of \(F\) is not known, since \(q_{e}\) was not yet known. Substituting the expression for electric force into the expression for acceleration yields
\[a = \frac{F}{m_{e}} = \frac{q_{e}E}{m_{e}}.\label{30.3.3}\]
Gathering terms, we have
\[\frac{q_{e}}{m_{e}} = \frac{a}{E}.\label{30.3.5}\]
The deflection is analyzed to get \(a\), and \(E\) is determined from the applied voltage and distance between the plates; thus, \(\frac{q_{e}}{m_{e}}\) can be determined. With the velocity known, another measurement of \(\frac{q_{e}}{m_{e}}\) can be obtained by bending the beam of electrons with the magnetic field. Since
\[F_{mag} = q_{e}vB = m_{e}a,\label{30.3.6}\]
we have
\[\frac{q_{e}}{m_{e}} = \frac{a}{vB}.\label{30.3.7}\]
Consistent results are obtained using magnetic deflection.
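To get a feel for the numbers, suppose the crossed fields are \(E = 1.0 \times 10^{4} \, V/m\) and \(B = 5.0 \times 10^{-4} \, T\) (illustrative values, not Thomson’s actual data). The balanced-force condition then gives \[v = \frac{E}{B} = \frac{1.0 \times 10^{4} \, V/m}{5.0 \times 10^{-4} \, T} = 2.0 \times 10^{7} \, m/s,\] a few percent of the speed of light, typical of electrons accelerated through about a kilovolt.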
What is so important about \(q_{e}/m_{e}\), the ratio of the electron’s charge to its mass? The value obtained is
\[\frac{q_{e}}{m_{e}} = -1.76 \times 10^{11} C/kg \left(electron\right).\]
This is a huge number, as Thomson realized, and it implies that the electron has a very small mass. It was known from electroplating that about \(10^{8} C/kg\) is needed to plate a material, a factor of nearly 2000 less than the charge per kilogram of electrons. Thomson went on to do the same experiment for positively charged hydrogen ions (now known to be bare protons) and found a charge per kilogram nearly 2000 times smaller than that for the electron, implying that the proton is nearly 2000 times more massive than the electron. Today, we know more precisely that
\[\frac{q_{p}}{m_{p}} = 9.58 \times 10^{7} C/kg \left(proton\right), \label{30.3.8}\]
where \(q_{p}\) is the charge of the proton and \(m_{p}\) is its mass. This ratio (to four significant figures) is 1836 times less charge per kilogram than for the electron. Since the charges of electrons and protons are equal in magnitude, this implies \(m_{p} = 1836 m_{e}\).
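Because the charges are equal in magnitude, dividing the two charge-to-mass ratios gives the mass ratio directly: \[\frac{m_p}{m_e} = \frac{\left|q_e/m_e\right|}{q_p/m_p} = \frac{1.76 \times 10^{11} \, C/kg}{9.58 \times 10^{7} \, C/kg} \approx 1.8 \times 10^{3},\] consistent with the more precise value of 1836 quoted above.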
Thomson performed a variety of experiments using differing gases in discharge tubes and employing other methods, such as the photoelectric effect, for freeing electrons from atoms. He always found the same properties for the electron, proving it to be an independent particle. For his work, the important pieces of which he began to publish in 1897, Thomson was awarded the 1906 Nobel Prize in Physics. In retrospect, it is difficult to appreciate how astonishing it was to find that the atom has a substructure. Thomson himself said, “It was only when I was convinced that the experiment left no escape from it that I published my belief in the existence of bodies smaller than atoms.”
Thomson attempted to measure the charge of individual electrons, but his method could determine its charge only to the order of magnitude expected.
Since Faraday’s experiments with electroplating in the 1830s, it had been known that about 100,000 C per mole was needed to plate singly ionized ions. Dividing this by the number of ions per mole (that is, by Avogadro’s number), which was approximately known, the charge per ion was calculated to be about \(1.6 \times 10^{-19} C\), close to the actual value.
An American physicist, Robert Millikan (1868–1953) (Figure \(\PageIndex{5}\)), decided to improve upon Thomson’s experiment for measuring \(q_{e}\) and was eventually forced to try another approach, which is now a classic experiment performed by students. The Millikan oil drop experiment is shown in Figure \(\PageIndex{6}\).
In the Millikan oil drop experiment, fine drops of oil are sprayed from an atomizer. Some of these are charged by the process and can then be suspended between metal plates by a voltage between the plates. In this situation, the weight of the drop is balanced by the electric force: \[m_{drop}g = q_{e}E.\label{30.3.9}\]
The electric field is produced by the applied voltage, hence,
\[E = \frac{V}{d},\label{30.3.10}\]
and \(V\) is adjusted to just balance the drop’s weight. The drops can be seen as points of reflected light using a microscope, but they are too small to directly measure their size and mass. The mass of the drop is determined by observing how fast it falls when the voltage is turned off. Since air resistance is very significant for these submicroscopic drops, the more massive drops fall faster than the less massive, and sophisticated sedimentation calculations can reveal their mass. Oil is used rather than water, because it does not readily evaporate, and so mass is nearly constant. Once the mass of the drop is known, the charge of the electron is given by rearranging the previous equation:
\[q = \frac{m_{drop}g}{E} = \frac{m_{drop}gd}{V}, \label{30.3.11}\]
where \(d\) is the separation of the plates and \(V\) is the voltage that holds the drop motionless. (The same drop can be observed for several hours to see that it really is motionless.) By 1913 Millikan had measured the charge of the electron \(q_{e}\) to an accuracy of 1%, and he improved this by a factor of 10 within a few years to a value of \(-1.60 \times 10^{-19} C\). He also observed that all charges were multiples of the basic electron charge and that sudden changes could occur in which electrons were added or removed from the drops. For this very fundamental direct measurement of \(q_{e}\) and for his studies of the photoelectric effect, Millikan was awarded the 1923 Nobel Prize in Physics.
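As an illustration with representative (not historical) numbers, suppose a drop of mass \(6.5 \times 10^{-16} \, kg\) is held motionless by 400 V across plates 1.00 cm apart. Then \[q = \frac{m_{drop}gd}{V} = \frac{\left(6.5 \times 10^{-16} \, kg\right)\left(9.80 \, m/s^2\right)\left(0.0100 \, m\right)}{400 \, V} = 1.6 \times 10^{-19} \, C,\] a single electron charge; a drop carrying two or three electron charges would balance at a proportionally lower voltage.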
With the charge of the electron known and the charge-to-mass ratio known, the electron’s mass can be calculated. It is
\[m = \frac{q_{e}}{\left(\frac{q_{e}}{m_{e}}\right)}.\label{30.3.12}\]
Substituting known values yields \[m_e = \dfrac{-1.60 \times 10^{-19} \, C}{-1.76 \times 10^{11} \, C/kg}\] or \[m_e = 9.11 \times 10^{-31} \, kg \, (electron's \, mass),\] where the round-off errors have been corrected. The mass of the electron has been verified in many subsequent experiments and is now known to an accuracy of better than one part in one million. It is an incredibly small mass and remains the smallest known mass of any particle that has mass. (Some particles, such as photons, are massless and cannot be brought to rest, but travel at the speed of light.) A similar calculation gives the masses of other particles, including the proton. To three digits, the mass of the proton is now known to be
\[m_p = 1.67 \times 10^{-27} \, kg \, (proton's \, mass),\]
which is nearly identical to the mass of a hydrogen atom. What Thomson and Millikan had done was to prove the existence of one substructure of atoms, the electron, and further to show that it had only a tiny fraction of the mass of an atom. The nucleus of an atom contains most of its mass, and the nature of the nucleus was completely unanticipated.
Another important characteristic of quantum mechanics was also beginning to emerge. All electrons are identical to one another. The charge and mass of electrons are not average values; rather, they are unique values that all electrons have. This is true of other fundamental entities at the submicroscopic level. All protons are identical to one another, and so on.
The Nucleus
Here, we examine the first direct evidence of the size and mass of the nucleus. In later chapters, we will examine many other aspects of nuclear physics, but the basic information on nuclear size and mass is so important to understanding the atom that we consider it here.
Nuclear radioactivity was discovered in 1896, and it was soon the subject of intense study by a number of the best scientists in the world. Among them was New Zealander Lord Ernest Rutherford, who made numerous fundamental discoveries and earned the title of “father of nuclear physics.” Born in Nelson, Rutherford did his postgraduate studies at the Cavendish Laboratories in England before taking up a position at McGill University in Canada where he did the work that earned him a Nobel Prize in Chemistry in 1908. In the area of atomic and nuclear physics, there is much overlap between chemistry and physics, with physics providing the fundamental enabling theories. He returned to England in later years and had six future Nobel Prize winners as students. Rutherford used nuclear radiation to directly examine the size and mass of the atomic nucleus. The experiment he devised is shown in Figure \(\PageIndex{7}\). A radioactive source that emits alpha radiation was placed in a lead container with a hole in one side to produce a beam of alpha particles, which are a type of ionizing radiation ejected by the nuclei of a radioactive source. A thin gold foil was placed in the beam, and the scattering of the alpha particles was observed by the glow they caused when they struck a phosphor screen.
Alpha particles were known to be the doubly charged positive nuclei of helium atoms that had kinetic energies on the order of \(5 \, MeV\) when emitted in nuclear decay, which is the disintegration of the nucleus of an unstable nuclide by the spontaneous emission of charged particles. These particles interact with matter mostly via the Coulomb force, and the manner in which they scatter from nuclei can reveal nuclear size and mass. This is analogous to observing how a bowling ball is scattered by an object you cannot see directly. Because the alpha particle’s energy is so large compared with the typical energies associated with atoms \((MeV\) versus \(eV)\), you would expect the alpha particles to simply crash through a thin foil much like a supersonic bowling ball would crash through a few dozen rows of bowling pins. Thomson had envisioned the atom to be a small sphere in which equal amounts of positive and negative charge were distributed evenly. The incident massive alpha particles would suffer only small deflections in such a model. Instead, Rutherford and his collaborators found that alpha particles occasionally were scattered to large angles, some even back in the direction from which they came! Detailed analysis using conservation of momentum and energy—particularly of the small number that came straight back—implied that gold nuclei are very small compared with the size of a gold atom, contain almost all of the atom’s mass, and are tightly bound. Since the gold nucleus is about 50 times more massive than the alpha particle, a head-on collision would scatter the alpha particle straight back toward the source. In addition, the smaller the nucleus, the fewer alpha particles that would hit one head on.
Although the results of the experiment were published by his colleagues in 1909, it took Rutherford two years to convince himself of their meaning. Like Thomson before him, Rutherford was reluctant to accept such radical results. Nature on a small scale is so unlike our classical world that even those at the forefront of discovery are sometimes surprised. Rutherford later wrote: “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backwards ... [meant] ... the greatest part of the mass of the atom was concentrated in a tiny nucleus.” In 1911, Rutherford published his analysis together with a proposed model of the atom. The size of the nucleus was determined to be about \(10^{-15} \, m\) or 100,000 times smaller than the atom. This implies a huge density, on the order of \(10^{15} \, g/cm^3\) vastly unlike any macroscopic matter. Also implied is the existence of previously unknown nuclear forces to counteract the huge repulsive Coulomb forces among the positive charges in the nucleus. Huge forces would also be consistent with the large energies emitted in nuclear radiation.
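The quoted density follows directly from the measured size. Packing a single nucleon of mass about \(1.7 \times 10^{-27} \, kg\) into a sphere of radius \(10^{-15} \, m\) gives, as a rough estimate, \[\rho \approx \frac{1.7 \times 10^{-27} \, kg}{\frac{4}{3}\pi \left(10^{-15} \, m\right)^{3}} \approx 4 \times 10^{17} \, kg/m^3 \approx 4 \times 10^{14} \, g/cm^3,\] on the order of the figure above and roughly \(10^{14}\) times the density of water.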
The small size of the nucleus also implies that the atom is mostly empty inside. In fact, in Rutherford’s experiment, most alphas went straight through the gold foil with very little scattering, since electrons have such small masses and since the atom was mostly empty with nothing for the alpha to hit. There were already hints of this at the time Rutherford performed his experiments, since energetic electrons had been observed to penetrate thin foils more easily than expected. Figure \(\PageIndex{8}\) shows a schematic of the atoms in a thin foil with circles representing the size of the atoms (about \(10^{-10} \, m\)) and dots representing the nuclei. (The dots are not to scale—if they were, you would need a microscope to see them.) Most alpha particles miss the small nuclei and are only slightly scattered by electrons. Occasionally (about once in 8000 times in Rutherford’s experiment), an alpha hits a nucleus head-on and is scattered straight backward.
Based on the size and mass of the nucleus revealed by his experiment, as well as the mass of electrons, Rutherford proposed the planetary model of the atom. The planetary model of the atom pictures low-mass electrons orbiting a large-mass nucleus. The sizes of the electron orbits are large compared with the size of the nucleus, with mostly vacuum inside the atom. This picture is analogous to how low-mass planets in our solar system orbit the large-mass Sun at distances large compared with the size of the Sun. In the atom, the attractive Coulomb force is analogous to gravitation in the planetary system (Figure \(\PageIndex{9}\)). Note that a model or mental picture is needed to explain experimental results, since the atom is too small to be directly observed with visible light.
Rutherford’s planetary model of the atom was crucial to understanding the characteristics of atoms, and their interactions and energies, as we shall see in the next few sections. Also, it was an indication of how different nature is from the familiar classical world on the small, quantum mechanical scale. The discovery of a substructure to all matter in the form of atoms and molecules was now being taken a step further to reveal a substructure of atoms that was simpler than the 92 elements then known. We have continued to search for deeper substructures, such as those inside the nucleus, with some success. In later chapters, we will follow this quest in the discussion of quarks and other elementary particles, and we will look at the direction the search seems now to be heading.
PHET EXPLORATIONS: RUTHERFORD SCATTERING
How did Rutherford figure out the structure of the atom without being able to see it? Simulate the famous experiment in which he disproved the Plum Pudding model of the atom by observing alpha particles bouncing off atoms and determining that they must have a small core.
Summary
- Atoms are composed of negatively charged electrons, first proved to exist in cathode-ray-tube experiments, and a positively charged nucleus.
- All electrons are identical and have a charge-to-mass ratio of \[\dfrac{q_e}{m_e} = -1.76 \times 10^{11} \, C/kg. \nonumber\]
- The positive charge in the nuclei is carried by particles called protons, which have a charge-to-mass ratio of \[\dfrac{q_p}{m_p} = 9.58 \times 10^7 \, C/kg.\nonumber\]
- Mass of electron, \[m_e = 9.11 \times 10^{-31} \, kg.\nonumber\]
- Mass of proton, \[m_p = 1.67 \times 10^{-27} \, kg.\nonumber\]
- The planetary model of the atom pictures electrons orbiting the nucleus in the same way that planets orbit the sun.
Glossary
- cathode-ray tube
- a vacuum tube containing a source of electrons and a screen to view images
- planetary model of the atom
- a model of the atom in which low-mass electrons orbit a massive, positively charged nucleus, much as planets orbit the sun
30.3: Bohr’s Theory of the Hydrogen Atom
Learning Objectives
By the end of this section, you will be able to:
- Describe the mysteries of atomic spectra.
- Explain Bohr’s theory of the hydrogen atom.
- Explain Bohr’s planetary model of the atom.
- Illustrate energy state using the energy-level diagram.
- Describe the triumphs and limits of Bohr’s theory.
The great Danish physicist Niels Bohr (1885–1962) made immediate use of Rutherford’s planetary model of the atom. (Figure \(\PageIndex{1}\)). Bohr became convinced of its validity and spent part of 1912 at Rutherford’s laboratory. In 1913, after returning to Copenhagen, he began publishing his theory of the simplest atom, hydrogen, based on the planetary model of the atom. For decades, many questions had been asked about atomic characteristics. From their sizes to their spectra, much was known about atoms, but little had been explained in terms of the laws of physics. Bohr’s theory explained the atomic spectrum of hydrogen and established new and broadly applicable principles in quantum mechanics.
Mysteries of Atomic Spectra
As noted in "Quantization of Energy," the energies of some small systems are quantized. Atomic and molecular emission and absorption spectra have been known for over a century to be discrete (or quantized) (Figure \(\PageIndex{2}\)). Maxwell and others had realized that there must be a connection between the spectrum of an atom and its structure, something like the resonant frequencies of musical instruments. But, in spite of years of efforts by many great minds, no one had a workable theory. (It was a running joke that any theory of atomic and molecular spectra could be destroyed by throwing a book of data at it, so complex were the spectra.) Following Einstein’s proposal of photons with quantized energies directly proportional to their frequencies, it became even more evident that electrons in atoms can exist only in discrete orbits.
In some cases, it had been possible to devise formulas that described the emission spectra. As you might expect, the simplest atom -- hydrogen, with its single electron -- has a relatively simple spectrum. The hydrogen spectrum had been observed in the infrared (IR), visible, and ultraviolet (UV), and several series of spectral lines had been observed. (See Figure \(\PageIndex{3}\).) These series are named after early researchers who studied them in particular depth.
The observed hydrogen-spectrum wavelengths can be calculated using the following formula:
\[\frac{1}{\lambda} = R \left( \frac{1}{n_{f}^{2}} - \frac{1}{n_{i}^{2}} \right), \label{30.4.1}\]
where \(\lambda\) is the wavelength of the emitted EM radiation and \(R\) is the Rydberg constant, determined by the experiment to be
\[R = 1.097 \times 10^{7}/m \left( or m^{-1} \right).\label{30.4.2}\]
The constant \(n_{f}\) is a positive integer associated with a specific series. For the Lyman series, \(n_{f} = 1\); for the Balmer series, \(n_{f} = 2\); for the Paschen series, \(n_{f} = 3\); and so on. The Lyman series is entirely in the UV, while part of the Balmer series is visible with the remainder UV. The Paschen series and all the rest are entirely IR. There are apparently an unlimited number of series, although they lie progressively farther into the infrared and become difficult to observe as \(n_{f}\) increases. The constant \(n_{i}\) is a positive integer, but it must be greater than \(n_{f}\). Thus, for the Balmer series, \( n_{f} = 2\) and \(n_{i} = 3, 4, 5, 6, \cdot \cdot \cdot\). Note that \(n_{i}\) can approach infinity. While the formula in the wavelengths equation was just a recipe designed to fit data and was not based on physical principles, it did imply a deeper meaning. Balmer first devised the formula for his series alone, and it was later found to describe all the other series by using different values of \(n_{f}\). Bohr was the first to comprehend the deeper meaning. Again, we see the interplay between experiment and theory in physics. Experimentally, the spectra were well established, an equation was found to fit the experimental data, but the theoretical foundation was missing.
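For example, the longest-wavelength Lyman line (\(n_i = 2\), \(n_f = 1\)) has \[\frac{1}{\lambda} = \left(1.097 \times 10^{7} \, m^{-1}\right)\left(\frac{1}{1^2} - \frac{1}{2^2}\right) = 8.23 \times 10^{6} \, m^{-1},\] so \(\lambda = 122 \, nm\), confirming that even the longest Lyman wavelength lies in the ultraviolet.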
Example \(\PageIndex{1}\): Calculating Wave Interference of a Hydrogen Line
What is the distance between the slits of a grating that produces a first-order maximum for the second Balmer line at an angle of \(15^{\circ}\)?
Strategy and Concept:
For an Integrated Concept problem, we must first identify the physical principles involved. In this example, we need to know (a) the wavelength of light as well as (b) conditions for an interference maximum for the pattern from a double slit. Part (a) deals with a topic of the present chapter, while part (b) considers the wave interference material of "Wave Optics."
Solution for (a):
Hydrogen spectrum wavelength. The Balmer series requires that \(n_{f} = 2\). The first line in the series is taken to be for \(n_{i} = 3\), and so the second would have \(n_{i} = 4\).
The calculation is a straightforward application of the wavelength equation. Entering the determined values for \(n_{f}\) and \(n_{i}\) yields
\[ \begin{align*} \frac{1}{\lambda} &= R \left( \frac{1}{n_{f}^{2}} - \frac{1}{n_{i}^{2}} \right) \\[5pt] &= \left(1.097 \times 10^{7} \, m^{-1}\right) \left( \frac{1}{2^{2}} - \frac{1}{4^{2}} \right) \\[5pt] &= 2.057 \times 10^{6} \, m^{-1}. \end{align*}\] Inverting gives the wavelength: \(\lambda = 1/\left(2.057 \times 10^{6} \, m^{-1}\right) = 486 \, nm\).
Discussion for (a):
This is indeed the experimentally observed wavelength, corresponding to the second (blue-green) line in the Balmer series. More impressive is the fact that the same simple recipe predicts all of the hydrogen spectrum lines, including new ones observed in subsequent experiments. What is nature telling us?
Solution for (b):
Double slit interference ("Wave Optics"). To obtain constructive interference for a double slit, the path length difference from two slits must be an integral multiple of the wavelength. This condition was expressed by the equation
\[d \sin{\theta} = m \lambda, \nonumber \]
where \(d\) is the distance between slits and \(\theta\) is the angle from the original direction of the beam. The number \(m\) is the order of the interference; \(m=1\) in this example. Solving for \(d\) and entering known values yields
\[d = \frac{\left(1\right) \left(486 nm\right)}{\sin{15^{\circ}}} = 1.88 \times 10^{-6} m. \nonumber\]
Discussion for (b):
This number is similar to those used in the interference examples of "Introduction to Quantum Physics" (and is close to the spacing between slits in commonly used diffraction glasses).
Bohr’s Solution for Hydrogen
Bohr was able to derive the formula for the hydrogen spectrum using basic physics, the planetary model of the atom, and some very important new proposals. His first proposal is that only certain orbits are allowed: we say that the orbits of electrons in atoms are quantized. Each orbit has a different energy, and electrons can move to a higher orbit by absorbing energy and drop to a lower orbit by emitting energy. If the orbits are quantized, the amount of energy absorbed or emitted is also quantized, producing discrete spectra. Photon absorption and emission are among the primary methods of transferring energy into and out of atoms. The energies of the photons are quantized, and their energy is explained as being equal to the change in energy of the electron when it moves from one orbit to another. In equation form, this is
\[\Delta E = hf = E_{i} - E_{f}. \label{30.4.5}\]
Here, \(\Delta E\) is the change in energy between the initial and final orbits, and \(hf\) is the energy of the absorbed or emitted photon. It is quite logical (that is, expected from our everyday experience) that energy is involved in changing orbits. A blast of energy is required for the space shuttle, for example, to climb to a higher orbit. What is not expected is that atomic orbits should be quantized. This is not observed for satellites or planets, which can have any orbit given the proper energy. (See Figure \(\PageIndex{4}\).)
Figure \(\PageIndex{5}\) shows an energy-level diagram, a convenient way to display energy states. In the present discussion, we take these to be the allowed energy levels of the electron. Energy is plotted vertically with the lowest or ground state at the bottom and with excited states above. Given the energies of the lines in an atomic spectrum, it is possible (although sometimes very difficult) to determine the energy levels of an atom. Energy-level diagrams are used for many systems, including molecules and nuclei. A theory of the atom or any other system must predict its energies based on the physics of the system.
Bohr was clever enough to find a way to calculate the electron orbital energies in hydrogen. This was an important first step that has been improved upon, but it is well worth repeating here, because it does correctly describe many characteristics of hydrogen. Assuming circular orbits, Bohr proposed that the angular momentum \(L\) of an electron in its orbit is quantized, that is, it has only specific, discrete values. The value for \(L\) is given by the formula \[L = m_{e}vr_{n} = n\frac{h}{2\pi} \left(n = 1,2,3, \cdot \cdot \cdot \right), \label{30.4.6}\] where \(L\) is the angular momentum, \(m_{e}\) is the electron’s mass, \(r_{n}\) is the radius of the \(n\)th orbit, and \(h\) is Planck’s constant. Note that angular momentum is \[L = I \omega. \label{30.4.7}\] For a small object at a radius \(r\), \[I = mr^{2}\label{30.4.8}\] and \[\omega = \frac{v}{r}\label{30.4.9}\], so that \[L = \left(mr^{2}\right)\left(\frac{v}{r}\right) = mvr. \label{30.4.10}\] Quantization says that this value of \(mvr\) can only be equal to \(h/2\pi, \, 2h/2\pi, \, 3h/2\pi\), etc. At the time, Bohr himself did not know why angular momentum should be quantized, but using this assumption he was able to calculate the energies in the hydrogen spectrum, something no one else had done at the time.
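The allowed values of \(L\) are extremely small by everyday standards. For the lowest orbit (\(n = 1\)), \[L_1 = \frac{h}{2\pi} = \frac{6.63 \times 10^{-34} \, J \cdot s}{2\pi} = 1.06 \times 10^{-34} \, J \cdot s,\] and each successive orbit adds one more unit of \(h/2\pi\). The steps are so tiny that the quantization of angular momentum is completely unnoticeable for macroscopic objects.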
From Bohr’s assumptions, we will now derive a number of important properties of the hydrogen atom from the classical physics we have covered in the text. We start by noting the centripetal force causing the electron to follow a circular path is supplied by the Coulomb force. To be more general, we note that this analysis is valid for any single-electron atom. So, if a nucleus has \(Z\) protons (\(Z = 1\) for hydrogen, 2 for helium, etc.) and only one electron, that atom is called a hydrogen-like atom. The spectra of hydrogen-like ions are similar to hydrogen, but shifted to higher energy by the greater attractive force between the electron and nucleus. The magnitude of the centripetal force is \(m_e v^2/r_n\), while the Coulomb force is \(k(Zq_e)(q_e)/r_n^2\). The tacit assumption here is that the nucleus is so much more massive than the electron that it can be treated as stationary, with the electron orbiting about it. This is consistent with the planetary model of the atom. Equating these,
\[k\dfrac{Zq_e^2}{r_n^2} = \dfrac{m_e v^2}{r_n} (Coulomb = centripetal).\]
Angular momentum quantization is stated in an earlier equation. We solve that equation for \(v\), substitute it into the above, and rearrange the expression to obtain the radius of the orbit. This yields: \[r_n = \dfrac{n^2}{Z} a_B, \, for \, allowed \, orbits \, (n = 1, \, 2, \, 3, . . . ),\]
where \(a_B\) is defined to be the Bohr radius , since for the lowest orbit (\( n = 1 \)) and for hydrogen \((Z = 1)\), \(r_1 = a_B\). It is left for this chapter’s Problems and Exercises to show that the Bohr radius is
\[a_B = \dfrac{h^2}{4 \pi^2 m_e kq_e^2} = 0.529 \times 10^{-10} \, m.\]
These last two equations can be used to calculate the radii of the allowed (quantized) electron orbits in any hydrogen-like atom . It is impressive that the formula gives the correct size of hydrogen, which is measured experimentally to be very close to the Bohr radius. The earlier equation also tells us that the orbital radius is proportional to \(n^2\), as illustrated in Figure \(\PageIndex{6}\).
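For instance, the second Bohr orbit of hydrogen (\(Z = 1\), \(n = 2\)) has radius \[r_2 = \frac{2^2}{1}a_B = 4\left(0.529 \times 10^{-10} \, m\right) = 2.12 \times 10^{-10} \, m,\] while the ground-state orbit of singly ionized helium (\(Z = 2\), \(n = 1\)) is pulled in to \(r_1 = a_B/2 = 0.26 \times 10^{-10} \, m\).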
To get the electron orbital energies, we start by noting that the electron energy is the sum of its kinetic and potential energy: \[ E_n = KE + PE.\] Kinetic energy is the familiar \(KE = (1/2)m_e v^2\), assuming the electron is not moving at relativistic speeds. Potential energy for the electron is electrical, or \(PE = q_e V\), where \(V\) is the potential due to the nucleus, which looks like a point charge. The nucleus has a positive charge \(Zq_e\); thus \(V = kZq_e/r_n\), recalling an earlier equation for the potential due to a point charge. Since the electron’s charge is negative, we see that \(PE = -kZq_e^2/r_n\). Entering the expressions for \(KE\) and \(PE\), we find \[E_n = \dfrac{1}{2} m_ev^2 - k\dfrac{Zq_e^2}{r_n}.\]
Now we substitute \(r_n\) and \(v\) from earlier equations into the above expression for energy. Algebraic manipulation yields \[E_n = -\dfrac{Z^2}{n^2} E_0 \, (n = 1, \, 2, \, 3, ...)\]
for the orbital energies of hydrogen-like atoms . Here, \(E_0\) is the ground-state energy (\(n = 1)\) for hydrogen \((Z = 1)\) and is given by
\[E_0 = \dfrac{2 \pi^2 q_e^4 m_e k^2}{h^2} = 13.6 \, eV.\]
Thus, for hydrogen,
\[E_n = - \dfrac{13.6 \, eV}{n^2} (n = 1, \, 2, \, 3, ...).\]
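For example, the first two excited states of hydrogen have energies \[E_2 = -\frac{13.6 \, eV}{2^2} = -3.40 \, eV\] and \[E_3 = -\frac{13.6 \, eV}{3^2} = -1.51 \, eV,\] so the levels crowd closer together as they approach zero from below.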
Figure \(\PageIndex{7}\) shows an energy-level diagram for hydrogen that also illustrates how the various spectral series for hydrogen are related to transitions between energy levels.
Electron total energies are negative, since the electron is bound to the nucleus, analogous to being in a hole without enough kinetic energy to escape. As \(n\) approaches infinity, the total energy becomes zero. This corresponds to a free electron with no kinetic energy, since \(r_n\) gets very large for large \(n\), and the electric potential energy thus becomes zero. Thus, 13.6 eV is needed to ionize hydrogen (to go from –13.6 eV to 0, or unbound), an experimentally verified number. Given more energy, the electron becomes unbound with some kinetic energy. For example, giving 15.0 eV to an electron in the ground state of hydrogen strips it from the atom and leaves it with 1.4 eV of kinetic energy.
Finally, let us consider the energy of a photon emitted in a downward transition, given by the equation to be \[\Delta E = hf = E_i - E_f.\] Substituting \(E_n = (-13.6 \, eV/n^2)\), we see that \[hf = (13.6 \, eV)\left(\dfrac{1}{n_f^2} - \dfrac{1}{n_i^2}\right).\]
Dividing both sides of this equation by \(hc\) gives an expression for \(1/\lambda\):
\[\dfrac{hf}{hc} = \dfrac{f}{c} = \dfrac{1}{\lambda} = \dfrac{(13.6 \, eV)}{hc} \left(\dfrac{1}{n_f^2} - \dfrac{1}{n_i^2}\right).\] It can be shown that \[\left(\dfrac{13.6 \, eV}{hc} \right) = \dfrac{(13.6 \, eV)(1.602 \times 10^{-19} \, J/eV)}{(6.626 \times 10^{-34} \, J \cdot s)(2.998 \times 10^8 \, m/s)} = 1.097 \times 10^7 \, m^{-1} = R\] is the Rydberg constant. Thus, we have used Bohr’s assumptions to derive the formula first proposed by Balmer years earlier as a recipe to fit experimental data. \[\dfrac{1}{\lambda} = R \left(\dfrac{1}{n_f^2} - \dfrac{1}{n_i^2} \right)\]
We see that Bohr’s theory of the hydrogen atom answers the question as to why this previously known formula describes the hydrogen spectrum. It is because the energy levels are proportional to \(1/n^2\), where \(n\) is a positive integer. A downward transition releases energy, and so \(n_i\) must be greater than \(n_f\). The various series are those where the transitions end on a certain level. For the Lyman series, \(n_f = 1\); that is, all the transitions end in the ground state (see also Figure ). For the Balmer series, \(n_f = 2\), or all the transitions end in the first excited state; and so on. What was once a recipe is now based in physics, and something new is emerging—angular momentum is quantized.
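As a concrete case, the lowest-energy Balmer transition (\(n_i = 3\) to \(n_f = 2\)) emits a photon of energy \[hf = \left(13.6 \, eV\right)\left(\frac{1}{2^2} - \frac{1}{3^2}\right) = 1.89 \, eV,\] corresponding to a wavelength of about 656 nm, the familiar red hydrogen line.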
Triumphs and Limits of the Bohr Theory
Bohr did what no one had been able to do before. Not only did he explain the spectrum of hydrogen, he correctly calculated the size of the atom from basic physics. Some of his ideas are broadly applicable. Electron orbital energies are quantized in all atoms and molecules. Angular momentum is quantized. The electrons do not spiral into the nucleus, as expected classically (accelerated charges radiate, so that the electron orbits classically would decay quickly, and the electrons would sit on the nucleus—matter would collapse). These are major triumphs.
But there are limits to Bohr’s theory. It cannot be applied to multielectron atoms, even one as simple as a two-electron helium atom. Bohr’s model is what we call semiclassical . The orbits are quantized (nonclassical) but are assumed to be simple circular paths (classical). As quantum mechanics was developed, it became clear that there are no well-defined orbits; rather, there are clouds of probability. Bohr’s theory also did not explain that some spectral lines are doublets (split into two) when examined closely. We shall examine many of these aspects of quantum mechanics in more detail, but it should be kept in mind that Bohr did not fail. Rather, he made very important steps along the path to greater knowledge and laid the foundation for all of atomic physics that has since evolved.
PHET EXPLORATIONS: MODELS OF THE HYDROGEN ATOM
How did scientists figure out the structure of atoms without looking at them? Try out different models by shooting light at the atom. Check how the prediction of the model matches the experimental results.
Figure \(\PageIndex{8}\): Models of the Hydrogen Atom
Summary
- The planetary model of the atom pictures electrons orbiting the nucleus in the way that planets orbit the sun. Bohr used the planetary model to develop the first reasonable theory of hydrogen, the simplest atom. Atomic and molecular spectra are quantized, with hydrogen spectrum wavelengths given by the formula \[\dfrac{1}{\lambda} = R\left(\dfrac{1}{n_f^2} - \dfrac{1}{n_i^2}\right), \nonumber\] where \(\lambda\) is the wavelength of the emitted EM radiation and \(R\) is the Rydberg constant, which has the value \[R = 1.097 \times 10^7 \, m^{-1}. \nonumber\]
- The constants \(n_i\) and \(n_f\) are positive integers, and \(n_i\) must be greater than \(n_f\).
- Bohr correctly proposed that the energy and radii of the orbits of electrons in atoms are quantized, with energy for transitions between orbits given by \[\Delta E = hf = E_i - E_f,\] where \(\Delta E\) is the change in energy between the initial and final orbits and \(hf\) is the energy of an absorbed or emitted photon. It is useful to plot orbital energies on a vertical graph called an energy-level diagram.
- Bohr proposed that the allowed orbits are circular and must have quantized orbital angular momentum given by \[L = m_evr_n = n\dfrac{h}{2\pi} (n = 1, \, 2, \, 3, . . .), \nonumber\] where \(L\) is the angular momentum, \(r_n\) is the radius of the \(n\)th orbit, and \(h\) is Planck’s constant. For all one-electron (hydrogen-like) atoms, the radius of an orbit is given by \[r_n = \dfrac{n^2}{Z}a_B(allowed \, orbits \, n = 1, \, 2, \, 3, ...), \nonumber\] where \(Z\) is the atomic number of an element (the number of electrons it has when neutral) and \(a_B\) is defined to be the Bohr radius, which is \[a_B = \dfrac{h^2}{4 \pi^2m_ekq_e^2} = 0.529 \times 10^{-10} \, m. \nonumber\]
- Furthermore, the energies of hydrogen-like atoms are given by \[E_n = - \dfrac{Z^2}{n^2}E_0 (n = 1, \, 2, \, 3 ... ), \nonumber\] where \(E_0\) is the ground-state energy and is given by \[E_0 = \dfrac{2 \pi^2 q_e^4m_ek^2}{h^2} = 13.6 \, eV. \nonumber\] Thus, for hydrogen, \[E_n = - \dfrac{13.6 \, eV}{n^2} (n = 1, \, 2, \, 3, . . .). \nonumber\]
- The Bohr Theory gives accurate values for the energy levels in hydrogen-like atoms, but it has been improved upon in several respects.
Glossary
- hydrogen spectrum wavelengths
- the wavelengths of visible light from hydrogen; can be calculated by \(\frac{1}{\lambda} = R \left(\frac{1}{n_f^2} - \frac{1}{n_i^2} \right)\)
- Rydberg constant
- a physical constant related to the atomic spectra with an established value of \(1.097 \times 10^7 \, m^{-1}\)
- double-slit interference
- an experiment in which waves or particles from a single source impinge upon two slits so that the resulting interference pattern may be observed
- energy-level diagram
- a diagram used to analyze the energy level of electrons in the orbits of an atom
- Bohr radius
- the mean radius of the orbit of an electron around the nucleus of a hydrogen atom in its ground state
- hydrogen-like atom
- any atom with only a single electron
- energies of hydrogen-like atoms
- Bohr formula for energies of electron states in hydrogen-like atoms: \(E_n = - \frac{Z^2}{n^2} E_0 (n = 1, \, 2, \, 3, . . . )\)
30.4: X Rays - Atomic Origins and Applications
Learning Objectives
By the end of this section, you will be able to:
- Define x-ray tube and its spectrum.
- Show the x-ray characteristic energy.
- Specify the use of x rays in medical observations.
- Explain the use of x rays in CT scanners in diagnostics.
Each type of atom (or element) has its own characteristic electromagnetic spectrum. X rays lie at the high-frequency end of an atom’s spectrum and are characteristic of the atom as well. In this section, we explore characteristic x rays and some of their important applications.
We have previously discussed x rays as a part of the electromagnetic spectrum in Photon Energies and the Electromagnetic Spectrum . That module illustrated how an x-ray tube (a specialized CRT) produces x rays. Electrons emitted from a hot filament are accelerated with a high voltage, gaining significant kinetic energy and striking the anode.
There are two processes by which x rays are produced in the anode of an x-ray tube. In one process, the deceleration of electrons produces x rays, and these x rays are called bremsstrahlung , or braking radiation. The second process is atomic in nature and produces characteristic x rays , so called because they are characteristic of the anode material. The x-ray spectrum in Figure is typical of what is produced by an x-ray tube, showing a broad curve of bremsstrahlung radiation with characteristic x-ray peaks on it.
The spectrum in Figure is collected over a period of time in which many electrons strike the anode, with a variety of possible outcomes for each hit. The broad range of x-ray energies in the bremsstrahlung radiation indicates that an incident electron’s energy is not usually converted entirely into photon energy. The highest-energy x ray produced is one for which all of the electron’s energy was converted to photon energy. Thus the accelerating voltage and the maximum x-ray energy are related by conservation of energy. Electric potential energy is converted to kinetic energy and then to photon energy, so that \(E_{max} = hf_{max} = q_eV\). Units of electron volts are convenient. For example, a 100-kV accelerating voltage produces x-ray photons with a maximum energy of 100 keV.
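The same energy conservation sets the shortest wavelength in the bremsstrahlung spectrum. For a 100-kV tube, using \(hc \approx 1240 \, eV \cdot nm\), \[\lambda_{min} = \frac{hc}{E_{max}} = \frac{1240 \, eV \cdot nm}{100 \times 10^{3} \, eV} = 0.0124 \, nm,\] tens of thousands of times shorter than visible-light wavelengths.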
Some electrons excite atoms in the anode. Part of the energy that they deposit by collision with an atom results in one or more of the atom’s inner electrons being knocked into a higher orbit or the atom being ionized. When the anode’s atoms de-excite, they emit characteristic electromagnetic radiation. The most energetic of these are produced when an inner-shell vacancy is filled—that is, when an \(n = 1\) or \(n = 2\) shell electron has been excited to a higher level, and another electron falls into the vacant spot. A characteristic x ray (see Photon Energies and the Electromagnetic Spectrum ) is electromagnetic (EM) radiation emitted by an atom when an inner-shell vacancy is filled. Figure shows a representative energy-level diagram that illustrates the labeling of characteristic x rays. X rays created when an electron falls into an \(n = 1\) shell vacancy are called \(K_{\alpha}\) when they come from the next higher level; that is, an \(n = 2\) to \(n = 1\) transition. The labels \(K, \, L, \, M, . . . \) come from the older alphabetical labeling of shells starting with \(K\) rather than using the principal quantum numbers 1, 2, 3, …. A more energetic \(K_{\beta}\) x ray is produced when an electron falls into an \(n = 1\) shell vacancy from the \(n = 3\) shell; that is, an \(n = 3\) to \(n = 1\) transition. Similarly, when an electron falls into the \(n = 2\) shell from the \(n = 3\) shell, an \(L_{\alpha}\) x ray is created. The energies of these x rays depend on the energies of electron states in the particular atom and, thus, are characteristic of that element: every element has its own set of x-ray energies. This property can be used to identify elements, for example, to find trace (small) amounts of an element in an environmental or biological sample.
Example \(\PageIndex{1}\): Characteristic X-Ray Energy
Calculate the approximate energy of a \(K_{\alpha}\) x ray from a tungsten anode in an x-ray tube.
Strategy
How do we calculate energies in a multiple-electron atom? In the case of characteristic x rays, the following approximate calculation is reasonable. Characteristic x rays are produced when an inner-shell vacancy is filled. Inner-shell electrons are nearer the nucleus than others in an atom and thus feel little net effect from the others. This is similar to what happens inside a charged conductor, where its excess charge is distributed over the surface so that it produces no electric field inside. It is reasonable to assume the inner-shell electrons have hydrogen-like energies, as given by \(E_n = -\frac{Z^2}{n^2} E_0 (n = 1, \, 2, \, 3, . . .)\). As noted, a \(K_{\alpha}\) x ray is produced by an \(n = 2\) to \(n = 1\) transition. Since there are two electrons in a filled \(K\) shell, a vacancy would leave one electron, so that the effective charge would be \(Z - 1\) rather than \(Z\). For tungsten, \(Z = 74\), so that the effective charge is 73.
Solution
\(E_n = - \frac{Z^2}{n^2}E_0(n = 1, \, 2, \, 3, . . .)\) gives the orbital energies for hydrogen-like atoms to be \(E_n = -(Z^2/n^2)E_0\), where \(E_0 = 13.6 \, eV\). As noted, the effective \(Z\) is 73. Now the \(K_{\alpha}\) x-ray energy is given by \[E_{K_{\alpha}} = \Delta E = E_i - E_f = E_2 - E_1,\] where \[E_1 = -\dfrac{Z^2}{1^2}E_0 = - \dfrac{73^2}{1} \left(13.6 \, eV\right) = - 72.5 \, keV\] and \[E_2 = - \dfrac{Z^2}{2^2} E_0 = - \dfrac{73^2}{4}\left(13.6 \, eV\right) = -18.1 \, keV.\] Thus, \[E_{K_{\alpha}} = -18.1 \, keV - (- 72.5 \, keV) = 54.4 \, keV.\]
Discussion
This large photon energy is typical of characteristic x rays from heavy elements. It is large compared with other atomic emissions because it is produced when an inner-shell vacancy is filled, and inner-shell electrons are tightly bound. Characteristic x ray energies become progressively larger for heavier elements because their energy increases approximately as \(Z^2\). Significant accelerating voltage is needed to create these inner-shell vacancies. In the case of tungsten, at least 72.5 kV is needed, because other shells are filled and you cannot simply bump one electron to a higher filled shell. Tungsten is a common anode material in x-ray tubes; so much of the energy of the impinging electrons is absorbed, raising its temperature, that a high-melting-point material like tungsten is required.
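The estimate in this example is simple enough to automate. The Python sketch below is a minimal illustration (the helper name is hypothetical, not from the text) of the same hydrogen-like approximation with effective charge \(Z - 1\).

```python
# A sketch of the Example's estimate for any anode material (hypothetical helper
# name, not from the text): hydrogen-like levels E_n = -(Z_eff^2 / n^2)(13.6 eV)
# with Z_eff = Z - 1 to account for the one remaining K-shell electron.

E0_EV = 13.6  # hydrogen ground-state binding energy (eV)

def k_alpha_energy_keV(Z):
    """Approximate K_alpha energy (n = 2 -> n = 1) for atomic number Z."""
    z_eff = Z - 1
    E1 = -(z_eff**2 / 1**2) * E0_EV    # n = 1 level, in eV
    E2 = -(z_eff**2 / 2**2) * E0_EV    # n = 2 level, in eV
    return (E2 - E1) / 1000            # transition energy, in keV

print(round(k_alpha_energy_keV(74), 1))    # tungsten: about 54.4 keV, as in the Example
print(round(k_alpha_energy_keV(42), 1))    # molybdenum: about 17.1 keV in this approximation
```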
Medical and Other Diagnostic Uses of X-rays
All of us can identify diagnostic uses of x-ray photons. Among these are the universal dental and medical x rays that have become an essential part of medical diagnostics. (See Figure and Figure .) X rays are also used to inspect our luggage at airports, as shown in Figure , and for early detection of cracks in crucial aircraft components. An x ray is not only a noun meaning high-energy photon, it also is an image produced by x rays, and it has been made into a familiar verb—to be x-rayed.
The most common x-ray images are simple shadows. Since x-ray photons have high energies, they penetrate materials that are opaque to visible light. The more energy an x-ray photon has, the more material it will penetrate. So an x-ray tube may be operated at 50.0 kV for a chest x ray, whereas it may need to be operated at 100 kV to examine a broken leg in a cast. The depth of penetration is related to the density of the material as well as to the energy of the photon. The denser the material, the fewer x-ray photons get through and the darker the shadow. Thus x rays excel at detecting breaks in bones and in imaging other physiological structures, such as some tumors, that differ in density from surrounding material. Because of their high photon energy, x rays produce significant ionization in materials and damage cells in biological organisms. Modern uses minimize exposure to the patient and eliminate exposure to others. Biological effects of x rays will be explored in the next chapter along with other types of ionizing radiation such as those produced by nuclei.
As the x-ray energy increases, the Compton effect (see Photon Momentum ) becomes more important in the attenuation of the x rays. Here, the x ray scatters from an outer electron shell of the atom, giving the ejected electron some kinetic energy while losing energy itself. The probability for attenuation of the x rays depends upon the number of electrons present (the material’s density) as well as the thickness of the material. Chemical composition of the medium, as characterized by its atomic number \(Z\), is not important here. Low-energy x rays provide better contrast (sharper images). However, due to greater attenuation and less scattering, they are more absorbed by thicker materials. Greater contrast can be achieved by injecting a substance with a large atomic number, such as barium or iodine. The structure of the part of the body that contains the substance (e.g., the gastro-intestinal tract or the abdomen) can easily be seen this way.
Breast cancer is the second-leading cause of death among women worldwide. Early detection can be very effective, hence the importance of x-ray diagnostics. A mammogram cannot diagnose a malignant tumor, only give evidence of a lump or region of increased density within the breast. X-ray absorption by different types of soft tissue is very similar, so contrast is difficult; this is especially true for younger women, who typically have denser breasts. For older women who are at greater risk of developing breast cancer, the presence of more fat in the breast gives the lump or tumor more contrast. MRI (Magnetic resonance imaging) has recently been used as a supplement to conventional x rays to improve detection and eliminate false positives. The subject’s radiation dose from x rays will be treated in a later chapter.
A standard x ray gives only a two-dimensional view of the object. Dense bones might hide images of soft tissue or organs. If you took another x ray from the side of the person (the first one being from the front), you would gain additional information. While shadow images are sufficient in many applications, far more sophisticated images can be produced with modern technology. Figure shows the use of a computed tomography (CT) scanner, also called computed axial tomography (CAT) scanner. X rays are passed through a narrow section (called a slice) of the patient’s body (or body part) over a range of directions. An array of many detectors on the other side of the patient registers the x rays. The system is then rotated around the patient and another image is taken, and so on. The x-ray tube and detector array are mechanically attached and so rotate together. Complex computer image processing of the relative absorption of the x rays along different directions produces a highly-detailed image. Different slices are taken as the patient moves through the scanner on a table. Multiple images of different slices can also be computer analyzed to produce three-dimensional information, sometimes enhancing specific types of tissue, as shown in Figure . G. Hounsfield (UK) and A. Cormack (US) won the Nobel Prize in Medicine in 1979 for their development of computed tomography.
X-Ray Diffraction and Crystallography
Since x-ray photons are very energetic, they have relatively short wavelengths. For example, the 54.4-keV \(K_{\alpha}\) x ray of Example \(\PageIndex{1}\) has a wavelength \(\lambda = hc/E = 0.0228 \, nm\). Thus, typical x-ray photons act like rays when they encounter macroscopic objects, like teeth, and produce sharp shadows; however, since atoms are on the order of 0.1 nm in size, x rays can be used to detect the location, shape, and size of atoms and molecules. The process is called x-ray diffraction , because it involves the diffraction and interference of x rays to produce patterns that can be analyzed for information about the structures that scattered the x rays. Perhaps the most famous example of x-ray diffraction is the discovery of the double-helix structure of DNA in 1953 by an international team of scientists working at the Cavendish Laboratory—American James Watson, Englishman Francis Crick, and New Zealand–born Maurice Wilkins. Using x-ray diffraction data produced by Rosalind Franklin, they were the first to discern the structure of DNA that is so crucial to life. For this, Watson, Crick, and Wilkins were awarded the 1962 Nobel Prize in Physiology or Medicine. There has been considerable debate and controversy over the fact that Rosalind Franklin was not included in the prize.
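The 0.0228-nm value quoted above follows from \(\lambda = hc/E\); the brief Python sketch below (illustrative constants only, not part of the text) carries out the conversion.

```python
# Verifying the wavelength quoted above with lambda = h*c / E
# (illustrative constants, not from the text).

H = 6.626e-34      # Planck's constant (J·s)
C = 2.998e8        # speed of light (m/s)
J_PER_EV = 1.602e-19

def wavelength_nm(energy_keV):
    energy_J = energy_keV * 1e3 * J_PER_EV
    return H * C / energy_J * 1e9

print(round(wavelength_nm(54.4), 4))   # about 0.0228 nm, as stated
```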
Figure shows a diffraction pattern produced by the scattering of x rays from a crystal. This process is known as x-ray crystallography because of the information it can yield about crystal structure, and it was the type of data Rosalind Franklin supplied to Watson and Crick for DNA. Not only do x rays confirm the size and shape of atoms, they give information on the atomic arrangements in materials. For example, current research in high-temperature superconductors involves complex materials whose lattice arrangements are crucial to obtaining a superconducting material. These can be studied using x-ray crystallography.
Historically, the scattering of x rays from crystals was used to prove that x rays are energetic EM waves. This was suspected from the time of the discovery of x rays in 1895, but it was not until 1912 that the German Max von Laue (1879–1960) convinced two of his colleagues to scatter x rays from crystals. If a diffraction pattern is obtained, he reasoned, then the x rays must be waves, and their wavelength could be determined. (The spacing of atoms in various crystals was reasonably well known at the time, based on good values for Avogadro’s number.) The experiments were convincing, and the 1914 Nobel Prize in Physics was given to von Laue for his suggestion leading to the proof that x rays are EM waves. In 1915, the unique father-and-son team of Sir William Henry Bragg and his son Sir William Lawrence Bragg were awarded a joint Nobel Prize for inventing the x-ray spectrometer and the then-new science of x-ray analysis. The elder Bragg had migrated to Australia from England just after graduating in mathematics. He learned physics and chemistry during his career at the University of Adelaide. The younger Bragg was born in Adelaide but went back to the Cavendish Laboratories in England to a career in x-ray and neutron crystallography; he provided support for Watson, Crick, and Wilkins for their work on unraveling the mysteries of DNA and to Max Perutz for his 1962 Nobel Prize-winning work on the structure of hemoglobin. Here again, we witness the enabling nature of physics—establishing instruments and designing experiments as well as solving mysteries in the biomedical sciences.
Certain other uses for x rays will be studied in later chapters. X rays are useful in the treatment of cancer because of the inhibiting effect they have on cell reproduction. X rays observed coming from outer space are useful in determining the nature of their sources, such as neutron stars and possibly black holes. Created in nuclear bomb explosions, x rays can also be used to detect clandestine atmospheric tests of these weapons. X rays can cause excitations of atoms, which then fluoresce (emitting characteristic EM radiation), making x-ray-induced fluorescence a valuable analytical tool in a range of fields from art to archaeology.
Summary
- X rays are relatively high-frequency EM radiation. They are produced by transitions between inner-shell electron levels, which produce x rays characteristic of the atomic element, or by accelerating electrons.
- X rays have many uses, including medical diagnostics and x-ray diffraction.
Glossary
- x rays
- a form of electromagnetic radiation
- x-ray diffraction
- a technique that provides detailed information about the crystallographic structure of natural and manufactured materials
30.5: Applications of Atomic Excitations and De-Excitations
Learning Objectives
By the end of this section, you will be able to:
- Define and discuss fluorescence.
- Define metastable.
- Describe how laser emission is produced.
- Explain population inversion.
- Define and discuss holography.
Many properties of matter and phenomena in nature are directly related to atomic energy levels and their associated excitations and de-excitations. The color of a rose, the output of a laser, and the transparency of air are but a few examples. (See Figure .) While it may not appear that glow-in-the-dark pajamas and lasers have much in common, they are in fact different applications of similar atomic de-excitations.
The color of a material is due to the ability of its atoms to absorb certain wavelengths while reflecting or reemitting others. A simple red material, for example a tomato, absorbs all visible wavelengths except red. This is because the atoms of its hydrocarbon pigment (lycopene) have levels separated by a variety of energies corresponding to all visible photon energies except red. Air is another interesting example. It is transparent to visible light, because there are few energy levels that visible photons can excite in air molecules and atoms. Visible light, thus, cannot be absorbed. Furthermore, visible light is only weakly scattered by air, because visible wavelengths are so much greater than the sizes of the air molecules and atoms. Light must pass through kilometers of air to scatter enough to cause red sunsets and blue skies.
Fluorescence and Phosphorescence
The ability of a material to emit various wavelengths of light is similarly related to its atomic energy levels. Figure shows a scorpion illuminated by a UV lamp, sometimes called a black light. Some rocks also glow in black light, the particular colors being a function of the rock’s mineral composition. Black lights are also used to make certain posters glow.
In the fluorescence process, an atom is excited to a level several steps above its ground state by the absorption of a relatively high-energy UV photon. This is called atomic excitation . Once it is excited, the atom can de-excite in several ways, one of which is to re-emit a photon of the same energy as excited it, a single step back to the ground state. This is called atomic de-excitation . All other paths of de-excitation involve smaller steps, in which lower-energy (longer wavelength) photons are emitted. Some of these may be in the visible range, such as for the scorpion in Figure . Fluorescence is defined to be any process in which an atom or molecule, excited by a photon of a given energy, de-excites by emission of a lower-energy photon.
Fluorescence can be induced by many types of energy input. Fluorescent paint, dyes, and even soap residues in clothes make colors seem brighter in sunlight by converting some UV into visible light. X rays can induce fluorescence, as is done in x-ray fluoroscopy to make brighter visible images. Electric discharges can induce fluorescence, as in so-called neon lights and in gas-discharge tubes that produce atomic and molecular spectra. Common fluorescent lights use an electric discharge in mercury vapor to cause atomic emissions from mercury atoms. The inside of a fluorescent light is coated with a fluorescent material that emits visible light over a broad spectrum of wavelengths. By choosing an appropriate coating, fluorescent lights can be made more like sunlight or like the reddish glow of candlelight, depending on needs. Fluorescent lights are more efficient in converting electrical energy into visible light than incandescent filaments (about four times as efficient), the blackbody radiation of which is primarily in the infrared due to temperature limitations.
This atom is excited to one of its higher levels by absorbing a UV photon. It can de-excite in a single step, re-emitting a photon of the same energy, or in several steps. The process is called fluorescence if the atom de-excites in smaller steps, emitting energy different from that which excited it. Fluorescence can be induced by a variety of energy inputs, such as UV, x-rays, and electrical discharge.
The spectacular Waitomo caves on North Island in New Zealand provide a natural habitat for glow-worms. The glow-worms hang up to 70 silk threads of about 30 or 40 cm each to trap prey that fly towards them in the dark. The fluorescence process is very efficient, with nearly 100% of the energy input turning into light. (In comparison, fluorescent lights are about 20% efficient.)
Fluorescence has many uses in biology and medicine. It is commonly used to label and follow a molecule within a cell. Such tagging allows one to study the structure of DNA and proteins. Fluorescent dyes and antibodies are usually used to tag the molecules, which are then illuminated with UV light and their emission of visible light is observed. Since the fluorescence of each element is characteristic, identification of elements within a sample can be done this way.
Figure shows a commonly used fluorescent dye called fluorescein. Below that, Figure reveals the diffusion of a fluorescent dye in water by observing it under UV light.
Nano-Crystals
Recently, a new class of fluorescent materials has appeared—“nano-crystals.” These are single-crystal molecules less than 100 nm in size. The smallest of these are called “quantum dots.” These semiconductor indicators are very small (2–6 nm) and provide improved brightness. They also have the advantage that all colors can be excited with the same incident wavelength. They are brighter and more stable than organic dyes and have a longer lifetime than conventional phosphors. They have become an excellent tool for long-term studies of cells, including migration and morphology. ( Figure .)
Once excited, an atom or molecule will usually spontaneously de-excite quickly. (The electrons raised to higher levels are attracted to lower ones by the positive charge of the nucleus.) Spontaneous de-excitation has a very short mean lifetime of typically about \(10^{-8} \, s\). However, some levels have significantly longer lifetimes, ranging from milliseconds to minutes or even hours. These energy levels are inhibited and are slow in de-exciting because their quantum numbers differ greatly from those of available lower levels. Although these level lifetimes are short in human terms, they are many orders of magnitude longer than is typical and, thus, are said to be metastable , meaning relatively stable. Phosphorescence is the de-excitation of a metastable state. Glow-in-the-dark materials, such as luminous dials on some watches and clocks and on children’s toys and pajamas, are made of phosphorescent substances. Visible light excites the atoms or molecules to metastable states that decay slowly, releasing the stored excitation energy partially as visible light. In some ceramics, atomic excitation energy can be frozen in after the ceramic has cooled from its firing. It is very slowly released, but the ceramic can be induced to phosphoresce by heating—a process called “thermoluminescence.” Since the release is slow, thermoluminescence can be used to date antiquities. The less light emitted, the older the ceramic. (See Figure .)
Lasers
Lasers today are commonplace. Lasers are used to read bar codes at stores and in libraries, laser shows are staged for entertainment, laser printers produce high-quality images at relatively low cost, and lasers send prodigious numbers of telephone messages through optical fibers. Among other things, lasers are also employed in surveying, weapons guidance, tumor eradication, retinal welding, and for reading music CDs and computer CD-ROMs.
Why do lasers have so many varied applications? The answer is that lasers produce single-wavelength EM radiation that is also very coherent—that is, the emitted photons are in phase. Laser output can, thus, be more precisely manipulated than incoherent mixed-wavelength EM radiation from other sources. The reason laser output is so pure and coherent is based on how it is produced, which in turn depends on a metastable state in the lasing material. Suppose a material had the energy levels shown in Figure . When energy is put into a large collection of these atoms, electrons are raised to all possible levels. Most return to the ground state in less than about \(10^{-8} \, s\) but those in the metastable state linger. This includes those electrons originally excited to the metastable state and those that fell into it from above. It is possible to get a majority of the atoms into the metastable state, a condition called a population inversion .
Once a population inversion is achieved, a very interesting thing can happen, as shown in Figure . An electron spontaneously falls from the metastable state, emitting a photon. This photon finds another atom in the metastable state and stimulates it to decay, emitting a second photon of the same wavelength and in phase with the first, and so on. Stimulated emission is the emission of electromagnetic radiation in the form of photons of a given frequency, triggered by photons of the same frequency. For example, an excited atom, with an electron in an energy orbit higher than normal, releases a photon of a specific frequency when the electron drops back to a lower energy orbit. If this photon then strikes another electron in the same high-energy orbit in another atom, another photon of the same frequency is released. The emitted photons and the triggering photons are always in phase, have the same polarization, and travel in the same direction. The probability of absorption of a photon is the same as the probability of stimulated emission, and so a majority of atoms must be in the metastable state to produce energy. Einstein (again Einstein, and back in 1917!) was one of the important contributors to the understanding of stimulated emission of radiation. Among other things, Einstein was the first to realize that stimulated emission and absorption are equally probable. The laser acts as a temporary energy storage device that subsequently produces a massive energy output of single-wavelength, in-phase photons.
The name laser is an acronym for light amplification by stimulated emission of radiation, the process just described. The process was proposed and developed following the advances in quantum physics. A joint Nobel Prize was awarded in 1964 to the American Charles Townes (1915–) and to Nikolay Basov (1922–2001) and Aleksandr Prokhorov (1916–2002) of the Soviet Union for the development of lasers. The Nobel Prize in 1981 went to Arthur Schawlow (1921-1999) for pioneering laser applications. The original devices were called masers, because they produced microwaves. The first working laser was created in 1960 at Hughes Research Labs (CA) by T. Maiman. It used a pulsed high-powered flash lamp and a ruby rod to produce red light. Today the name laser is used for all such devices developed to produce a variety of wavelengths, including microwave, infrared, visible, and ultraviolet radiation. Figure shows how a laser can be constructed to enhance the stimulated emission of radiation. Energy input can be from a flash tube, electrical discharge, or other sources, in a process sometimes called optical pumping. A large percentage of the original pumping energy is dissipated in other forms, but a population inversion must be achieved. Mirrors can be used to enhance stimulated emission by multiple passes of the radiation back and forth through the lasing material. One of the mirrors is semitransparent to allow some of the light to pass through. The light that emerges as the laser output is a mere 1% of the light passing back and forth inside the device.
Lasers are constructed from many types of lasing materials, including gases, liquids, solids, and semiconductors. But all lasers are based on the existence of a metastable state or a phosphorescent material. Some lasers produce continuous output; others are pulsed in bursts as brief as \(10^{-14} \, s\). Some laser outputs are fantastically powerful—some greater than \(10^{12} \, W\)—but the more common, everyday lasers produce something on the order of \(10^{-3} \, W\). The helium-neon laser that produces a familiar red light is very common. Figure shows the energy levels of helium and neon, a pair of noble gases that work well together. An electrical discharge is passed through a helium-neon gas mixture in which the number of atoms of helium is ten times that of neon. The first excited state of helium is metastable and, thus, stores energy. This energy is easily transferred by collision to neon atoms, because they have an excited state at nearly the same energy as that in helium. That state in neon is also metastable, and this is the one that produces the laser output. (The most likely transition is to the nearby state, producing 1.96 eV photons, which have a wavelength of 633 nm and appear red.) A population inversion can be produced in neon, because there are so many more helium atoms and these put energy into the neon. Helium-neon lasers often have continuous output, because the population inversion can be maintained even while lasing occurs. Probably the most common lasers in use today, including the common laser pointer, are semiconductor or diode lasers, made of silicon. Here, energy is pumped into the material by passing a current in the device to excite the electrons. Special coatings on the ends and fine cleavings of the semiconductor material allow light to bounce back and forth and a tiny fraction to emerge as laser light. Diode lasers can usually run continually and produce outputs in the milliwatt range.
There are many medical applications of lasers. Lasers have the advantage that they can be focused to a small spot. They also have a well-defined wavelength. Many types of lasers are available today that provide wavelengths from the ultraviolet to the infrared. This is important, as one needs to be able to select a wavelength that will be preferentially absorbed by the material of interest. Objects appear a certain color because they absorb all other visible colors incident upon them. What wavelengths are absorbed depends upon the energy spacing between electron orbitals in that molecule. Unlike the hydrogen atom, biological molecules are complex and have a variety of absorption wavelengths or lines. But these can be determined and used in the selection of a laser with the appropriate wavelength. Water is transparent to the visible spectrum but will absorb light in the UV and IR regions. Blood (hemoglobin) strongly reflects red but absorbs most strongly in the UV.
Laser surgery uses a wavelength that is strongly absorbed by the tissue it is focused upon. One example of a medical application of lasers is shown in Figure . A detached retina can result in total loss of vision. Burns made by a laser focused to a small spot on the retina form scar tissue that can hold the retina in place, salvaging the patient’s vision. Other light sources cannot be focused as precisely as a laser due to refractive dispersion of different wavelengths. Similarly, laser surgery in the form of cutting or burning away tissue is made more accurate because laser output can be very precisely focused and is preferentially absorbed because of its single wavelength. Depending upon what part or layer of the retina needs repairing, the appropriate type of laser can be selected. For the repair of tears in the retina, a green argon laser is generally used. This light is absorbed well by tissues containing blood, so coagulation or “welding” of the tear can be done.
In dentistry, the use of lasers is rising. Lasers are most commonly used for surgery on the soft tissue of the mouth. They can be used to remove ulcers, stop bleeding, and reshape gum tissue. Their use in cutting into bones and teeth is not quite so common; here the erbium YAG (yttrium aluminum garnet) laser is used.
The massive combination of lasers shown in Figure can be used to induce nuclear fusion, the energy source of the sun and hydrogen bombs. Since lasers can produce very high power in very brief pulses, they can be used to focus an enormous amount of energy on a small glass sphere containing fusion fuel. Not only does the incident energy increase the fuel temperature significantly so that fusion can occur, it also compresses the fuel to great density, enhancing the probability of fusion. The compression or implosion is caused by the momentum of the impinging laser photons.
Music CDs are now so common that vinyl records are quaint antiquities. CDs (and DVDs) store information digitally and have a much larger information-storage capacity than vinyl records. An entire encyclopedia can be stored on a single CD. Figure illustrates how the information is stored and read from the CD. Pits made in the CD by a laser can be tiny and very accurately spaced to record digital information. These are read by having an inexpensive solid-state infrared laser beam scatter from pits as the CD spins, revealing their digital pattern and the information encoded upon them.
Holograms, such as those in Figure , are true three-dimensional images recorded on film by lasers. Holograms are used for amusement, decoration on novelty items and magazine covers, security on credit cards and driver’s licenses (a laser and other equipment is needed to reproduce them), and for serious three-dimensional information storage. You can see that a hologram is a true three-dimensional image, because objects change relative position in the image when viewed from different angles.
The name hologram means “entire picture” (from the Greek holo , as in holistic), because the image is three-dimensional. Holography is the process of producing holograms and, although they are recorded on photographic film, the process is quite different from normal photography. Holography uses light interference or wave optics, whereas normal photography uses geometric optics. Figure shows one method of producing a hologram. Coherent light from a laser is split by a mirror, with part of the light illuminating the object. The remainder, called the reference beam, shines directly on a piece of film. Light scattered from the object interferes with the reference beam, producing constructive and destructive interference. As a result, the exposed film looks foggy, but close examination reveals a complicated interference pattern stored on it. Where the interference was constructive, the film (a negative actually) is darkened. Holography is sometimes called lensless photography, because it uses the wave characteristics of light as contrasted to normal photography, which uses geometric optics and so requires lenses.
Light falling on a hologram can form a three-dimensional image. The process is complicated in detail, but the basics can be understood as shown in Figure , in which a laser of the same type that exposed the film is now used to illuminate it. The myriad tiny exposed regions of the film are dark and block the light, while less exposed regions allow light to pass. The film thus acts much like a collection of diffraction gratings with various spacings. Light passing through the hologram is diffracted in various directions, producing both real and virtual images of the object used to expose the film. The interference pattern is the same as that produced by the object. Moving your eye to various places in the interference pattern gives you different perspectives, just as looking directly at the object would. The image thus looks like the object and is three-dimensional like the object.
The hologram illustrated in Figure is a transmission hologram. Holograms that are viewed with reflected light, such as the white light holograms on credit cards, are reflection holograms and are more common. White light holograms often appear a little blurry with rainbow edges, because the diffraction patterns of various colors of light are at slightly different locations due to their different wavelengths. Further uses of holography include all types of 3-D information storage, such as of statues in museums and engineering studies of structures and 3-D images of human organs. Invented in the late 1940s by Dennis Gabor (1900–1970), who won the 1971 Nobel Prize in Physics for his work, holography became far more practical with the development of the laser. Since lasers produce coherent single-wavelength light, their interference patterns are more pronounced. The precision is so great that it is even possible to record numerous holograms on a single piece of film by just changing the angle of the film for each successive image. This is how the holograms that move as you walk by them are produced—a kind of lensless movie.
In a similar way, in the medical field, holograms have allowed complete 3-D holographic displays of objects from a stack of images. Storing these images for future use is relatively easy. With the use of an endoscope, high-resolution 3-D holographic images of internal organs and tissues can be made.
Glossary
- metastable
- a state whose lifetime is an order of magnitude longer than the most short-lived states
- atomic excitation
- a state in which an atom or ion acquires the necessary energy to promote one or more of its electrons to electronic states higher in energy than their ground state
- atomic de-excitation
- process by which an atom transfers from an excited electronic state back to the ground state electronic configuration; often occurs by emission of a photon
- laser
- acronym for light amplification by stimulated emission of radiation
- phosphorescence
- the de-excitation of a metastable state
- population inversion
- the condition in which the majority of atoms in a sample are in a metastable state
- stimulated emission
- emission by atom or molecule in which an excited state is stimulated to decay, most readily caused by a photon of the same energy that is necessary to excite the state
- hologram
- means entire picture (from the Greek word holo , as in holistic), because the image produced is three dimensional
- holography
- the process of producing holograms
- fluorescence
- any process in which an atom or molecule, excited by a photon of a given energy, de-excites by emission of a lower-energy photon
30.6: The Wave Nature of Matter Causes Quantization
Learning Objectives
By the end of this section, you will be able to:
- Explain Bohr’s model of atom.
- Define and describe quantization of angular momentum.
- Calculate the angular momentum for an orbit of atom.
- Define and describe the wave-like properties of matter.
After visiting some of the applications of different aspects of atomic physics, we now return to the basic theory that was built upon Bohr’s atom. Einstein once said it was important to keep asking the questions we eventually teach children not to ask. Why is angular momentum quantized? You already know the answer. Electrons have wave-like properties, as de Broglie later proposed. They can exist only where they interfere constructively, and only certain orbits meet proper conditions, as we shall see in the next module.
Following Bohr’s initial work on the hydrogen atom, a decade was to pass before de Broglie proposed that matter has wave properties. The wave-like properties of matter were subsequently confirmed by observations of electron interference when scattered from crystals. Electrons can exist only in locations where they interfere constructively. How does this affect electrons in atomic orbits? When an electron is bound to an atom, its wavelength must fit into a small space, something like a standing wave on a string. (See Figure .) Allowed orbits are those orbits in which an electron constructively interferes with itself. Not all orbits produce constructive interference. Thus only certain orbits are allowed—the orbits are quantized.
For a circular orbit, constructive interference occurs when the electron’s wavelength fits neatly into the circumference, so that wave crests always align with crests and wave troughs align with troughs, as shown in Figure (b). More precisely, when an integral multiple of the electron’s wavelength equals the circumference of the orbit, constructive interference is obtained. In equation form, the condition for constructive interference and an allowed electron orbit is
\[n \lambda_n = 2 \pi r_n (n = 1, \, 2, \, 3, ...),\] where \(\lambda_n\) is the electron’s wavelength and \(r_n\) is the radius of that circular orbit. The de Broglie wavelength is \(\lambda = h/p = h/mv\), and so here \(\lambda = h/m_e v\). Substituting this into the previous condition for constructive interference produces an interesting result:
\[\dfrac{nh}{m_ev} = 2\pi r_n.\] Rearranging terms, and noting that \(L = mvr\) for a circular orbit, we obtain the quantization of angular momentum as the condition for allowed orbits:
\[L = m_e vr_n = n\dfrac{h}{2\pi} (n = 1, \, 2, \, 3, ...).\] This is what Bohr was forced to hypothesize as the rule for allowed orbits, as stated earlier. We now realize that it is the condition for constructive interference of an electron in a circular orbit. Figure illustrates this for \(n = 3\) and \(n = 4\).
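As a numerical check on this standing-wave condition, the Python sketch below uses the Bohr-model results for hydrogen, \(r_n = n^2a_0\) and \(v_n = v_1/n\) with \(v_1 \approx 2.19 \times 10^6 \, m/s\); these inputs are assumed from the earlier treatment of the hydrogen atom rather than derived here. It confirms that exactly \(n\) de Broglie wavelengths fit around the \(n\)th orbit.

```python
# A numerical check (Bohr-model inputs assumed from the earlier hydrogen module,
# not derived here): n de Broglie wavelengths fit exactly around the n-th orbit,
# i.e. n * lambda_n = 2*pi*r_n.

import math

H = 6.626e-34     # Planck's constant (J·s)
M_E = 9.109e-31   # electron mass (kg)
A0 = 0.529e-10    # Bohr radius (m); r_n = n^2 * a_0
V1 = 2.19e6       # electron speed in the n = 1 Bohr orbit (m/s); v_n = V1 / n

for n in (1, 2, 3, 4):
    r_n = n**2 * A0
    v_n = V1 / n
    lam = H / (M_E * v_n)                  # de Broglie wavelength, lambda = h / (m v)
    print(n, round(n * lam / (2 * math.pi * r_n), 3))   # ratio is about 1.0 for every n
```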
Waves and Quantization
The wave nature of matter is responsible for the quantization of energy levels in bound systems. Only those states where matter interferes constructively exist, or are “allowed.” Since there is a lowest orbit where this is possible in an atom, the electron cannot spiral into the nucleus. It cannot exist closer to or inside the nucleus. The wave nature of matter is what prevents matter from collapsing and gives atoms their sizes.
Because of the wave character of matter, the idea of well-defined orbits gives way to a model in which there is a cloud of probability, consistent with Heisenberg’s uncertainty principle. Figure shows how this applies to the ground state of hydrogen. If you try to follow the electron in some well-defined orbit using a probe that has a small enough wavelength to get some details, you will instead knock the electron out of its orbit. Each measurement of the electron’s position will find it to be in a definite location somewhere near the nucleus. Repeated measurements reveal a cloud of probability like that in the figure, with each speck the location determined by a single measurement. There is not a well-defined, circular-orbit type of distribution. Nature again proves to be different on a small scale than on a macroscopic scale.
There are many examples in which the wave nature of matter causes quantization in bound systems such as the atom. Whenever a particle is confined or bound to a small space, its allowed wavelengths are those which fit into that space. For example, the particle in a box model describes a particle free to move in a small space surrounded by impenetrable barriers. This is true in blackbody radiators (atoms and molecules) as well as in atomic and molecular spectra. Various atoms and molecules will have different sets of electron orbits, depending on the size and complexity of the system. When a system is large, such as a grain of sand, the tiny particle waves in it can fit in so many ways that it becomes impossible to see that the allowed states are discrete. Thus the correspondence principle is satisfied. As systems become large, they gradually look less grainy, and quantization becomes less evident. Unbound systems (small or not), such as an electron freed from an atom, do not have quantized energies, since their wavelengths are not constrained to fit in a certain volume.
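To put a number on how quantization fades for large systems, consider the particle-in-a-box levels mentioned above. Taking the standard result \(E_n \propto n^2\) as an assumption (it is not derived in this section), the fractional spacing between adjacent levels is \((E_{n+1} - E_n)/E_n = (2n + 1)/n^2\), which the short sketch below evaluates.

```python
# Illustrating the correspondence principle with the particle-in-a-box levels
# mentioned above (standard result E_n proportional to n^2, assumed rather than
# derived here): the fractional spacing between adjacent levels is
# (E_{n+1} - E_n) / E_n = (2n + 1) / n^2.

def fractional_spacing(n):
    return (2 * n + 1) / n**2

for n in (1, 10, 1000, 10**6):
    print(n, fractional_spacing(n))
# Output: 3.0, 0.21, about 0.002, about 2e-6; the levels merge into a
# continuum for the huge quantum numbers of macroscopic systems.
```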
PHET EXPLORATIONS: QUANTUM WAVE INTERFERENCE
When do photons, electrons, and atoms behave like particles and when do they behave like waves? Watch waves spread out and interfere as they pass through a double slit, then get detected on a screen as tiny dots. Use quantum detectors to explore how measurements change the waves and the patterns they produce on the screen.
Summary
- Quantization of orbital energy is caused by the wave nature of matter. Allowed orbits in atoms occur for constructive interference of electrons in the orbit, requiring an integral number of wavelengths to fit in an orbit’s circumference; that is, \[n\lambda_n = 2\pi r_n (n = 1, \, 2, \, 3, ...),\] where \(\lambda_n\) is the electron’s de Broglie wavelength.
- Owing to the wave nature of electrons and the Heisenberg uncertainty principle, there are no well-defined orbits; rather, there are clouds of probability.
- Bohr correctly proposed that the energy and radii of the orbits of electrons in atoms are quantized, with energy for transitions between orbits given by \[\Delta E = hf = E_i - E_f,\] where \(\Delta E\) is the change in energy between the initial and final orbits and \(hf\) is the energy of an absorbed or emitted photon.
- It is useful to plot orbit energies on a vertical graph called an energy-level diagram.
- The allowed orbits are circular, Bohr proposed, and must have quantized orbital angular momentum given by \[L = m_evr_n = n\dfrac{h}{2\pi} (n = 1, \, 2, \, 3, . . .),\] where \(L\) is the angular momentum, \(r_n\) is the radius of orbit \(n\), and \(h\) is Planck’s constant.
30.7: Patterns in Spectra Reveal More Quantization
Learning Objectives
By the end of this section, you will be able to:
- State and discuss the Zeeman effect.
- Define orbital magnetic field.
- Define orbital angular momentum.
- Define space quantization.
High-resolution measurements of atomic and molecular spectra show that the spectral lines are even more complex than they first appear. In this section, we will see that this complexity has yielded important new information about electrons and their orbits in atoms.
In order to explore the substructure of atoms (and knowing that magnetic fields affect moving charges), the Dutch physicist Hendrik Lorentz (1853–1930) suggested that his student Pieter Zeeman (1865–1943) study how spectra might be affected by magnetic fields. What they found became known as the Zeeman effect , which involved spectral lines being split into two or more separate emission lines by an external magnetic field, as shown in Figure \(\PageIndex{1}\). For their discoveries, Zeeman and Lorentz shared the 1902 Nobel Prize in Physics.
Zeeman splitting is complex. Some lines split into three lines, some into five, and so on. But one general feature is that the amount the split lines are separated is proportional to the applied field strength, indicating an interaction with a moving charge. The splitting means that the quantized energy of an orbit is affected by an external magnetic field, causing the orbit to have several discrete energies instead of one. Even without an external magnetic field, very precise measurements showed that spectral lines are doublets (split into two), apparently by magnetic fields within the atom itself.
Bohr’s theory of circular orbits is useful for visualizing how an electron’s orbit is affected by a magnetic field. The circular orbit forms a current loop, which creates a magnetic field of its own, \(B_{orb}\) as seen in Figure \(\PageIndex{2}\). Note that the orbital magnetic field \(B_{orb}\) and the orbital angular momentum \(L_{orb}\) are along the same line. The external magnetic field and the orbital magnetic field interact; a torque is exerted to align them. A torque rotating a system through some angle does work so that there is energy associated with this interaction. Thus, orbits at different angles to the external magnetic field have different energies. What is remarkable is that the energies are quantized—the magnetic field splits the spectral lines into several discrete lines that have different energies. This means that only certain angles are allowed between the orbital angular momentum and the external field, as seen in Figure \(\PageIndex{3}\).
We already know that the magnitude of angular momentum is quantized for electron orbits in atoms. The new insight is that the direction of the orbital angular momentum is also quantized . The fact that the orbital angular momentum can have only certain directions is called space quantization . Like many aspects of quantum mechanics, this quantization of direction is totally unexpected. On the macroscopic scale, orbital angular momentum, such as that of the moon around the earth, can have any magnitude and be in any direction.
Detailed treatment of space quantization began to explain some complexities of atomic spectra, but certain patterns seemed to be caused by something else. As mentioned, spectral lines are actually closely spaced doublets, a characteristic called fine structure , as shown in Figure \(\PageIndex{4}\). The doublet changes when a magnetic field is applied, implying that whatever causes the doublet interacts with a magnetic field. In 1925, Sem Goudsmit and George Uhlenbeck, two Dutch physicists, successfully argued that electrons have properties analogous to a macroscopic charge spinning on its axis. Electrons, in fact, have an internal or intrinsic angular momentum called intrinsic spin \(S\). Since electrons are charged, their intrinsic spin creates an intrinsic magnetic field \(B_{int}\), which interacts with their orbital magnetic field \(B_{orb}\). Furthermore, electron intrinsic spin is quantized in magnitude and direction , analogous to the situation for orbital angular momentum. The spin of the electron can have only one magnitude, and its direction can be at only one of two angles relative to a magnetic field, as seen in Figure \(\PageIndex{5}\). We refer to this as spin up or spin down for the electron. Each spin direction has a different energy; hence, spectroscopic lines are split into two. Spectral doublets are now understood as being due to electron spin.
These two new insights—that the direction of angular momentum, whether orbital or spin, is quantized, and that electrons have intrinsic spin—help to explain many of the complexities of atomic and molecular spectra. In magnetic resonance imaging, it is the way that the intrinsic magnetic fields of hydrogen and biological atoms interact with an external field that underlies the diagnostic fundamentals.
Summary
- The Zeeman effect—the splitting of lines when a magnetic field is applied—is caused by other quantized entities in atoms.
- Both the magnitude and direction of orbital angular momentum are quantized.
- The same is true for the magnitude and direction of the intrinsic spin of electrons.
Glossary
- Zeeman effect
- the effect of external magnetic fields on spectral lines
- intrinsic spin
- the internal or intrinsic angular momentum of electrons
- orbital angular momentum
- an angular momentum that corresponds to the quantum analog of classical angular momentum
- fine structure
- the splitting of spectral lines of the hydrogen spectrum when the spectral lines are examined at very high resolution
- space quantization
- the fact that the orbital angular momentum can have only certain directions
- intrinsic magnetic field
- the magnetic field generated due to the intrinsic spin of electrons
- orbital magnetic field
- the magnetic field generated due to the orbital motion of electrons
30.8: Quantum Numbers and Rules
Learning Objectives
By the end of this section, you will be able to:
- Define quantum number.
- Calculate angle of angular momentum vector with an axis.
- Define spin quantum number.
Physical characteristics that are quantized -- such as energy, charge, and angular momentum -- are of such importance that names and symbols are given to them. The values of quantized entities are expressed in terms of quantum numbers , and the rules governing them are of the utmost importance in determining what nature is and does. This section covers some of the more important quantum numbers and rules—all of which apply in chemistry, material science, and far beyond the realm of atomic physics, where they were first discovered. Once again, we see how physics makes discoveries which enable other fields to grow.
The energy states of bound systems are quantized, because the particle wavelength can fit into the bounds of the system in only certain ways. This was elaborated for the hydrogen atom, for which the allowed energies are expressed as
\[E_{n} \propto \frac{1}{n^{2}},\label{30.9.1}\]
where \(n = 1,2,3, \cdot \cdot \cdot\). We define \(n\) to be the principal quantum number that labels the basic states of a system. The lowest-energy state has \(n = 1\), the first excited state has \(n=2\), and so on. Thus the allowed values for the principal quantum number are
\[n = 1, 2, 3, ...\label{30.9.2}\]
This is more than just a numbering scheme, since the energy of the system, such as the hydrogen atom, can be expressed as some function of \(n\), as can other characteristics (such as the orbital radii of the hydrogen atom).
The fact that the magnitude of angular momentum is quantized was first recognized by Bohr in relation to the hydrogen atom; it is now known to be true in general. With the development of quantum mechanics, it was found that the magnitude of angular momentum \(L\) can only have the values
\[L = \sqrt{l \left( l+1 \right) } \frac{h}{2\pi} \left(l = 0, 1, 2, ..., n-1\right), \label{30.9.3}\]
where \(l\) is defined to be the angular momentum quantum number . The rule for \(l\) in atoms is given in the parentheses. Given \(n\), the value \(l\) can be any integer from zero up to \(n-1\). For example, if \(n = 4\), then \(l\) can be 0, 1, 2, or 3.
Note that for \(n = 1\), \(l\) can only be zero. This means that the ground-state angular momentum for hydrogen is actually zero, not \(h/2\pi\) as Bohr proposed. The picture of circular orbits is not valid, because there would be angular momentum for any circular orbit. A more valid picture is the cloud of probability shown for the ground state of hydrogen earlier in this chapter. The electron actually spends time in and near the nucleus. The reason the electron does not remain in the nucleus is related to Heisenberg’s uncertainty principle -- the electron’s energy would have to be much too large to be confined to the small space of the nucleus. Now the first excited state of hydrogen has \(n=2\), so that \(l\) can be either 0 or 1, according to the rule in Equation \(\ref{30.9.3}\). Similarly, for \(n =3\), \(l\) can be 0, 1, or 2. It is often most convenient to state the value of \(l\), a simple integer, rather than calculating the value of \(L\) from Equation \(\ref{30.9.3}\). For example, for \(l = 2\), we see that
\[L = \sqrt{l \left( l+1 \right) } \frac{h}{2\pi} = \sqrt{6} \frac{h}{2 \pi} = 0.390 h = 2.58 \times 10^{-34} J \cdot s.\]
It is much simpler to state \(l = 2\). As recognized in the Zeeman effect, the direction of angular momentum is quantized. We now know this is true in all circumstances. It is found that the component of angular momentum along one direction in space, usually called the \(z\)-axis, can have only certain values of \(L_{z}\). The direction in space must be related to something physical, such as the direction of the magnetic field at that location. This is an aspect of relativity. Direction has no meaning if there is nothing that varies with direction, as does magnetic force. The allowed values of \(L_{z}\) are \[L_{z} = m_{l} \frac{h}{2\pi} \left( m_{l} = -l, -l + 1, ..., -1, 0, 1, ... l-1, l\right),\label{30.9.4}\] where \(L_{z}\) is the z-component of the angular momentum and \(m_{l}\) is the angular momentum projection quantum number. The rule in parentheses for the values of \(m_{l}\) is that it can range from \(-l\) to \(l\) in steps of one. For example, if \(l = 2\), then \(m_{l}\) can have the five values –2, –1, 0, 1, and 2. Each \(m_{l}\) corresponds to a different energy in the presence of a magnetic field, so that they are related to the splitting of spectral lines into discrete parts, as discussed in the preceding section. If the \(z\)-component of angular momentum can have only certain values, then the angular momentum can have only certain directions, as illustrated in Figure 30.9.1.
Example \(\PageIndex{1}\): What are the Allowed Directions?
Calculate the angles that the angular momentum vector \(L\) can make with the z-axis for \(l = 1\), as illustrated in Figure 30.9.1.
Strategy:
Figure 30.9.1 represents the vectors \(L\) and \(L_{z}\) as usual, with arrows proportional to their magnitudes and pointing in the correct directions. \(L\) and \(L_{z}\) form a right triangle, with \(L\) being the hypotenuse and \(L_{z}\) the adjacent side. This means that the ratio of \(L_{z}\) to \(L\) is the cosine of the angle of interest. We can find \(L\) and \(L_{z}\) using \(L = \sqrt{l\left(l + 1\right)}\frac{h}{2\pi}\) and \(L_{z} = m_{l} \frac{h}{2\pi}\).
Solution
We are given \(l = 1\), so that \(m_{l}\) can be +1, 0, or -1. Thus \(L\) has the value given by \(L = \sqrt{l\left(l + 1\right)}\frac{h}{2\pi}\).
\[L = \frac{\sqrt{l\left(l + 1\right)}h}{2\pi} = \frac{\sqrt{2}h}{2\pi} \label{30.9.5}\]
\(L_{z}\) can have three values, given by \(L_{z} = m_{l} \frac{h}{2\pi}\). \[L_{z} = m_{l} \frac{h}{2\pi} = \begin{cases} \frac{h}{2\pi}, ~ m_{l} = +1 \\[2ex] 0, ~ m_{l} = 0 \\[2ex] -\frac{h}{2\pi}, ~ m_{l} = -1 \end{cases} \label{30.9.6}\]
As can be seen in Figure , \(\cos{\theta} = L_{z}/L\), and so for \(m_{l} = +1\), we have \[\cos{\theta_{1}} = \frac{L_{z}}{L} = \frac{\frac{h}{2\pi}}{\frac{\sqrt{2}h}{2\pi}} = \frac{1}{\sqrt{2}} = 0.707.\label{30.9.7}\]
Thus,
\[\theta_{1} = \cos^{-1}(0.707) = 45.0^{\circ}.\label{30.9.8}\]
Similarly, for \(m_{l} = 0\), we find \(\cos{\theta_{2}} = 0\); thus,
\[\theta_{2} = \cos^{-1}(0) = 90.0^{\circ}. \label{30.9.9}\]
And for \(m_{l} = -1\),
\[\cos{\theta_{3}} = \frac{L_{z}}{L} = \frac{-\frac{h}{2\pi}}{\frac{\sqrt{2}h}{2\pi}} = -\frac{1}{\sqrt{2}} = -0.707, \label{30.9.10}\]
so that
\[\theta_{3} = \cos^{-1}(-0.707) = 135.0^{\circ}.\label{30.9.11}\]
Discussion:
The angles are consistent with the figure. Only the angle relative to the z-axis is quantized. \(L\) can point in any direction as long as it makes the proper angle with the z-axis. Thus the angular momentum vectors lie on cones as illustrated. This behavior is not observed on the large scale. To see how the correspondence principle holds here, consider that the smallest angle (\(\theta_{1}\) in the example) is for the maximum value of \(m_{l}\), namely \(m_{l} = l\). For that smallest angle,
\[\cos{\theta} = \frac{L_{z}}{L} = \frac{l}{\sqrt{l\left(l + 1 \right) }},\label{30.9.12}\]
which approaches 1 as \(l\) becomes very large. If \(\cos{\theta} = 1\), then \(\theta = 0^{\circ}\). Furthermore, for large \(l\), there are many values of \(m_{l}\), so that all angles become possible as \(l\) gets very large.
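The limiting behavior described above can be made concrete with a few lines of Python. This is an illustrative sketch only (not part of the original text); it simply evaluates \(\cos{\theta} = l/\sqrt{l\left(l+1\right)}\) for increasing \(l\) to show the smallest allowed angle approaching zero:

```python
import math

def smallest_angle_deg(l):
    """Angle between L and the z-axis for the maximum projection m_l = l."""
    return math.degrees(math.acos(l / math.sqrt(l * (l + 1))))

for l in (1, 2, 10, 100, 10000):
    print(f"l = {l:>6}: smallest angle = {smallest_angle_deg(l):6.3f} degrees")
```

For \(l = 1\) this reproduces the 45.0° found in the example, and the angle shrinks toward 0° as \(l\) grows, consistent with the correspondence principle.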
Intrinsic Spin Angular Momentum Is Quantized in Magnitude and Direction
There are two more quantum numbers of immediate concern. Both were first discovered for electrons in conjunction with fine structure in atomic spectra. It is now well established that electrons and other fundamental particles have intrinsic spin, roughly analogous to a planet spinning on its axis. This spin is a fundamental characteristic of particles, and only one magnitude of intrinsic spin is allowed for a given type of particle. Intrinsic angular momentum is quantized independently of orbital angular momentum. Additionally, the direction of the spin is also quantized. It has been found that the magnitude of the intrinsic (internal) spin angular momentum, \(S\), of an electron is given by
\[S = \sqrt{s\left(s+1\right)}\frac{h}{2\pi} \left( s = 1/2 ~ for ~ electrons\right), \label{30.9.13}\]
where \(s\) is defined to be the spin quantum number . This is very similar to the quantization of \(L\) given in \(L = \sqrt{l\left(l+1\right)}\frac{h}{2\pi}\), except that the only value allowed for \(s\) for electrons is 1/2.
The direction of intrinsic spin is quantized, just as is the direction of orbital angular momentum. The direction of spin angular momentum along one direction in space, again called the z-axis, can have only the values
\[S_{z} = m_{s} \frac{h}{2\pi} \left( m_{s} = -\frac{1}{2}, +\frac{1}{2} \right) \label{30.9.14}\]
for electrons. \(S_{z}\) is the z-component of spin angular momentum and \(m_{s}\) is the spin projection quantum number . For electrons, \(s\) can only be 1/2, and \(m_{s}\) can be either +1/2 or –1/2. Spin projection \(m_{s} = + 1/2\) is referred to as spin up , whereas \(m_{s} = -1/2\) is called spin down .
INTRINSIC SPIN
In later chapters, we will see that intrinsic spin is a characteristic of all subatomic particles. For some particles \(s\) is half-integral, whereas for others \(s\) is integral -- there are crucial differences between half-integral spin particles and integral spin particles. Protons and neutrons, like electrons, have \(s = 1/2\), whereas photons have \(s = 1\), and other particles called pions have \(s = 0\), and so on.
To summarize, the state of a system, such as the precise nature of an electron in an atom, is determined by its particular quantum numbers. These are expressed in the form \(n, l, m_{l}, m_{s}\) -- see the table below. For electrons in atoms, the principal quantum number can have the values \(n = 1, 2, 3, ...\). Once \(n\) is known, the values of the angular momentum quantum number are limited to \(l = 0, 1, 2, ..., n-1\). For a given value of \(l\), the angular momentum projection quantum number can have only the values \(m_{l} = -l, -l + 1, ..., -1, 0, 1, ..., l-1, l\). Electron spin is independent of \(n\), \(l\), and \(m_{l}\), always having \(s = 1/2\). The spin projection quantum number can have two values, \(m_{s} = +1/2\) or \(m_{s} = -1/2\). (A short enumeration sketch of these rules follows the table.)
| Name | Symbol | Allowed Values |
|---|---|---|
| Principal quantum number | \(n\) | \(1,2,3,...\) |
| Angular momentum | \(l\) | \(0, 1, 2, ... n-1\) |
| Angular momentum projection | \(m_{l}\) | \(-l, -l+1, ..., -1, 0, 1, ..., l-1, l \left(or ~ 0, \pm 1, \pm 2, ..., \pm l \right)\) |
| Spin | \(s\) | \(1/2 \left(electrons\right)\) |
| Spin projection | \(m_{s}\) | \(\pm 1/2\) |
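The allowed-value rules in the table can be turned directly into an enumeration of electron states. The sketch below is ours (illustrative only; the function name and output format are assumptions), and it also previews the counting result \(2n^2\) used in the next section:

```python
from fractions import Fraction

def electron_states(n):
    """List all allowed (n, l, m_l, m_s) combinations for a given principal quantum number n."""
    states = []
    for l in range(n):                      # l = 0, 1, ..., n-1
        for m_l in range(-l, l + 1):        # m_l = -l, ..., +l
            for m_s in (Fraction(-1, 2), Fraction(1, 2)):
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2, 3):
    print(f"n = {n}: {len(electron_states(n))} states (expected 2n^2 = {2 * n**2})")
```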
Figure 30.9.2. shows several hydrogen states corresponding to different sets of quantum numbers. Note that these clouds of probability are the locations of electrons as determined by making repeated measurements -- each measurement finds the electron in a definite location, with a greater chance of finding the electron in some places rather than others. With repeated measurements, the pattern of probability shown in the figure emerges. The clouds of probability do not look like nor do they correspond to classical orbits. The uncertainty principle actually prevents us and nature from knowing how the electron gets from one place to another, and so an orbit really does not exist as such. Nature on a small scale is again much different from that on the large scale.
We will see that the quantum numbers discussed in this section are valid for a broad range of particles and other systems, such as nuclei. Some quantum numbers, such as intrinsic spin, are related to fundamental classifications of subatomic particles, and they obey laws that will give us further insight into the substructure of matter and its interactions.
PHET EXPLORATIONS: STERN-GERLACH EXPERIMENT
The classic Stern-Gerlach Experiment shows that atoms have a property called spin. Spin is a kind of intrinsic angular momentum, which has no classical counterpart. When the z-component of the spin is measured, one always gets one of two values: spin up or spin down.
Summary
- Quantum numbers are used to express the allowed values of quantized entities. The principal quantum number \(n\) labels the basic states of a system and is given by \(n = 1, 2, 3, ... \)
- The magnitude of angular momentum is given by \(L = \sqrt{l \left( l+1 \right) } \frac{h}{2\pi} \left(l = 0, 1, 2, ..., n-1\right),\) where \(l\) is the angular momentum quantum number. The direction of angular momentum is quantized, in that its component along an axis defined by a magnetic field, called the z-axis, is given by \(L_{z} = m_{l} \frac{h}{2\pi} ~ \left(m_{l} = -l, -l+1, ..., -1, 0, 1, ... l-1, l\right),\) where \(L_{z}\) is the z-component of the angular momentum and \(m_{l}\) is the angular momentum projection quantum number. Similarly, the electron’s intrinsic spin angular momentum \(S\) is given by \(S = \sqrt{s\left(s+1\right)}\frac{h}{2\pi} ~ \left(s = 1/2 ~ for ~ electrons \right),\) where \(s\) is the spin quantum number, and its z-component is given by \(S_{z} = m_{s} \frac{h}{2\pi} ~ \left(m_{s} = -1/2, +1/2\right),\) where \(S_{z}\) is the z-component of spin angular momentum and \(m_{s}\) is the spin projection quantum number. Spin projection \(m_{s} = +1/2\) is referred to as spin up, whereas \(m_{s} = -1/2\) is called spin down. The table summarizes the atomic quantum numbers and their allowed values.
30.9: The Pauli Exclusion Principle
Learning Objectives
By the end of this section, you will be able to:
- Define the composition of an atom along with its electrons, neutrons, and protons.
- Explain the Pauli exclusion principle and its application to the atom.
- Specify the shell and subshell symbols and their positions.
- Define the position of electrons in different shells of an atom.
- State the position of each element in the periodic table according to shell filling.
Multiple-Electron Atoms
All atoms except hydrogen are multiple-electron atoms. The physical and chemical properties of elements are directly related to the number of electrons a neutral atom has. The periodic table of the elements groups elements with similar properties into columns. This systematic organization is related to the number of electrons in a neutral atom, called the atomic number , \(Z\). We shall see in this section that the exclusion principle is key to the underlying explanations, and that it applies far beyond the realm of atomic physics.
In 1925, the Austrian physicist Wolfgang Pauli (see Figure ) proposed the following rule: No two electrons can have the same set of quantum numbers. That is, no two electrons can be in the same state. This statement is known as the Pauli exclusion principle , because it excludes electrons from being in the same state. The Pauli exclusion principle is extremely powerful and very broadly applicable. It applies to any identical particles with half-integral intrinsic spin—that is, having \(s = 1/2, \, 3/2, . . .\) Thus no two electrons can have the same set of quantum numbers.
Pauli Exclusion Principle
No two electrons can have the same set of quantum numbers. That is, no two electrons can be in the same state.
Let us examine how the exclusion principle applies to electrons in atoms. The quantum numbers involved were defined in Quantum Numbers and Rules as \(n, \, l, \, m_l, \, s\) and \(m_s\). Since \(s\) is always \(1/2\) for electrons, it is redundant to list \(s\), and so we omit it and specify the state of an electron by a set of four numbers \((n, \, l, \, m_l, \, m_s)\). For example, the quantum numbers \((2, \, 1, \, 0, - 1/2)\) completely specify the state of an electron in an atom.
Since no two electrons can have the same set of quantum numbers, there are limits to how many of them can be in the same energy state. Note that \(n\) determines the energy state in the absence of a magnetic field. So we first choose \(n\), and then we see how many electrons can be in this energy state or energy level. Consider the \(n = 1\) level, for example. The only value \(l\) can have is 0 (see the table of allowed quantum number values in the preceding section), and thus \(m_l\) can only be 0. The spin projection \(m_s\) can be either \(+1/2\) or \(-1/2\), and so there can be two electrons in the \(n = 1\) state. One has quantum numbers \((1,\space 0, \, 0, \, +1/2)\), and the other has \((1, \, 0, \, 0, \, -1/2)\). Figure illustrates that there can be one or two electrons having \(n = 1\), but not three.
Shells and Subshells
Because of the Pauli exclusion principle, only hydrogen and helium can have all of their electrons in the \(n = 1\) state. Lithium (see the periodic table) has three electrons, and so one must be in the \(n = 2\) level. This leads to the concept of shells and shell filling. As we progress up in the number of electrons, we go from hydrogen to helium, lithium, beryllium, boron, and so on, and we see that there are limits to the number of electrons for each value of \(n\). Higher values of the shell \(n\) correspond to higher energies, and they can allow more electrons because of the various combinations of \(l, \, m_l\), and \(m_s\) that are possible. Each value of the principal quantum number \(n\) thus corresponds to an atomic shell into which a limited number of electrons can go. Shells and the number of electrons in them determine the physical and chemical properties of atoms, since it is the outermost electrons that interact most with anything outside the atom.
The probability clouds of electrons with the lowest value of \(l\) are closest to the nucleus and, thus, more tightly bound. Thus when shells fill, they start with \(l = 0\), progress to \(l = 1\), and so on. Each value of \(l\) thus corresponds to a subshell .
The table given below lists symbols traditionally used to denote shells and subshells.
| Shell | Subshell | Symbol |
|---|---|---|
| \(n\) | \(l\) | |
| 1 | 0 | \(s\) |
| 2 | 1 | \(p\) |
| 3 | 2 | \(d\) |
| 4 | 3 | \(f\) |
| 5 | 4 | \(g\) |
| | 5 | \(h\) |
| | \(6^1\) | \(i\) |
To denote shells and subshells, we write \(nl\) with a number for \(n\) and a letter for \(l\). For example, an electron in the \(n = 1\) state must have \(l = 0\), and it is denoted as a \(1s\) electron. Two electrons in the \(n = 1\) state is denoted as \(1s^2\). Another example is an electron in the \(n = 2\) state with \(l = 1\), written as \(2p\). The case of three electrons with these quantum numbers is written \(2p^3\). This notation, called spectroscopic notation, is generalized as shown in Figure .
Counting the number of possible combinations of quantum numbers allowed by the exclusion principle, we can determine how many electrons it takes to fill each subshell and shell.
Example \(\PageIndex{1}\): How Many Electrons Can Be in This Shell?
List all the possible sets of quantum numbers for the \(n = 2\) shell, and determine the number of electrons that can be in the shell and each of its subshells.
Strategy
Given \(n = 2\) for the shell, the rules for quantum numbers limit \(l\) to be 0 or 1. The shell therefore has two subshells, labeled \(2s\) and \(2p\). Since the lowest \(l\) subshell fills first, we start with the \(2s\) subshell possibilities and then proceed with the \(2p\) subshell.
Solution
It is convenient to list the possible quantum numbers in a table, as shown below.

| \(n\) | \(l\) | \(m_l\) | \(m_s\) | Subshell |
|---|---|---|---|---|
| 2 | 0 | 0 | \(+1/2\) | \(2s\) |
| 2 | 0 | 0 | \(-1/2\) | \(2s\) |
| 2 | 1 | \(-1\) | \(+1/2\) | \(2p\) |
| 2 | 1 | \(-1\) | \(-1/2\) | \(2p\) |
| 2 | 1 | 0 | \(+1/2\) | \(2p\) |
| 2 | 1 | 0 | \(-1/2\) | \(2p\) |
| 2 | 1 | 1 | \(+1/2\) | \(2p\) |
| 2 | 1 | 1 | \(-1/2\) | \(2p\) |

Counting the rows, the \(2s\) subshell can hold 2 electrons, the \(2p\) subshell can hold 6, and the \(n = 2\) shell as a whole can hold 8.
Discussion
It is laborious to make a table like this every time we want to know how many electrons can be in a shell or subshell. There exist general rules that are easy to apply, as we shall now see.
The number of electrons that can be in a subshell depends entirely on the value of \(l\). Once \(l\) is known, there are a fixed number of values of \(m_l\), each of which can have two values for \(m_s\). First, since \(m_l\) goes from \(-l\) to \(l\) in steps of 1, there are \(2l + 1\) possibilities. This number is multiplied by 2, since each electron can be spin up or spin down. Thus the maximum number of electrons that can be in a subshell is \(2(2l + 1)\).
For example, the \(2s\) subshell in Example has a maximum of 2 electrons in it, since \(2(2l + 1) = 2(0 + 1) = 2\) for this subshell. Similarly, the \(2p\) subshell has a maximum of 6 electrons, since \(2(2l + 1) = 2(2 + 1) = 6\). For a shell, the maximum number is the sum of what can fit in the subshells. Some algebra shows that the maximum number of electrons that can be in a shell is \(2n^2\).
For example, for the first shell \(n = 1\), and so \(2n^2 = 2\). We have already seen that only two electrons can be in the \(n = 1\) shell. Similarly, for the second shell, \(n = 2\), and so \(2n^2 = 8\). As found in Example , the total number of electrons in the \(n = 2\) shell is 8.
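As a quick check of the counting rules \(2(2l + 1)\) and \(2n^2\), the short Python sketch below (ours, not from the text; subshell letters follow the standard \(s, p, d, f, \dots\) convention) tabulates subshell and shell capacities:

```python
def subshell_capacity(l):
    """Maximum electrons in a subshell: two spin states for each of the 2l+1 values of m_l."""
    return 2 * (2 * l + 1)

def shell_capacity(n):
    """Maximum electrons in a shell: sum of the capacities of its subshells (l = 0 ... n-1)."""
    return sum(subshell_capacity(l) for l in range(n))

labels = "spdfghi"
for n in (1, 2, 3, 4):
    parts = ", ".join(f"{n}{labels[l]}: {subshell_capacity(l)}" for l in range(n))
    print(f"n = {n}: {parts}  -> shell total {shell_capacity(n)} (2n^2 = {2 * n**2})")
```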
Example \(\PageIndex{2}\): Subshells and Totals for \(n = 3\)
How many subshells are in the \(n = 3\) shell? Identify each subshell, calculate the maximum number of electrons that will fit into each, and verify that the total is \(2n^2\).
Strategy
Subshells are determined by the value of \(l\); thus, we first determine which values of \(l\) are allowed, and then we apply the equation “maximum number of electrons that can be in a subshell \(= 2(2l + 1)\)" to find the number of electrons in each subshell.
Solution
Since \(n = 3\), we know that \(l\) can be \(0,\space 1\) or \(2\), thus, there are three possible subshells. In standard notation, they are labeled the \(3s, \, 3p,\) and \(3d\) subshells. We have already seen that 2 electrons can be in an \(s\) state, and 6 in a \(p\) state, but let us use the equation “maximum number of electrons that can be in a subshell \(= 2(2l + 1)\)" to calculate the maximum number in each:
\[3s \, has \, l = 0; \, thus, \, 2(2l + 1) = 2(0 + 1) = 2\]
\[3p \, has \, l = 1; \, thus, \, 2(2l + 1) = 2(2 + 1) = 6\]
\[3d \, has \, l = 2; \, thus, \, 2(2l + 1) = 2(4 + 1) = 10\]
\[Total = 18\]
\[(in \, the \, n = 3 \, shell)\]
The equation “maximum number of electrons that can be in a shell \(= 2n^2\)" gives the maximum number in the \(n = 3\) shell to be
\[Maximum \, number \, of \, electrons = 2n^2 = 2(3)^2 = 2(9) = 18.\]
Discussion
The total number of electrons in the three possible subshells is thus the same as the formula \(2n^2\). In standard (spectroscopic) notation, a filled \(n = 3\) shell is denoted as \(3s^2 \, 3p^6 \, 3d^{10}\). Shells do not fill in a simple manner. Before the \(n = 3\) shell is completely filled, for example, we begin to find electrons in the \(n = 4\) shell.
Shell Filling and the Periodic Table
Table shows electron configurations for the first 20 elements in the periodic table, starting with hydrogen and its single electron and ending with calcium. The Pauli exclusion principle determines the maximum number of electrons allowed in each shell and subshell. But the order in which the shells and subshells are filled is complicated because of the large numbers of interactions between electrons.
| Element | Number of electrons (Z) | Ground state configuration |
|---|---|---|
| H | 1 | \(1s^1\) |
| He | 2 | \(1s^2\) |
| Li | 3 | \(1s^2 \, 2s^1\) |
| Be | 4 | \(1s^2 \, 2s^2\) |
| B | 5 | \(1s^2 \, 2s^2 \, 2p^1\) |
| C | 6 | \(1s^2 \, 2s^2 \, 2p^2\) |
| N | 7 | \(1s^2 \, 2s^2 \, 2p^3\) |
| O | 8 | \(1s^2 \, 2s^2 \, 2p^4\) |
| F | 9 | \(1s^2 \, 2s^2 \, 2p^5\) |
| Ne | 10 | \(1s^2 \, 2s^2 \, 2p^6\) |
| Na | 11 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^1\) |
| Mg | 12 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2\) |
| Al | 13 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^1\) |
| Si | 14 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^2\) |
| P | 15 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^3\) |
| S | 16 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^4\) |
| Cl | 17 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^5\) |
| Ar | 18 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^6\) |
| K | 19 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^6 \, 4s^1\) |
| Ca | 20 | \(1s^2 \, 2s^2 \, 2p^6 \, 3s^2 \, 3p^6 \, 4s^2\) |
Examining the above table, you can see that as the number of electrons in an atom increases from 1 in hydrogen to 2 in helium and so on, the lowest-energy shell gets filled first—that is, the \(n = 1\) shell fills first, and then the \(n = 2\) shell begins to fill. Within a shell, the subshells fill starting with the lowest \(l\), or with the \(s\) subshell, then the \(p\), and so on, usually until all subshells are filled. The first exception to this occurs for potassium, where the \(4s\) subshell begins to fill before any electrons go into the \(3d\) subshell. The next exception is not shown in Table ; it occurs for rubidium, where the \(5s\) subshell starts to fill before the \(4d\) subshell. The reason for these exceptions is that \(l = 0\) electrons have probability clouds that penetrate closer to the nucleus and, thus, are more tightly bound (lower in energy).
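The filling order described above, including the \(4s\)-before-\(3d\) exception, is often approximated by the so-called Madelung (or \(n + l\)) rule: fill subshells in order of increasing \(n + l\), breaking ties by smaller \(n\). The sketch below illustrates that standard approximation rather than the text's own procedure; it reproduces the table above for the first 20 elements but is known to fail for some heavier atoms (for example chromium and copper):

```python
def ground_state_configuration(Z, max_n=7):
    """Fill subshells in order of increasing n + l, then increasing n (the Madelung rule).

    Illustrative approximation only; it agrees with the configurations tabulated above
    for Z = 1..20 but has known exceptions among heavier elements.
    """
    labels = "spdfghik"
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))   # Madelung ordering
    config, remaining = [], Z
    for n, l in subshells:
        if remaining == 0:
            break
        fill = min(remaining, 2 * (2 * l + 1))              # Pauli limit per subshell
        config.append(f"{n}{labels[l]}^{fill}")
        remaining -= fill
    return " ".join(config)

for Z, name in [(11, "Na"), (18, "Ar"), (19, "K"), (20, "Ca")]:
    print(f"{name} (Z = {Z}): {ground_state_configuration(Z)}")
```

For potassium (Z = 19) this ordering places the nineteenth electron in \(4s\) rather than \(3d\), matching the exception discussed above.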
Figure shows the periodic table of the elements, through element 118. Of special interest are elements in the main groups, namely, those in the columns numbered 1, 2, 13, 14, 15, 16, 17, and 18.
The number of electrons in the outermost subshell determines the atom’s chemical properties, since it is these electrons that are farthest from the nucleus and thus interact most with other atoms. If the outermost subshell can accept or give up an electron easily, then the atom will be highly reactive chemically. Each group in the periodic table is characterized by its outermost electron configuration. Perhaps the most familiar is Group 18 (Group VIII), the noble gases (helium, neon, argon, etc.). These gases are all characterized by a filled outer subshell that is particularly stable. This means that they have large ionization energies and do not readily give up an electron. Furthermore, if they were to accept an extra electron, it would be in a significantly higher level and thus loosely bound. Chemical reactions often involve sharing electrons. Noble gases can be forced into unstable chemical compounds only under high pressure and temperature.
Group 17 (Group VII) contains the halogens, such as fluorine, chlorine, iodine, and bromine, each of which has one less electron than a neighboring noble gas. Each halogen has five \(p\) electrons (a \(p^5\) configuration), while the \(p\) subshell can hold 6 electrons. This means the halogens have one vacancy in their outermost subshell. They thus readily accept an extra electron (it becomes tightly bound, closing the shell as in noble gases) and are highly reactive chemically. The halogens are also likely to form singly negative ions, such as \(Cl^-\), fitting an extra electron into the vacancy in the outer subshell. In contrast, alkali metals, such as sodium and potassium, all have a single \(s\) electron in their outermost subshell (an \(s^1\) configuration) and are members of Group 1 (Group I). These elements easily give up their extra electron and are thus highly reactive chemically. As you might expect, they also tend to form singly positive ions, such as \(Na^+\), by losing their loosely bound outermost electron. They are metals (conductors), because the loosely bound outer electron can move freely.
Of course, other groups are also of interest. Carbon, silicon, and germanium, for example, have similar chemistries and are in Group 14 (Group IV). Carbon, in particular, is extraordinary in its ability to form many types of bonds and to be part of long chains, such as organic molecules. The large group of what are called transition elements is characterized by the filling of the \(d\) subshells and crossing of energy levels. Heavier groups, such as the lanthanide series, are more complex—their shells do not fill in simple order. But the groups recognized by chemists such as Mendeleev have an explanation in the substructure of atoms.
PHET EXPLORATIONS: BUILD AN ATOM
Build an atom out of protons, neutrons, and electrons, and see how the element, charge, and mass change. Then play a game to test your ideas!
Summary
- The state of a system is completely described by a complete set of quantum numbers. This set is written as \((n, \, l, \, m_l, \, m_s)\).
- The Pauli exclusion principle says that no two electrons can have the same set of quantum numbers; that is, no two electrons can be in the same state.
- This exclusion limits the number of electrons in atomic shells and subshells. Each value of \(n\) corresponds to a shell, and each value of \(l\) corresponds to a subshell.
- The maximum number of electrons that can be in a subshell is \(2(2l + 1)\).
- The maximum number of electrons that can be in a shell is \(2n^2\).
Footnotes
- 1 It is unusual to deal with subshells having \(l\) greater than 6, but when encountered, they continue to be labeled in alphabetical order.
Glossary
- atomic number
- the number of protons in the nucleus of an atom
- Pauli exclusion principle
- a principle that states that no two electrons can have the same set of quantum numbers; that is, no two electrons can be in the same state
- shell
- a probability cloud for electrons that has a single principal quantum number
- subshell
- the probability cloud for electrons that has a single angular momentum quantum number
30.E: Atomic Physics (Exercises)
Conceptual Questions
30.1: Discovery of the Atom
1. Name three different types of evidence for the existence of atoms.
2. Explain why patterns observed in the periodic table of the elements are evidence for the existence of atoms, and why Brownian motion is a more direct type of evidence for their existence.
3. If atoms exist, why can’t we see them with visible light?
30.2: Discovery of the Parts of the Atom: Electrons and Nuclei
4. What two pieces of evidence allowed the first calculation of \(m_e\), the mass of the electron? Justify your response.
(a) The ratios \(\displaystyle q_e/m_e\) and \(\displaystyle q_p/m_p\).
(b) The values of \(\displaystyle q_e\) and \(\displaystyle E_B\).
(c) The ratio \(\displaystyle q_e/m_e\) and \(\displaystyle q_e\).
5. How do the allowed orbits for electrons in atoms differ from the allowed orbits for planets around the sun? Explain how the correspondence principle applies here.
30.3: Bohr’s Theory of the Hydrogen Atom
6. How do the allowed orbits for electrons in atoms differ from the allowed orbits for planets around the sun? Explain how the correspondence principle applies here.
7. Explain how Bohr’s rule for the quantization of electron orbital angular momentum differs from the actual rule.
8. What is a hydrogen-like atom, and how are the energies and radii of its electron orbits related to those in hydrogen?
30.4: X Rays- Atomic Origins and Applications
9. Explain why characteristic x rays are the most energetic in the EM emission spectrum of a given element.
10. Why does the energy of characteristic x rays become increasingly greater for heavier atoms?
11. Observers at a safe distance from an atmospheric test of a nuclear bomb feel its heat but receive none of its copious x rays. Why is air opaque to x rays but transparent to infrared?
12. Lasers are used to burn and read CDs. Explain why a laser that emits blue light would be capable of burning and reading more information than one that emits infrared.
13. Crystal lattices can be examined with x rays but not UV. Why?
14. CT scanners do not detect details smaller than about 0.5 mm. Is this limitation due to the wavelength of x rays? Explain.
30.5: Applications of Atomic Excitations and De-Excitations
15. How do the allowed orbits for electrons in atoms differ from the allowed orbits for planets around the sun? Explain how the correspondence principle applies here.
16. Atomic and molecular spectra are discrete. What does discrete mean, and how are discrete spectra related to the quantization of energy and electron orbits in atoms and molecules?
17. Hydrogen gas can only absorb EM radiation that has an energy corresponding to a transition in the atom, just as it can only emit these discrete energies. When a spectrum is taken of the solar corona, in which a broad range of EM wavelengths are passed through very hot hydrogen gas, the absorption spectrum shows all the features of the emission spectrum. But when such EM radiation passes through room-temperature hydrogen gas, only the Lyman series is absorbed. Explain the difference.
18. Lasers are used to burn and read CDs. Explain why a laser that emits blue light would be capable of burning and reading more information than one that emits infrared.
19. The coating on the inside of fluorescent light tubes absorbs ultraviolet light and subsequently emits visible light. An inventor claims that he is able to do the reverse process. Is the inventor’s claim possible?
20. What is the difference between fluorescence and phosphorescence?
21. How can you tell that a hologram is a true three-dimensional image and that those in 3-D movies are not?
30.6: The Wave Nature of Matter Causes Quantization
22. How is the de Broglie wavelength of electrons related to the quantization of their orbits in atoms and molecules?
30.7: Patterns in Spectra Reveal More Quantization
23. What is the Zeeman effect, and what type of quantization was discovered because of this effect?
30.8: Quantum Numbers and Rules
24. Define the quantum numbers \(\displaystyle n, l,m_l, s\), and \(\displaystyle m_s\).
25. For a given value of \(\displaystyle n\), what are the allowed values of \(\displaystyle l\)?
26. For a given value of \(\displaystyle l\), what are the allowed values of \(\displaystyle m_l\)? What are the allowed values of \(\displaystyle m_l\) for a given value of \(\displaystyle n\)? Give an example in each case.
27. List all the possible values of \(\displaystyle s\) and \(\displaystyle m_s\) for an electron. Are there particles for which these values are different? The same?
30.9: The Pauli Exclusion Principle
28. Identify the shell, subshell, and number of electrons for the following:
(a) \(\displaystyle 2p^3\).
(b) \(\displaystyle 4d^9\).
(c) \(\displaystyle 3s^1\).
(d) \(\displaystyle 5g^{16}\).
29. Which of the following are not allowed? State which rule is violated for any that are not allowed.
(a) \(\displaystyle 1p^3\)
(b) \(\displaystyle 2p^8\)
(c) \(\displaystyle 3g^{11}\)
(d) \(\displaystyle 4f^2\)
Problems & Exercises
30.1: Discovery of the Atom
30. Using the given charge-to-mass ratios for electrons and protons, and knowing the magnitudes of their charges are equal, what is the ratio of the proton’s mass to the electron’s? (Note that since the charge-to-mass ratios are given to only three-digit accuracy, your answer may differ from the accepted ratio in the fourth digit.)
Solution
\(\displaystyle 1.84×10^3\)
31. (a) Calculate the mass of a proton using the charge-to-mass ratio given for it in this chapter and its known charge.
(b) How does your result compare with the proton mass given in this chapter?
32. If someone wanted to build a scale model of the atom with a nucleus 1.00 m in diameter, how far away would the nearest electron need to be?
Solution
50 km
30.2: Discovery of the Parts of the Atom: Electrons and Nuclei
33. Rutherford found the size of the nucleus to be about \(\displaystyle 10^{−15}m\). This implied a huge density. What would this density be for gold?
Solution
\(\displaystyle 6×10^{20}kg/m^3\)
34. In Millikan’s oil-drop experiment, one looks at a small oil drop held motionless between two plates. Take the voltage between the plates to be 2033 V, and the plate separation to be 2.00 cm. The oil drop (of density \(\displaystyle 0.81 g/cm^3\)) has a diameter of \(\displaystyle 4.0×10^{−6}m\). Find the charge on the drop, in terms of electron units.
35. (a) An aspiring physicist wants to build a scale model of a hydrogen atom for her science fair project. If the atom is 1.00 m in diameter, how big should she try to make the nucleus?
(b) How easy will this be to do?
Solution
(a) \(\displaystyle 10.0 μm\)
(b) It isn’t hard to make one of approximately this size. It would be harder to make it exactly 10.0 μm.
30.3: Bohr’s Theory of the Hydrogen Atom
36. By calculating its wavelength, show that the first line in the Lyman series is UV radiation.
Solution
\(\displaystyle \frac{1}{λ}=R(\frac{1}{n^2_f}−\frac{1}{n^2_i})⇒λ=\frac{1}{R}[\frac{(n_i⋅n_f)^2}{n^2_i−n^2_f}];n_i=2,n_f=1,\) so that
\(\displaystyle λ=(\frac{m}{1.097×10^7})[\frac{(2×1)^2}{2^2−1^2}]=1.22×10^{−7}m=122 nm\) , which is UV radiation.
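For readers who want to spot-check wavelengths like the 122 nm result above, here is a minimal Python sketch (ours; the Rydberg constant is rounded to the three significant figures used in the text) based on \(\frac{1}{λ}=R(\frac{1}{n^2_f}−\frac{1}{n^2_i})\):

```python
R = 1.097e7  # Rydberg constant in 1/m

def wavelength_nm(n_i, n_f):
    """Wavelength of the hydrogen line for a transition n_i -> n_f, from 1/lambda = R(1/n_f^2 - 1/n_i^2)."""
    inv_lambda = R * (1 / n_f**2 - 1 / n_i**2)
    return 1e9 / inv_lambda

print(f"Lyman  n=2->1: {wavelength_nm(2, 1):.0f} nm")              # ~122 nm (UV)
print(f"Balmer n=3->2: {wavelength_nm(3, 2):.0f} nm")              # ~656 nm (visible red)
print(f"Balmer limit : {wavelength_nm(float('inf'), 2):.0f} nm")   # shortest Balmer line, ~365 nm
```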
37. Find the wavelength of the third line in the Lyman series, and identify the type of EM radiation.
38. Look up the values of the quantities in \(\displaystyle a_B=\frac{h^2}{4π^2m_ekq^2_e}\), and verify that the Bohr radius \(\displaystyle a_B\) is \(\displaystyle 0.529×10^{−10}m\).
Solution
\(\displaystyle a_B=\frac{h^2}{4π^2mekZq^2_e}=\frac{(6.626×10^{−34}J⋅s)^2}{4π^2(9.109×10^{−31}kg)(8.988×10^9N⋅m^2/C^2)(1)(1.602×10^{−19}C)^2}=0.529×10^{−10}m\)
39. Verify that the ground state energy \(\displaystyle E_0\) is 13.6 eV by using \(\displaystyle E_0=\frac{2π^2q^4_em_ek^2}{h^2}\).
40. If a hydrogen atom has its electron in the \(\displaystyle n=4\) state, how much energy in eV is needed to ionize it?
Solution
0.850 eV
41. A hydrogen atom in an excited state can be ionized with less energy than when it is in its ground state. What is n for a hydrogen atom if 0.850 eV of energy can ionize it?
42 . Find the radius of a hydrogen atom in the \(\displaystyle n=2\) state according to Bohr’s theory.
Solution
\(\displaystyle 2.12×10^{–10}m\)
43. Show that \(\displaystyle (13.6 \, eV)/hc=1.097×10^7 \, m^{-1}=R\) (Rydberg’s constant), as discussed in the text.
44. What is the smallest-wavelength line in the Balmer series? Is it in the visible part of the spectrum?
Solution
365 nm
It is in the ultraviolet.
45. Show that the entire Paschen series is in the infrared part of the spectrum. To do this, you only need to calculate the shortest wavelength in the series.
46. Do the Balmer and Lyman series overlap? To answer this, calculate the shortest-wavelength Balmer line and the longest-wavelength Lyman line.
Solution
No overlap
365 nm
122 nm
47. (a) Which line in the Balmer series is the first one in the UV part of the spectrum?
(b) How many Balmer series lines are in the visible part of the spectrum?
(c) How many are in the UV?
48. A wavelength of \(\displaystyle 4.653 μm\) is observed in a hydrogen spectrum for a transition that ends in the \(\displaystyle n_f=5\) level. What was \(\displaystyle n_i\) for the initial level of the electron?
Solution
7
49. A singly ionized helium ion has only one electron and is denoted \(\displaystyle He^+\). What is the ion’s radius in the ground state compared to the Bohr radius of the hydrogen atom?
50. A beryllium ion with a single electron (denoted \(\displaystyle Be^{3+}\)) is in an excited state with radius the same as that of the ground state of hydrogen.
(a) What is \(\displaystyle n\) for the \(\displaystyle Be^{3+}\) ion?
(b) How much energy in eV is needed to ionize the ion from this excited state?
Solution
(a) 2
(b) 54.4 eV
51. Atoms can be ionized by thermal collisions, such as at the high temperatures found in the solar corona. One such ion is \(\displaystyle C^{+5}\), a carbon atom with only a single electron.
(a) By what factor are the energies of its hydrogen-like levels greater than those of hydrogen?
(b) What is the wavelength of the first line in this ion’s Paschen series?
(c) What type of EM radiation is this?
52. Verify Equations \(\displaystyle r_n=\frac{n^2}{Z}a_B\) and \(\displaystyle a_B=\frac{h^2}{4π^2m_ekq^2_e}=0.529×10^{−10}m\) using the approach stated in the text. That is, equate the Coulomb and centripetal forces and then insert an expression for velocity from the condition for angular momentum quantization.
Solution
\(\displaystyle \frac{kZq^2_e}{r^2_n}=\frac{m_ev^2}{r_n}\), so that \(\displaystyle r_n=\frac{kZq^2_e}{m_ev^2}=\frac{kZq^2_e}{m_e}\frac{1}{v^2}\). From the equation \(\displaystyle m_evr_n=n\frac{h}{2π}\), we can substitute for the velocity, giving: \(\displaystyle r_n=\frac{kZq^2_e}{m_e}⋅\frac{4π^2m^2_er^2_n}{n^2h^2}\) so that \(\displaystyle r_n=\frac{n^2}{Z}\frac{h^2}{4π^2m_ekq^2_e}=\frac{n^2}{Z}a_B\), where \(\displaystyle a_B=\frac{h^2}{4π^2m_ekq^2_e}\).
53. The wavelengths of the four Balmer series lines for hydrogen are found to be 410.3, 434.2, 486.3, and 656.5 nm. What average percentage difference is found between these wavelength numbers and those predicted by \(\displaystyle \frac{1}{λ}=R(\frac{1}{n^2_f}−\frac{1}{n^2_i})\)? It is amazing how well a simple formula (disconnected originally from theory) could duplicate this phenomenon.
30.4: X Rays- Atomic Origins and Applications
54. (a) What is the shortest-wavelength x-ray radiation that can be generated in an x-ray tube with an applied voltage of 50.0 kV?
(b) Calculate the photon energy in eV.
(c) Explain the relationship of the photon energy to the applied voltage.
Solution
(a) \(\displaystyle 0.248×10^{−10}m\)
(b) 50.0 keV
(c) The photon energy is simply the applied voltage times the electron charge, so the value of the voltage in volts is the same as the value of the energy in electron volts.
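The relationship in part (c) is easy to verify numerically. The following Python sketch (ours; constants rounded to textbook precision) computes the shortest bremsstrahlung wavelength \(\lambda_{min} = hc/(q_eV)\), where all of the electron's kinetic energy goes into a single photon:

```python
h = 6.626e-34    # Planck's constant, J*s
c = 3.00e8       # speed of light, m/s
q_e = 1.602e-19  # electron charge, C

def shortest_xray_wavelength(voltage):
    """Shortest x-ray wavelength: the whole electron energy q_e*V goes into one photon, q_e*V = h*c/lambda."""
    return h * c / (q_e * voltage)

for V in (50.0e3, 30.0e3, 100e3):
    lam = shortest_xray_wavelength(V)
    print(f"V = {V/1e3:5.1f} kV: lambda_min = {lam:.3e} m, photon energy = {V/1e3:.1f} keV")
```

For 50.0 kV this gives \(2.48 \times 10^{-11}\) m, matching part (a) above.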
55. A color television tube also generates some x rays when its electron beam strikes the screen. What is the shortest wavelength of these x rays, if a 30.0-kV potential is used to accelerate the electrons? (Note that TVs have shielding to prevent these x rays from exposing viewers.)
56. An x ray tube has an applied voltage of 100 kV.
(a) What is the most energetic x-ray photon it can produce? Express your answer in electron volts and joules.
(b) Find the wavelength of such an X–ray.
Solution
(a) \(\displaystyle 100×10^3eV, 1.60×10^{−14}J\)
(b) \(\displaystyle 0.124×10^{−10}m\)
57. The maximum characteristic x-ray photon energy comes from the capture of a free electron into a \(\displaystyle K\) shell vacancy. What is this photon energy in keV for tungsten, assuming the free electron has no initial kinetic energy?
58. What are the approximate energies of the \(\displaystyle K_α\) and \(\displaystyle K_β\) x rays for copper?
Solution
(a) 8.00 keV
(b) 9.48 keV
30.5: Applications of Atomic Excitations and De-Excitations
59. Figure shows the energy-level diagram for neon.
(a) Verify that the energy of the photon emitted when neon goes from its metastable state to the one immediately below is equal to 1.96 eV.
(b) Show that the wavelength of this radiation is 633 nm.
(c) What wavelength is emitted when the neon makes a direct transition to its ground state?
Solution
(a) 1.96 eV
(b) \(\displaystyle (1240 eV⋅nm)/(1.96 eV)=633 nm\)
(c) 60.0 nm
60. A helium-neon laser is pumped by electric discharge. What wavelength electromagnetic radiation would be needed to pump it? See Figure for energy-level information.
61. Ruby lasers have chromium atoms doped in an aluminum oxide crystal. The energy level diagram for chromium in a ruby is shown in Figure. What wavelength is emitted by a ruby laser?
Chromium atoms in an aluminum oxide crystal have these energy levels, one of which is metastable. This is the basis of a ruby laser. Visible light can pump the atom into an excited state above the metastable state to achieve a population inversion.
Solution
693 nm
62. (a) What energy photons can pump chromium atoms in a ruby laser from the ground state to its second and third excited states?
(b) What are the wavelengths of these photons? Verify that they are in the visible part of the spectrum.
63. Some of the most powerful lasers are based on the energy levels of neodymium in solids, such as glass, as shown in Figure.
(a) What average wavelength light can pump the neodymium into the levels above its metastable state?
(b) Verify that the 1.17 eV transition produces \(\displaystyle 1.06 μm\) radiation.
Neodymium atoms in glass have these energy levels, one of which is metastable. The group of levels above the metastable state is convenient for achieving a population inversion, since photons of many different energies can be absorbed by atoms in the ground state.
Solution
(a) 590 nm
(b) \(\displaystyle (1240 eV⋅nm)/(1.17 eV)=1.06 μm\)
30.8: Quantum Numbers and Rules
64. If an atom has an electron in the \(\displaystyle n=5\) state with \(\displaystyle m_l=3\), what are the possible values of \(\displaystyle l\)?
Solution
\(\displaystyle l=4, 3\) are possible since \(\displaystyle l<n\) and \(\displaystyle ∣m_l∣≤l\).
65. An atom has an electron with \(\displaystyle m_l=2\). What is the smallest value of \(\displaystyle n\) for this electron?
66. What are the possible values of \(\displaystyle m_l\) for an electron in the \(\displaystyle n=4\) state?
Solution
\(\displaystyle n=4⇒l=3, 2, 1, 0⇒m_l=±3,±2,±1, 0\) are possible.
67. What, if any, constraints does a value of \(\displaystyle m_l=1\) place on the other quantum numbers for an electron in an atom?
68. (a) Calculate the magnitude of the angular momentum for an \(\displaystyle l=1\) electron.
(b) Compare your answer to the value Bohr proposed for the \(\displaystyle n=1\) state.
Solution
(a) \(\displaystyle 1.49×10^{−34}J⋅s\)
(b) \(\displaystyle 1.06×10^{−34}J⋅s\)
69. (a) What is the magnitude of the angular momentum for an \(\displaystyle l=1\) electron?
(b) Calculate the magnitude of the electron’s spin angular momentum.
(c) What is the ratio of these angular momenta?
70. Repeat Exercise for \(\displaystyle l=3\).
Solution
(a) \(\displaystyle 3.66×10^{−34}J⋅s\)
(b) \(\displaystyle S=9.13×10^{−35}J⋅s\)
(c) \(\displaystyle \frac{L}{S}=\frac{\sqrt{12}}{\sqrt{3/4}}=4\)
71. (a) How many angles can \(\displaystyle L\) make with the z-axis for an \(\displaystyle l=2\) electron?
(b) Calculate the value of the smallest angle.
72. What angles can the spin \(\displaystyle S\)of an electron make with the z-axis?
Solution
\(\displaystyle θ=54.7º, 125.3º\)
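The two angles quoted in this solution follow from \(\cos{\theta} = S_z/S = m_s/\sqrt{s(s+1)}\). A minimal Python check (ours, illustrative only):

```python
import math

s = 0.5
S = math.sqrt(s * (s + 1))          # magnitude of spin, in units of h/(2*pi)

for m_s in (+0.5, -0.5):
    theta = math.degrees(math.acos(m_s / S))
    print(f"m_s = {m_s:+.1f}: angle with z-axis = {theta:.1f} degrees")
```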
30.9: The Pauli Exclusion Principle
73. (a) How many electrons can be in the \(\displaystyle n=4\) shell?
(b) What are its subshells, and how many electrons can be in each?
Solution
(a) 32.
(b) \(\displaystyle 2\) in \(\displaystyle s\), \(\displaystyle 6\) in \(\displaystyle p\),\(\displaystyle 10\) in \(\displaystyle d\), and 14 in \(\displaystyle f\), for a total of 32.
74. (a) What is the minimum value of \(\displaystyle l\) for a subshell that has 11 electrons in it?
(b) If this subshell is in the \(\displaystyle n=5\) shell, what is the spectroscopic notation for this atom?
75. (a) If one subshell of an atom has 9 electrons in it, what is the minimum value of \(\displaystyle l\)?
(b) What is the spectroscopic notation for this atom, if this subshell is part of the \(\displaystyle n=3\) shell?
Solution
(a) 2
(b) \(\displaystyle 3d^9\)
76. (a) List all possible sets of quantum numbers \(\displaystyle (n,l,m_l,m_s)\) for the \(\displaystyle n=3\) shell, and determine the number of electrons that can be in the shell and each of its subshells.
(b) Show that the number of electrons in the shell equals \(\displaystyle 2n^2\) and that the number in each subshell is \(\displaystyle 2(2l+1)\).
77. Which of the following spectroscopic notations are not allowed? State which rule is violated for each that is not allowed.
(a) \(\displaystyle 5s^1\)
(b) \(\displaystyle 1d^1\)
(c) \(\displaystyle 4s^3\)
(d) \(\displaystyle 3p^7\)
(e) \(\displaystyle 5g^{15}\).
Solution
(b) \(\displaystyle n≥l\) is violated,
(c) cannot have 3 electrons in \(\displaystyle s\) subshell since \(\displaystyle 3>2(2l+1)=2\)
(d) cannot have 7 electrons in \(\displaystyle p\) subshell since \(\displaystyle 7>2(2l+1)=2(2+1)=6\)
78. Which of the following spectroscopic notations are allowed (that is, which violate none of the rules regarding values of quantum numbers)?
(a) \(\displaystyle 1s^1\)
(b) \(\displaystyle 1d^3\)
(c) \(\displaystyle 4s^2\)
(d) \(\displaystyle 3p^7\)
(e) \(\displaystyle 6h^{20}\)
79. (a) Using the Pauli exclusion principle and the rules relating the allowed values of the quantum numbers \(\displaystyle (n,l,m_l,m_s)\), prove that the maximum number of electrons in a subshell is \(\displaystyle 2(2l+1)\).
(b) In a similar manner, prove that the maximum number of electrons in a shell is \(\displaystyle 2n^2\).
Solution
(a) For a given \(\displaystyle l\), the allowed values of \(\displaystyle m_l\) are \(\displaystyle 0, ±1, ..., ±l\), which is \(\displaystyle 2l+1\) values in all. Each of these can be combined with \(\displaystyle m_s\) equal to either \(\displaystyle +1/2\) or \(\displaystyle −1/2\), giving an overall factor of 2, so the maximum number of electrons in a subshell is \(\displaystyle 2(2l+1)\).
(b) Within a shell, \(\displaystyle l\) takes the values \(\displaystyle 0, 1, 2, ..., (n−1)\), so the total is \(\displaystyle 2\{[2(0)+1]+[2(1)+1]+...+[2(n−1)+1]\}=2[1+3+...+(2n−3)+(2n−1)]\). The bracketed quantity is the sum of the first \(\displaystyle n\) odd numbers, which equals \(\displaystyle n^2\) (add the series to itself written in reverse order: each of the \(\displaystyle n\) pairs sums to \(\displaystyle 2n\), so twice the sum is \(\displaystyle 2n^2\)). Thus the maximum number of electrons in a shell is \(\displaystyle 2n^2\).
80. Integrated Concepts
Estimate the density of a nucleus by calculating the density of a proton, taking it to be a sphere 1.2 fm in diameter. Compare your result with the value estimated in this chapter.
81. Integrated Concepts
The electric and magnetic forces on an electron in the CRT in [link] are supposed to be in opposite directions. Verify this by determining the direction of each force for the situation shown. Explain how you obtain the directions (that is, identify the rules used).
Solution
The electric force on the electron is up (toward the positively charged plate). The magnetic force is down (by the RHR).
82. (a) What is the distance between the slits of a diffraction grating that produces a first-order maximum for the first Balmer line at an angle of \(\displaystyle 20.0º\)?
(b) At what angle will the fourth line of the Balmer series appear in first order?
(c) At what angle will the second-order maximum be for the first line?
83. Integrated Concepts
A galaxy moving away from the earth has a speed of \(\displaystyle 0.0100c\). What wavelength do we observe for an \(\displaystyle n_i=7\) to \(\displaystyle n_f=2\) transition for hydrogen in that galaxy?
Solution
401 nm
84. Integrated Concepts
Calculate the velocity of a star moving relative to the earth if you observe a wavelength of 91.0 nm for ionized hydrogen capturing an electron directly into the lowest orbital (that is, a \(\displaystyle n_i=∞\) to \(\displaystyle n_f=1\), or a Lyman series transition).
85. Integrated Concepts
In a Millikan oil-drop experiment using a setup like that in [link], a 500-V potential difference is applied to plates separated by 2.50 cm.
(a) What is the mass of an oil drop having two extra electrons that is suspended motionless by the field between the plates?
(b) What is the diameter of the drop, assuming it is a sphere with the density of olive oil?
Solution
(a) \(\displaystyle 6.54×10^{−16}kg\)
(b) \(\displaystyle 5.54×10^{−7}m\)
86. Integrated Concepts
What double-slit separation would produce a first-order maximum at \(\displaystyle 3.00º\) for 25.0-keV x rays? The small answer indicates that the wave character of x rays is best determined by having them interact with very small objects such as atoms and molecules.
87. Integrated Concepts
In a laboratory experiment designed to duplicate Thomson’s determination of \(\displaystyle q_e/m_e\), a beam of electrons having a velocity of \(\displaystyle 6.00×10^7m/s\) enters a \(\displaystyle 5.00×10^{−3}T\) magnetic field. The beam moves perpendicular to the field in a path having a 6.80-cm radius of curvature. Determine \(\displaystyle q_e/m_e\) from these observations, and compare the result with the known value.
Solution
\(\displaystyle 1.76×10^{11}C/kg\), which agrees with the known value of \(\displaystyle 1.759×10^{11}C/kg\) to within the precision of the measurement
88. Integrated Concepts
Find the value of \(\displaystyle l\), the orbital angular momentum quantum number, for the moon around the earth. The extremely large value obtained implies that it is impossible to tell the difference between adjacent quantized orbits for macroscopic objects.
89. Integrated Concepts
Particles called muons exist in cosmic rays and can be created in particle accelerators. Muons are very similar to electrons, having the same charge and spin, but they have a mass 207 times greater. When muons are captured by an atom, they orbit just like an electron but with a smaller radius, since the mass in \(\displaystyle a_B=\frac{h^2}{4π^2m_ekq^2_e}=0.529×10^{−10}m\) is \(\displaystyle 207 m_e\).
(a) Calculate the radius of the \(\displaystyle n=1\) orbit for a muon in a uranium ion \(\displaystyle (Z=92)\).
(b) Compare this with the 7.5-fm radius of a uranium nucleus. Note that since the muon orbits inside the electron, it falls into a hydrogen-like orbit. Since your answer is less than the radius of the nucleus, you can see that the photons emitted as the muon falls into its lowest orbit can give information about the nucleus.
Solution
(a) 2.78 fm
(b) 0.37 of the nuclear radius.
90. Integrated Concepts
Calculate the minimum amount of energy in joules needed to create a population inversion in a helium-neon laser containing \(\displaystyle 1.00×10^{−4}\) moles of neon.
91. Integrated Concepts
A carbon dioxide laser used in surgery emits infrared radiation with a wavelength of \(\displaystyle 10.6 μm\). In 1.00 ms, this laser raised the temperature of \(\displaystyle 1.00 cm^3\) of flesh to \(\displaystyle 100ºC\) and evaporated it.
(a) How many photons were required? You may assume flesh has the same heat of vaporization as water.
(b) What was the minimum power output during the flash?
Solution
(a) \(\displaystyle 1.34×10^{23}\)
(b) 2.52 MW
92. Integrated Concepts
Suppose an MRI scanner uses 100-MHz radio waves.
(a) Calculate the photon energy.
(b) How does this compare to typical molecular binding energies?
93. Integrated Concepts
(a) An excimer laser used for vision correction emits 193-nm UV. Calculate the photon energy in eV.
(b) These photons are used to evaporate corneal tissue, which is very similar to water in its properties. Calculate the amount of energy needed per molecule of water to make the phase change from liquid to gas. That is, divide the heat of vaporization in kJ/kg by the number of water molecules in a kilogram.
(c) Convert this to eV and compare to the photon energy. Discuss the implications.
Solution
(a) 6.42 eV
(b) \(\displaystyle 7.27×10^{−20}J/molecule\)
(c) 0.454 eV, 14.1 times less than a single UV photon. Therefore, each photon will evaporate approximately 14 molecules of tissue. This gives the surgeon a rather precise method of removing corneal tissue from the surface of the eye.
94. Integrated Concepts
A neighboring galaxy rotates on its axis so that stars on one side move toward us as fast as 200 km/s, while those on the other side move away as fast as 200 km/s. This causes the EM radiation we receive to be Doppler shifted by velocities over the entire range of ±200 km/s. What range of wavelengths will we observe for the 656.0-nm line in the Balmer series of hydrogen emitted by stars in this galaxy. (This is called line broadening.)
95. Integrated Concepts
A pulsar is a rapidly spinning remnant of a supernova. It rotates on its axis, sweeping hydrogen along with it so that hydrogen on one side moves toward us as fast as 50.0 km/s, while that on the other side moves away as fast as 50.0 km/s. This means that the EM radiation we receive will be Doppler shifted over a range of \(\displaystyle ±50.0 km/s\). What range of wavelengths will we observe for the 91.20-nm line in the Lyman series of hydrogen? (Such line broadening is observed and actually provides part of the evidence for rapid rotation.)
Solution
91.18 nm to 91.22 nm
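This wavelength range can be reproduced with the first-order Doppler formula \(\lambda_{obs} \approx \lambda_{rest}(1 + v/c)\), which at 50 km/s agrees with the relativistic expression to the precision shown. The sketch below is ours and only verifies the arithmetic:

```python
c = 3.00e8  # speed of light, m/s

def doppler_shifted(lam_rest, v):
    """First-order Doppler shift for v << c: lambda_obs = lambda_rest * (1 + v/c); v > 0 means receding."""
    return lam_rest * (1 + v / c)

lam = 91.20  # nm, the Lyman-series line in the problem
for v in (-50.0e3, +50.0e3):
    print(f"v = {v/1e3:+.1f} km/s: observed wavelength = {doppler_shifted(lam, v):.2f} nm")
```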
96. Integrated Concepts
Prove that the velocity of charged particles moving along a straight path through perpendicular electric and magnetic fields is \(\displaystyle v=E/B\). Thus crossed electric and magnetic fields can be used as a velocity selector independent of the charge and mass of the particle involved.
97. Unreasonable Results
(a) What voltage must be applied to an X-ray tube to obtain 0.0100-fm-wavelength X-rays for use in exploring the details of nuclei?
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
Solution
(a) \(\displaystyle 1.24×10^{11}V\)
(b) The voltage is extremely large compared with any practical value.
(c) The assumption of such a short wavelength by this method is unreasonable.
98. Unreasonable Results
A student in a physics laboratory observes a hydrogen spectrum with a diffraction grating for the purpose of measuring the wavelengths of the emitted radiation. In the spectrum, she observes a yellow line and finds its wavelength to be 589 nm.
(a) Assuming this is part of the Balmer series, determine \(\displaystyle n_i\), the principal quantum number of the initial state.
(b) What is unreasonable about this result?
(c) Which assumptions are unreasonable or inconsistent?
99. Construct Your Own Problem
The solar corona is so hot that most atoms in it are ionized. Consider a hydrogen-like atom in the corona that has only a single electron. Construct a problem in which you calculate selected spectral energies and wavelengths of the Lyman, Balmer, or other series of this atom that could be used to identify its presence in a very hot gas. You will need to choose the atomic number of the atom, identify the element, and choose which spectral lines to consider.
100. Construct Your Own Problem
Consider the Doppler-shifted hydrogen spectrum received from a rapidly receding galaxy. Construct a problem in which you calculate the energies of selected spectral lines in the Balmer series and examine whether they can be described with a formula like that in the equation \(\displaystyle \frac{1}{λ}=R(\frac{1}{n^2_f}−\frac{1}{n^2_i})\), but with a different constant \(\displaystyle R\).
Contributors and Attributions
-
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
31: Radioactivity and Nuclear Physics
The exploration of radioactivity and the nucleus revealed fundamental and previously unknown particles, forces, and conservation laws. That exploration has evolved into a search for further underlying structures, such as quarks. In this chapter, the fundamentals of nuclear radioactivity and the nucleus are explored. The following two chapters explore the more important applications of nuclear physics in the field of medicine. We will also explore the basics of what we know about quarks and other substructures smaller than nuclei.
-
- 31.0: Prelude to Radioactivity and Nuclear Physics
- There is an ongoing quest to find substructures of matter. At one time, it was thought that atoms would be the ultimate substructure, but just when the first direct evidence of atoms was obtained, it became clear that they have a substructure and a tiny nucleus. The nucleus itself has spectacular characteristics.
-
- 31.1: Nuclear Radioactivity
- The discovery and study of nuclear radioactivity quickly revealed evidence of revolutionary new physics. In addition, uses for nuclear radiation also emerged quickly—for example, people such as Ernest Rutherford used it to determine the size of the nucleus and devices were painted with radium-doped paint to make them glow in the dark. We therefore begin our study of nuclear physics with the discovery and basic features of nuclear radioactivity.
-
- 31.2: Radiation Detection and Detectors
- It is well known that ionizing radiation affects us but does not trigger nerve impulses. Newspapers carry stories about unsuspecting victims of radiation poisoning who fall ill with radiation sickness, such as burns and blood count changes, but who never felt the radiation directly. This makes the detection of radiation by instruments more than an important research tool. This section is a brief overview of radiation detection and some of its applications.
-
- 31.3: Substructure of the Nucleus
- What is inside the nucleus? Why are some nuclei stable while others decay? Why are there different types of decay ( α , β and γ )? Why are nuclear decay energies so large? Pursuing natural questions like these has led to far more fundamental discoveries than you might imagine.
-
- 31.4: Nuclear Decay and Conservation Laws
- Nuclear decay has provided an amazing window into the realm of the very small. Nuclear decay gave the first indication of the connection between mass and energy, and it revealed the existence of two of the four basic forces in nature. In this section, we explore the major modes of nuclear decay; and, like those who first explored them, we will discover evidence of previously unknown particles and conservation laws.
-
- 31.5: Half-Life and Activity
- Unstable nuclei decay. However, some nuclides decay faster than others. For example, radium and polonium, discovered by the Curies, decay faster than uranium. This means they have shorter lifetimes, producing a greater rate of decay. In this section we explore half-life and activity, the quantitative terms for lifetime and rate of decay.
-
- 31.6: Binding Energy
- The more tightly bound a system is, the stronger the forces that hold it together and the greater the energy required to pull it apart. We can therefore learn about nuclear forces by examining how tightly bound the nuclei are. We define the binding energy (BE) of a nucleus to be the energy required to completely disassemble it into separate protons and neutrons. We can determine the BE of a nucleus from its rest mass. The two are connected through Einstein’s famous relationship: E=mc².
31.0: Prelude to Radioactivity and Nuclear Physics
There is an ongoing quest to find substructures of matter. At one time, it was thought that atoms would be the ultimate substructure, but just when the first direct evidence of atoms was obtained, it became clear that they have a substructure and a tiny nucleus . The nucleus itself has spectacular characteristics. For example, certain nuclei are unstable, and their decay emits radiations with energies millions of times greater than atomic energies. Some of the mysteries of nature, such as why the core of the earth remains molten and how the sun produces its energy, are explained by nuclear phenomena. The exploration of radioactivity and the nucleus revealed fundamental and previously unknown particles, forces, and conservation laws. That exploration has evolved into a search for further underlying structures, such as quarks. In this chapter, the fundamentals of nuclear radioactivity and the nucleus are explored. The following two chapters explore the more important applications of nuclear physics in the field of medicine. We will also explore the basics of what we know about quarks and other substructures smaller than nuclei.
31.1: Nuclear Radioactivity
Learning Objectives
By the end of this section, you will be able to:
- Explain nuclear radiation.
- Explain the types of radiation—alpha emission, beta emission, and gamma emission.
- Explain the ionization of radiation in an atom.
- Define the range of radiation.
The discovery and study of nuclear radioactivity quickly revealed evidence of revolutionary new physics. In addition, uses for nuclear radiation also emerged quickly—for example, people such as Ernest Rutherford used it to determine the size of the nucleus and devices were painted with radium-doped paint to make them glow in the dark (Figure \(\PageIndex{1}\)). We therefore begin our study of nuclear physics with the discovery and basic features of nuclear radioactivity.
Discovery of Nuclear Radioactivity
In 1896, the French physicist Antoine Henri Becquerel (1852–1908) accidentally found that a uranium-rich mineral called pitchblende emits invisible, penetrating rays that can darken a photographic plate enclosed in an opaque envelope. The rays therefore carry energy; but amazingly, the pitchblende emits them continuously without any energy input. This is an apparent violation of the law of conservation of energy, one that we now understand is due to the conversion of a small amount of mass into energy, as related in Einstein’s famous equation \(E = mc^2\). It was soon evident that Becquerel’s rays originate in the nuclei of the atoms and have other unique characteristics. The emission of these rays is called nuclear radioactivity or simply radioactivity . The rays themselves are called nuclear radiation . A nucleus that spontaneously destroys part of its mass to emit radiation is said to decay (a term also used to describe the emission of radiation by atoms in excited states). A substance or object that emits nuclear radiation is said to be radioactive .
Two types of experimental evidence imply that Becquerel’s rays originate deep in the heart (or nucleus) of an atom. First, the radiation is found to be associated with certain elements, such as uranium. Radiation does not vary with chemical state—that is, uranium is radioactive whether it is in the form of an element or compound. In addition, radiation does not vary with temperature, pressure, or ionization state of the uranium atom. Since all of these factors affect electrons in an atom, the radiation cannot come from electron transitions, as atomic spectra do. The huge energy emitted during each event is the second piece of evidence that the radiation cannot be atomic. Nuclear radiation has energies of the order of \(10^6 \, eV\) per event, which is much greater than the typical atomic energies (a few \(eV\)) such as that observed in spectra and chemical reactions, and more than ten times as high as the most energetic characteristic x rays.
Becquerel did not vigorously pursue his discovery for very long. In 1898, Marie Curie (1867–1934), then a graduate student married to the already well-known French physicist Pierre Curie (1859–1906), began her doctoral study of Becquerel’s rays. She and her husband soon discovered two new radioactive elements, which she named polonium (after her native land) and radium (because it radiates). These two new elements filled holes in the periodic table and, further, displayed much higher levels of radioactivity per gram of material than uranium. Over a period of four years, working under poor conditions and spending their own funds, the Curies processed more than a ton of uranium ore to isolate a gram of radium salt. Radium became highly sought after, because it was about two million times as radioactive as uranium. Curie’s radium salt glowed visibly from the radiation that took its toll on them and other unaware researchers.
Shortly after Marie completed her Ph.D., both Curies and Becquerel shared the 1903 Nobel Prize in physics for their work on radioactivity. Pierre was killed in a horse cart accident in 1906, but Marie continued her study of radioactivity for nearly 30 more years. Awarded the 1911 Nobel Prize in chemistry for her discovery of two new elements, she remains the only person to win Nobel Prizes in both physics and chemistry. Marie’s radioactive fingerprints on some pages of her notebooks can still expose film, and she suffered from radiation-induced lesions. She died of leukemia likely caused by radiation, but she was active in research almost until her death in 1934. The following year, her daughter and son-in-law, Irene and Frederic Joliot-Curie, were awarded the Nobel Prize in chemistry for their discovery of artificially induced radiation, adding to a remarkable family legacy.
Alpha, Beta, and Gamma
Research begun by people such as New Zealander Ernest Rutherford soon after the discovery of nuclear radiation indicated that different types of rays are emitted. Eventually, three types were distinguished and named alpha \(\alpha\), beta \(\beta\), and gamma \(\gamma\), because, like x rays, their identities were initially unknown. Figure \(\PageIndex{2}\) shows what happens if the rays are passed through a magnetic field. The \(\gamma\)s are unaffected, while the \(\alpha\)s and \(\beta\)s are deflected in opposite directions, indicating the \(\alpha\)s are positive, the \(\beta\)s negative, and the \(\gamma\)s uncharged. Rutherford used both magnetic and electric fields to show that \(\alpha\)s have a positive charge twice the magnitude of an electron, or \(+2|q_e|\). In the process, he found the \(\alpha\)'s charge-to-mass ratio to be several thousand times smaller than the electron’s. Later on, Rutherford collected \(\alpha\)s from a radioactive source and passed an electric discharge through them, obtaining the spectrum of recently discovered helium gas. Among many important discoveries made by Rutherford and his collaborators was the proof that \(\alpha\) radiation is the emission of a helium nucleus. Rutherford won the Nobel Prize in chemistry in 1908 for his early work. He continued to make important contributions until his death in 1937.
Other researchers had already proved that \(\beta\)s are negative and have the same mass and same charge-to-mass ratio as the recently discovered electron. By 1902, it was recognized that \(\beta\) radiation is the emission of an electron . Although \(\beta\)s are electrons, they do not exist in the nucleus before it decays and are not ejected atomic electrons—the electron is created in the nucleus at the instant of decay.
Since \(\gamma\)s remain unaffected by electric and magnetic fields, it is natural to think they might be photons. Evidence for this grew, but it was not until 1914 that this was proved by Rutherford and collaborators. By scattering \(\gamma\) radiation from a crystal and observing interference, they demonstrated that \(\gamma\) radiation is the emission of a high-energy photon by a nucleus. In fact, \(\gamma\) radiation comes from the de-excitation of a nucleus, just as an x ray comes from the de-excitation of an atom. The names "\(\gamma\) ray" and "x ray" identify the source of the radiation. At the same energy, \(\gamma\) rays and x rays are otherwise identical.
| Type of Radiation | Range |
|---|---|
| \(\alpha\) particles | A sheet of paper, a few cm of air, fractions of a mm of tissue |
| \(\beta\) particles | A thin aluminum plate, or tens of cm of tissue. |
| \(\gamma\) rays | Several cm of lead or meters of concrete. |
Ionization and Range
Two of the most important characteristics of \(\alpha\), \(\beta\) and \(\gamma\) rays were recognized very early. All three types of nuclear radiation produce ionization in materials, but they penetrate different distances in materials—that is, they have different ranges . Let us examine why they have these characteristics and what are some of the consequences.
Like x rays, nuclear radiation in the form of \(\alpha\)s, \(\beta\)s, and \(\gamma\)s has enough energy per event to ionize atoms and molecules in any material. The energy emitted in various nuclear decays ranges from a few \(keV\) to more than \(10 \, MeV\), while only a few \(eV\)s are needed to produce ionization. The effects of x rays and nuclear radiation on biological tissues and other materials, such as solid state electronics, are directly related to the ionization they produce. All of them, for example, can damage electronics or kill cancer cells. In addition, methods for detecting x rays and nuclear radiation are based on ionization, directly or indirectly. All of them can ionize the air between the plates of a capacitor, for example, causing it to discharge. This is the basis of inexpensive personal radiation monitors, such as pictured in Figure \(\PageIndex{3}\). Apart from \(\alpha\), \(\beta\) and \(\gamma\), there are other forms of nuclear radiation as well, and these also produce ionization with similar effects. We define ionizing radiation as any form of radiation that produces ionization whether nuclear in origin or not, since the effects and detection of the radiation are related to ionization.
The range of radiation is defined to be the distance it can travel through a material. Range is related to several factors, including the energy of the radiation, the material encountered, and the type of radiation (Figure \(\PageIndex{4}\)). The higher the energy , the greater the range, all other factors being the same. This makes good sense, since radiation loses its energy in materials primarily by producing ionization in them, and each ionization of an atom or a molecule requires energy that is removed from the radiation. The amount of ionization is, thus, directly proportional to the energy of the particle of radiation, as is its range.
Radiation can be absorbed or shielded by materials, such as the lead aprons dentists drape on us when taking x rays. Lead is a particularly effective shield compared with other materials, such as plastic or air. How does the range of radiation depend on material ? Ionizing radiation interacts best with charged particles in a material. Since electrons have small masses, they most readily absorb the energy of the radiation in collisions. The greater the density of a material and, in particular, the greater the density of electrons within a material, the smaller the range of radiation.
Collisions
Conservation of energy and momentum often results in energy transfer to a less massive object in a collision.
Different types of radiation have different ranges when compared at the same energy and in the same material. Alphas have the shortest range, betas penetrate farther, and gammas have the greatest range. This is directly related to charge and speed of the particle or type of radiation. At a given energy, each \(\alpha\), \(\beta\) or \(\gamma\) will produce the same number of ionizations in a material (each ionization requires a certain amount of energy on average). The more readily the particle produces ionization, the more quickly it will lose its energy. The effect of charge is as follows: The \(\alpha\) has a charge of \(+2q_e\), the \(\beta\) has a charge of \(-q_e\), and the \(\gamma\) is uncharged. The electromagnetic force exerted by the \(\alpha\) is thus twice as strong as that exerted by the \(\beta\) and it is more likely to produce ionization. Although chargeless, the \(\gamma\) does interact weakly because it is an electromagnetic wave, but it is less likely to produce ionization in any encounter. More quantitatively, the change in momentum \(\Delta p\) given to a particle in the material is \(\Delta p = F \Delta t\), where \(F\) is the force the \(\alpha\), \(\beta\), or \(\gamma\) exerts over a time \(\Delta t\). The smaller the charge, the smaller is \(F\) and the smaller is the momentum (and energy) lost. Since the speed of alphas is about 5% to 10% of the speed of light, classical (non-relativistic) formulas apply.
The speed at which they travel is the other major factor affecting the range of \(\alpha\)s, \(\beta\)s, and \(\gamma\)s. The faster they move, the less time they spend in the vicinity of an atom or a molecule, and the less likely they are to interact. Since \(\alpha\)s and \(\beta\)s are particles with mass (helium nuclei and electrons, respectively), their energy is kinetic, given classically by \(\frac{1}{2}mv^2\). The mass of the \(\beta\) particle is thousands of times less than that of the \(\alpha\)s, so that \(\beta\)s must travel much faster than \(\alpha\)s to have the same energy. Since \(\beta\)s move faster (most at relativistic speeds), they have less time to interact than \(\alpha\)s. Gamma rays are photons, which must travel at the speed of light. They are even less likely to interact than a \(\beta\), since they spend even less time near a given atom (and they have no charge). The range of \(\gamma\)s is thus greater than the range of \(\beta\)s.
Alpha radiation from radioactive sources has a range much less than a millimeter of biological tissues, usually not enough to even penetrate the dead layers of our skin. On the other hand, the same \(\alpha\) radiation can penetrate a few centimeters of air, so mere distance from a source prevents \(\alpha\) radiation from reaching us. This makes \(\alpha\) radiation relatively safe for our body compared to \(\beta\) and \(\gamma\) radiation. Typical \(\beta\) radiation can penetrate a few millimeters of tissue or about a meter of air. Beta radiation is thus hazardous even when not ingested. The range of \(\beta\)s in lead is about a millimeter, and so it is easy to store \(\beta\) sources in lead radiation-proof containers. Gamma rays have a much greater range than either \(\alpha\)s or \(\beta\)s. In fact, if a given thickness of material, like a lead brick, absorbs 90% of the \(\gamma\)s, then a second lead brick will only absorb 90% of what got through the first. Thus, \(\gamma\)s do not have a well-defined range; we can only cut down the amount that gets through. Typically, \(\gamma\)s can penetrate many meters of air, go right through our bodies, and are effectively shielded (that is, reduced in intensity to acceptable levels) by many centimeters of lead. One benefit of \(\gamma\)s is that they can be used as radioactive tracers (Figure \(\PageIndex{5}\)).
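The statement that each identical lead brick absorbs 90% of the \(\gamma\)s reaching it means the transmitted intensity falls geometrically with the number of layers. The following is a minimal Python sketch of that bookkeeping, assuming (purely for illustration) a fixed 10% transmission per layer:

```python
# Sketch: fraction of gamma-ray intensity surviving a stack of identical
# absorbing layers, assuming each layer transmits the same fixed fraction
# (here 10%, i.e. each lead brick absorbs 90% of what reaches it).

def transmitted_fraction(n_layers, transmission_per_layer=0.10):
    """Fraction of the original gamma intensity left after n_layers identical absorbers."""
    return transmission_per_layer ** n_layers

for n in range(1, 5):
    print(f"{n} brick(s): {transmitted_fraction(n):.4%} of the original intensity remains")
```

Each added layer multiplies the surviving intensity by the same factor, which is why \(\gamma\)s are described by attenuation rather than a sharp cutoff in range.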
PHET EXPLORATIONS: BETA DECAY
Watch beta decay occur for a collection of nuclei or for an individual nucleus.
Summary
- Some nuclei are radioactive—they spontaneously decay destroying some part of their mass and emitting energetic rays, a process called nuclear radioactivity.
- Nuclear radiation, like x rays, is ionizing radiation, because energy sufficient to ionize matter is emitted in each decay.
- The range (or distance traveled in a material) of ionizing radiation is directly related to the charge of the emitted particle and its energy, with greater-charge and lower-energy particles having the shortest ranges.
- Radiation detectors are based directly or indirectly upon the ionization created by radiation, as are the effects of radiation on living and inert materials.
Glossary
- alpha rays
- one of the types of rays emitted from the nucleus of an atom
- beta rays
- one of the types of rays emitted from the nucleus of an atom
- gamma rays
- one of the types of rays emitted from the nucleus of an atom
- ionizing radiation
- radiation that produces ionization, whether nuclear in origin or not
- nuclear radiation
- rays that originate in the nuclei of atoms, the first examples of which were discovered by Becquerel
- radioactivity
- the emission of rays from the nuclei of atoms
- radioactive
- a substance or object that emits nuclear radiation
- range of radiation
- the distance that the radiation can travel through a material
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
31.2: Radiation Detection and Detectors
Learning Objectives
By the end of this section, you will be able to:
- Explain the working principle of a Geiger tube.
- Define and discuss radiation detectors.
It is well known that ionizing radiation affects us but does not trigger nerve impulses. Newspapers carry stories about unsuspecting victims of radiation poisoning who fall ill with radiation sickness, such as burns and blood count changes, but who never felt the radiation directly. This makes the detection of radiation by instruments more than an important research tool. This section is a brief overview of radiation detection and some of its applications.
Human Application
The first direct detection of radiation was Becquerel’s fogged photographic plate. Photographic film is still the most common detector of ionizing radiation, being used routinely in medical and dental x rays. Nuclear radiation is also captured on film, such as seen in Figure \(\PageIndex{1}\). The mechanism for film exposure by ionizing radiation is similar to that by photons. A quantum of energy interacts with the emulsion and alters it chemically, thus exposing the film. The quantum can come from an \(\alpha\)-particle, \(\beta\)-particle, or photon, provided it has more than the few eV of energy needed to induce the chemical change (as does all ionizing radiation). The process is not 100% efficient, since not all incident radiation interacts and not all interactions produce the chemical change. The amount of film darkening is related to exposure, but the darkening also depends on the type of radiation, so that absorbers and other devices must be used to obtain energy, charge, and particle-identification information.
Another very common radiation detector is the Geiger tube . The clicking and buzzing sound we hear in dramatizations and documentaries, as well as in our own physics labs, is usually an audio output of events detected by a Geiger counter. These relatively inexpensive radiation detectors are based on the simple and sturdy Geiger tube, shown schematically in Figure \(\PageIndex{1b}\). A conducting cylinder with a wire along its axis is filled with an insulating gas so that a voltage applied between the cylinder and wire produces almost no current. Ionizing radiation passing through the tube produces free ion pairs that are attracted to the wire and cylinder, forming a current that is detected as a count. The word count implies that there is no information on energy, charge, or type of radiation with a simple Geiger counter. They do not detect every particle, since some radiation can pass through without producing enough ionization to be detected. However, Geiger counters are very useful in producing a prompt output that reveals the existence and relative intensity of ionizing radiation.
Another radiation detection method records light produced when radiation interacts with materials. The energy of the radiation is sufficient to excite atoms in a material that may fluoresce, such as the phosphor used by Rutherford’s group. Materials called scintillators use a more complex collaborative process to convert radiation energy into light. Scintillators may be liquid or solid, and they can be very efficient. Their light output can provide information about the energy, charge, and type of radiation. Scintillator light flashes are very brief in duration, enabling the detection of a huge number of particles in short periods of time. Scintillator detectors are used in a variety of research and diagnostic applications. Among these are the detection by satellite-mounted equipment of the radiation from distant galaxies, the analysis of radiation from a person indicating body burdens, and the detection of exotic particles in accelerator laboratories.
Light from a scintillator is converted into electrical signals by devices such as the photomultiplier tube shown schematically in Figure \(\PageIndex{3}\). These tubes are based on the photoelectric effect, which is multiplied in stages into a cascade of electrons, hence the name photomultiplier. Light entering the photomultiplier strikes a metal plate, ejecting an electron that is attracted by a positive potential difference to the next plate, giving it enough energy to eject two or more electrons, and so on. The final output current can be made proportional to the energy of the light entering the tube, which is in turn proportional to the energy deposited in the scintillator. Very sophisticated information can be obtained with scintillators, including energy, charge, particle identification, direction of motion, and so on.
Solid-state radiation detectors convert ionization produced in a semiconductor (like those found in computer chips) directly into an electrical signal. Semiconductors can be constructed that do not conduct current in one particular direction. When a voltage is applied in that direction, current flows only when ionization is produced by radiation, similar to what happens in a Geiger tube. Further, the amount of current in a solid-state detector is closely related to the energy deposited and, since the detector is solid, it can have a high efficiency (since ionizing radiation is stopped in a shorter distance in solids, fewer particles escape detection). As with scintillators, very sophisticated information can be obtained from solid-state detectors.
PHET EXPLORATIONS: RADIOACTIVE DATING GAME
Learn about different types of radiometric dating, such as carbon dating with the PhET Radioactive Dating Game . Understand how decay and half life work to enable radiometric dating to work. Play a game that tests your ability to match the percentage of the dating element that remains to the age of the object.
Summary
- Radiation detectors are based directly or indirectly upon the ionization created by radiation, as are the effects of radiation on living and inert materials.
Glossary
- Geiger tube
- a very common radiation detector that usually gives an audio output
- photomultiplier
- a device that converts light into electrical signals
- radiation detector
- a device that is used to detect and track the radiation from a radioactive reaction
- scintillators
- a radiation detection method that records light produced when radiation interacts with materials
- solid-state radiation detectors
- semiconductors fabricated to directly convert incident radiation into electrical current
31.3: Substructure of the Nucleus
Learning Objectives
By the end of this section, you will be able to:
- Define and discuss the nucleus in an atom.
- Define atomic number.
- Define and discuss isotopes.
- Calculate the density of the nucleus.
- Explain nuclear force.
What is inside the nucleus? Why are some nuclei stable while others decay? (Figure \(\PageIndex{1}\)) Why are there different types of decay (\(\alpha\), \(\beta\) and \(\gamma\))? Why are nuclear decay energies so large? Pursuing natural questions like these has led to far more fundamental discoveries than you might imagine.
We have already identified protons as the particles that carry positive charge in the nuclei. However, there are actually two types of particles in the nuclei—the proton and the neutron, referred to collectively as nucleons, the constituents of nuclei. As its name implies, the neutron is a neutral particle (\(q = 0\)) that has nearly the same mass and intrinsic spin as the proton. Table \(\PageIndex{1}\) compares the masses of protons, neutrons, and electrons. Note how close the proton and neutron masses are, but the neutron is slightly more massive once you look past the third digit. Both nucleons are much more massive than an electron. In fact, \(m_p = 1836 \, m_e\) (as noted in Medical Applications of Nuclear Physics) and \(m_n = 1839 \, m_e\).
| Particle | Symbol | kg | u | \(MeV/c^2\) |
|---|---|---|---|---|
| Proton | p | \(1.67262 \times 10^{-27}\) | 1.007276 | 938.27 |
| Neutron | n | \(1.67493 \times 10^{-27}\) | 1.008665 | 939.57 |
| Electron | e | \(9.1094 \times 10^{-31}\) | 0.00054858 | 0.511 |
Table \(\PageIndex{1}\) also gives masses in terms of mass units that are more convenient than kilograms on the atomic and nuclear scale. The first of these is the unified atomic mass unit (u), defined as
\[1 \, u = 1.6605 \times 10^{-27} \, kg\]
This unit is defined so that a neutral carbon \(^{12}C\) atom has a mass of exactly 12 u. Masses are also expressed in units of \(MeV/c^2\). These units are very convenient when considering the conversion of mass into energy (and vice versa), as is so prominent in nuclear processes. Using \(E = mc^2\) and units of \(m\) in \(MeV/c^2\), we find that \(c^2\) cancels and \(E\) comes out conveniently in MeV. For example, if the rest mass of a proton is converted entirely into energy, then
\[E = mc^2 = (938.27 \, MeV/c^2)c^2 = 938.27 \, MeV.\]
It is useful to note that 1 u of mass converted to energy produces 931.5 MeV, or
\[1 \, u = 931.5 \, MeV/c^2.\]
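These conversions are easy to check numerically. The short Python sketch below uses the values quoted above (\(1 \, u = 1.6605 \times 10^{-27} \, kg\) and 931.5 \(MeV/c^2\) per u) together with the proton mass from Table \(\PageIndex{1}\):

```python
# Quick check of the mass-energy conversions quoted in the text.
# E = m c^2 with m in MeV/c^2 gives E directly in MeV; with m in u,
# multiply by 931.5 MeV/c^2 per u.

U_TO_MEV = 931.5           # MeV/c^2 per unified atomic mass unit
U_TO_KG = 1.6605e-27       # kg per u

proton_mass_u = 1.007276   # from the table above
print(f"Proton rest energy: {proton_mass_u * U_TO_MEV:.2f} MeV")   # about 938.3 MeV, matching the table
print(f"Proton mass: {proton_mass_u * U_TO_KG:.5e} kg")            # about 1.6726e-27 kg, matching the table
```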
All properties of a nucleus are determined by the number of protons and neutrons it has. A specific combination of protons and neutrons is called a nuclide and is a unique nucleus. The following notation is used to represent a particular nuclide:
\[_Z^AX_N,\]
where the symbols \(A, \, X, \, Z\) and \(N\) are defined as follows: The number of protons in a nucleus is the atomic number \(Z\), as defined in Medical Applications of Nuclear Physics . X is the symbol for the element , such as Ca for calcium. However, once \(Z\) is known, the element is known; hence, \(Z\) and \(X\) are redundant. For example, \(Z = 20\) is always calcium, and calcium always has \(Z = 20\). \(N\) is the number of neutrons in a nucleus. In the notation for a nuclide, the subscript \(N\) is usually omitted. The symbol \(A\) is defined as the number of nucleons or the total number of protons and neutrons ,
\[A = N + Z,\]
where \(A\) is also called the mass number . This name for \(A\) is logical; the mass of an atom is nearly equal to the mass of its nucleus, since electrons have so little mass. The mass of the nucleus turns out to be nearly equal to the sum of the masses of the protons and neutrons in it, which is proportional to \(A\). In this context, it is particularly convenient to express masses in units of u. Both protons and neutrons have masses close to 1 u, and so the mass of an atom is close to \(A\) u. For example, in an oxygen nucleus with eight protons and eight neutrons, \(A = 16\), and its mass is 16 u. As noticed, the unified atomic mass unit is defined so that a neutral carbon atom (actually a \(^{12}C\) atom) has a mass of exactly 12 u. Carbon was chosen as the standard, partly because of its importance in organic chemistry (see Appendix A ).
Let us look at a few examples of nuclides expressed in the \(_Z^AX_N\) notation. The nucleus of the simplest atom, hydrogen, is a single proton, or \(_1^1H_0\) (the zero for no neutrons is often omitted). To check this symbol, refer to the periodic table—you see that the atomic number \(Z\) of hydrogen is 1. Since you are given that there are no neutrons, the mass number \(A\) is also 1. Suppose you are told that the helium nucleus or \(\alpha\) particle has two protons and two neutrons. You can then see that it is written \(_2^4He_2\). There is a scarce form of hydrogen found in nature called deuterium; its nucleus has one proton and one neutron and, hence, twice the mass of common hydrogen. The symbol for deuterium is, thus, \(_1^2H_1\) (sometimes \(D\) is used, as for deuterated water \(D_2O\)). An even rarer—and radioactive—form of hydrogen is called tritium, since it has a single proton and two neutrons, and it is written \(_1^3H_2\). These three varieties of hydrogen have nearly identical chemistries, but the nuclei differ greatly in mass, stability, and other characteristics. Nuclei (such as those of hydrogen) having the same \(Z\) and different \(N\)s are defined to be isotopes of the same element.
There is some redundancy in the symbols \(A, \, X, \, Z,\) and \(N\). If the element \(X\) is known, then \(Z\) can be found in a periodic table and is always the same for a given element. If both \(A\) and \(X\) are known, then \(N\) can also be determined (first find \(Z\); then, \(N = A - Z\)). Thus the simpler notation for nuclides is
\[^AX,\]
which is sufficient and is most commonly used. For example, in this simpler notation, the three isotopes of hydrogen are \(^1H, \, ^2H\) and \(^3H\), while the \(\alpha\)-particle is \(^4He\). We read this backward, saying helium-4 for \(^4He\), or uranium-238 for \(^{238}U\). So for \(^{238}U\), should we need to know, we can determine from the periodic table that \(Z = 92\) for uranium, and, thus, \(N = 238 - 92 = 146\).
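The bookkeeping \(N = A - Z\) is simple to automate. The sketch below assumes a small, illustrative lookup table of atomic numbers; a real application would use a complete periodic table:

```python
# Sketch: recover N from the compact notation (mass number A plus element symbol).
# The atomic-number lookup below is deliberately tiny and illustrative.

ATOMIC_NUMBER = {"H": 1, "He": 2, "C": 6, "O": 8, "Ca": 20, "U": 92}

def neutron_number(symbol, mass_number):
    """Return N = A - Z for a nuclide such as U with A = 238."""
    Z = ATOMIC_NUMBER[symbol]
    return mass_number - Z

print(neutron_number("U", 238))   # 146, as worked out in the text
print(neutron_number("He", 4))    # 2 neutrons in the alpha particle
```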
A variety of experiments indicate that a nucleus behaves something like a tightly packed ball of nucleons, as illustrated in Figure \(\PageIndex{2}\). These nucleons have large kinetic energies and, thus, move rapidly in very close contact. Nucleons can be separated by a large force, such as in a collision with another nucleus, but resist strongly being pushed closer together. The most compelling evidence that nucleons are closely packed in a nucleus is that the radius of a nucleus, \(r\), is found to be given approximately by
\[r = r_0A^{1/3},\] where \(r_0 = 1.2 \, fm\) and \(A\) is the mass number of the nucleus. Note that \(r^3 \propto A\). Since many nuclei are spherical, and the volume of a sphere is \(V = (4/3)\pi r^3\), we see that \(V \propto A\) —that is, the volume of a nucleus is proportional to the number of nucleons in it. This is what would happen if you pack nucleons so closely that there is no empty space between them.
Nucleons are held together by nuclear forces and resist both being pulled apart and pushed inside one another. The volume of the nucleus is the sum of the volumes of the nucleons in it, here shown in different colors to represent protons and neutrons.
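The radius rule \(r = r_0 A^{1/3}\) and its consequence that nuclear volume is proportional to \(A\) can be illustrated with a few lines of Python (using \(r_0 = 1.2 \, fm\) from above):

```python
import math

# Nuclear radius from r = r0 * A**(1/3), with r0 = 1.2 fm as given above.
R0_FM = 1.2

def nuclear_radius_fm(A):
    """Approximate nuclear radius in femtometers for mass number A."""
    return R0_FM * A ** (1 / 3)

def nuclear_volume_fm3(A):
    """Volume of a spherical nucleus in fm^3; note it is proportional to A."""
    return (4 / 3) * math.pi * nuclear_radius_fm(A) ** 3

print(f"r for A = 56: {nuclear_radius_fm(56):.1f} fm")    # about 4.6 fm, as in the example below
print(f"V(A=56)/V(A=28) = {nuclear_volume_fm3(56) / nuclear_volume_fm3(28):.1f}")  # 2.0, i.e. proportional to A
```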
Example \(\PageIndex{1}\): How Small and Dense Is a Nucleus?
- Find the radius of an iron-56 nucleus.
- Find its approximate density in \(kg/m^3\), approximating the mass of \(^{56}Fe\) to be 56 u.
Strategy and Concept
- Finding the radius of \(^{56}Fe\) is a straightforward application of \(r = r_0A^{1/3}\), given \(A = 56\).
- To find the approximate density, we assume the nucleus is spherical (this one actually is), calculate its volume using the radius found in part (a), and then find its density from \(\rho = m/V\). Finally, we will need to convert density from units of \(u/fm^3\) to \(kg/m^3\).
Solution
(a) The radius of a nucleus is given by \[r = r_0A^{1/3}.\nonumber\]
Substituting the values for \(r_0\) and \(A\) yields
\[r = (1.2 \, fm)(56)^{1/3} = (1.2 \, fm)(3.83) = 4.6 \, fm.\nonumber\]
(b) Density is defined to be \(\rho = m/V\), which for a sphere of radius \(r\) is
\[\rho = \dfrac{m}{V} = \dfrac{m}{(4/3)\pi r^3}. \nonumber\]
Substituting the mass (56 u) and the radius found in part (a) gives
\[\rho = \dfrac{56 \, u}{(4/3)\pi (4.6 \, fm)^3} = 0.138 \, u/fm^3. \nonumber\]
Converting to units of \(kg/m^3\), we find
\[ \begin{align*} \rho &= (0.138 \, u/fm^3)(1.66 \times 10^{-27} \, kg/u)\left(\dfrac{1 \, fm}{10^{-15} \, m}\right)^3 \\[5pt] &= 2.3 \times 10^{17} \, kg/m^3. \end{align*} \]
Discussion
- The radius of this medium-sized nucleus is found to be approximately 4.6 fm, and so its diameter is about 10 fm, or \(10^{-14} \, m\). In our discussion of Rutherford’s discovery of the nucleus, we noticed that it is about \(10^{-15} \, m\) in diameter (which is for lighter nuclei), consistent with this result to an order of magnitude. The nucleus is much smaller in diameter than the typical atom, which has a diameter of the order of \(10^{-10} \, m\).
- The density found here is so large as to cause disbelief. It is consistent with earlier discussions we have had about the nucleus being very small and containing nearly all of the mass of the atom. Nuclear densities, such as found here, are about \(2 \times 10^{14}\) times greater than that of water, which has a density of “only” \(10^3 \, kg/m^3\). One cubic meter of nuclear matter, such as found in a neutron star, has the same mass as a cube of water 61 km on a side.
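The arithmetic in this example can be re-checked with a few lines of Python, using the same approximations (a spherical nucleus of radius \(r_0 A^{1/3}\) and a mass of 56 u):

```python
import math

# Re-check of the iron-56 density estimate from the example above.
U_TO_KG = 1.6605e-27   # kg per u
FM_TO_M = 1e-15        # meters per femtometer

A = 56
radius_m = 1.2 * FM_TO_M * A ** (1 / 3)          # r = r0 * A^(1/3)
mass_kg = A * U_TO_KG                            # approximate the nuclear mass as 56 u
volume_m3 = (4 / 3) * math.pi * radius_m ** 3

print(f"radius  = {radius_m / FM_TO_M:.1f} fm")          # about 4.6 fm
print(f"density = {mass_kg / volume_m3:.1e} kg/m^3")     # about 2.3e17 kg/m^3
```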
Nuclear Forces and Stability
What forces hold a nucleus together? The nucleus is very small and its protons, being positive, exert tremendous repulsive forces on one another. (The Coulomb force increases as charges get closer, since it is proportional to \(1/r^2\), even at the tiny distances found in nuclei.) The answer is that two previously unknown forces hold the nucleus together and make it into a tightly packed ball of nucleons. These forces are called the weak and strong nuclear forces . Nuclear forces are so short ranged that they fall to zero strength when nucleons are separated by only a few fm. However, like glue, they are strongly attracted when the nucleons get close to one another. The strong nuclear force is about 100 times more attractive than the repulsive EM force, easily holding the nucleons together. Nuclear forces become extremely repulsive if the nucleons get too close, making nucleons strongly resist being pushed inside one another, something like ball bearings.
The fact that nuclear forces are very strong is responsible for the very large energies emitted in nuclear decay. During decay, the forces do work, and since work is force times the distance (\(W = Fd \, cos \, \theta\)), a large force can result in a large emitted energy. In fact, we know that there are two distinct nuclear forces because of the different types of nuclear decay—the strong nuclear force is responsible for \(\alpha\) decay, while the weak nuclear force is responsible for \(\beta\) decay.
The many stable and unstable nuclei we have explored, and the hundreds we have not discussed, can be arranged in a table called the chart of the nuclides , a simplified version of which is shown in Figure \(\PageIndex{3}\). Nuclides are located on a plot of \(N\) versus \(Z\). Examination of a detailed chart of the nuclides reveals patterns in the characteristics of nuclei, such as stability, abundance, and types of decay, analogous to but more complex than the systematics in the periodic table of the elements.
In principle, a nucleus can have any combination of protons and neutrons, but Figure \(\PageIndex{3}\) shows a definite pattern for those that are stable. For low-mass nuclei, there is a strong tendency for \(N\) and \(Z\) to be nearly equal. This means that the nuclear force is more attractive when \(N = Z\). More detailed examination reveals greater stability when \(N\) and \(Z\) are even numbers—nuclear forces are more attractive when neutrons and protons are in pairs. For increasingly higher masses, there are progressively more neutrons than protons in stable nuclei. This is due to the ever-growing repulsion between protons. Since nuclear forces are short ranged, and the Coulomb force is long ranged, an excess of neutrons keeps the protons a little farther apart, reducing Coulomb repulsion. Decay modes of nuclides out of the region of stability consistently produce nuclides closer to the region of stability. There are more stable nuclei having certain numbers of protons and neutrons, called magic numbers . Magic numbers indicate a shell structure for the nucleus in which closed shells are more stable. Nuclear shell theory has been very successful in explaining nuclear energy levels, nuclear decay, and the greater stability of nuclei with closed shells. We have been producing ever-heavier transuranic elements since the early 1940s, and we have now produced the element with \(Z = 118\). There are theoretical predictions of an island of relative stability for nuclei with such high \(Z\)s.
Summary
- Two particles, both called nucleons, are found inside nuclei. The two types of nucleons are protons and neutrons; they are very similar, except that the proton is positively charged while the neutron is neutral. Some of their characteristics are given in Table \(\PageIndex{1}\) and compared with those of the electron. A mass unit convenient to atomic and nuclear processes is the unified atomic mass unit (u), defined to be \[1 \, u = 1.6605 \times 10^{-27} \, kg = 931.46 \, MeV/c^2.\]
- A nuclide is a specific combination of protons and neutrons, denoted by \[_Z^AX_N \, or \, simply \, ^AX,\]
- \(Z\) is the number of protons or atomic number, X is the symbol for the element, \(N\), is the number of neutrons, and \(A\) is the mass number or the total number of protons and neutrons, \[A = N + Z.\]
- Nuclides having the same \(Z\) but different \(N\) are isotopes of the same element.
- The radius of a nucleus, \(r\), is approximately \[r = r_0A^{1/3},\] where \(r_0 = 1.2 \, fm\). Nuclear volumes are proportional to \(A\). There are two nuclear forces, the weak and the strong. Systematics in nuclear stability seen on the chart of the nuclides indicate that there are shell closures in nuclei for values of \(Z\) and \(N\) equal to the magic numbers, which correspond to highly stable nuclei.
Glossary
- atomic mass
- the total mass of the protons, neutrons, and electrons in a single atom
- atomic number
- number of protons in a nucleus
- chart of the nuclides
- a table comprising stable and unstable nuclei
- isotopes
- nuclei having the same \(Z\) and different \(N\)s
- magic numbers
- numbers of protons or neutrons for which nuclei are particularly stable, indicating a shell structure for the nucleus in which closed shells are more stable
- mass number
- number of nucleons in a nucleus
- neutron
- a neutral particle that is found in a nucleus
- nucleons
- the particles found inside nuclei
- nucleus
- a region consisting of protons and neutrons at the center of an atom
- nuclide
- a type of atom whose nucleus has specific numbers of protons and neutrons
- protons
- the positively charged nucleons found in a nucleus
- radius of a nucleus
- the radius of a nucleus is \(r = r_0 A^{1/3}\)
31.4: Nuclear Decay and Conservation Laws
Learning Objectives
By the end of this section, you will be able to:
- Define and discuss nuclear decay.
- State the conservation laws.
- Explain parent and daughter nucleus.
- Calculate the energy emitted during nuclear decay.
Nuclear decay has provided an amazing window into the realm of the very small. Nuclear decay gave the first indication of the connection between mass and energy, and it revealed the existence of two of the four basic forces in nature. In this section, we explore the major modes of nuclear decay; and, like those who first explored them, we will discover evidence of previously unknown particles and conservation laws.
Some nuclides are stable, apparently living forever. Unstable nuclides decay (that is, they are radioactive), eventually producing a stable nuclide after many decays. We call the original nuclide the parent and its decay products the daughters . Some radioactive nuclides decay in a single step to a stable nucleus. For example, \(\ce{^{60}Co}\) is unstable and decays directly to \(\ce{^{60}Ni}\), which is stable. Others, such as \(\ce{^{238}U}\), decay to another unstable nuclide, resulting in a decay series in which each subsequent nuclide decays until a stable nuclide is finally produced. The decay series that starts from \(\ce{^{238}U}\) is of particular interest, since it produces the radioactive isotopes \(\ce{^{226}Ra}\) and \(\ce{^{210}Po}\), which the Curies first discovered (Figure \(\PageIndex{1}\)). Radon gas is also produced (\(\ce{^{222}Rn}\) in the series), an increasingly recognized naturally occurring hazard. Since radon is a noble gas, it emanates from materials, such as soil, containing even trace amounts of \(\ce{^{238}U}\) and can be inhaled. The decay of radon and its daughters produces internal damage. The \(\ce{^{238}U}\) decay series ends with \(\ce{^{206}Pb}\), a stable isotope of lead.
Note that the daughters of \(\alpha\) decay shown in Figure \(\PageIndex{1}\) always have two fewer protons and two fewer neutrons than the parent. This seems reasonable, since we know that \(\alpha\) decay is the emission of a \(\ce{^4He}\) nucleus, which has two protons and two neutrons. The daughters of \(\beta\) decay have one less neutron and one more proton than their parent. Beta decay is a little more subtle, as we shall see. No \(\gamma\) decays are shown in the figure, because they do not produce a daughter that differs from the parent.
Alpha Decay
In alpha decay , a \(\ce{^4He}\) nucleus simply breaks away from the parent nucleus, leaving a daughter with two fewer protons and two fewer neutrons than the parent (Figure \(\PageIndex{2}\)). One example of \(\alpha\) decay is shown in Figure \(\PageIndex{2}\) for \(\ce{^{238}U}\). Another nuclide that undergoes \(\alpha\) decay is \(^{239}Pu\). The decay equations for these two nuclides are
\[\ce{^{238}U \rightarrow ^{234}Th + ^4He}\]
\[\ce{^{239}Pu \rightarrow ^{235}U + ^4He}.\]
If you examine the periodic table of the elements, you will find that Th has \(Z = 90\), two fewer than U, which has \(Z = 92\). Similarly, in the second decay equation, we see that U has two fewer protons than Pu, which has \(Z = 94\). The general rule for \(\alpha\) decay is best written in the format \(_Z^AX_N\). If a certain nuclide is known to \(\alpha\) decay (generally this information must be looked up in a table of isotopes, such as in Appendix B ), its \(\alpha\) decay equation is
\[\ce{_{Z}^{A}X_N \rightarrow _{Z- 2}^{A -4} Y_{N - 2} + _2^4 He_2} \, (\alpha \, decay)\]
where \(\ce{Y}\) is the nuclide that has two fewer protons than \(\ce{X}\), such as \(\ce{Th}\) having two fewer than \(\ce{U}\). So if you were told that \(^{239}Pu \, \alpha\) decays and were asked to write the complete decay equation, you would first look up which element has two fewer protons (an atomic number two lower) and find that this is uranium. Then since four nucleons have broken away from the original 239, its atomic mass would be 235.
It is instructive to examine conservation laws related to \(\alpha\) decay. You can see from the equation
\[\ce{_{Z}^{A} X_N \rightarrow _{Z- 2}^{A -4}Y_{N - 2} + _2^4He_2}\]
that total charge is conserved. Linear and angular momentum are conserved, too. Although conserved angular momentum is not of great consequence in this type of decay, conservation of linear momentum has interesting consequences. If the nucleus is at rest when it decays, its momentum is zero. In that case, the fragments must fly in opposite directions with equal-magnitude momenta so that total momentum remains zero. This results in the \(\alpha\) particle carrying away most of the energy, as a bullet from a heavy rifle carries away most of the energy of the powder burned to shoot it. Total mass–energy is also conserved: the energy produced in the decay comes from conversion of a fraction of the original mass. As discussed in the module on Atomic Physics, the general relationship is
\[E = (\Delta m)c^2. \label{einstein1}\]
Here, \(E\) is the nuclear reaction energy (the reaction can be nuclear decay or any other reaction), and \(\Delta m\) is the difference in mass between initial and final products. When the final products have less total mass, \(\Delta m\) is positive, and the reaction releases energy (is exothermic). When the products have greater total mass, the reaction is endothermic (\(\Delta m\) is negative) and must be induced with an energy input. For \(\alpha\) decay to be spontaneous, the decay products must have smaller mass than the parent.
Example \(\PageIndex{1}\): Alpha Decay Energy Found from Nuclear Masses
Find the energy emitted in the \(\alpha\) decay of \(\ce{^{239}Pu}\).
Strategy
Nuclear reaction energy, such as released in α decay, can be found using the equation \(E = (\Delta m)c^2\). We must first find \(\Delta m\), the difference in mass between the parent nucleus and the products of the decay. This is easily done using masses given in Appendix A .
Solution
The decay equation was given earlier for \(^{239}Pu\); it is
\[^{239}Pu \rightarrow \, ^{235}U + ^4He. \nonumber\]
Thus the pertinent masses are those of \(^{239}Pu\), \(^{235}U\), and the \(\alpha\) particle or \(^4He\), all of which are listed in Appendix A . The initial mass was \(m(^{239}Pu) = 239.052157 \, u\). The final mass is the sum:
\(m(^{235}U) + m(^4He) = 235.043924 \, u + 4.002602 \, u = 239.046526 \, u.\)
Thus,
\[ \begin{align*} \Delta m &= m(^{239}Pu) - [m(^{235}U) + m(^4He)] \\[5pt] &= 239.052157 \, u - 239.046526 \, u \\[5pt] &= 0.005631 \, u. \end{align*}\]
Now we can find \(E\) by entering \(\Delta m\) into Equation \ref{einstein1}:
\[E = (\Delta m)c^2 = (0.005631 \, u)c^2. \nonumber \]
We know \(1 \, u = 931.5 \, MeV/c^2\), and so
\[E = (0.005631)(931.5 \, MeV/c^2)(c^2) = 5.25 \, MeV. \nonumber\]
Discussion
The energy released in this \(\alpha\) decay is in the \(MeV\) range, about \(10^6\) times as great as typical chemical reaction energies, consistent with many previous discussions. Most of this energy becomes kinetic energy of the \(\alpha\) particle (or \(^4He\) nucleus), which moves away at high speed. The energy carried away by the recoil of the \(^{235}U\) nucleus is much smaller in order to conserve momentum. The \(^{235}U\) nucleus can be left in an excited state to later emit photons (\(\gamma\) rays). This decay is spontaneous and releases energy, because the products have less mass than the parent nucleus. The question of why the products have less mass will be discussed in Binding Energy . Note that the masses given in Appendix A are atomic masses of neutral atoms, including their electrons. The mass of the electrons is the same before and after \(α\) decay, and so their masses subtract out when finding \(\Delta m\). In this case, there are 94 electrons before and after the decay.
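The same arithmetic, \(E = (\Delta m)c^2\) with masses in u, can be wrapped in a small Python helper. The masses below are the values quoted in this example (taken from Appendix A):

```python
# Decay-energy arithmetic from the example above: E = (delta m) c^2, with
# masses in u and 1 u = 931.5 MeV/c^2.

U_TO_MEV = 931.5

def decay_energy_mev(parent_mass_u, product_masses_u):
    """Energy released (MeV) when a parent of the given atomic mass decays into the listed products."""
    delta_m = parent_mass_u - sum(product_masses_u)
    return delta_m * U_TO_MEV

# 239Pu -> 235U + 4He, using the atomic masses quoted in the example.
E_alpha = decay_energy_mev(239.052157, [235.043924, 4.002602])
print(f"239Pu alpha decay releases about {E_alpha:.2f} MeV")   # about 5.25 MeV
```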
Beta Decay
There are actually three types of beta decay . The first discovered was “ordinary” beta decay and is called \(\beta^-\) decay or electron emission. The symbol \(\beta^-\) represents an electron emitted in nuclear beta decay . Cobalt-60 is a nuclide that \(\beta^-\) decays in the following manner:
\[\ce{^{60}Co \rightarrow ^{60}Ni + \beta^{-} + } \text{neutrino}.\]
The neutrino is a particle emitted in beta decay that was unanticipated and is of fundamental importance. The neutrino was not even proposed in theory until more than 20 years after beta decay was known to involve electron emissions. Neutrinos are so difficult to detect that the first direct evidence of them was not obtained until 1953. Neutrinos are nearly massless, have no charge, and do not interact with nucleons via the strong nuclear force. Traveling approximately at the speed of light, they have little time to affect any nucleus they encounter. Because they have no charge (and they are not EM waves), they do not interact through the EM force. They do interact via the relatively weak and very short range weak nuclear force. Consequently, neutrinos escape almost any detector and penetrate almost any shielding. However, neutrinos do carry energy, angular momentum (they are fermions with half-integral spin), and linear momentum away from a beta decay. When accurate measurements of beta decay were made, it became apparent that energy, angular momentum, and linear momentum were not accounted for by the daughter nucleus and electron alone. Either a previously unsuspected particle was carrying them away, or three conservation laws were being violated. Wolfgang Pauli made a formal proposal for the existence of neutrinos in 1930. The Italian-born American physicist Enrico Fermi (1901–1954) gave neutrinos their name, meaning little neutral ones, when he developed a sophisticated theory of beta decay (Figure \(\PageIndex{3}\)). Part of Fermi’s theory was the identification of the weak nuclear force as being distinct from the strong nuclear force and in fact responsible for beta decay.
The neutrino also reveals a new conservation law. There are various families of particles, one of which is the electron family. We propose that the number of members of the electron family is constant in any process or any closed system. In our example of beta decay, there are no members of the electron family present before the decay, but after, there is an electron and a neutrino. So electrons are given an electron family number of \(+1\). The neutrino in \(\beta^-\) decay is an electron’s antineutrino , given the symbol \(\overline{\nu}_e\), where \(\nu\) is the Greek letter nu, and the subscript e means this neutrino is related to the electron. The bar indicates this is a particle of antimatter . (All particles have antimatter counterparts that are nearly identical except that they have the opposite charge. Antimatter is almost entirely absent on Earth, but it is found in nuclear decay and other nuclear and particle reactions as well as in outer space.) The electron’s antineutrino \(\overline{\nu}_e\), being antimatter, has an electron family number of \(-1\). The total is zero, before and after the decay. The new conservation law, obeyed in all circumstances, states that the total electron family number is constant . An electron cannot be created without also creating an antimatter family member. This law is analogous to the conservation of charge in a situation where total charge is originally zero, and equal amounts of positive and negative charge must be created in a reaction to keep the total zero.
If a nuclide \(_Z^AX_N\) is known to \(\beta^-\) decay, then its \(\beta^-\) decay equation is
\[_Z^AX_N \rightarrow _{Z+1}^AY_{N-1} + \beta^- + \overline{\nu}_e (\beta^- \, decay),\]
where Y is the nuclide having one more proton than X (Figure \(\PageIndex{4}\)). So if you know that a certain nuclide \(\beta^-\) decays, you can find the daughter nucleus by first looking up \(Z\) for the parent and then determining which element has atomic number \(Z + 1\). In the example of the \(\beta^-\) decay of \(^{60}Co\) given earlier, we see that \(Z = 27\) for Co and \(Z = 28\) is Ni. It is as if one of the neutrons in the parent nucleus decays into a proton, electron, and neutrino. In fact, neutrons outside of nuclei do just that—they live only an average of a few minutes and \(\beta^-\) decay in the following manner:
\[n \rightarrow p + \beta^- + \overline{\nu}_e.\]
We see that charge is conserved in \(\beta^-\) decay, since the total charge is \(Z\) before and after the decay. For example, in \(^{60}Co\) decay, total charge is 27 before decay, since cobalt has \(Z = 27\). After decay, the daughter nucleus is Ni, which has \(Z = 28\), and there is an electron, so that the total charge is also \(28 + (-1)\) or 27. Angular momentum is conserved, but not obviously (you have to examine the spins and angular momenta of the final products in detail to verify this). Linear momentum is also conserved, again imparting most of the decay energy to the electron and the antineutrino, since they are of low and zero mass, respectively. Another new conservation law is obeyed here and elsewhere in nature. The total number of nucleons \(A\) is conserved. In \(^{60}Co\) decay, for example, there are 60 nucleons before and after the decay. Note that total \(A\) is also conserved in \(\alpha\) decay. Also note that the total number of protons changes, as does the total number of neutrons, so that total \(Z\) and total \(N\) are not conserved in \(\beta^-\) decay, as they are in \(\alpha\) decay. Energy released in \(\beta^-\) decay can be calculated given the masses of the parent and products.
Example \(\PageIndex{1}\): \(\beta^-\) Decay Energy from Masses
Find the energy emitted in the \(\beta^-\) decay of \(^{60}Co\).
Strategy and Concept
As in the preceding example, we must first find \(\Delta m\), the difference in mass between the parent nucleus and the products of the decay, using masses given in Appendix A . Then the emitted energy is calculated as before, using \(E = (\Delta m)c^2\). The initial mass is just that of the parent nucleus, and the final mass is that of the daughter nucleus and the electron created in the decay. The neutrino is massless, or nearly so. However, since the masses given in Appendix A are for neutral atoms, the daughter nucleus has one more electron than the parent, and so the extra electron mass that corresponds to the \(\beta^-\) is included in the atomic mass of Ni. Thus, \[ \Delta m = m(^{60}Co) - m(^{60}Ni ).\]
Solution
The \(\beta^-\) decay equation for \(^{60}Co\) is
\[_{27}^{60}Co_{33} \rightarrow _{28}^{60}Ni_{32} + \beta^- + \overline{\nu}_e.\]
As noted,
\[\Delta m = m(^{60}Co) - m(^{60}Ni ).\]
Entering the masses found in Appendix A gives
\[\Delta m = 59.933820 \, u - 59.930789 \, u = 0.003031 \, u. \nonumber\]
Thus,
\[ E = (\Delta m)c^2 = (0.003031 \, u)c^2. \nonumber\]
Using \(1 \, u = 931.5 \, MeV/c^2\), we obtain
\[E = (0.003031)(931.5 \, MeV/c^2)(c^2) = 2.82 \, MeV. \nonumber\]
Discussion and Implications
Perhaps the most difficult thing about this example is convincing yourself that the \(\beta^-\) mass is included in the atomic mass of \(^{60}Ni\). Beyond that are other implications. Again the decay energy is in the MeV range. This energy is shared by all of the products of the decay. In many \(^{60}Co\) decays, the daughter nucleus \(^{60}Ni\) is left in an excited state and emits photons (\(\gamma\) rays). Most of the remaining energy goes to the electron and neutrino, since the recoil kinetic energy of the daughter nucleus is small. One final note: the electron emitted in \(\beta^-\) decay is created in the nucleus at the time of decay.
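A short Python check of this \(\beta^-\) energy, using the atomic masses quoted in the example; because neutral-atom masses are used, the emitted electron's mass is already accounted for in \(m(^{60}Ni)\):

```python
# Beta-minus decay energy for 60Co -> 60Ni, as in the example above. With
# neutral-atom masses, the daughter already carries one extra electron, so
# delta m is simply m(60Co) - m(60Ni).

U_TO_MEV = 931.5

m_Co60 = 59.933820   # u, quoted in the example (Appendix A)
m_Ni60 = 59.930789   # u

delta_m = m_Co60 - m_Ni60
print(f"delta m = {delta_m:.6f} u")              # 0.003031 u
print(f"E       = {delta_m * U_TO_MEV:.2f} MeV") # about 2.82 MeV
```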
The second type of beta decay is less common than the first. It is \(\beta^+\) decay. Certain nuclides decay by the emission of a positive electron. This is antielectron or positron decay (Figure \(\PageIndex{5}\)).
The antielectron is often represented by the symbol \(e^+\), but in beta decay it is written as \(\beta^+\) to indicate the antielectron was emitted in a nuclear decay. Antielectrons are the antimatter counterpart to electrons, being nearly identical, having the same mass, spin, and so on, but having a positive charge and an electron family number of \(-1\). When a positron encounters an electron, there is a mutual annihilation in which all the mass of the antielectron-electron pair is converted into pure photon energy. (The reaction, \(e^+ + e^- \rightarrow \gamma + \gamma\), conserves electron family number as well as all other conserved quantities.) If a nuclide \(_Z^AX_N\) is known to \(\beta^+\) decay, then its \(\beta^+\) decay equation is
\[_Z^AX_N \rightarrow _{Z-1}^AY_{N+1} + \beta^+ + \nu_e (\beta^+ \, decay),\]
where Y is the nuclide having one less proton than X (to conserve charge) and \(\nu_e\) is the symbol for the electron’s neutrino, which has an electron family number of \(+1\). Since an antimatter member of the electron family (the \(\beta^+\)) is created in the decay, a matter member of the family (here the \(\nu_e\)) must also be created. Given, for example, that \(^{22}Na \, \beta^+\) decays, you can write its full decay equation by first finding that \(Z = 11\) for \(^{22}Na\), so that the daughter nuclide will have \(Z = 10\), the atomic number for neon. Thus the \(\beta^+\) decay equation for \(^{22}Na\) is
\[_{11}^{22}Na_{11} \rightarrow _{10}^{22}Ne_{12} + \beta^+ + \nu_e.\]
In \(\beta^+\) decay, it is as if one of the protons in the parent nucleus decays into a neutron, a positron, and a neutrino. Protons do not do this outside of the nucleus, and so the decay is due to the complexities of the nuclear force. Note again that the total number of nucleons is constant in this and any other reaction. To find the energy emitted in \(\beta^+\) decay, you must again count the number of electrons in the neutral atoms, since atomic masses are used. The daughter has one less electron than the parent, and one electron mass is created in the decay. Thus, in \(\beta^+\) decay,
\[\Delta m = m(parent) - [m(daughter) + 2m_e],\]
since we use the masses of neutral atoms.
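Because the two electron masses are easy to forget, it can help to encode the bookkeeping once. Below is a minimal Python sketch for \(\beta^+\) decay energies; the \(^{22}Na\) and \(^{22}Ne\) masses shown are approximate illustrative values and should be checked against Appendix A before being quoted.

```python
# Energy released in beta-plus decay, computed from neutral atomic masses.
# The neutral daughter atom has one fewer electron than the parent and a
# positron is created, so two electron masses are subtracted.

U_TO_MEV = 931.5       # MeV per unified mass unit
M_ELECTRON = 0.000549  # u, mass of an electron (or positron)

def beta_plus_energy(m_parent_u, m_daughter_u):
    """Decay energy in MeV: Delta m = m(parent) - [m(daughter) + 2 m_e]."""
    delta_m = m_parent_u - (m_daughter_u + 2.0 * M_ELECTRON)
    return delta_m * U_TO_MEV

# Approximate atomic masses for the Na-22 -> Ne-22 decay discussed above
m_Na22, m_Ne22 = 21.994437, 21.991385  # u (illustrative; see Appendix A)
print(f"E = {beta_plus_energy(m_Na22, m_Ne22):.2f} MeV")  # roughly 1.8 MeV
```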
Electron capture is the third type of beta decay. Here, a nucleus captures an inner-shell electron and undergoes a nuclear reaction that has the same effect as \(\beta^+\) decay. Electron capture is sometimes denoted by the letters EC. We know that electrons cannot reside in the nucleus, but this is a nuclear reaction that consumes the electron and occurs spontaneously only when the products have less mass than the parent plus the electron. If a nuclide \(_Z^AX_N\) is known to undergo electron capture, then its electron capture equation is
\[_Z^AX_N + e^- \rightarrow _{Z-1}^AY_{N+1} + \nu_e (electron \, capture, \, or \, EC).\]
Any nuclide that can \(\beta^+\) decay can also undergo electron capture (and often does both). The same conservation laws are obeyed for EC as for \(\beta^+\) decay. It is good practice to confirm these for yourself.
All forms of beta decay occur because the parent nuclide is unstable and lies outside the region of stability in the chart of nuclides. Those nuclides that have relatively more neutrons than those in the region of stability will \(\beta^-\) decay to produce a daughter with fewer neutrons, producing a daughter nearer the region of stability. Similarly, those nuclides having relatively more protons than those in the region of stability will \(\beta^+\) decay or undergo electron capture to produce a daughter with fewer protons, nearer the region of stability.
Gamma Decay
Gamma decay is the simplest form of nuclear decay - it is the emission of energetic photons by nuclei left in an excited state by some earlier process. Protons and neutrons in an excited nucleus are in higher orbitals, and they fall to lower levels by photon emission (analogous to electrons in excited atoms). Nuclear excited states have lifetimes typically of only about \(10^{-14}\) s, an indication of the great strength of the forces pulling the nucleons to lower states. The \(\gamma\) decay equation is simply
\[\ce{_{Z}^{A}X_{N}^{*} \rightarrow _{Z}^{A}X_N + \gamma_1 + \gamma_2 + . . .} (\gamma \, decay)\]
where the asterisk indicates the nucleus is in an excited state. There may be one or more \(\gamma\)s emitted, depending on how the nuclide de-excites. In radioactive decay, \(\gamma\) emission is common and is preceded by \(\alpha\) or \(\beta\) decay. For example, when \(^{60}Co \, \beta^-\) decays, it most often leaves the daughter nucleus in an excited state, written \(\ce{^{60}Ni*}\). Then the nickel nucleus quickly \(\gamma\) decays by the emission of two penetrating \(\gamma\)s.
\[\ce{^{60}Ni^{*} \rightarrow ^{60}Ni + \gamma_1 + \gamma_2.}\]
These are called cobalt \(\gamma\) rays, although they come from nickel—they are used for cancer therapy, for example. It is again constructive to verify the conservation laws for gamma decay. Finally, since \(\gamma\) decay does not change the nuclide to another species, it is not prominently featured in charts of decay series, such as that in Figure .
There are other types of nuclear decay, but they occur less commonly than \(\alpha\), \(\beta\), and \(\gamma\) decay. Spontaneous fission is the most important of the other forms of nuclear decay because of its applications in nuclear power and weapons. It is covered in the next chapter.
Summary
- When a parent nucleus decays, it produces a daughter nucleus following rules and conservation laws. There are three major types of nuclear decay, called alpha (\(\alpha\)), beta (\(\beta\)), and gamma (\(\gamma\)). The \(\alpha\) decay equation is \[_Z^AX_N \rightarrow _{Z-2}^{A-4}Y_{N-2} + _2^4He_2. \nonumber\]
- Nuclear decay releases an amount of energy \(E\) related to the mass destroyed \(\Delta m\) by \[E = (\Delta m)c^2. \nonumber\]
- There are three forms of beta decay. The \(\beta^-\) decay equation is \[_Z^AX_N \rightarrow _{Z+1}^AY_{N-1} + \beta^- + \overline{\nu}_e. \nonumber\]
- The \(\beta^+\) decay equation is \[_Z^AX_N \rightarrow _{Z-1}^AY_{N+1} + \beta^+ + \nu_e. \nonumber\]
- The electron capture equation is \[_Z^AX_N + e^- \rightarrow _{Z-1}^AY_{N+1} + \nu_e. \nonumber\]
- \(\beta^-\) is an electron, \(\beta^+\) is an antielectron or positron, \(\nu_e\) represents an electron’s neutrino, and \(\overline{\nu}_e\) is an electron’s antineutrino. In addition to all previously known conservation laws, two new ones arise—conservation of electron family number and conservation of the total number of nucleons. The \(\gamma\) decay equation is \[_Z^AX*_N \rightarrow _Z^AX_N + \gamma_1 + \gamma_2 + . . . \nonumber\] where \(\gamma\) is a high-energy photon originating in a nucleus.
Glossary
- parent
- the original nucleus before decay
- daughter
- the nucleus obtained when parent nucleus decays and produces another nucleus following the rules and the conservation laws
- positron
- the particle that results from positive beta decay; also known as an antielectron
- decay
- the process by which an atomic nucleus of an unstable atom loses mass and energy by emitting ionizing particles
- alpha decay
- type of radioactive decay in which an atomic nucleus emits an alpha particle
- beta decay
- type of radioactive decay in which an atomic nucleus emits a beta particle
- gamma decay
- type of radioactive decay in which an atomic nucleus emits a gamma ray (a high-energy photon)
- decay equation
- the equation to find out how much of a radioactive material is left after a given period of time
- nuclear reaction energy
- the energy created in a nuclear reaction
- neutrino
- an electrically neutral, weakly interacting elementary subatomic particle
- electron’s antineutrino
- antiparticle of electron’s neutrino
- positron decay
- type of beta decay in which a proton is converted to a neutron, releasing a positron and a neutrino
- antielectron
- another term for positron
- decay series
- process whereby subsequent nuclides decay until a stable nuclide is produced
- electron’s neutrino
- a subatomic elementary particle which has no net electric charge
- antimatter
- composed of antiparticles
- electron capture
- the process in which a proton-rich nuclide absorbs an inner atomic electron and simultaneously emits a neutrino
- electron capture equation
- equation representing the electron capture
31.5: Half-Life and Activity
Learning Objectives
By the end of this section, you will be able to:
- Define half-life.
- Define dating.
- Calculate age of old objects by radioactive dating.
Unstable nuclei decay. However, some nuclides decay faster than others. For example, radium and polonium, discovered by the Curies, decay faster than uranium. This means they have shorter lifetimes, producing a greater rate of decay. In this section we explore half-life and activity, the quantitative terms for lifetime and rate of decay.
Half-Life
Why use a term like half-life rather than lifetime? The answer can be found by examining Figure \(\PageIndex{1}\), which shows how the number of radioactive nuclei in a sample decreases with time. The time in which half of the original number of nuclei decay is defined as the half-life \(t_{1/2}\). Half of the remaining nuclei decay in the next half-life. Further, half of that amount decays in the following half-life. Therefore, the number of radioactive nuclei decreases from \(N\) to \(N/2\) in one half-life, then to \(N/4\) in the next, and to \(N/8\) in the next, and so on. If \(N\) is a large number, then many half-lives (not just two) pass before all of the nuclei decay. Nuclear decay is an example of a purely statistical process. A more precise definition of half-life is that each nucleus has a 50% chance of living for a time equal to one half-life \(t_{1/2}\). Thus, if \(N\) is reasonably large, half of the original nuclei decay in a time of one half-life. If an individual nucleus makes it through that time, it still has a 50% chance of surviving through another half-life. Even if it happens to make it through hundreds of half-lives, it still has a 50% chance of surviving through one more. The probability of decay is the same no matter when you start counting. This is like random coin flipping. The chance of heads is 50%, no matter what has happened before.
There is a tremendous range in the half-lives of various nuclides, from as short as \(10^{-23}\)s for the most unstable, to more than \(10^{16}\)y for the least unstable, or about 46 orders of magnitude. Nuclides with the shortest half-lives are those for which the nuclear forces are least attractive, an indication of the extent to which the nuclear force can depend on the particular combination of neutrons and protons. The concept of half-life is applicable to other subatomic particles, as will be discussed in Particle Physics. It is also applicable to the decay of excited states in atoms and nuclei. The following equation gives the quantitative relationship between the original number of nuclei present at time zero \((N_0)\) and the number (\(N\)) at a later time \(t\).
\[N = N_0e^{-\lambda t},\] where \(e = 2.71828 . . .\) is the base of the natural logarithm, and \(\lambda\) is the decay constant for the nuclide. The shorter the half-life, the larger is the value of \(\lambda\) and the faster the exponential \(e^{-\lambda t}\) decreases with time. The relationship between the decay constant \(\lambda\) and the half-life \(t_{1/2}\) is
\[\lambda = \dfrac{ln(2)}{t_{1/2}} \approx \dfrac{0.693}{t_{1/2}}.\]
To see how the number of nuclei declines to half its original value in one half-life, let \(t = t_{1/2}\) in the exponential in the equation \(N = N_0e^{-\lambda t}\). This gives
\[N = N_0e^{-\lambda t} = N_0e^{-0.693} = 0.500 N_0.\] For integral numbers of half-lives, you can just divide the original number by 2 over and over again, rather than using the exponential relationship. For example, if ten half-lives have passed, we divide \(N\) by 2 ten times. This reduces it to \(N/1024\). For an arbitrary time, not just a multiple of the half-life, the exponential relationship must be used.
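The agreement between repeated halving and the exponential formula is easy to confirm numerically. Here is a brief Python sketch (the starting number and half-life are arbitrary illustrations):

```python
import math

def n_remaining(N0, t, t_half):
    """Nuclei remaining after time t, with t and t_half in the same units."""
    lam = math.log(2) / t_half      # decay constant
    return N0 * math.exp(-lam * t)

N0, t_half = 1.0e6, 5730.0          # e.g. a million C-14 nuclei
print(n_remaining(N0, 10 * t_half, t_half))  # exponential formula
print(N0 / 2**10)                            # ten successive halvings
# Both print about 976.56, i.e. N0/1024.
```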
Radioactive dating is a clever use of naturally occurring radioactivity. Its most famous application is carbon-14 dating. Carbon-14 has a half-life of 5730 years and is produced in a nuclear reaction induced when cosmic-ray neutrons strike \(^{14}N\) in the atmosphere. Radioactive carbon has the same chemistry as stable carbon, and so it mixes into the ecosphere, where it is consumed and becomes part of every living organism. Carbon-14 has an abundance of 1.3 parts per trillion of normal carbon. Thus, if you know the number of carbon nuclei in an object (perhaps determined by mass and Avogadro’s number), you multiply that number by \(1.3 \times 10^{-12}\) to find the number of \(^{14}C\) nuclei in the object. When an organism dies, carbon exchange with the environment ceases, and \(^{14}C\) is not replenished as it decays. By comparing the abundance of \(^{14}C\) in an artifact, such as mummy wrappings, with the normal abundance in living tissue, it is possible to determine the artifact’s age (or time since death). Carbon-14 dating can be used for biological tissues as old as 50 or 60 thousand years, but is most accurate for younger samples, since the abundance of \(^{14}C\) nuclei in them is greater. Very old biological materials contain no \(^{14}C\) at all. There are instances in which the date of an artifact can be determined by other means, such as historical knowledge or tree-ring counting. These cross-references have confirmed the validity of carbon-14 dating and permitted us to calibrate the technique as well. Carbon-14 dating revolutionized parts of archaeology and is of such importance that it earned the 1960 Nobel Prize in chemistry for its developer, the American chemist Willard Libby (1908–1980).
One of the most famous cases of carbon-14 dating involves the Shroud of Turin, a long piece of fabric purported to be the burial shroud of Jesus (Figure \(\PageIndex{2}\)). This relic was first displayed in Turin in 1354 and was denounced as a fraud at that time by a French bishop. Its remarkable negative imprint of an apparently crucified body resembles the then-accepted image of Jesus, and so the shroud was never disregarded completely and remained controversial over the centuries. Carbon-14 dating was not performed on the shroud until 1988, when the process had been refined to the point where only a small amount of material needed to be destroyed. Samples were tested at three independent laboratories, each being given four pieces of cloth, with only one unidentified piece from the shroud, to avoid prejudice. All three laboratories found samples of the shroud contain 92% of the \(^{14}C\) found in living tissues, allowing the shroud to be dated (Example \(\PageIndex{1}\)).
Example \(\PageIndex{1}\): How Old Is the Shroud of Turin?
Calculate the age of the Shroud of Turin given that the amount of \(^{14}C\) found in it is 92% of that in living tissue.
Strategy
Knowing that 92% of the \(^{14}C\) remains means that \(N/N_0 = 0.92\). Therefore, the equation \(N = N_0e^{-\lambda t}\) can be used to find \(\lambda t\). We also know that the half-life of \(^{14}C\) is 5730 y, and so once \(\lambda t\) is known, we can use the equation \(\lambda = \frac{0.693}{t_{1/2}}\) to find \(\lambda\) and then find \(t\) as requested. Here, we postulate that the decrease in \(^{14}C\) is solely due to nuclear decay.
Solution
Solving the equation \(N = N_0e^{-\lambda t}\) for \(N/N_0\) gives
\[\dfrac{N}{N_0} = e^{-\lambda t}.\]
Thus,
\[0.92 = e^{-\lambda t}\]
Taking the natural logarithm of both sides of the equation yields
\[ln \, 0.92 = - \lambda t\]
so that
\[-0.0834 = -\lambda t.\]
Rearranging to isolate \(t\) gives
\[t = \dfrac{0.0834}{\lambda}.\]
Now, the equation \(\lambda = \frac{0.693}{t_{1/2}}\) can be used to find \(\lambda\) for \(^{14}C\). Solving for \(\lambda\) and substituting the known half-life gives
\[\lambda = \dfrac{0.693}{t_{1/2}} = \dfrac{0.693}{5730 \, y}.\]
We enter this value into the previous equation to find \(t\).
\[t = \dfrac{0.0834}{\frac{0.693}{5730 \, y}} = 690 \, y.\]
Discussion
This dates the material in the shroud to 1988–690 = a.d. 1300. Our calculation is only accurate to two digits, so that the year is rounded to 1300. The values obtained at the three independent laboratories gave a weighted average date of a.d. \(1320 \pm 60\). The uncertainty is typical of carbon-14 dating and is due to the small amount of \(^{14}C\) in living tissues, the amount of material available, and experimental uncertainties (reduced by having three independent measurements). It is meaningful that the date of the shroud is consistent with the first record of its existence and inconsistent with the period in which Jesus lived.
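The algebra of this example can be packaged as a short, reusable calculation. The following Python sketch (function and variable names are ours) reproduces the roughly 690-year result:

```python
import math

def age_from_fraction(fraction_remaining, t_half):
    """Elapsed time from the surviving fraction N/N0 and the half-life."""
    lam = math.log(2) / t_half           # decay constant
    return -math.log(fraction_remaining) / lam

t = age_from_fraction(0.92, 5730.0)      # years
print(f"t = {t:.0f} y")                  # prints t = 689 y, about 690 y
```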
There are other forms of radioactive dating. Rocks, for example, can sometimes be dated based on the decay of \(^{238}U\). The decay series for \(^{238}U\) ends with \(^{206}Pb\), so that the ratio of these nuclides in a rock is an indication of how long it has been since the rock solidified. The original composition of the rock, such as the absence of lead, must be known with some confidence. However, as with carbon-14 dating, the technique can be verified by a consistent body of knowledge. Since \(^{238}U\) has a half-life of \(4.5 \times 10^9\)y, it is useful for dating only very old materials, showing, for example, that the oldest rocks on Earth solidified about \(3.5 \times 10^9\) years ago.
Activity, the Rate of Decay
What do we mean when we say a source is highly radioactive? Generally, this means the number of decays per unit time is very high. We define activity \(R\) to be the rate of decay expressed in decays per unit time. In equation form, this is
\[R = \dfrac{\Delta N}{\Delta t}\] where \(\Delta N\) is the number of decays that occur in time \(\Delta t\). The SI unit for activity is one decay per second and is given the name becquerel (Bq) in honor of the discoverer of radioactivity. That is,
\[1 \, Bq = 1 \, decay/s.\]
Activity \(R\) is often expressed in other units, such as decays per minute or decays per year. One of the most common units for activity is the curie (Ci), defined to be the activity of 1 g of \(^{226}Ra\), in honor of Marie Curie’s work with radium. The definition of curie is
\[1 \, Ci = 3.70 \times 10^{10} \, Bq,\] or \(3.70 \times 10^{10}\) decays per second. A curie is a large unit of activity, while a becquerel is a relatively small unit. \(1 \, MBq \approx 27 \, \mu Ci\). In countries like Australia and New Zealand that adhere more to SI units, most radioactive sources, such as those used in medical diagnostics or in physics laboratories, are labeled in Bq or megabecquerel (MBq).
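Converting between becquerels and curies is just a matter of multiplying or dividing by \(3.70 \times 10^{10}\); a short Python sketch of such conversions:

```python
BQ_PER_CI = 3.70e10   # decays per second in one curie

def ci_to_bq(activity_ci):
    """Convert an activity from curies to becquerels."""
    return activity_ci * BQ_PER_CI

def bq_to_microcuries(activity_bq):
    """Convert an activity from becquerels to microcuries."""
    return activity_bq / BQ_PER_CI * 1.0e6

print(ci_to_bq(1.0))              # 3.7e10 Bq in one curie
print(bq_to_microcuries(1.0e6))   # 1 MBq is about 27 microcuries
```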
Intuitively, you would expect the activity of a source to depend on two things: the amount of the radioactive substance present, and its half-life. The greater the number of radioactive nuclei present in the sample, the more will decay per unit of time. The shorter the half-life, the more decays per unit time, for a given number of nuclei. So activity \(R\) should be proportional to the number of radioactive nuclei, \(N\), and inversely proportional to their half-life, \(t_{1/2}\). In fact, your intuition is correct. It can be shown that the activity of a source is
\[R = \dfrac{0.693 \, N}{t_{1/2}}\]
where \(N\) is the number of radioactive nuclei present, having half-life \(t_{1/2}\). This relationship is useful in a variety of calculations, as the next two examples illustrate.
Example \(\PageIndex{2}\): How Great is the \(^{14}C\) Activity in Living Tissue?
Calculate the activity due to \(^{14}C\) in 1.00 kg of carbon found in a living organism. Express the activity in units of Bq and Ci.
Strategy
To find the activity \(R\) using the equation \(R = \frac{0.693 N}{t_{1/2}}\), we must know \(N\) and \(t_{1/2}\). The half-life of \(^{14}C\) can be found in Appendix B , and was stated above as 5730 y. To find \(N\), we first find the number of \(^{12}C\) nuclei in 1.00 kg of carbon using the concept of a mole. As indicated, we then multiply by \(1.3 \times 10^{-12}\) (the abundance of \(^{14}C\) in a carbon sample from a living organism) to get the number of \(^{14}C\) nuclei in a living organism.
Solution
One mole of carbon has a mass of 12.0 g, since it is nearly pure \(^{12}C\). (A mole has a mass in grams equal in magnitude to \(A\) found in the periodic table.) Thus the number of carbon nuclei in a kilogram is
\[N(^{12}C) = \dfrac{6.02 \times 10^{23} \, mol^{-1}}{12.0 \, g/mol} \times (1000 \, g) = 5.02 \times 10^{25}. \nonumber\]
So the number of \(^{14}C\) nuclei in 1 kg of carbon is
\[N(^{14}C) = (5.02 \times 10^{25})(1.3 \times 10^{-12}) = 6.52 \times 10^{13}.\nonumber\]
Now the activity \(R\) is found using the equation \(R = \frac{0.693 N}{t_{1/2}}\). Entering known values gives
\[R = \dfrac{0.693(6.52 \times 10^{13})}{5730 \, y} = 7.89 \times 10^9 \, y^{-1},\nonumber\]
or \(7.89 \times 10^9\) decays per year. To convert this to the unit Bq, we simply convert years to seconds. Thus,
\[R = (7.89 \times 10^9 \, y^{-1}) \dfrac{1.00 \, y}{3.16 \times 10^7 \, s} = 250 \, Bq, \nonumber\]
or 250 decays per second. To express \(R\) in curies, we use the definition of a curie,
\[R = \dfrac{250 \, Bq}{3.7 \times 10^{10} \, Bq/Ci} = 6.76 \times 10^{-9} \, Ci.\nonumber\]
Thus,
\[R = 6.76 \, nCi.\nonumber\]
Discussion
Our own bodies contain kilograms of carbon, and it is intriguing to think there are hundreds of \(^{14}C\) decays per second taking place in us. Carbon-14 and other naturally occurring radioactive substances in our bodies contribute to the background radiation we receive. The small number of decays per second found for a kilogram of carbon in this example gives you some idea of how difficult it is to detect \(^{14}C\) in a small sample of material. If there are 250 decays per second in a kilogram, then there are 0.25 decays per second in a gram of carbon in living tissue. To observe this, you must be able to distinguish decays from other forms of radiation, in order to reduce background noise. This becomes more difficult with an old tissue sample, since it contains less \(^{14}C\), and for samples more than 50 thousand years old, it is impossible.
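The chain of steps in this example (moles, then isotopic abundance, then activity) translates directly into a short script. Here is a Python sketch using the same rounded constants as the worked solution:

```python
AVOGADRO = 6.02e23          # nuclei per mole
SECONDS_PER_YEAR = 3.16e7
BQ_PER_CI = 3.70e10

def c14_activity_bq(mass_carbon_g, abundance=1.3e-12, t_half_y=5730.0):
    """Activity (in Bq) of the C-14 contained in a sample of natural carbon."""
    n_carbon = AVOGADRO * mass_carbon_g / 12.0   # total carbon nuclei
    n_c14 = n_carbon * abundance                 # C-14 nuclei
    decays_per_year = 0.693 * n_c14 / t_half_y   # R = 0.693 N / t_half
    return decays_per_year / SECONDS_PER_YEAR

R = c14_activity_bq(1000.0)                      # 1.00 kg of carbon
print(f"{R:.0f} Bq = {R / BQ_PER_CI * 1e9:.2f} nCi")  # about 250 Bq, 6.75 nCi
```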
Human and Medical Applications
Human-made (or artificial) radioactivity has been produced for decades and has many uses. Some of these include medical therapy for cancer, medical imaging and diagnostics, and food preservation by irradiation. Many applications as well as the biological effects of radiation are explored in Medical Applications of Nuclear Physics , but it is clear that radiation is hazardous. A number of tragic examples of this exist, one of the most disastrous being the meltdown and fire at the Chernobyl reactor complex in the Ukraine (Figure \(\PageIndex{3}\)). Several radioactive isotopes were released in huge quantities, contaminating many thousands of square kilometers and directly affecting hundreds of thousands of people. The most significant releases were of \(^{131}I\), \(^{90}Sr\), \(^{137}Cs\), \(^{239}Pu\), \(^{238}U\), and \(^{235}U\). Estimates are that the total amount of radiation released was about 100 million curies.
Example \(\PageIndex{3}\): What Mass of \(^{137}Cs\) Escaped Chernobyl?
It is estimated that the Chernobyl disaster released 6.0 MCi of \(^{137}Cs\) into the environment. Calculate the mass of \(^{137}Cs\) released.
Strategy
We can calculate the mass released using Avogadro’s number and the concept of a mole if we can first find the number of nuclei \(N\) released. Since the activity \(R\) is given, and the half-life of \(^{137}Cs\) is found in Appendix B to be 30.2 y, we can use the equation \(R = \frac{0.693N}{t_{1/2}}\) to find \(N\).
Solution
Solving the equation \(R = \frac{0.693N}{t_{1/2}}\) for \(N\) gives
\[N = \dfrac{Rt_{1/2}}{0.693}.\]
Entering the given values yields
\[N = \dfrac{(6.0 \, MCi)(30.2 \, y)}{0.693}.\]
Converting curies to becquerels and years to seconds, we get
\[N = \dfrac{(6.0 \times 10^6 \, Ci)(3.7 \times 10^{10} \, Bq/Ci)(30.2 \, y)(3.16 \times 10^7 \, s/y)}{0.693} = 3.1 \times 10^{26}.\]
One mole of a nuclide \(^AX\) has a mass of \(A\) grams, so that one mole of \(^{137}Cs\) has a mass of 137 g. A mole has \(6.02 \times 10^{23}\) nuclei. Thus the mass of \(^{137}Cs\) released was
\[m = \left(\dfrac{137 \, g}{6.02 \times 10^{23}} \right)(3.1 \times 10^{26}) = 70 \times 10^3 \, g = 70 \, kg.\]
Discussion
While 70 kg of material may not be a very large mass compared to the amount of fuel in a power plant, it is extremely radioactive, since it only has a 30-year half-life. Six megacuries (6.0 MCi) is an extraordinary amount of activity but is only a fraction of what is produced in nuclear reactors. Similar amounts of the other isotopes were also released at Chernobyl. Although the chances of such a disaster may have seemed small, the consequences were extremely severe, requiring greater caution than was used. More will be said about safe reactor design in the next chapter, but it should be noted that Western reactors have a fundamentally safer design.
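For completeness, here is a Python sketch of the same mass-from-activity calculation, with the unit conversions kept explicit (the numbers are those given in the example; the function name is ours):

```python
AVOGADRO = 6.02e23          # nuclei per mole
SECONDS_PER_YEAR = 3.16e7
BQ_PER_CI = 3.70e10

def mass_from_activity_g(activity_ci, t_half_y, molar_mass_g):
    """Mass in grams of a nuclide with the given activity and half-life."""
    R = activity_ci * BQ_PER_CI                  # activity in decays/s
    t_half_s = t_half_y * SECONDS_PER_YEAR
    N = R * t_half_s / 0.693                     # nuclei, from R = 0.693 N / t_half
    return N * molar_mass_g / AVOGADRO

m = mass_from_activity_g(6.0e6, 30.2, 137.0)     # Cs-137 released at Chernobyl
print(f"{m / 1000:.0f} kg")                      # prints about 70 kg
```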
Activity \(R\) decreases in time, going to half its original value in one half-life, then to one-fourth its original value in the next half-life, and so on. Since \(R = \frac{0.693N}{t_{1/2}}\), the activity decreases as the number of radioactive nuclei decreases. The equation for \(R\) as a function of time is found by combining the equations \(N = N_0e^{-\lambda t}\) and \(R = \frac{0.693N}{t_{1/2}}\), yielding
\[R = R_0e^{-\lambda t},\]
where \(R_0\) is the activity at \(t = 0\). This equation shows exponential decay of radioactive nuclei. For example, if a source originally has a 1.00-mCi activity, it declines to 0.500 mCi in one half-life, to 0.250 mCi in two half-lives, to 0.125 mCi in three half-lives, and so on. For times other than whole half-lives, the equation \(R = R_0 e^{-\lambda t}\) must be used to find \(R\).
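A brief Python sketch tabulating this behavior for a hypothetical 1.00-mCi source:

```python
import math

def activity(R0, t, t_half):
    """Activity at time t, in the same units as R0; t and t_half share units."""
    return R0 * math.exp(-math.log(2) * t / t_half)

R0 = 1.00                       # mCi, initial activity
for n in range(4):              # n elapsed half-lives
    print(f"after {n} half-lives: {activity(R0, n, 1.0):.3f} mCi")
# prints 1.000, 0.500, 0.250, 0.125 mCi
```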
PHET EXPLORATIONS: ALPHA DECAY
Watch alpha particles escape from a polonium nucleus, causing radioactive alpha decay. See how random decay times relate to the half life.
Summary
- Half-life \(t_{1/2}\) is the time in which there is a 50% chance that a nucleus will decay. The number of nuclei \(N\) as a function of time is \[N = N_0e^{-\lambda t},\] where \(N_0\) is the number present at \(t = 0\), and \(\lambda\) is the decay constant, related to the half-life by \[\lambda = \dfrac{0.693}{t_{1/2}}.\]
- One of the applications of radioactive decay is radioactive dating, in which the age of a material is determined by the amount of radioactive decay that occurs. The rate of decay is called the activity \(R\): \[R = \dfrac{\Delta N}{\Delta t}.\]
- The SI unit for \(R\) is the becquerel (Bq), defined by \[1 \, Bq = 1 \, decay/s.\]
- \(R\) is also expressed in terms of curies (Ci), where \[1 \, Ci = 3.70 \times 10^{10} \, Bq.\]
- The activity \(R\) of a source is related to \(N\) and \(t_{1/2}\) by \[R = \dfrac{0.693N}{t_{1/2}}.\]
- Since \(N\) has an exponential behavior as in the equation \(N = N_0e^{-\lambda t}\), the activity also has an exponential behavior, given by \[R = R_0e^{-\lambda t},\] where \(R_0\) is the activity at \(t = 0\).
Glossary
- becquerel
- SI unit for rate of decay of a radioactive material
- half-life
- the time in which there is a 50% chance that a nucleus will decay
- radioactive dating
- an application of radioactive decay in which the age of a material is determined by the amount of radioactivity of a particular type that occurs
- decay constant
- quantity that is inversely proportional to the half-life and that is used in equation for number of nuclei as a function of time
- carbon-14 dating
- a radioactive dating technique based on the radioactivity of carbon-14
- activity
- the rate of decay for radioactive nuclides
- rate of decay
- the number of radioactive events per unit time
- curie
- the activity of 1g of \(^{226}Ra\), equal to \(3.70 \times 10^{10} \, Bq\)
31.6: Binding Energy
Learning Objectives
By the end of this section, you will be able to:
- Define and discuss binding energy.
- Calculate the binding energy per nucleon of a particle.
The more tightly bound a system is, the stronger the forces that hold it together and the greater the energy required to pull it apart. We can therefore learn about nuclear forces by examining how tightly bound the nuclei are. We define the binding energy (BE) of a nucleus to be the energy required to completely disassemble it into separate protons and neutrons . We can determine the BE of a nucleus from its rest mass. The two are connected through Einstein’s famous relationship \(E = (\Delta m)c^2\).
A bound system has a smaller mass than its separate constituents; the more tightly the nucleons are bound together, the smaller the mass of the nucleus.
Imagine pulling a nuclide apart as illustrated in Figure \(\PageIndex{1}\). Work done to overcome the nuclear forces holding the nucleus together puts energy into the system. By definition, the energy input equals the binding energy BE. The pieces are at rest when separated, and so the energy put into them increases their total rest mass compared with what it was when they were glued together as a nucleus. That mass increase is thus \(\Delta m = BE/c^2\). This difference in mass is known as mass defect. It implies that the mass of the nucleus is less than the sum of the masses of its constituent protons and neutrons. A nuclide \(^AX\) has \(Z\) protons and \(N\) neutrons, so that the difference in mass is
\[ \Delta m = (Zm_p + Nm_n) - m_{tot}.\] Thus,
\[BE = (∆m)c^2 = [(Zm_p + Nm_n) − m_{tot}]c^2.\]
where \(m_{tot}\) is the mass of the nuclide \(^AX\), \(m_p\) is the mass of a proton, and \(m_n\) is the mass of a neutron. Traditionally, we deal with the masses of neutral atoms. To get atomic masses into the last equation, we first add \(Z\) electrons to \(m_{tot}\) which gives \(m(^AX)\), the atomic mass of the nuclide. We then add \(Z\) electrons to the \(Z\) protons, which gives \(Zm(^1H)\), or \(Z\) times the mass of a hydrogen atom. Thus the binding energy of a nuclide \(^AX\) is
\[BE = [(Zm(^1H) + Nm_n) - m(^AX)]c^2 \label{BE}\]
The atomic masses can be found in Appendix A , most conveniently expressed in unified atomic mass units u \((1 \, u = 931.5 \, MeV/c^2)\). BE is thus calculated from known atomic masses.
Nuclear Decay Helps Explain Earth’s Hot Interior
A puzzle created by radioactive dating of rocks is resolved by radioactive heating of Earth’s interior. This intriguing story is another example of how small-scale physics can explain large-scale phenomena.
Radioactive dating plays a role in determining the approximate age of the Earth. The oldest rocks on Earth solidified about \(3.5 \times 10^9\) years ago—a number determined by uranium-238 dating. These rocks could only have solidified once the surface of the Earth had cooled sufficiently. The temperature of the Earth at formation can be estimated based on gravitational potential energy of the assemblage of pieces being converted to thermal energy. Using heat transfer concepts discussed in Thermodynamics it is then possible to calculate how long it would take for the surface to cool to rock-formation temperatures. The result is about \(10^9\) years. The first rocks formed have been solid for \(3.5 \times 10^9\) years, so that the age of the Earth is approximately \(4.5 \times 10^9\) years. There is a large body of other types of evidence (both Earth-bound and solar system characteristics are used) that supports this age. The puzzle is that, given its age and initial temperature, the center of the Earth should be much cooler than it is today (Figure \(\PageIndex{2}\)).
We know from seismic waves produced by earthquakes that parts of the interior of the Earth are liquid. Shear or transverse waves cannot travel through a liquid and are not transmitted through the Earth’s core. Yet compression or longitudinal waves can pass through a liquid and do go through the core. From this information, the temperature of the interior can be estimated. As noticed, the interior should have cooled more from its initial temperature in the \(4.5 \times 10^9\) years since its formation. In fact, it should have taken no more than about \(10^9\) years to cool to its present temperature. What is keeping it hot? The answer seems to be radioactive decay of primordial elements that were part of the material that formed the Earth (see the blowup in Figure \(\PageIndex{2}\)).
Nuclides such as \(^{238}U\) and \(^{40}K\) have half-lives similar to or longer than the age of the Earth, and their decay still contributes energy to the interior. Some of the primordial radioactive nuclides have unstable decay products that also release energy— \(\ce{^{238}U}\) has a long decay chain of these. Further, there were more of these primordial radioactive nuclides early in the life of the Earth, and thus the activity and energy contributed were greater then (perhaps by an order of magnitude). The amount of power created by these decays per cubic meter is very small. However, since a huge volume of material lies deep below the surface, this relatively small amount of energy cannot escape quickly. The power produced near the surface has much less distance to go to escape and has a negligible effect on surface temperatures.
A final effect of this trapped radiation merits mention. Alpha decay produces helium nuclei, which form helium atoms when they are stopped and capture electrons. Most of the helium on Earth is obtained from wells and is produced in this manner. Any helium in the atmosphere will escape in geologically short times because of its high thermal velocity.
What patterns and insights are gained from an examination of the binding energy of various nuclides? First, we find that BE is approximately proportional to the number of nucleons \(A\) in any nucleus. About twice as much energy is needed to pull apart a nucleus like \(^{24}Mg\) compared with pulling apart \(^{12}C\), for example. To help us look at other effects, we divide BE by \(A\) and consider the binding energy per nucleon , \(BE/A\). The graph of \(BE/A\) in Figure \(\PageIndex{3}\) reveals some very interesting aspects of nuclei. We see that the binding energy per nucleon averages about 8 MeV, but is lower for both the lightest and heaviest nuclei. This overall trend, in which nuclei with \(A\) equal to about 60 have the greatest \(BE/A\) and are thus the most tightly bound, is due to the combined characteristics of the attractive nuclear forces and the repulsive Coulomb force.
It is especially important to note two things—the strong nuclear force is about 100 times stronger than the Coulomb force, and the nuclear forces are shorter in range compared to the Coulomb force. So, for low-mass nuclei, the nuclear attraction dominates and each added nucleon forms bonds with all others, causing progressively heavier nuclei to have progressively greater values of \(BE/A\). This continues up to \(A \approx 60\), roughly corresponding to the mass number of iron. Beyond that, new nucleons added to a nucleus will be too far from some others to feel their nuclear attraction. Added protons, however, feel the repulsion of all other protons, since the Coulomb force is longer in range. Coulomb repulsion grows for progressively heavier nuclei, but nuclear attraction remains about the same, and so \(BE/A\) becomes smaller. This is why stable nuclei heavier than \(A \approx 40\) have more neutrons than protons. Coulomb repulsion is reduced by having more neutrons to keep the protons farther apart (Figure \(\PageIndex{4}\)).
There are some noticeable spikes on the \(BE/A\) graph, which represent particularly tightly bound nuclei. These spikes reveal further details of nuclear forces, such as confirming that closed-shell nuclei (those with magic numbers of protons or neutrons or both) are more tightly bound. The spikes also indicate that some nuclei with even numbers for \(Z\) and \(N\) and with \(Z = N\), are exceptionally tightly bound. This finding can be correlated with some of the cosmic abundances of the elements. The most common elements in the universe, as determined by observations of atomic spectra from outer space, are hydrogen, followed by \(^4He\) with much smaller amounts of \(^{12}C\) and other elements. It should be noted that the heavier elements are created in supernova explosions, while the lighter ones are produced by nuclear fusion during the normal life cycles of stars, as will be discussed in subsequent chapters. The most common elements have the most tightly bound nuclei. It is also no accident that one of the most tightly bound light nuclei is \(^4He\) emitted in \(\alpha\) decay.
Example \(\PageIndex{1}\): What Is \(BE/A\) for an Alpha Particle?
Calculate the binding energy per nucleon of \(^4He\) the \(\alpha\) particle.
Strategy
To find \(BE/A\) we first find BE using the Equation \(BE = [(Zm(^1H) + Nm_n) - m(^AX)]c^2\) and then divide by \(A\). This is straightforward once we have looked up the appropriate atomic masses in Appendix A .
Solution
The binding energy for a nucleus is given by the equation
\[BE = [(Zm(^1H) + Nm_n) - m(^AX)]c^2 \nonumber\]
For \(^4He\), we have \(Z = N = 2\); thus,
\[BE = [(2m(^1H) + 2m_n) - m(^4He)]c^2. \nonumber\]
Appendix A gives these masses as \(m(^4He) = 4.002602 \, u\), \(m(^1H) = 1.007825 \, u\), and \(m_n = 1.008665 \, u\). Thus
\[BE = (0.030378 \, u)c^2. \nonumber\]
Noting that \(1 \, u = 931.5 \, MeV/c^2\), we find
\[BE = (0.030378)(931.5 \, MeV/c^2)c^2 = 28.3 \, MeV. \nonumber\]
Since \(A = 4\), we see that \(BE/A\) is this number divided by 4, or
\[BE/A = 7.07 \, MeV/nucleon. \nonumber\]
Discussion
This is a large binding energy per nucleon compared with those for other low-mass nuclei, which have \(BE/A \approx 3 \, MeV/nucleon\). This indicates that \(^4He\) is tightly bound compared with its neighbors on the chart of the nuclides. You can see the spike representing this value of \(BE/A\) for \(^4He\) on the graph in Figure . This is why \(^4He\) is stable. Since \(^4He\) is tightly bound, it has less mass than other \(A = 4\) nuclei and, therefore, cannot spontaneously decay into them. The large binding energy also helps to explain why some nuclei undergo decay. Smaller mass in the decay products can mean energy release, and such decays can be spontaneous. Further, it can happen that two protons and two neutrons in a nucleus can randomly find themselves together, experience the exceptionally large nuclear force that binds this combination, and act as a \(^4He\) unit within the nucleus, at least for a while. In some cases, the \(^4He\) escapes, and \(\alpha\) decay has then taken place.
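As a numerical check, the following Python sketch reproduces the 7.07 MeV/nucleon figure from the masses quoted above (the helper function simply encodes the binding-energy equation in terms of atomic masses):

```python
U_TO_MEV = 931.5   # MeV per unified mass unit

def binding_energy_mev(Z, N, m_atom_u, m_H=1.007825, m_n=1.008665):
    """BE = [Z m(1H) + N m_n - m(AX)] c^2, with neutral atomic masses in u."""
    delta_m = Z * m_H + N * m_n - m_atom_u
    return delta_m * U_TO_MEV

be = binding_energy_mev(Z=2, N=2, m_atom_u=4.002602)   # He-4
print(f"BE = {be:.1f} MeV, BE/A = {be / 4:.2f} MeV/nucleon")
# prints BE = 28.3 MeV, BE/A = 7.07 MeV/nucleon
```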
There is more to be learned from nuclear binding energies. The general trend in \(BE/A\) is fundamental to energy production in stars, and to fusion and fission energy sources on Earth, for example. This is one of the applications of nuclear physics covered in Medical Applications of Nuclear Physics . The abundance of elements on Earth, in stars, and in the universe as a whole is related to the binding energy of nuclei and has implications for the continued expansion of the universe.
Problem-Solving Strategies: For Reaction, Binding Energies, & Activity Calculations
- Identify exactly what needs to be determined in the problem (identify the unknowns) . This will allow you to decide whether the energy of a decay or nuclear reaction is involved, for example, or whether the problem is primarily concerned with activity (rate of decay).
- Make a list of what is given or can be inferred from the problem as stated (identify the knowns).
- For reaction and binding-energy problems, we use atomic rather than nuclear masses. Since the masses of neutral atoms are used, you must count the number of electrons involved. If these do not balance (such as in \(\beta^+\) decay), then an energy adjustment of 0.511 MeV per electron must be made. Also note that atomic masses may not be given in a problem; they can be found in tables.
- For problems involving activity, the relationship of activity to half-life, and the number of nuclei given in the equation \(R = \frac{0.693N}{t_{1/2}}\) can be very useful. Owing to the fact that number of nuclei is involved, you will also need to be familiar with moles and Avogadro’s number.
- Perform the desired calculation; keep careful track of plus and minus signs as well as powers of 10.
- Check the answer to see if it is reasonable: Does it make sense? Compare your results with worked examples and other information in the text. (Heeding the advice in Step 5 will also help you to be certain of your result.) You must understand the problem conceptually to be able to determine whether the numerical result is reasonable.
PHET EXPLORATIONS: NUCLEAR FISSION
Download the PhET simulation and start a chain reaction, or introduce non-radioactive isotopes to prevent one. Control energy production in a nuclear reactor!
Summary
The binding energy (BE) of a nucleus is the energy needed to separate it into individual protons and neutrons. In terms of atomic masses, \[BE = [(Zm(^1H) + Nm_n) - m(^AX)]c^2, \nonumber\] where \(m(^1H)\) is the mass of a hydrogen atom, \(m(^AX)\) is the atomic mass of the nuclide, and \(m_n\) is the mass of a neutron. Patterns in the binding energy per nucleon, \(BE/A\), reveal details of the nuclear force. The larger the \(BE/A\), the more stable the nucleus.
Glossary
- binding energy
- the energy needed to separate nucleus into individual protons and neutrons
- binding energy per nucleon
- the binding energy calculated per nucleon; it reveals the details of the nuclear force—the larger the \(BE/A\), the more stable the nucleus
31.7: Tunneling
Learning Objectives
By the end of this section, you will be able to:
- Define and discuss tunneling.
- Define potential barrier.
- Explain quantum tunneling.
Protons and neutrons are bound inside nuclei, which means that energy must be supplied to break them away. The situation is analogous to a marble in a bowl that can roll around but lacks the energy to get over the rim. It is bound inside the bowl (Figure \(\PageIndex{1}\)). If the marble could get over the rim, it would gain kinetic energy by rolling down outside. Classically, however, if the marble does not have enough kinetic energy to get over the rim, it remains forever trapped in its well.
In a nucleus, the attractive nuclear potential is analogous to the bowl at the top of a volcano (where the “volcano” refers only to the shape). Protons and neutrons have kinetic energy, but it is about 8 MeV less than that needed to get out (Figure \(\PageIndex{2}\)). That is, they are bound by an average of 8 MeV per nucleon. The slope of the hill outside the bowl is analogous to the repulsive Coulomb potential for a nucleus, such as for an \(\alpha\) particle outside a positive nucleus. In \(\alpha\) decay, two protons and two neutrons spontaneously break away as a \(^4He\) unit. Yet the protons and neutrons do not have enough kinetic energy to get over the rim. So how does the \(\alpha\) particle get out?
The answer was supplied in 1928 by the Russian physicist George Gamow (1904–1968). The \(\alpha\) particle tunnels through a region of space it is forbidden to be in, and it comes out of the side of the nucleus. Like an electron making a transition between orbits around an atom, it travels from one point to another without ever having been in between. Figure \(\PageIndex{3}\) indicates how this works. The wave function of a quantum mechanical particle varies smoothly, going from within an atomic nucleus (on one side of a potential energy barrier) to outside the nucleus (on the other side of the potential energy barrier). Inside the barrier, the wave function does not become zero but decreases exponentially, and we do not observe the particle inside the barrier. The probability of finding a particle is related to the square of its wave function, and so there is a small probability of finding the particle outside the barrier, which implies that the particle can tunnel through the barrier. This process is called barrier penetration or quantum mechanical tunneling . This concept was developed in theory by J. Robert Oppenheimer (who led the development of the first nuclear bombs during World War II) and was used by Gamow and others to describe \(\alpha\) decay.
Good ideas explain more than one thing. In addition to qualitatively explaining how the four nucleons in an \(\alpha\) particle can get out of the nucleus, the detailed theory also explains quantitatively the half-life of various nuclei that undergo \(\alpha\) decay. This description is what Gamow and others devised, and it works for \(\alpha\) decay half-lives that vary by 17 orders of magnitude. Experiments have shown that the more energetic the \(\alpha\) decay of a particular nuclide is, the shorter is its half-life. Tunneling explains this in the following manner: For the decay to be more energetic, the nucleons must have more energy in the nucleus and should be able to ascend a little closer to the rim. The barrier is therefore not as thick for more energetic decay, and the exponential decrease of the wave function inside the barrier is not as great. Thus the probability of finding the particle outside the barrier is greater, and the half-life is shorter.
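This exponential sensitivity can be illustrated with a deliberately crude model: a particle of fixed energy meeting a rectangular barrier of constant height, for which the transmission probability falls off roughly as \(e^{-2\kappa L}\), with \(\kappa = \sqrt{2mc^2(U - E)}/\hbar c\). The Python sketch below uses hypothetical numbers and only illustrates the trend; it is not the full Gamow calculation for a Coulomb barrier.

```python
import math

HBAR_C = 197.3       # MeV * fm
M_ALPHA_C2 = 3727.0  # alpha-particle rest energy in MeV

def transmission(barrier_height_mev, energy_mev, width_fm):
    """Rough tunneling probability T ~ exp(-2 kappa L) for a square barrier."""
    kappa = math.sqrt(2.0 * M_ALPHA_C2 * (barrier_height_mev - energy_mev)) / HBAR_C
    return math.exp(-2.0 * kappa * width_fm)

# Hypothetical case: a 5 MeV alpha particle facing a 10 MeV-high barrier
for width in (5.0, 10.0, 20.0):   # barrier widths in fm
    print(f"L = {width:4.1f} fm: T = {transmission(10.0, 5.0, width):.1e}")
# Doubling the width squares the (already tiny) transmission probability.
```

The more energetic the particle, the smaller \(U - E\) and the thinner the effective barrier, so the transmission probability, and hence the decay rate, rises steeply.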
Tunneling as an effect also occurs in quantum mechanical systems other than nuclei. Electrons trapped in solids can tunnel from one object to another if the barrier between the objects is thin enough. The process is the same in principle as described for \(\alpha\) decay. It is far more likely for a thin barrier than a thick one. Scanning tunneling electron microscopes function on this principle. The current of electrons that travels between a probe and a sample tunnels through a barrier and is very sensitive to its thickness, allowing detection of individual atoms as shown in Figure \(\PageIndex{4a}\).
PHET EXPLORATIONS: QUANTUM TUNNELING AND WAVE PACKETS
Watch quantum "particles" tunnel through barriers . Explore the properties of the wave functions that describe these particles.
Summary
- Tunneling is a quantum mechanical process of potential energy barrier penetration. The concept was first applied to explain \(\alpha\) decay, but tunneling is found to occur in other quantum mechanical systems.
Glossary
- barrier penetration
- quantum mechanical effect whereby a particle has a nonzero probability to cross through a potential energy barrier despite not having sufficient energy to pass over the barrier; also called quantum mechanical tunneling
- quantum mechanical tunneling
- quantum mechanical effect whereby a particle has a nonzero probability to cross through a potential energy barrier despite not having sufficient energy to pass over the barrier; also called barrier penetration
- tunneling
- a quantum mechanical process of potential energy barrier penetration
31.E: Radioactivity and Nuclear Physics (Exercises)
Conceptual Questions
31.1: Nuclear Radioactivity
1. Suppose the range of a \(\displaystyle 5.0 \, MeV \, α\) ray is known to be 2.0 mm in a certain material. Does this mean that every \(\displaystyle 5.0 \, MeV \, α\) ray that strikes this material travels 2.0 mm, or does the range have an average value with some statistical fluctuations in the distances traveled? Explain.
2. What is the difference between \(\displaystyle γ\) rays and characteristic x rays? Is either necessarily more energetic than the other? Which can be the most energetic?
3. Ionizing radiation interacts with matter by scattering from electrons and nuclei in the substance. Based on the law of conservation of momentum and energy, explain why electrons tend to absorb more energy than nuclei in these interactions.
4. What characteristics of radioactivity show it to be nuclear in origin and not atomic?
5. What is the source of the energy emitted in radioactive decay? Identify an earlier conservation law, and describe how it was modified to take such processes into account.
6. Consider Figure. If an electric field is substituted for the magnetic field with positive charge instead of the north pole and negative charge instead of the south pole, in which directions will the \(\displaystyle α, β\) , and \(\displaystyle γ\) rays bend?
7. Explain how an \(\displaystyle α\) particle can have a larger range in air than a \(\displaystyle β\) particle with the same energy in lead.
8. Arrange the following according to their ability to act as radiation shields, with the best first and worst last. Explain your ordering in terms of how radiation loses its energy in matter.
(a) A solid material with low density composed of low-mass atoms.
(b) A gas composed of high-mass atoms.
(c) A gas composed of low-mass atoms.
(d) A solid with high density composed of high-mass atoms.
9. Often, when people have to work around radioactive materials spills, we see them wearing white coveralls (usually a plastic material). What types of radiation (if any) do you think these suits protect the worker from, and how?
31.2: Radiation Detection and Detectors
10. Is it possible for light emitted by a scintillator to be too low in frequency to be used in a photomultiplier tube? Explain.
31.3: Substructure of the Nucleus
11. The weak and strong nuclear forces are basic to the structure of matter. Why do we not experience them directly?
12. Define and make clear distinctions between the terms neutron, nucleon, nucleus, nuclide, and neutrino.
13. What are isotopes? Why do different isotopes of the same element have similar chemistries?
31.4: Nuclear Decay and Conservation Laws
14. Star Trek fans have often heard the term “antimatter drive.” Describe how you could use a magnetic field to trap antimatter, such as produced by nuclear decay, and later combine it with matter to produce energy. Be specific about the type of antimatter, the need for vacuum storage, and the fraction of matter converted into energy.
15. What conservation law requires an electron’s neutrino to be produced in electron capture? Note that the electron no longer exists after it is captured by the nucleus.
16. Neutrinos are experimentally determined to have an extremely small mass. Huge numbers of neutrinos are created in a supernova at the same time as massive amounts of light are first produced. When the 1987A supernova occurred in the Large Magellanic Cloud, visible primarily in the Southern Hemisphere and some 100,000 light-years away from Earth, neutrinos from the explosion were observed at about the same time as the light from the blast. How could the relative arrival times of neutrinos and light be used to place limits on the mass of neutrinos?
17. What do the three types of beta decay have in common that is distinctly different from alpha decay?
31.5: Half-Life and Activity
18. In a \(\displaystyle 3×10^9\)-year-old rock that originally contained some \(\displaystyle ^{238}U\), which has a half-life of \(\displaystyle 4.5×10^9\) years, we expect to find some \(\displaystyle ^{238}U\) remaining in it. Why are \(\displaystyle ^{226}Ra, ^{222}Rn\), and \(\displaystyle ^{210}Po\) also found in such a rock, even though they have much shorter half-lives (1600 years, 3.8 days, and 138 days, respectively)?
19. Does the number of radioactive nuclei in a sample decrease to exactly half its original value in one half-life? Explain in terms of the statistical nature of radioactive decay.
20. Radioactivity depends on the nucleus and not the atom or its chemical state. Why, then, is one kilogram of uranium more radioactive than one kilogram of uranium hexafluoride?
21. Explain how a bound system can have less mass than its components. Why is this not observed classically, say for a building made of bricks?
22. Spontaneous radioactive decay occurs only when the decay products have less mass than the parent, and it tends to produce a daughter that is more stable than the parent. Explain how this is related to the fact that more tightly bound nuclei are more stable. (Consider the binding energy per nucleon.)
23. To obtain the most precise value of BE from the equation \(\displaystyle BE=[ZM(^1H)+Nm_n]c^2−m(^AX)c^2\), we should take into account the binding energy of the electrons in the neutral atoms. Will doing this produce a larger or smaller value for BE? Why is this effect usually negligible?
24. How does the finite range of the nuclear force relate to the fact that \(\displaystyle BE/A\) is greatest for \(\displaystyle A\) near 60?
31.6: Binding Energy
25. Why is the number of neutrons greater than the number of protons in stable nuclei having \(\displaystyle A\) greater than about 40, and why is this effect more pronounced for the heaviest nuclei?
31.7: Tunneling
26. A physics student caught breaking conservation laws is imprisoned. She leans against the cell wall hoping to tunnel out quantum mechanically. Explain why her chances are negligible. (This is so in any classical situation.)
27. When a nucleus \(\displaystyle α\) decays, does the \(\displaystyle α\) particle move continuously from inside the nucleus to outside? That is, does it travel each point along an imaginary line from inside to out? Explain.
Problems & Exercises
31.2: Radiation Detection and Detectors
28. An energy of 30.0 \(\displaystyle eV\) is required to ionize a molecule of the gas inside a Geiger tube, thereby producing an ion pair. Suppose a particle of ionizing radiation deposits 0.500 MeV of energy in this Geiger tube. What maximum number of ion pairs can it create?
Solution
\(\displaystyle 1.67×10^4\)
29. A particle of ionizing radiation creates 4000 ion pairs in the gas inside a Geiger tube as it passes through. What minimum energy was deposited, if 30.0 \(\displaystyle eV\) is required to create each ion pair?
30. (a) Repeat Exercise, and convert the energy to joules or calories.
(b) If all of this energy is converted to thermal energy in the gas, what is its temperature increase, assuming \(\displaystyle 50.0 cm^3\) of ideal gas at 0.250-atm pressure? (The small answer is consistent with the fact that the energy is large on a quantum mechanical scale but small on a macroscopic scale.)
31. Suppose a particle of ionizing radiation deposits 1.0 MeV in the gas of a Geiger tube, all of which goes to creating ion pairs. Each ion pair requires 30.0 eV of energy.
(a) The applied voltage sweeps the ions out of the gas in \(\displaystyle 1.00μs\). What is the current?
(b) This current is smaller than the actual current since the applied voltage in the Geiger tube accelerates the separated ions, which then create other ion pairs in subsequent collisions. What is the current if this last effect multiplies the number of ion pairs by 900?
31.3: Substructure of the Nucleus
32. Verify that a \(\displaystyle 2.3×10^{17}kg\) mass of water at normal density would make a cube 60 km on a side, as claimed in Example. (This mass at nuclear density would make a cube 1.0 m on a side.)
Solution
\(\displaystyle m=ρV=ρa^3⇒a=\left(\frac{m}{ρ}\right)^{1/3}=\left(\frac{2.3×10^{17}kg}{1000 kg/m^3}\right)^{1/3}=61×10^3 m=61 km\)
33. Find the length of a side of a cube having a mass of 1.0 kg and the density of nuclear matter, taking this to be \(\displaystyle 2.3×10^{17}kg/m^3\).
34. What is the radius of an \(\displaystyle α\) particle?
Solution
1.9 fm
35. Find the radius of a \(\displaystyle ^{238}Pu\) nucleus. \(\displaystyle ^{238}Pu\) is a manufactured nuclide that is used as a power source on some space probes.
36. (a) Calculate the radius of \(\displaystyle ^{58}Ni\), one of the most tightly bound stable nuclei.
(b) What is the ratio of the radius of \(\displaystyle ^{58}Ni\) to that of \(\displaystyle ^{258}Ha\), one of the largest nuclei ever made? Note that the radius of the largest nucleus is still much smaller than the size of an atom.
Solution
(a) 4.6 fm
(b) 0.61 to 1
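The radius results in these problems all come from the single formula \(r = r_0A^{1/3}\). A short Python sketch, assuming \(r_0 = 1.2\) fm as in the text, reproduces them:

```python
# Nuclear radii from r = r0 * A**(1/3), with r0 = 1.2 fm.
r0 = 1.2  # fm

def nuclear_radius(A):
    """Approximate nuclear radius in fm for mass number A."""
    return r0 * A ** (1 / 3)

print(nuclear_radius(4))                           # alpha particle, ~1.9 fm
print(nuclear_radius(238))                         # 238Pu, ~7.4 fm
print(nuclear_radius(58))                          # 58Ni, ~4.6 fm
print(nuclear_radius(58) / nuclear_radius(258))    # ~0.61 for 58Ni vs 258Ha
```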
37. The unified atomic mass unit is defined to be \(\displaystyle 1u=1.6605×10^{−27}kg\). Verify that this amount of mass converted to energy yields 931.5 MeV. Note that you must use four-digit or better values for \(\displaystyle c\) and \(\displaystyle ∣q_e∣\).
38. What is the ratio of the velocity of a \(\displaystyle β\) particle to that of an \(\displaystyle α\) particle, if they have the same nonrelativistic kinetic energy?
Solution
85.4 to 1
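For equal nonrelativistic kinetic energies, \(\frac{1}{2}m_βv_β^2 = \frac{1}{2}m_αv_α^2\), so the speed ratio is \(\sqrt{m_α/m_β}\). A minimal check in Python, using rounded particle masses:

```python
import math

# Equal nonrelativistic kinetic energy => v_beta / v_alpha = sqrt(m_alpha / m_beta).
m_e = 9.109e-31       # beta particle (electron) mass, kg
m_alpha = 6.644e-27   # alpha particle mass, kg

print(math.sqrt(m_alpha / m_e))   # ~85.4
```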
39. If a 1.50-cm-thick piece of lead can absorb 90.0% of the \(\displaystyle γ\) rays from a radioactive source, how many centimeters of lead are needed to absorb all but 0.100% of the \(\displaystyle γ\) rays?
40. The detail observable using a probe is limited by its wavelength. Calculate the energy of a \(\displaystyle γ\)-ray photon that has a wavelength of \(\displaystyle 1×10^{−16}m\), small enough to detect details about one-tenth the size of a nucleon. Note that a photon having this energy is difficult to produce and interacts poorly with the nucleus, limiting the practicability of this probe.
Solution
12.4 GeV
41. (a) Show that if you assume the average nucleus is spherical with a radius \(\displaystyle r=r_0A^{1/3}\), and with a mass of \(\displaystyle A\) u, then its density is independent of \(\displaystyle A\).
(b) Calculate that density in \(\displaystyle u/fm^3\) and \(\displaystyle kg/m^3\), and compare your results with those found in Example for \(\displaystyle ^{56}Fe\).
42. What is the ratio of the velocity of a 5.00-MeV β ray to that of an α particle with the same kinetic energy? This should confirm that βs travel much faster than αs even when relativity is taken into consideration. (See also Exercise.)
Solution
19.3 to 1
43. (a) What is the kinetic energy in MeV of a \(\displaystyle β\) ray that is traveling at \(\displaystyle 0.998c\)? This gives some idea of how energetic a \(\displaystyle β\) ray must be to travel at nearly the same speed as a \(\displaystyle γ\) ray.
(b) What is the velocity of the \(\displaystyle γ\) ray relative to the \(\displaystyle β\) ray?
31.4: Nuclear Decay and Conservation Laws
In the following eight problems, write the complete decay equation for the given nuclide in the complete \(\displaystyle ^A_ZX_N\) notation. Refer to the periodic table for values of \(\displaystyle Z\).
44. \(\displaystyle β^−\) decay of \(\displaystyle ^3H\) (tritium), a manufactured isotope of hydrogen used in some digital watch displays, and manufactured primarily for use in hydrogen bombs.
Solution
\(\displaystyle ^3_1H_2→^3_2He_1+β^−+\bar{ν_e}\)
45 . \(\displaystyle β^−\) decay of \(\displaystyle ^{40}K\), a naturally occurring rare isotope of potassium responsible for some of our exposure to background radiation.
46. \(\displaystyle β^+\) decay of \(\displaystyle ^{50}Mn\).
Solution
\(\displaystyle ^{50}_{25}Mn_{25}→^{50}_{24}Cr_{26}+β^++ν_e\)
47. \(\displaystyle β^+\) decay of \(\displaystyle ^{52}Fe\).
48. Electron capture by \(\displaystyle ^7Be\).
Solution
\(\displaystyle ^7_4Be_3+e^−→^7_3Li_4+ν_e\)
48. Electron capture by \(\displaystyle ^{106}In\).
49. \(\displaystyle α\) decay of \(\displaystyle ^{210}Po\), the isotope of polonium in the decay series of \(\displaystyle ^{238}U\) that was discovered by the Curies. A favorite isotope in physics labs, since it has a short half-life and decays to a stable nuclide.
Solution
\(\displaystyle ^{210}_{84}Po_{126}→^{206}_{82}Pb_{124}+^4_2He_2\)
50. \(\displaystyle α\) decay of \(\displaystyle ^{226}Ra\), another isotope in the decay series of \(\displaystyle ^{238}U\), first recognized as a new element by the Curies. Poses special problems because its daughter is a radioactive noble gas.
In the following four problems, identify the parent nuclide and write the complete decay equation in the \(\displaystyle ^A_ZX_N\) notation. Refer to the periodic table for values of \(\displaystyle Z\).
51. \(\displaystyle β^−\) decay producing \(\displaystyle ^{137}Ba\). The parent nuclide is a major waste product of reactors and has chemistry similar to potassium and sodium, resulting in its concentration in your cells if ingested.
Solution
\(\displaystyle ^{137}_{55}Cs_{82}→^{137}_{56}Ba_{81}+β^−+\bar{ν_e}\)
52. \(\displaystyle β^−\) decay producing \(\displaystyle ^{90}Y\). The parent nuclide is a major waste product of reactors and has chemistry similar to calcium, so that it is concentrated in bones if ingested (\(\displaystyle ^{90}Y\) is also radioactive.)
53. \(\displaystyle α\) decay producing \(\displaystyle ^{228}Ra\). The parent nuclide is nearly 100% of the natural element and is found in gas lantern mantles and in metal alloys used in jets (\(\displaystyle ^{228}Ra\) is also radioactive).
Solution
\(\displaystyle ^{232}_{90}Th_{142}→^{228}_{88}Ra_{140}+^4_2He_2\)
54. \(\displaystyle α\) decay producing \(\displaystyle ^{208}Pb\). The parent nuclide is in the decay series produced by \(\displaystyle ^{232}Th\), the only naturally occurring isotope of thorium.
55. When an electron and positron annihilate, both their masses are destroyed, creating two equal energy photons to preserve momentum.
(a) Confirm that the annihilation equation \(\displaystyle e^++e^−→γ+γ\) conserves charge, electron family number, and total number of nucleons. To do this, identify the values of each before and after the annihilation.
(b) Find the energy of each \(\displaystyle γ\) ray, assuming the electron and positron are initially nearly at rest.
(c) Explain why the two \(\displaystyle γ\) rays travel in exactly opposite directions if the center of mass of the electron-positron system is initially at rest.
Solution
(a) charge: \(\displaystyle (+1)+(−1)=0\); electron family number: \(\displaystyle (+1)+(−1)=0\); \(\displaystyle A: 0+0=0\)
(b) 0.511 MeV
(c) The two \(\displaystyle γ\) rays must travel in exactly opposite directions in order to conserve momentum, since initially there is zero momentum if the center of mass is initially at rest.
56. Confirm that charge, electron family number, and the total number of nucleons are all conserved by the rule for \(\displaystyle α\) decay given in the equation \(\displaystyle ^A_ZX_N→^{A−4}_{Z−2}Y_{N−2}+^4_2He_2\). To do this, identify the values of each before and after the decay.
57. Confirm that charge, electron family number, and the total number of nucleons are all conserved by the rule for \(\displaystyle β^−\) decay given in the equation \(\displaystyle ^A_ZX_N→^A_{Z+1}Y_{N−1}+β^−+\bar{ν_e}\). To do this, identify the values of each before and after the decay.
Solution
\(\displaystyle Z=(Z+1)−1;A=A;efn : 0=(+1)+(−1)\)
58. Confirm that charge, electron family number, and the total number of nucleons are all conserved by the rule for \(\displaystyle β^+\) decay given in the equation \(\displaystyle ^A_ZX_N→^A_{Z−1}Y_{N+1}+β^++ν_e\). To do this, identify the values of each before and after the decay.
59. Confirm that charge, electron family number, and the total number of nucleons are all conserved by the rule for electron capture given in the equation \(\displaystyle ^A_ZX_N+e^−→^A_{Z−1}Y_{N+1}+ν_e\). To do this, identify the values of each before and after the capture.
Solution
\(\displaystyle Z−1=Z−1;A=A;efn :(+1)=(+1)\)
60. A rare decay mode has been observed in which \(\displaystyle ^{222}Ra\) emits a \(\displaystyle ^{14}C\) nucleus.
(a) The decay equation is \(\displaystyle ^{222}Ra→^AX+^{14}C\). Identify the nuclide \(\displaystyle ^AX\).
(b) Find the energy emitted in the decay. The mass of \(\displaystyle ^{222}Ra\) is 222.015353 u.
61. (a) Write the complete α decay equation for \(\displaystyle ^{226}Ra\).
(b) Find the energy released in the decay.
Solution
(a) \(\displaystyle ^{226}_{88}Ra_{138}→^{222}_{86}Rn_{136}+^4_2He_2\)
(b) 4.87 MeV
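The energy in part (b) follows from the mass difference between the parent and the products, using atomic masses (the electron masses cancel). A sketch of the arithmetic, with Appendix-style mass values assumed here:

```python
# Q-value of 226Ra -> 222Rn + 4He from the atomic mass difference,
# with 1 u = 931.494 MeV/c^2. Mass values below are assumed (Appendix-style).
u_to_MeV = 931.494

m_Ra226 = 226.025402   # u
m_Rn222 = 222.017570   # u
m_He4   = 4.002602     # u

print((m_Ra226 - m_Rn222 - m_He4) * u_to_MeV)   # ~4.87 MeV
```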
62. (a) Write the complete \(\displaystyle α\) decay equation for \(\displaystyle ^{249}Cf\).
(b) Find the energy released in the decay.
63. (a) Write the complete \(\displaystyle β^−\) decay equation for the neutron.
(b) Find the energy released in the decay.
Solution
(a) \(\displaystyle n→p+β^−+\bar{ν_e}\)
(b) 0.783 MeV
64. (a) Write the complete \(\displaystyle β^−\) decay equation for \(\displaystyle ^{90}Sr\), a major waste product of nuclear reactors.
(b) Find the energy released in the decay.
65. Calculate the energy released in the \(\displaystyle β^+\) decay of \(\displaystyle ^{22}Na\), the equation for which is given in the text. The masses of \(\displaystyle ^{22}Na\) and \(\displaystyle ^{22}Ne\) are 21.994434 and 21.991383 u, respectively.
Solution
1.82 MeV
66. (a) Write the complete \(\displaystyle β^+\) decay equation for \(\displaystyle ^{11}C\).
(b) Calculate the energy released in the decay. The masses of \(\displaystyle ^{11}C\) and \(\displaystyle ^{11}B\) are 11.011433 and 11.009305 u, respectively.
67. (a) Calculate the energy released in the α decay of \(\displaystyle ^{238}U\).
(b) What fraction of the mass of a single \(\displaystyle ^{238}U\) is destroyed in the decay? The mass of \(\displaystyle ^{234}Th\) is 234.043593 u.
(c) Although the fractional mass loss is large for a single nucleus, it is difficult to observe for an entire macroscopic sample of uranium. Why is this?
Solution
(a) 4.274 MeV
(b) \(\displaystyle 1.927×10^{−5}\)
(c) Since U-238 is a slowly decaying substance, only a very small number of nuclei decay on human timescales; therefore, although those nuclei that decay lose a noticeable fraction of their mass, the change in the total mass of the sample is not detectable for a macroscopic sample.
68. (a) Write the complete reaction equation for electron capture by \(\displaystyle ^7Be\).
(b) Calculate the energy released.
69. (a) Write the complete reaction equation for electron capture by \(\displaystyle ^{15}O\).
(b) Calculate the energy released.
Solution
(a) \(\displaystyle ^{15}_8O_7+e^−→^{15}_7N_8+ν_e\)
(b) 2.754 MeV
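For electron capture the captured electron is already counted in the atomic masses, so the released energy is simply \([m(^{15}O) - m(^{15}N)]c^2\). A short check, with assumed Appendix-style masses:

```python
# Energy released in 15O + e- -> 15N + nu_e, using atomic masses
# (the captured electron is already included), 1 u = 931.494 MeV/c^2.
u_to_MeV = 931.494

m_O15 = 15.003066   # u (assumed value)
m_N15 = 15.000109   # u (assumed value)

print((m_O15 - m_N15) * u_to_MeV)   # ~2.754 MeV
```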
31.5: Half-Life and Activity
Data from the appendices and the periodic table may be needed for these problems.
70. An old campfire is uncovered during an archaeological dig. Its charcoal is found to contain less than 1/1000 the normal amount of \(\displaystyle ^{14}C\). Estimate the minimum age of the charcoal, noting that \(\displaystyle 2^{10}=1024\).
Solution
57,300 y
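A fraction below 1/1000 means at least \(\log_2 1000 \approx 10\) half-lives of \(^{14}C\) (5730 y each) have elapsed, which is where the 57,300 y comes from. In Python:

```python
import math

# Minimum age: 14C fraction below 1/1000 => at least log2(1000) half-lives.
t_half = 5730.0                 # years, half-life of 14C
n = math.log2(1000)             # ~9.97 half-lives; the text rounds to 10

print(n * t_half)               # ~5.71e4 y
print(10 * t_half)              # 57,300 y, the quoted minimum age
```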
71. A \(\displaystyle ^{60}Co\) source is labeled 4.00 mCi, but its present activity is found to be \(\displaystyle 1.85×10^7\) Bq.
(a) What is the present activity in mCi?
(b) How long ago did it actually have a 4.00-mCi activity?
72. (a) Calculate the activity \(\displaystyle R\) in curies of 1.00 g of \(\displaystyle ^{226}Ra\).
(b) Discuss why your answer is not exactly 1.00 Ci, given that the curie was originally supposed to be exactly the activity of a gram of radium.
Solution
(a) 0.988 Ci
(b) The half-life of \(\displaystyle ^{226}Ra\) is now better known.
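Part (a) is the standard activity calculation \(R = \frac{0.693N}{t_{1/2}}\). A minimal sketch, assuming a molar mass of about 226.0 g/mol and 1 y = 3.156×10⁷ s:

```python
import math

# Activity of 1.00 g of 226Ra: R = ln(2) * N / t_half.
N_A = 6.022e23
N = (1.00 / 226.025) * N_A          # nuclei in 1.00 g
t_half_s = 1600 * 3.156e7           # 1600 y in seconds

R_Bq = math.log(2) * N / t_half_s
print(R_Bq, R_Bq / 3.70e10)         # ~3.66e10 Bq, ~0.988 Ci
```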
73. Show that the activity of the \(\displaystyle ^{14}C\) in 1.00 g of \(\displaystyle ^{12}C\) found in living tissue is 0.250 Bq.
74. Mantles for gas lanterns contain thorium, because it forms an oxide that can survive being heated to incandescence for long periods of time. Natural thorium is almost 100% \(\displaystyle ^{232}Th\), with a half-life of \(\displaystyle 1.405×10^{10}y\). If an average lantern mantle contains 300 mg of thorium, what is its activity?
Solution
\(\displaystyle 1.22×10^3Bq\)
75. Cow’s milk produced near nuclear reactors can be tested for as little as 1.00 pCi of \(\displaystyle ^{131}I\) per liter, to check for possible reactor leakage. What mass of \(\displaystyle ^{131}I\) has this activity?
76. (a) Natural potassium contains \(\displaystyle ^{40}K\), which has a half-life of \(\displaystyle 1.277×10^9 y\). What mass of \(\displaystyle ^{40}K\) in a person would have a decay rate of 4140 Bq?
(b) What is the fraction of \(\displaystyle ^{40}K\) in natural potassium, given that the person has 140 g in his body? (These numbers are typical for a 70-kg adult.)
Solution
(a) 16.0 mg
(b) 0.0114%
77. There is more than one isotope of natural uranium. If a researcher isolates 1.00 mg of the relatively scarce \(\displaystyle ^{235}U\) and finds this mass to have an activity of 80.0 Bq, what is its half-life in years?
78. \(\displaystyle ^{50}V\) has one of the longest known radioactive half-lives. In a difficult experiment, a researcher found that the activity of 1.00 kg of \(\displaystyle ^{50}V\) is 1.75 Bq. What is the half-life in years?
Solution
\(\displaystyle 1.48×10^{17}y\)
79. You can sometimes find deep red crystal vases in antique stores, called uranium glass because their color was produced by doping the glass with uranium. Look up the natural isotopes of uranium and their half-lives, and calculate the activity of such a vase assuming it has 2.00 g of uranium in it. Neglect the activity of any daughter nuclides.
80. A tree falls in a forest. How many years must pass before the \(\displaystyle ^{14}C\) activity in 1.00 g of the tree’s carbon drops to 1.00 decay per hour?
Solution
\(\displaystyle 5.6×10^4y\)
81. What fraction of the \(\displaystyle ^{40}K\) that was on Earth when it formed \(\displaystyle 4.5×10^9\) years ago is left today?
82. A 5000-Ci \(\displaystyle ^{60}Co\) source used for cancer therapy is considered too weak to be useful when its activity falls to 3500 Ci. How long after its manufacture does this happen?
Solution
2.71 y
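Since \(R = R_0e^{-\lambda t}\), the elapsed time is \(t = t_{1/2}\ln(R_0/R)/\ln 2\). A quick check, assuming \(t_{1/2}(^{60}Co) = 5.27\) y:

```python
import math

# Time for the 60Co activity to fall from 5000 Ci to 3500 Ci.
t_half = 5.27   # years (assumed half-life of 60Co)

print(t_half * math.log(5000 / 3500) / math.log(2))   # ~2.71 y
```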
83. Natural uranium is 0.7200% \(\displaystyle ^{235}U\) and 99.27% \(\displaystyle ^{238}U\). What were the percentages of \(\displaystyle ^{235}U\) and \(\displaystyle ^{238}U\) in natural uranium when Earth formed \(\displaystyle 4.5×10^9\) years ago?
84. The \(\displaystyle β^−\) particles emitted in the decay of \(\displaystyle ^3H\) (tritium) interact with matter to create light in a glow-in-the-dark exit sign. At the time of manufacture, such a sign contains 15.0 Ci of \(\displaystyle ^3H\).
(a) What is the mass of the tritium?
(b) What is its activity 5.00 y after manufacture?
Solution
(a) 1.56 mg
(b) 11.3 Ci
85. World War II aircraft had instruments with glowing radium-painted dials. The activity of one such instrument was \(\displaystyle 1.0×10^5 Bq\) when new.
(a) What mass of \(\displaystyle ^{226}Ra\) was present?
(b) After some years, the phosphors on the dials deteriorated chemically, but the radium did not escape. What is the activity of this instrument 57.0 years after it was made?
86. (a) The \(\displaystyle ^{210}Po\) source used in a physics laboratory is labeled as having an activity of \(\displaystyle 1.0μCi\) on the date it was prepared. A student measures the radioactivity of this source with a Geiger counter and observes 1500 counts per minute. She notices that the source was prepared 120 days before her lab. What fraction of the decays is she observing with her apparatus?
(b) Identify some of the reasons that only a fraction of the α s emitted are observed by the detector.
Solution
(a) \(\displaystyle 1.23×10^{−3}\)
(b) Only part of the emitted radiation goes in the direction of the detector. Only a fraction of that causes a response in the detector. Some of the emitted radiation (mostly α particles) is observed within the source. Some is absorbed within the source, some is absorbed by the detector, and some does not penetrate the detector.
87. Armor-piercing shells with depleted uranium cores are fired by aircraft at tanks. (The high density of the uranium makes them effective.) The uranium is called depleted because it has had its \(\displaystyle ^{235}U\) removed for reactor use and is nearly pure \(\displaystyle ^{238}U\). Depleted uranium has been erroneously called non-radioactive. To demonstrate that this is wrong:
(a) Calculate the activity of 60.0 g of pure \(\displaystyle ^{238}U\).
(b) Calculate the activity of 60.0 g of natural uranium, neglecting the \(\displaystyle ^{234}U\) and all daughter nuclides.
88. The ceramic glaze on a red-orange Fiestaware plate is \(\displaystyle U_2O_3\) and contains 50.0 grams of \(\displaystyle ^{238}U\) , but very little \(\displaystyle ^{235}U\).
(a) What is the activity of the plate?
(b) Calculate the total energy that will be released by the \(\displaystyle ^{238}U\) decay.
(c) If energy is worth 12.0 cents per \(\displaystyle kW⋅h\), what is the monetary value of the energy emitted? (These plates went out of production some 30 years ago, but are still available as collectibles.)
Solution
(a) \(\displaystyle 1.68×10^{–5}Ci\)
(b) \(\displaystyle 8.65×10^{10}J\)
(c) \(\displaystyle $2.9×10^3\)
89. Large amounts of depleted uranium (\(\displaystyle ^{238}U\)) are available as a by-product of uranium processing for reactor fuel and weapons. Uranium is very dense and makes good counter weights for aircraft. Suppose you have a 4000-kg block of \(\displaystyle ^{238}U\).
(a) Find its activity.
(b) How many calories per day are generated by thermalization of the decay energy?
(c) Do you think you could detect this as heat? Explain.
90. The Galileo space probe was launched on its long journey past several planets in 1989, with an ultimate goal of Jupiter. Its power source is 11.0 kg of \(\displaystyle ^{238}Pu\), a by-product of nuclear weapons plutonium production. Electrical energy is generated thermoelectrically from the heat produced when the 5.59-MeV \(\displaystyle α\) particles emitted in each decay crash to a halt inside the plutonium and its shielding. The half-life of \(\displaystyle ^{238}Pu\) is 87.7 years.
(a) What was the original activity of the \(\displaystyle ^{238}Pu\) in becquerel?
(b) What power was emitted in kilowatts?
(c) What power was emitted 12.0 y after launch? You may neglect any extra energy from daughter nuclides and any losses from escaping \(\displaystyle γ\) rays.
Solution
(a) \(\displaystyle 6.97×10^{15}Bq\)
(b) 6.24 kW
(c) 5.67 kW
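These three numbers chain together: the activity is \(\lambda N\), the power is the activity times the energy per decay, and twelve years of decay multiplies the power by \((1/2)^{12/87.7}\). A sketch of that chain, assuming a molar mass of 238.05 g/mol and 1 y = 3.156×10⁷ s:

```python
import math

# 238Pu power source: R = lambda * N, P = R * E_alpha, then decay for 12 y.
N_A = 6.022e23
N = 11.0e3 / 238.05 * N_A            # nuclei in 11.0 kg
lam = math.log(2) / (87.7 * 3.156e7) # decay constant, 1/s
E_alpha_J = 5.59e6 * 1.602e-19       # 5.59 MeV per decay, in joules

R0 = lam * N                         # ~6.97e15 Bq
P0 = R0 * E_alpha_J                  # ~6.24 kW
print(R0, P0 / 1e3, P0 * 0.5 ** (12.0 / 87.7) / 1e3)   # Bq, kW, kW after 12 y
```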
91. Construct Your Own Problem
Consider the generation of electricity by a radioactive isotope in a space probe, such as described in Exercise. Construct a problem in which you calculate the mass of a radioactive isotope you need in order to supply power for a long space flight. Among the things to consider are the isotope chosen, its half-life and decay energy, the power needs of the probe and the length of the flight.
92. Unreasonable Results
A nuclear physicist finds \(\displaystyle 1.0μg\) of \(\displaystyle ^{236}U\) in a piece of uranium ore and assumes it is primordial since its half-life is \(\displaystyle 2.3×10^7y\).
(a) Calculate the amount of \(\displaystyle ^{236}U\) that would have had to be on Earth when it formed \(\displaystyle 4.5×10^9y\) ago for \(\displaystyle 1.0μg\) to be left today.
(b) What is unreasonable about this result?
(c) What assumption is responsible?
93. Unreasonable Results
(a) Repeat Exercise but include the 0.0055% natural abundance of \(\displaystyle ^{234}U\) with its \(\displaystyle 2.45×10^5y\) half-life.
(b) What is unreasonable about this result?
(c) What assumption is responsible?
(d) Where does the \(\displaystyle ^{234}U\) come from if it is not primordial?
94. Unreasonable Results
The manufacturer of a smoke alarm decides that the smallest current of \(\displaystyle α\) radiation he can detect is \(\displaystyle 1.00μA\).
(a) Find the activity in curies of an \(\displaystyle α\) emitter that produces a \(\displaystyle 1.00μA\) current of α particles.
(b) What is unreasonable about this result?
(c) What assumption is responsible?
Solution
(a) 84.5 Ci
(b) An extremely large activity, many orders of magnitude greater than permitted for home use.
(c) The assumption of \(\displaystyle 1.00μA\) is unreasonably large. Other methods can detect much smaller decay rates.
31.6: Binding Energy
95. \(\displaystyle ^2H\) is a loosely bound isotope of hydrogen. Called deuterium or heavy hydrogen, it is stable but relatively rare—it is 0.015% of natural hydrogen. Note that deuterium has \(\displaystyle Z=N\), which should tend to make it more tightly bound, but both are odd numbers. Calculate \(\displaystyle BE/A\), the binding energy per nucleon, for \(\displaystyle ^2H\) and compare it with the approximate value obtained from the graph in Figure.
Solution
1.112 MeV, consistent with graph
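The number comes from \(BE = [Zm(^1H) + Nm_n - m(^2H)]c^2\) divided by \(A = 2\). A minimal check with assumed Appendix-style masses:

```python
# Binding energy per nucleon of deuterium, BE = [Z*m(1H) + N*m_n - m(2H)] c^2.
u_to_MeV = 931.494

m_H1 = 1.007825   # u
m_n  = 1.008665   # u
m_H2 = 2.014102   # u

BE = (m_H1 + m_n - m_H2) * u_to_MeV
print(BE, BE / 2)   # ~2.224 MeV total, ~1.112 MeV per nucleon
```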
96. \(\displaystyle ^{56}Fe\) is among the most tightly bound of all nuclides. It is more than 90% of natural iron. Note that \(\displaystyle ^{56}Fe\) has even numbers of both protons and neutrons. Calculate BE/A, the binding energy per nucleon, for \(\displaystyle ^{56}Fe\) and compare it with the approximate value obtained from the graph in Figure.
97. \(\displaystyle ^{209}Bi\) is the heaviest stable nuclide, and its \(\displaystyle BE/A\) is low compared with medium-mass nuclides. Calculate \(\displaystyle BE/A\), the binding energy per nucleon, for \(\displaystyle ^{209}Bi\) and compare it with the approximate value obtained from the graph in Figure.
Solution
7.848 MeV, consistent with graph
98. (a) Calculate \(\displaystyle BE/A\) for \(\displaystyle ^{235}U\), the rarer of the two most common uranium isotopes.
(b) Calculate \(\displaystyle BE/A\) for \(\displaystyle ^{238}U\). (Most of uranium is \(\displaystyle ^{238}U\).) Note that \(\displaystyle ^{238}U\) has even numbers of both protons and neutrons. Is the \(\displaystyle BE/A\) of \(\displaystyle ^{238}U\) significantly different from that of \(\displaystyle ^{235}U\)?
99. (a) Calculate \(\displaystyle BE/A\) for \(\displaystyle ^{12}C\). Stable and relatively tightly bound, this nuclide is most of natural carbon.
(b) Calculate \(\displaystyle BE/A\) for \(\displaystyle ^{14}C\). Is the difference in \(\displaystyle BE/A\) between \(\displaystyle ^{12}C\) and \(\displaystyle ^{14}C\) significant? One is stable and common, and the other is unstable and rare.
Solution
(a) 7.680 MeV, consistent with graph
(b) 7.520 MeV, consistent with graph. Not significantly different from value for \(\displaystyle ^{12}C\), but sufficiently lower to allow decay into another nuclide that is more tightly bound.
100. The fact that \(\displaystyle BE/A\) is greatest for \(\displaystyle A\) near 60 implies that the range of the nuclear force is about the diameter of such nuclides.
(a) Calculate the diameter of an \(\displaystyle A=60\) nucleus.
(b) Compare \(\displaystyle BE/A\) for \(\displaystyle ^{58}Ni\) and \(\displaystyle ^{90}Sr\). The first is one of the most tightly bound nuclides, while the second is larger and less tightly bound.
101. The purpose of this problem is to show in three ways that the binding energy of the electron in a hydrogen atom is negligible compared with the masses of the proton and electron.
(a) Calculate the mass equivalent in u of the 13.6-eV binding energy of an electron in a hydrogen atom, and compare this with the mass of the hydrogen atom obtained from Appendix A.
(b) Subtract the mass of the proton from the mass of the hydrogen atom given in Appendix A. You will find the difference is equal to the electron’s mass to three digits, implying the binding energy is small in comparison.
(c) Take the ratio of the binding energy of the electron (13.6 eV) to the energy equivalent of the electron’s mass (0.511 MeV).
(d) Discuss how your answers confirm the stated purpose of this problem.
Solution
(a) \(\displaystyle 1.46×10^{−8}u\) vs. 1.007825 u for \(\displaystyle ^1H\)
(b) 0.000549 u
(c) \(\displaystyle 2.66×10^{−5}\)
102. Unreasonable Results
A particle physicist discovers a neutral particle with a mass of 2.02733 u that he assumes is two neutrons bound together.
(a) Find the binding energy.
(b) What is unreasonable about this result?
(c) What assumptions are unreasonable or inconsistent?
Solution
(a) \(\displaystyle –9.315 MeV\)
(b) The negative binding energy implies an unbound system.
(c) This assumption that it is two bound neutrons is incorrect.
31.7: Tunneling
103. Derive an approximate relationship between the energy of \(\displaystyle α\) decay and half-life using the following data. It may be useful to graph the log of \(\displaystyle t_{1/2}\) against \(\displaystyle E_α\) to find some straight-line relationship.
Energy and Half-Life for α Decay
| Nuclide | \(\displaystyle E_α(MeV)\) | \(\displaystyle t_{1/2}\) |
|---|---|---|
| \(\displaystyle ^{216}Ra\) | 9.5 | \(\displaystyle 0.18 μs\) |
| \(\displaystyle ^{194}Po\) | 7.0 | 0.7 s |
| \(\displaystyle ^{240}Cm\) | 6.4 | 27 d |
| \(\displaystyle ^{226}Ra\) | 4.91 | 1600 y |
| \(\displaystyle ^{232}Th\) | 4.1 | \(\displaystyle 1.4×10^{10}y\) |
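One way to start this problem is to tabulate \(\log_{10}t_{1/2}\) (in seconds) against \(E_α\) and look at the trend; even a crude two-point slope shows the enormous sensitivity of half-life to decay energy. A sketch, where the only assumptions are the unit conversions:

```python
import math

# log10(t_half) versus E_alpha for the table above; 1 d = 8.64e4 s, 1 y = 3.156e7 s.
data = {            # E_alpha (MeV): t_half (s)
    9.5: 0.18e-6,
    7.0: 0.7,
    6.4: 27 * 8.64e4,
    4.91: 1600 * 3.156e7,
    4.1: 1.4e10 * 3.156e7,
}

E = list(data)
logt = [math.log10(t) for t in data.values()]

# crude straight line through the two extreme points
slope = (logt[-1] - logt[0]) / (E[-1] - E[0])
intercept = logt[0] - slope * E[0]
print(f"log10(t_half/s) ~ {slope:.1f} * E_alpha + {intercept:.0f}")
```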
104. Integrated Concepts
A 2.00-T magnetic field is applied perpendicular to the path of charged particles in a bubble chamber. What is the radius of curvature of the path of a 10 MeV proton in this field? Neglect any slowing along its path.
Solution
22.8 cm
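The result follows from \(r = p/(qB)\). A sketch using the relativistic momentum (at 10 MeV the relativistic correction is small, so the answer agrees with the quoted 22.8 cm to within rounding):

```python
import math

# Radius of curvature r = p / (qB) for a 10-MeV proton in a 2.00-T field.
c = 2.998e8          # m/s
q = 1.602e-19        # C
m_p = 938.272        # proton rest energy, MeV
K = 10.0             # kinetic energy, MeV
B = 2.00             # T

pc = math.sqrt((K + m_p) ** 2 - m_p ** 2)   # pc in MeV
p = pc * 1e6 * q / c                        # momentum in kg*m/s
print(p / (q * B))                          # ~0.229 m
```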
105. (a) Write the decay equation for the \(\displaystyle α\) decay of \(\displaystyle ^{235}U\).
(b) What energy is released in this decay? The mass of the daughter nuclide is 231.036298 u.
(c) Assuming the residual nucleus is formed in its ground state, how much energy goes to the \(\displaystyle α\) particle?
Solution
(a) \(\displaystyle ^{235}_{92}U_{143}→^{231}_{90}Th_{141}+^4_2He_2\)
(b) 4.679 MeV
(c) 4.599 MeV
106. Unreasonable Results
The relatively scarce naturally occurring calcium isotope \(\displaystyle ^{48}Ca\) has a half-life of about \(\displaystyle 2×10^{16}y\).
(a) A small sample of this isotope is labeled as having an activity of 1.0 Ci. What is the mass of the \(\displaystyle ^{48}Ca\) in the sample?
(b) What is unreasonable about this result?
(c) What assumption is responsible?
107. Unreasonable Results
A physicist scatters γ rays from a substance and sees evidence of a nucleus \(\displaystyle 7.5×10^{–13}m\) in radius.
(a) Find the atomic mass of such a nucleus.
(b) What is unreasonable about this result?
(c) What is unreasonable about the assumption?
Solution
(a) \(\displaystyle 2.4×10^8 u\)
(b) The greatest known atomic masses are about 260. This result found in (a) is extremely large.
(c) The assumed radius is much too large to be reasonable.
108. Unreasonable Results
A frazzled theoretical physicist reckons that all conservation laws are obeyed in the decay of a proton into a neutron, positron, and neutrino (as in \(\displaystyle β^+\) decay of a nucleus) and sends a paper to a journal to announce the reaction as a possible end of the universe due to the spontaneous decay of protons.
(a) What energy is released in this decay?
(b) What is unreasonable about this result?
(c) What assumption is responsible?
Solution
(a) \(\displaystyle –1.805 MeV\)
(b) Negative energy implies energy input is necessary and the reaction cannot be spontaneous.
(c) Although all conservation laws are obeyed, energy must be supplied, so the assumption of spontaneous decay is incorrect.
109. Construct Your Own Problem
Consider the decay of radioactive substances in the Earth’s interior. The energy emitted is converted to thermal energy that reaches the Earth’s surface and is radiated away into cold dark space. Construct a problem in which you estimate the activity in a cubic meter of earth rock and then calculate the power generated. Calculate how much power must cross each square meter of the Earth’s surface if the power is dissipated at the same rate as it is generated. Among the things to consider are the activity per cubic meter, the energy per decay, and the size of the Earth.
Contributors and Attributions
- Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
32: Medical Applications of Nuclear Physics
Not only has nuclear physics revealed secrets of nature, it has an inevitable impact based on its applications, as they are intertwined with human values. Because of its potential for alleviation of suffering, and its power as an ultimate destructor of life, nuclear physics is often viewed with ambivalence. But it provides perhaps the best example that applications can be good or evil, while knowledge itself is neither.
Thumbnail: Whole-body PET scan using \(^{18}F\)-FDG. (Public Domain)
32.0: Prelude to Applications of Nuclear Physics
Applications of nuclear physics have become an integral part of modern life. From the bone scan that detects a cancer to the radioiodine treatment that cures another, nuclear radiation has diagnostic and therapeutic effects on medicine. From the fission power reactor to the hope of controlled fusion, nuclear energy is now commonplace and is a part of our plans for the future. Yet, the destructive potential of nuclear weapons haunts us, as does the possibility of nuclear reactor accidents.
Certainly, several applications of nuclear physics escape our view, as seen in Figure \(\PageIndex{1}\). Not only has nuclear physics revealed secrets of nature, it has an inevitable impact based on its applications, as they are intertwined with human values. Because of its potential for alleviation of suffering, and its power as an ultimate destructor of life, nuclear physics is often viewed with ambivalence. But it provides perhaps the best example that applications can be good or evil, while knowledge itself is neither.
32.1: Medical Imaging and Diagnostics
Learning Objectives
By the end of this section, you will be able to:
- Explain the working principle behind an Anger camera.
- Describe the SPECT and PET imaging techniques.
A host of medical imaging techniques employ nuclear radiation. What makes nuclear radiation so useful? First, \(\gamma\) radiation can easily penetrate tissue; hence, it is a useful probe to monitor conditions inside the body. Second, nuclear radiation depends on the nuclide and not on the chemical compound it is in, so that a radioactive nuclide can be put into a compound designed for specific purposes. The compound is said to be tagged . A tagged compound used for medical purposes is called a radiopharmaceutical . Radiation detectors external to the body can determine the location and concentration of a radiopharmaceutical to yield medically useful information. For example, certain drugs are concentrated in inflamed regions of the body, and this information can aid diagnosis and treatment as seen in Figure \(\PageIndex{1}\). Another application utilizes a radiopharmaceutical which the body sends to bone cells, particularly those that are most active, to detect cancerous tumors or healing points. Images can then be produced of such bone scans. Radioisotopes are also used to determine the functioning of body organs, such as blood flow, heart muscle activity, and iodine uptake in the thyroid gland.
Medical Application
Table \(\PageIndex{1}\) lists certain medical diagnostic uses of radiopharmaceuticals, including isotopes and activities that are typically administered. Many organs can be imaged with a variety of nuclear isotopes replacing a stable element by a radioactive isotope. One common diagnostic employs iodine to image the thyroid, since iodine is concentrated in that organ. The most active thyroid cells, including cancerous cells, concentrate the most iodine and, therefore, emit the most radiation. Conversely, hypothyroidism is indicated by lack of iodine uptake. Note that there is more than one isotope that can be used for several types of scans. Another common nuclear diagnostic is the thallium scan for the cardiovascular system, particularly used to evaluate blockages in the coronary arteries and examine heart activity. The salt TlCl can be used, because it acts like NaCl and follows the blood. Gallium-67 accumulates where there is rapid cell growth, such as in tumors and sites of infection. Hence, it is useful in cancer imaging. Usually, the patient receives the injection one day and has a whole body scan 3 or 4 days later because it can take several days for the gallium to build up.
| Isotope | Typical activity (mCi), where \(1 \, mCi = 3.7 \times 10^7 \, Bq\) |
|---|---|
| Brain scan | |
| \(^{99m}Tc\) | 7.5 |
| \(^{113m}In\) | 7.5 |
| \(^{11}C\space (PET)\) | 20 |
| \(^{13}N \, (PET)\) | 20 |
| \(^{15}O \, (PET)\) | 50 |
| \(^{18}F \, (PET)\) | 10 |
| Lung scan | |
| \(^{99m}Tc\) | 2 |
| \(^{133}Xe\) | 7.5 |
| Cardiovascular blood pool | |
| \(^{131}I\) | 0.2 |
| \(^{99m}Tc\) | 2 |
| Cardiovascular arterial flow | |
| \(^{201}Tl\) | 3 |
| \(^{24}Na\) | 7.5 |
| Thyroid scan | |
| \(^{131}I\) | 0.05 |
| \(^{123}I\) | 0.07 |
| Liver scan | |
| \(^{198}Au\) (colloid) | 0.1 |
| \(^{99m}Tc\) (colloid) | 2 |
| Bone scan | |
| \(^{85}Sr\) | 0.1 |
| \(^{99m}Tc\) | 10 |
| Kidney scan | |
| \(^{197}Hg\) | 0.1 |
| \(^{99m}Tc\) | 1.5 |
Note that Table \(\PageIndex{1}\) lists many diagnostic uses for \(^{99m}Tc\), where “m” stands for a metastable state of the technetium nucleus. Perhaps 80 percent of all radiopharmaceutical procedures employ \(^{99m}Tc\) because of its many advantages. One is that the decay of its metastable state produces a single, easily identified 0.142-MeV \(\gamma\) ray. Additionally, the radiation dose to the patient is limited by the short 6.0-h half-life of \(^{99m}Tc\). And, although its half-life is short, it is easily and continuously produced on site. The basic process for production is neutron activation of molybdenum, which quickly \(\beta\) decays into \(^{99m}Tc\). Technetium-99m can be attached to many compounds to allow the imaging of the skeleton, heart, lungs, kidneys, etc.
Figure \(\PageIndex{2}\) shows one of the simpler methods of imaging the concentration of nuclear activity, employing a device called an Anger camera or gamma camera . A piece of lead with holes bored through it collimates \(\gamma\) rays emerging from the patient, allowing detectors to receive \(\gamma\) rays from specific directions only. The computer analysis of detector signals produces an image. One of the disadvantages of this detection method is that there is no depth information (i.e., it provides a two-dimensional view of the tumor as opposed to a three-dimensional view), because radiation from any location under that detector produces a signal.
Imaging techniques much like those in x-ray computed tomography (CT) scans use nuclear activity in patients to form three-dimensional images. Figure \(\PageIndex{3}\) shows a patient in a circular array of detectors that may be stationary or rotated, with detector output used by a computer to construct a detailed image. This technique is called single-photon-emission computed tomography (SPECT) or sometimes simply SPET. The spatial resolution of this technique is poor, about 1 cm, but the contrast (i.e., the difference in visual properties that makes an object distinguishable from other objects and the background) is good.
Images produced by \(\beta^+\) emitters have become important in recent years. When the emitted positron (\(\beta^+\)) encounters an electron, mutual annihilation occurs, producing two \(\gamma\) rays. These \(\gamma\) rays have identical 0.511-MeV energies (the energy comes from the destruction of an electron or positron mass) and they move directly away from one another, allowing detectors to determine their point of origin accurately, as shown in Figure \(\PageIndex{4}\). The system is called positron emission tomography (PET). It requires detectors on opposite sides to simultaneously (i.e., at the same time) detect photons of 0.511-MeV energy and utilizes computer imaging techniques similar to those in SPECT and CT scans. Examples of \(\beta^+\)-emitting isotopes used in PET are \(^{11}C\), \(^{13}N\), \(^{15}O\) and \(^{18}F\), as seen in Table \(\PageIndex{1}\). This list includes C, N, and O, and so they have the advantage of being able to function as tags for natural body compounds. Its resolution of 0.5 cm is better than that of SPECT; the accuracy and sensitivity of PET scans make them useful for examining the brain’s anatomy and function. The brain’s use of oxygen and water can be monitored with \(^{15}O\). PET is used extensively for diagnosing brain disorders. It can note decreased metabolism in certain regions prior to a confirmation of Alzheimer’s disease. PET can locate regions in the brain that become active when a person carries out specific activities, such as speaking, closing their eyes, and so on.
PHET EXPLORATIONS: SIMPLIFIED MRI
Is it a tumor? Magnetic Resonance Imaging (MRI) can tell. Your head is full of tiny radio transmitters (the nuclear spins of the hydrogen nuclei of your water molecules). In an MRI unit, these little radios can be made to broadcast their positions, giving a detailed picture of the inside of your head.
Summary
- Radiopharmaceuticals are compounds that are used for medical imaging and therapeutics.
- The process of attaching a radioactive substance is called tagging.
- Table lists certain diagnostic uses of radiopharmaceuticals including the isotope and activity typically used in diagnostics.
- One common imaging device is the Anger camera, which consists of a lead collimator, radiation detectors, and an analysis computer.
- Tomography performed with \(\gamma\) -emitting radiopharmaceuticals is called SPECT and has the advantages of x-ray CT scans coupled with organ- and function-specific drugs.
- PET is a similar technique that uses \(\beta^+\) emitters and detects the two annihilation \(\gamma\) rays, which aid to localize the source.
Glossary
- Anger camera
- a common medical imaging device that uses a scintillator connected to a series of photomultipliers
- gamma camera
- another name for an Anger camera
- positron emission tomography (PET)
- tomography technique that uses \(\beta^+\) emitters and detects the two annihilation \(\gamma\) rays, aiding in source localization
- radiopharmaceutical
- compound used for medical imaging
- single-photon-emission computed tomography (SPECT)
- tomography performed with \(\gamma\)-emitting radiopharmaceuticals
- tagged
- process of attaching a radioactive substance to a chemical compound
32.2: Biological Effects of Ionizing Radiation
Learning Objectives
By the end of this section, you will be able to:
- Define various units of radiation.
- Describe RBE.
We hear many seemingly contradictory things about the biological effects of ionizing radiation. It can cause cancer, burns, and hair loss, yet it is used to treat and even cure cancer. How do we understand these effects? Once again, there is an underlying simplicity in nature, even in complicated biological organisms. All the effects of ionizing radiation on biological tissue can be understood by knowing that ionizing radiation affects molecules within cells, particularly DNA molecules.
Let us take a brief look at molecules within cells and how cells operate. Cells have long, double-helical DNA molecules containing chemical codes called genetic codes that govern the function and processes undertaken by the cell. It is for unraveling the double-helical structure of DNA that James Watson, Francis Crick, and Maurice Wilkins received the Nobel Prize. Damage to DNA consists of breaks in chemical bonds or other changes in the structural features of the DNA chain, leading to changes in the genetic code. In human cells, we can have as many as a million individual instances of damage to DNA per cell per day. It is remarkable that DNA contains codes that check whether the DNA is damaged or can repair itself. It is like an auto check and repair mechanism. This repair ability of DNA is vital for maintaining the integrity of the genetic code and for the normal functioning of the entire organism. It should be constantly active and needs to respond rapidly. The rate of DNA repair depends on various factors such as the cell type and age of the cell. A cell with a damaged ability to repair DNA, which could have been induced by ionizing radiation, can do one of the following:
- The cell can go into an irreversible state of dormancy, known as senescence.
- The cell can commit suicide, known as programmed cell death (apoptosis ).
- The cell can go into unregulated cell division leading to tumors and cancers.
Since ionizing radiation damages the DNA, which is critical in cell reproduction, it has its greatest effect on cells that rapidly reproduce, including most types of cancer. Thus, cancer cells are more sensitive to radiation than normal cells and can be killed by it easily. Cancer is characterized by a malfunction of cell reproduction, and can also be caused by ionizing radiation. Without contradiction, ionizing radiation can be both a cure and a cause.
To discuss quantitatively the biological effects of ionizing radiation, we need a radiation dose unit that is directly related to those effects. All effects of radiation are assumed to be directly proportional to the amount of ionization produced in the biological organism. The amount of ionization is in turn proportional to the amount of deposited energy. Therefore, we define a radiation dose unit called the rad , as \(1/100\) of a joule of ionizing energy deposited per kilogram of tissue, which is
\[1 \, rad = 0.01 \, J/kg.\]
For example, if a 50.0-kg person is exposed to ionizing radiation over her entire body and she absorbs 1.00 J, then her whole-body radiation dose is
\[(1.00 \, J)/(50.0 \, kg) = 0.0200 \, J/kg = 2.00 \, rad.\]
If the same 1.00 J of ionizing energy were absorbed in her 2.00-kg forearm alone, then the dose to the forearm would be
\[(1.00 \, J)/(2.00 \, kg) = 0.500 \, J/kg = 50.0 \, rad,\]
and the unaffected tissue would have a zero rad dose. While calculating radiation doses, you divide the energy absorbed by the mass of affected tissue. You must specify the affected region, such as the whole body or forearm in addition to giving the numerical dose in rads. The SI unit for radiation dose is the gray (Gy) , which is defined to be
\[1 \, Gy = 1 \, J/kg = 100 \, rad.\]
However, the rad is still commonly used. Although the energy per kilogram in 1 rad is small, it has significant effects since the energy causes ionization. The energy needed for a single ionization is a few eV, or less than \(10^{-18} \, J\). Thus, 0.01 J of ionizing energy can create a huge number of ion pairs and have an effect at the cellular level.
The effects of ionizing radiation may be directly proportional to the dose in rads, but they also depend on the type of radiation and the type of tissue. That is, for a given dose in rads, the effects depend on whether the radiation is \(\alpha\), \(\beta\), \(\gamma\), x-ray, or some other type of ionizing radiation. In the earlier discussion of the range of ionizing radiation, it was noted that energy is deposited in a series of ionizations and not in a single interaction. Each ion pair or ionization requires a certain amount of energy, so that the number of ion pairs is directly proportional to the amount of the deposited ionizing energy. However, if the range of the radiation is small, as it is for \(\alpha\) particles, then the ionization and the damage created are more concentrated and harder for the organism to repair, as seen in Figure \(\PageIndex{1}\). Concentrated damage is more difficult for biological organisms to repair than damage that is spread out, so short-range particles have greater biological effects. The relative biological effectiveness (RBE) or quality factor (QF) is given in Table \(\PageIndex{1}\) for several types of ionizing radiation; the effect of the radiation is directly proportional to the RBE. A dose unit more closely related to effects in biological tissue is called the roentgen equivalent man or rem and is defined to be the dose in rads multiplied by the relative biological effectiveness.
\[rem = rad \times RBE\]
So, if a person had a whole-body dose of 2.00 rad of \(\gamma\) radiation, the dose in rem would be \((2.00 \, rad)(1) = 2.00 \, rem\) whole body. If the person had a whole-body dose of 2.00 rad of \(\alpha\) radiation, then the dose in rem would be \((2.00 \, rad)(20) = 40.0 \, rem\) whole body. The \(\alpha\)s would have 20 times the effect on the person as the \(\gamma\)s for the same deposited energy. The SI equivalent of the rem is the sievert (Sv), defined to be \(Sv = Gy \times RBE\), so that \[1 \, Sv = 1 \, Gy \times RBE = 100 \, rem.\]
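The bookkeeping in these examples takes just two steps: energy per kilogram gives the dose in rad (1 rad = 0.01 J/kg), and multiplying by the RBE gives the dose in rem. A minimal Python sketch of the two whole-body examples above:

```python
# Dose in rem from deposited energy, affected mass, and RBE.
def dose_rem(energy_J, mass_kg, RBE):
    dose_rad = (energy_J / mass_kg) / 0.01   # 1 rad = 0.01 J/kg
    return dose_rad * RBE

print(dose_rem(1.00, 50.0, RBE=1))    # gamma, whole body: 2.0 rem
print(dose_rem(1.00, 50.0, RBE=20))   # alpha, same deposited energy: 40 rem
```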
| Type and energy of radiation | RBE 1 |
|---|---|
| X-rays | 1 |
| \(\gamma\) rays | 1 |
| \(\beta\) rays greater than 32 keV | 1 |
| \(\beta\) rays less than 32 keV | 1.7 |
| Neutrons, thermal to slow (<20 keV) | 2–5 |
| Neutrons, fast (1–10 MeV) | 10 (body), 32 (eyes) |
| Protons (1–10 MeV) | 10 (body), 32 (eyes) |
| \(\alpha\) rays from radioactive decay | 10–20 |
| Heavy ions from accelerators | 10–20 |
The RBEs given in Table \(\PageIndex{1}\) are approximate, but they yield certain insights. For example, the eyes are more sensitive to radiation, because the cells of the lens do not repair themselves. Neutrons cause more damage than \(\gamma\) rays, although both are neutral and have large ranges, because neutrons often cause secondary radiation when they are captured. Note that the RBEs are 1 for higher-energy \(\beta\)s, \(\gamma\)s, and x-rays, three of the most common types of radiation. For those types of radiation, the numerical values of the dose in rem and rad are identical. For example, 1 rad of \(\gamma\) radiation is also 1 rem. For that reason, rads are still widely quoted rather than rem. Table \(\PageIndex{2}\) summarizes the units that are used for radiation.
Misconception Alert: Activity vs. Dose
“Activity” refers to the radioactive source while “dose” refers to the amount of energy from the radiation that is deposited in a person or object.
A high level of activity doesn’t mean much if a person is far away from the source. The activity \(R\) of a source depends upon the quantity of material (kg) as well as the half-life. A short half-life will produce many more disintegrations per second. Recall that \(R = \frac{0.693 N}{t_{1/2}}\). Also, the activity decreases exponentially, which is seen in the equation \(R = R_0e^{-\lambda t}\).
| Quantity | SI unit name | Definition | Former unit | Conversion |
|---|---|---|---|---|
| Activity | Becquerel (bq) | decay/sec | Curie (Ci) | \(1 \, Bq = 2.7 \times 10^{-11} \, Ci\) |
| Absorbed dose | Gray (Gy) | 1 J/kg | rad | \(Gy = 100 \, rad\) |
| Dose Equivalent | Sievert (Sv) | 1 J/kg × RBE | rem | \(Sv = 100 \, rem\) |
The large-scale effects of radiation on humans can be divided into two categories: immediate effects and long-term effects. Table gives the immediate effects of whole-body exposures received in less than one day. If the radiation exposure is spread out over more time, greater doses are needed to cause the effects listed. This is due to the body’s ability to partially repair the damage. Any dose less than 100 mSv (10 rem) is called a low dose , 0.1 Sv to 1 Sv (10 to 100 rem) is called a moderate dose , and anything greater than 1 Sv (100 rem) is called a high dose . There is no known way to determine after the fact if a person has been exposed to less than 10 mSv.
| Dose in Sv 2 | Effect |
|---|---|
| 0–0.10 | No observable effect. |
| 0.1 – 1 | Slight to moderate decrease in white blood cell counts. |
| 0.5 | Temporary sterility; 0.35 for women, 0.50 for men. |
| 1 – 2 | Significant reduction in blood cell counts, brief nausea and vomiting. Rarely fatal. |
| 2 – 5 | Nausea, vomiting, hair loss, severe blood damage, hemorrhage, fatalities. |
| 4.5 | LD50/32. Lethal to 50% of the population within 32 days after exposure if not treated. |
| 5 – 20 | Worst effects due to malfunction of small intestine and blood systems. Limited survival. |
| >20 | Fatal within hours due to collapse of central nervous system. |
Immediate effects are explained by the effects of radiation on cells and the sensitivity of rapidly reproducing cells to radiation. The first clue that a person has been exposed to radiation is a change in blood count, which is not surprising since blood cells are the most rapidly reproducing cells in the body. At higher doses, nausea and hair loss are observed, which may be due to interference with cell reproduction. Cells in the lining of the digestive system also rapidly reproduce, and their destruction causes nausea. When the growth of hair cells slows, the hair follicles become thin and break off. High doses cause significant cell death in all systems, but the lowest doses that cause fatalities do so by weakening the immune system through the loss of white blood cells.
The two known long-term effects of radiation are cancer and genetic defects. Both are directly attributable to the interference of radiation with cell reproduction. For high doses of radiation, the risk of cancer is reasonably well known from studies of exposed groups. Hiroshima and Nagasaki survivors and a smaller number of people exposed by their occupation, such as radium dial painters, have been fully documented. Chernobyl victims will be studied for many decades, with some data already available. For example, a significant increase in childhood thyroid cancer has been observed. The risk of a radiation-induced cancer for low and moderate doses is generally assumed to be proportional to the risk known for high doses. Under this assumption, any dose of radiation, no matter how small, involves a risk to human health. This is called the linear hypothesis and it may be prudent, but it is controversial. There is some evidence that, unlike the immediate effects of radiation, the long-term effects are cumulative and there is little self-repair. This is analogous to the risk of skin cancer from UV exposure, which is known to be cumulative.
There is a latency period for the onset of radiation-induced cancer of about 2 years for leukemia and 15 years for most other forms. The person is at risk for at least 30 years after the latency period. Omitting many details, the overall risk of a radiation-induced cancer death per year per rem of exposure is about 10 in a million, which can be written as \(10/10^6 \, rem \cdot y\).
If a person receives a dose of 1 rem, his risk each year of dying from radiation-induced cancer is 10 in a million and that risk continues for about 30 years. The lifetime risk is thus 300 in a million, or 0.03 percent. Since about 20 percent of all worldwide deaths are from cancer, the increase due to a 1 rem exposure is impossible to detect demographically. But 100 rem (1 Sv), which was the dose received by the average Hiroshima and Nagasaki survivor, causes a 3 percent risk, which can be observed in the presence of a 20 percent normal or natural incidence rate.
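The lifetime-risk arithmetic in this paragraph is simple enough to check directly: the risk per rem per year, times the dose, times roughly 30 at-risk years.

```python
# Lifetime cancer-death risk under the linear model used in the text.
risk_per_rem_year = 10 / 1e6     # ~10 deaths per 10^6 person-rem per year
years_at_risk = 30

print(risk_per_rem_year * 1.0 * years_at_risk)     # 1 rem   -> 0.0003 = 0.03%
print(risk_per_rem_year * 100.0 * years_at_risk)   # 100 rem -> 0.03 = 3%
```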
The incidence of genetic defects induced by radiation is about one-third that of cancer deaths, but is much more poorly known. The lifetime risk of a genetic defect due to a 1 rem exposure is about 100 in a million or \(3.3/10^6 \, rem \cdot y\), but the normal incidence is 60,000 in a million. Evidence of such a small increase, tragic as it is, is nearly impossible to obtain. For example, there is no evidence of increased genetic defects among the offspring of Hiroshima and Nagasaki survivors. Animal studies do not seem to correlate well with effects on humans and are not very helpful. For both cancer and genetic defects, the approach to safety has been to use the linear hypothesis, which is likely to be an overestimate of the risks of low doses. Certain researchers even claim that low doses are beneficial . Hormesis is a term used to describe generally favorable biological responses to low exposures of toxins or radiation. Such low levels may help certain repair mechanisms to develop or enable cells to adapt to the effects of the low exposures. Positive effects may occur at low doses that could be a problem at high doses.
Even the linear hypothesis estimates of the risks are relatively small, and the average person is not exposed to large amounts of radiation. Table lists average annual background radiation doses from natural and artificial sources for Australia, the United States, Germany, and world-wide averages. Cosmic rays are partially shielded by the atmosphere, and the dose depends upon altitude and latitude, but the average is about 0.40 mSv/y. A good example of the variation of cosmic radiation dose with altitude comes from the airline industry. Monitored personnel show an average of 2 mSv/y. A 12-hour flight might give you an exposure of 0.02 to 0.03 mSv.
Doses from the Earth itself are mainly due to the isotopes of uranium, thorium, and potassium, and vary greatly by location. Some places have great natural concentrations of uranium and thorium, yielding doses ten times as high as the average value. Internal doses come from foods and liquids that we ingest. Fertilizers containing phosphates have potassium and uranium. So we are all a little radioactive. Carbon-14 has about 66 Bq/kg radioactivity whereas fertilizers may have more than 3000 Bq/kg radioactivity. Medical and dental diagnostic exposures are mostly from x-rays. It should be noted that x-ray doses tend to be localized and are becoming much smaller with improved techniques. Table shows typical doses received during various diagnostic x-ray examinations. Note the large dose from a CT scan. While CT scans only account for less than 20 percent of the x-ray procedures done today, they account for about 50 percent of the annual dose received.
Radon is usually more pronounced underground and in buildings with low air exchange with the outside world. Almost all soil contains some \(^{226}Ra\) and \(^{222}Rn\), but radon is lower in mainly sedimentary soils and higher in granite soils. Thus, the exposure to the public can vary greatly, even within short distances. Radon can diffuse from the soil into homes, especially basements. The estimated exposure for \(^{222}Rn\) is controversial. Recent studies indicate there is more radon in homes than had been realized, and it is speculated that radon may be responsible for 20 percent of lung cancers, being particularly hazardous to those who also smoke. Many countries have introduced limits on allowable radon concentrations in indoor air, often requiring the measurement of radon concentrations in a house prior to its sale. Ironically, it could be argued that the higher levels of radon exposure and their geographic variability, taken with the lack of demographic evidence of any effects, means that low-level radiation is less dangerous than previously thought.
Radiation Protection
Laws regulate radiation doses to which people can be exposed. The greatest occupational whole-body dose that is allowed depends upon the country and is about 20 to 50 mSv/y and is rarely reached by medical and nuclear power workers. Higher doses are allowed for the hands. Much lower doses are permitted for the reproductive organs and the fetuses of pregnant women. Inadvertent doses to the public are limited to \(1/10\) of occupational doses, except for those caused by nuclear power, which cannot legally expose the public to more than \(1/1000\) of the occupational limit or 0.05 mSv/y (5 mrem/y). This has been exceeded in the United States only at the time of the Three Mile Island (TMI) accident in 1979. Chernobyl is another story. Extensive monitoring with a variety of radiation detectors is performed to assure radiation safety. Increased ventilation in uranium mines has lowered the dose there to about 1 mSv/y.
| Source | Australia (mSv/y) | Germany (mSv/y) | United States (mSv/y) | World (mSv/y) |
|---|---|---|---|---|
| Natural Radiation - external | | | | |
| Cosmic Rays | 0.30 | 0.28 | 0.30 | 0.39 |
| Soil, building materials | 0.40 | 0.40 | 0.30 | 0.48 |
| Radon gas | 0.90 | 1.1 | 2.0 | 1.2 |
| Natural Radiation - internal | | | | |
| \(^{40}K\), \(^{14}C\), \(^{226}Ra\) | 0.24 | 0.28 | 0.40 | 0.29 |
| Medical & Dental | 0.80 | 0.90 | 0.53 | 0.40 |
| TOTAL | 2.6 | 3.0 | 3.5 | 2.8 |
To physically limit radiation doses, we use shielding , increase the distance from a source, and limit the time of exposure .
Figure \(\PageIndex{2}\) illustrates how these are used to protect both the patient and the dental technician when an x-ray is taken. Shielding absorbs radiation and can be provided by any material, including sufficient air. The greater the distance from the source, the more the radiation spreads out. The less time a person is exposed to a given source, the smaller is the dose received by the person. Doses from most medical diagnostics have decreased in recent years due to faster films that require less exposure time.
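A rough numerical sketch of the distance and time factors, assuming an idealized unshielded point source so that the dose rate falls as the inverse square of the distance; the dose rate at 1 m is a made-up illustrative number, not a value from the text:

```python
# Rough illustration of the distance and time factors in radiation protection.
# Assumes an unshielded point source: dose rate falls as 1/r^2 (inverse square),
# and the accumulated dose is dose rate x exposure time.

def dose(dose_rate_at_1m_mSv_per_h, distance_m, time_h):
    """Dose in mSv for a hypothetical point source, ignoring shielding and scattering."""
    dose_rate = dose_rate_at_1m_mSv_per_h / distance_m**2  # inverse-square falloff
    return dose_rate * time_h

rate_1m = 0.10  # mSv/h at 1 m from the source (assumed value, for illustration only)
for d, t in [(1.0, 1.0), (2.0, 1.0), (2.0, 0.25)]:
    print(f"distance {d} m, time {t} h -> {dose(rate_1m, d, t):.4f} mSv")
# Doubling the distance cuts the dose by 4; cutting the time by 4 cuts it by 4 again.
```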
| Procedure | Effective dose (mSv) |
|---|---|
| Chest | 0.02 |
| Dental | 0.01 |
| Skull | 0.07 |
| Leg | 0.02 |
| Mammogram | 0.40 |
| Barium enema | 7.0 |
| Upper GI | 3.0 |
| CT head | 2.0 |
| CT abdomen | 10.0 |
Problem-Solving Strategy
You need to follow certain steps for dose calculations:
- Step 1. Examine the situation to determine that a person is exposed to ionizing radiation.
- Step 2. Identify exactly what needs to be determined in the problem (identify the unknowns). The most straightforward problems ask for a dose calculation.
- Step 3. Make a list of what is given or can be inferred from the problem as stated (identify the knowns). Look for information on the type of radiation, the energy per event, the activity, and the mass of tissue affected.
- Step 4. For dose calculations, you need to determine the energy deposited. This may take one or more steps, depending on the given information.
- Step 5. Divide the deposited energy by the mass of the affected tissue. Use units of joules for energy and kilograms for mass. To calculate the dose in Gy, use the definition that \(1 \, Gy = 1 \, J/kg\).
- Step 6. If a dose in Sv is involved, determine the RBE (QF) of the radiation. Recall that \(dose \, in \, Sv = dose \, in \, Gy \times RBE\) (or \(dose \, in \, rem = dose \, in \, rad \times RBE\)).
- Step 7. Check the answer to see if it is reasonable: Does it make sense? The dose should be consistent with the numbers given in the text for diagnostic, occupational, and therapeutic exposures.
Example \(\PageIndex{1}\): Dose from Inhaled Plutonium
Calculate the dose in rem/y for the lungs of a weapons plant employee who inhales and retains an activity of \(1.00 \, \mu Ci\) of \(^{239}Pu\) in an accident. The mass of affected lung tissue is 2.00 kg, the plutonium decays by emission of a 5.23-MeV \(\alpha\) particle, and you may assume the higher value of the RBE for \(\alpha\)s (RBE = 20).
Strategy
Dose in rem is defined by \(1 \, rad = 0.01 \, J/kg\) and \(rem = rad \times RBE\). The energy deposited is divided by the mass of tissue affected and then multiplied by the RBE. The latter two quantities are given, and so the main task in this example will be to find the energy deposited in one year. Since the activity of the source is given, we can calculate the number of decays, multiply by the energy per decay, and convert MeV to joules to get the total energy.
Solution
The activity \(R = 1.00 \, \mu Ci = 3.70 \times 10^4 \, Bq = 3.70 \times 10^4\) decays/s. So, the number of decays per year is obtained by multiplying by the number of seconds in a year:
\[(3.70 \times 10^4 \, decays/s)(3.16 \times 10^7 \, s) = 1.17 \times 10^{12} \, decays.\]
Thus, the ionizing energy deposited per year is
\[E = (1.17 \times 10^{12} \, decays)(5.23 \, MeV/decay) \times \left(\dfrac{1.60 \times 10^{-13} \, J}{MeV}\right) = 0.978 \, J.\]
Dividing by the mass of the affected tissue gives
\[\dfrac{E}{mass} = \dfrac{0.978 \, J}{2.00 \, kg} = 0.489 \, J/kg.\]
One Gray is 1.00 J/kg, and so the dose in Gy is
\[dose \, in \, Gy = \dfrac{0.489 \, J/kg}{1.00 \, (J/kg)/Gy} = 0.489 \, Gy.\]
Now, the dose in Sv is \[dose \, in \, Sv = Gy \times RBE = (0.489 \, Gy)(20) = 9.8 \, Sv.\] Since \(1 \, Sv = 100 \, rem\), this is 980 rem per year, the quantity asked for.
Discussion
First note that the dose is given to two digits, because the RBE is (at best) known only to two digits. By any standard, this yearly radiation dose is high and will have a devastating effect on the health of the worker. Worse yet, plutonium has a long radioactive half-life and is not readily eliminated by the body, and so it will remain in the lungs. Being an \(\alpha\) emitter makes the effects 10 to 20 times worse than the same ionization produced by \(\beta\)s, \(\gamma\) rays, or x-rays. An activity of \(1.00 \, \mu Ci\) is created by only \(16 \, \mu g\) of \(^{239}Pu\) (left as an end-of-chapter problem to verify), partly justifying claims that plutonium is the most toxic substance known. Its actual hazard depends on how likely it is to be spread out among a large population and then ingested. The Chernobyl disaster’s deadly legacy, for example, has nothing to do with the plutonium it put into the environment.
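The arithmetic of this example can be reproduced with a minimal Python sketch, using the values given in the problem statement:

```python
# Reproduce the inhaled-plutonium dose estimate above.
activity_Bq = 3.70e4          # 1.00 microcurie
seconds_per_year = 3.16e7
energy_per_decay_MeV = 5.23   # alpha energy of Pu-239
MeV_to_J = 1.60e-13
tissue_mass_kg = 2.00
RBE = 20                      # higher RBE value for alphas

decays_per_year = activity_Bq * seconds_per_year
energy_J = decays_per_year * energy_per_decay_MeV * MeV_to_J
dose_Gy = energy_J / tissue_mass_kg      # 1 Gy = 1 J/kg
dose_Sv = dose_Gy * RBE
print(f"{energy_J:.3f} J deposited, {dose_Gy:.3f} Gy, {dose_Sv:.1f} Sv per year")
# ~0.978 J, ~0.489 Gy, ~9.8 Sv/y, as in the worked example.
```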
Risk versus Benefit
Medical doses of radiation are also limited. Diagnostic doses are generally low and have been further lowered with improved techniques and faster films. With the possible exception of routine dental x-rays, radiation is used diagnostically only when needed, so that the low risk is justified by the benefit of the diagnosis. Chest x-rays give the lowest doses, about 0.1 mSv to the tissue affected, with less than 5 percent scattering into tissues that are not directly imaged. Other x-ray procedures range upward to about 10 mSv in a CT scan, and about 5 mSv (0.5 rem) per dental x-ray, again both only affecting the tissue imaged. Medical images with radiopharmaceuticals give doses ranging from 1 to 5 mSv, usually localized. One exception is the thyroid scan using \(^{131}I\). Because of its relatively long half-life, it exposes the thyroid to about 0.75 Sv. The isotope \(^{123}I\) is more difficult to produce, but its short half-life limits thyroid exposure to about 15 mSv.
PHET EXPLORATIONS: ALPHA DECAY
Watch alpha particles escape from a polonium nucleus, causing radioactive alpha decay. See how random decay times relate to the half life.
Summary
- The biological effects of ionizing radiation are due to two effects it has on cells: interference with cell reproduction, and destruction of cell function.
- A radiation dose unit called the rad is defined in terms of the ionizing energy deposited per kilogram of tissue: \[1 \, rad = 0.01 \, J/kg.\]
- The SI unit for radiation dose is the gray (Gy), which is defined to be \(1 \, Gy = 1 \, J/kg = 100 \, rad.\)
- To account for the effect of the type of particle creating the ionization, we use the relative biological effectiveness (RBE) or quality factor (QF) given in Table and define a unit called the roentgen equivalent man (rem) as \[rem = rad \times RBE.\]
- Particles that have short ranges or create large ionization densities have RBEs greater than unity. The SI equivalent of the rem is the sievert (Sv), defined to be \[Sv = Gy \times RBE \, and \, 1 \, Sv = 100 \, rem.\]
- Whole-body, single-exposure doses of 0.1 Sv or less are low doses, those of 0.1 to 1 Sv are moderate, and those over 1 Sv are high doses. Some immediate radiation effects are given earlier in this section. Effects due to low doses are not observed directly, but their risk is assumed to be proportional to dose, extrapolated linearly from high doses, an assumption known as the linear hypothesis. Long-term effects are cancer deaths at the rate of \(10/10^6 \, rem \cdot y\) and genetic defects at roughly one-third this rate. Background radiation doses and sources are given in the table above. World-wide average radiation exposure from natural sources, including radon, is about 3 mSv, or 300 mrem. Radiation protection utilizes shielding, distance, and time to limit exposure.
Glossary
- gray (Gy)
- the SI unit for radiation dose which is defined to be \(1 \, Gy = 1 \, J/kg = 100 \, rad\)
- linear hypothesis
- assumption that risk is directly proportional to dose, extrapolated linearly from the known risks at high doses
- rad
- the ionizing energy deposited per kilogram of tissue
- sievert
- the SI equivalent of the rem
- relative biological effectiveness (RBE)
- a number that expresses the relative amount of damage that a fixed amount of ionizing radiation of a given type can inflict on biological tissues
- quality factor
- same as relative biological effectiveness
- roentgen equivalent man (rem)
- a dose unit more closely related to effects in biological tissue
- low dose
- a dose less than 100 mSv (10 rem)
- moderate dose
- a dose from 0.1 Sv to 1 Sv (10 to 100 rem)
- high dose
- a dose greater than 1 Sv (100 rem)
- hormesis
- a term used to describe generally favorable biological responses to low exposures of toxins or radiation
- shielding
- a technique to limit radiation exposure
32.3: Therapeutic Uses of Ionizing Radiation
Learning Objectives
By the end of this section, you will be able to:
- Explain the concept of radiotherapy and list typical doses for cancer therapy.
Therapeutic applications of ionizing radiation, called radiation therapy or radiotherapy , have existed since the discovery of x-rays and nuclear radioactivity. Today, radiotherapy is used almost exclusively for cancer therapy, where it saves thousands of lives and improves the quality of life and longevity of many it cannot save. Radiotherapy may be used alone or in combination with surgery and chemotherapy (drug treatment) depending on the type of cancer and the response of the patient. A careful examination of all available data has established that radiotherapy’s beneficial effects far outweigh its long-term risks.
Medical Application
The earliest uses of ionizing radiation on humans were mostly harmful, with many at the level of snake oil as seen in Figure . Radium-doped cosmetics that glowed in the dark were used around the time of World War I. As recently as the 1950s, radon mine tours were promoted as healthful and rejuvenating—those who toured were exposed but gained no benefits. Radium salts were sold as health elixirs for many years. The gruesome death of a wealthy industrialist, who became psychologically addicted to the brew, alerted the unsuspecting to the dangers of radium salt elixirs. Most abuses finally ended after the legislation in the 1950s.
Radiotherapy is effective against cancer because cancer cells reproduce rapidly and, consequently, are more sensitive to radiation. The central problem in radiotherapy is to make the dose for cancer cells as high as possible while limiting the dose for normal cells. The ratio of abnormal cells killed to normal cells killed is called the therapeutic ratio , and all radiotherapy techniques are designed to enhance this ratio. Radiation can be concentrated in cancerous tissue by a number of techniques. One of the most prevalent techniques for well-defined tumors is a geometric technique shown in Figure . A narrow beam of radiation is passed through the patient from a variety of directions with a common crossing point in the tumor. This concentrates the dose in the tumor while spreading it out over a large volume of normal tissue. The external radiation can be x-rays, \(^{60}Co\) \(\gamma\) rays, or ionizing-particle beams produced by accelerators. Accelerator-produced beams of neutrons, \(\pi -mesons\), and heavy ions such as nitrogen nuclei have been employed, and these can be quite effective. These particles have larger QFs or RBEs and sometimes can be better localized, producing a greater therapeutic ratio. But accelerator radiotherapy is much more expensive and less frequently employed than other forms.
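One way to see the benefit of the crossed-beam geometry is with a simple count of how much dose each region receives; the sketch below is an idealization with an assumed beam count, ignoring attenuation and scattering:

```python
# Idealized crossed-beam geometry: N beams, each depositing dose d along its path.
# The tumor sits at the common crossing point, so it receives dose from every beam,
# while any given region of normal tissue lies in only one beam path.
n_beams = 8          # assumed number of beam directions (illustrative)
dose_per_beam = 1.0  # relative dose deposited along each path (arbitrary units)

tumor_dose = n_beams * dose_per_beam
normal_tissue_dose = dose_per_beam        # each normal region crossed by one beam
print("tumor/normal dose ratio:", tumor_dose / normal_tissue_dose)
# The ratio grows with the number of beam directions, which is why many
# directions (or a rotating source) are used to enhance the therapeutic ratio.
```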
Another form of radiotherapy uses chemically inert radioactive implants. One use is for prostate cancer. Radioactive seeds (about 40 to 100, each the size of a grain of rice) are placed in the prostate region. The isotopes used are usually \(^{125}I\) (half-life of about two months) or \(^{103}Pd\) (half-life of about 17 days). Alpha emitters have the dual advantages of a large QF and a small range for better localization.
Radiopharmaceuticals are used for cancer therapy when they can be localized well enough to produce a favorable therapeutic ratio. Thyroid cancer is commonly treated utilizing radioactive iodine. Thyroid cells concentrate iodine, and cancerous thyroid cells are more aggressive in doing this. An ingenious use of radiopharmaceuticals in cancer therapy tags antibodies with radioisotopes. Antibodies produced by a patient to combat his cancer are extracted, cultured, loaded with a radioisotope, and then returned to the patient. The antibodies are concentrated almost entirely in the tissue they developed to fight, thus localizing the radiation in abnormal tissue. The therapeutic ratio can be quite high for short-range radiation. There is, however, a significant dose for organs that eliminate radiopharmaceuticals from the body, such as the liver, kidneys, and bladder. As with most radiotherapy, the technique is limited by the tolerable amount of damage to the normal tissue.
The table below lists typical therapeutic doses of radiation used against certain cancers. The doses are large, but not fatal because they are localized and spread out in time. Protocols for treatment vary with the type of cancer and the condition and response of the patient. Three to five 200-rem treatments per week for a period of several weeks is typical. Time between treatments allows the body to repair normal tissue. This effect occurs because damage is concentrated in the abnormal tissue, and the abnormal tissue is more sensitive to radiation. Damage to normal tissue limits the doses. You will note that the greatest doses are given to any tissue that is not rapidly reproducing, such as in the adult brain. Lung cancer, on the other end of the scale, cannot ordinarily be cured with radiation because of the sensitivity of lung tissue and blood to radiation. But radiotherapy for lung cancer does alleviate symptoms and prolong life and is therefore justified in some cases.
| Type of Cancer | Typical dose (Sv) |
|---|---|
| Lung | 10–20 |
| Hodgkin’s disease | 40–45 |
| Skin | 40–50 |
| Ovarian | 50–75 |
| Breast | 50–80+ |
| Brain | 80+ |
| Neck | 80+ |
| Bone | 80+ |
| Soft tissue | 80+ |
| Thyroid | 80+ |
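As a rough consistency check, a fractionated schedule like the one described above accumulates dose as follows; the number of weeks here is an assumed value chosen for illustration:

```python
# Cumulative dose from a fractionated radiotherapy schedule (illustrative only).
dose_per_fraction_Sv = 2.0   # 200 rem per treatment, as quoted in the text
fractions_per_week = 5       # "three to five ... per week"
weeks = 8                    # assumed course length of "several weeks"

total_dose_Sv = dose_per_fraction_Sv * fractions_per_week * weeks
print(f"Total dose over the course: {total_dose_Sv:.0f} Sv")
# 80 Sv, in the range listed for the most radiation-tolerant tissues in the table.
```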
Finally, it is interesting to note that chemotherapy employs drugs that interfere with cell division and is, thus, also effective against cancer. It also has almost the same side effects, such as nausea and hair loss, and risks, such as the inducement of another cancer.
Summary
- Radiotherapy is the use of ionizing radiation to treat ailments, now limited to cancer therapy.
- The sensitivity of cancer cells to radiation enhances the ratio of cancer cells killed to normal cells killed, which is called the therapeutic ratio. Doses for various organs are limited by the tolerance of normal tissue for radiation.
- Treatment is localized in one region of the body and spread out in time.
Glossary
- radiotherapy
- the use of ionizing radiation to treat ailments
- therapeutic ratio
- the ratio of abnormal cells killed to normal cells killed
32.4: Food Irradiation
Learning Objectives
By the end of this section, you will be able to:
- Define food irradiation, low dose, and free radicals.
Ionizing radiation is widely used to sterilize medical supplies, such as bandages, and consumer products, such as tampons. Worldwide, it is also used to irradiate food, an application that promises to grow in the future. Food irradiation is the treatment of food with ionizing radiation. It is used to reduce pest infestation and to delay spoilage and prevent illness caused by microorganisms. Food irradiation is controversial. Proponents see it as superior to pasteurization, preservatives, and insecticides, supplanting dangerous chemicals with a more effective process. Opponents see its safety as unproven, perhaps leaving worse toxic residues as well as presenting an environmental hazard at treatment sites. In developing countries, food irradiation might increase crop production by 25.0% or more, and reduce food spoilage by a similar amount. It is used chiefly to treat spices and some fruits, and in some countries, red meat, poultry, and vegetables. Over 40 countries have approved food irradiation at some level.
Food irradiation exposes food to large doses of \(\gamma\) rays, x-rays, or electrons. These photons and electrons induce no nuclear reactions and thus create no residual radioactivity . (Some forms of ionizing radiation, such as neutron irradiation, cause residual radioactivity. These are not used for food irradiation.) The \(\gamma\) source is usually \(^{60}Co\) or \(^{137}Cs\), the latter isotope being a major by-product of nuclear power. Cobalt-60 \(\gamma\) rays average 1.25 MeV, while those of \(^{137}Cs\) are 0.67 MeV and are less penetrating. X-rays used for food irradiation are created with voltages of up to 5 million volts and, thus, have photon energies up to 5 MeV. Electrons used for food irradiation are accelerated to energies up to 10 MeV. The higher the energy per particle, the more penetrating the radiation is and the more ionization it can create. Figure shows a typical \(\gamma\) irradiation plant.
Because food irradiation seeks to destroy organisms such as insects and bacteria, much larger doses than those fatal to humans must be applied. Generally, the simpler the organism, the more radiation it can tolerate. (Cancer cells are a partial exception, because they are rapidly reproducing and, thus, more sensitive.) Current licensing allows up to 1000 Gy to be applied to fresh fruits and vegetables, called a low dose in food irradiation. Such a dose is enough to prevent or reduce the growth of many microorganisms, but about 10,000 Gy is needed to kill salmonella, and even more is needed to kill fungi. Doses greater than 10,000 Gy are considered to be high doses in food irradiation and product sterilization.
The effectiveness of food irradiation varies with the type of food. Spices and many fruits and vegetables have dramatically longer shelf lives. These also show no degradation in taste and no loss of food value or vitamins. If not for the mandatory labeling, such foods subjected to low-level irradiation (up to 1000 Gy) could not be distinguished from untreated foods in quality. However, some foods actually spoil faster after irradiation, particularly those with high water content like lettuce and peaches. Others, such as milk, are given a noticeably unpleasant taste. High-level irradiation produces significant and chemically measurable changes in foods. It produces about a 15% loss of nutrients and a 25% loss of vitamins, as well as some change in taste. Such losses are similar to those that occur in ordinary freezing and cooking.
How does food irradiation work? Ionization produces a random assortment of broken molecules and ions, some of which are unstable oxygen- or hydrogen-containing molecules known as free radicals. These undergo rapid chemical reactions, producing perhaps four or five thousand different compounds called radiolytic products, some of which make cell function impossible by breaking cell membranes, fracturing DNA, and so on. How safe is the food afterward? Critics argue that the radiolytic products present a lasting hazard, perhaps being carcinogenic. However, the safety of irradiated food is not known precisely. We do know that low-level food irradiation produces no compounds in amounts that can be measured chemically. This is not surprising, since trace amounts of several thousand compounds may be created. We also know that there have been no observable negative short-term effects on consumers. Long-term effects may show up if large numbers of people consume large quantities of irradiated food, but no effects have appeared due to the small amounts of irradiated food that are consumed regularly. The case for safety is supported by testing of animal diets that were irradiated; no transmitted genetic effects have been observed. Food irradiation (at least up to a million rad) has been endorsed by the World Health Organization and the UN Food and Agricultural Organization. Finally, the hazard to consumers, if it exists, must be weighed against the benefits in food production and preservation. It must also be weighed against the very real hazards of existing insecticides and food preservatives.
Glossary
- food irradiation
- treatment of food with ionizing radiation
- free radicals
- ions with unstable oxygen- or hydrogen-containing molecules
- radiolytic products
- compounds produced due to chemical reactions of free radicals
32.5: Fusion
Learning Objectives
By the end of this section, you will be able to:
- Define nuclear fusion.
- Discuss processes to achieve practical fusion energy generation.
While basking in the warmth of the summer sun, a student reads of the latest breakthrough in achieving sustained thermonuclear power and vaguely recalls hearing about the cold fusion controversy. The three are connected. The Sun’s energy is produced by nuclear fusion (see Figure ). Thermonuclear power is the name given to the use of controlled nuclear fusion as an energy source. While research in the area of thermonuclear power is progressing, high temperatures and containment difficulties remain. The cold fusion controversy centered around unsubstantiated claims of practical fusion power at room temperatures.
Nuclear fusion is a reaction in which two nuclei are combined, or fused , to form a larger nucleus. We know that all nuclei have less mass than the sum of the masses of the protons and neutrons that form them. The missing mass times \(c^2\) equals the binding energy of the nucleus—the greater the binding energy, the greater the missing mass. We also know that \(BE/A\), the binding energy per nucleon, is greater for medium-mass nuclei and has a maximum at Fe (iron). This means that if two low-mass nuclei can be fused together to form a larger nucleus, energy can be released. The larger nucleus has a greater binding energy and less mass per nucleon than the two that combined. Thus mass is destroyed in the fusion reaction, and energy is released (see Figure ). On average, fusion of low-mass nuclei releases energy, but the details depend on the actual nuclides involved.
The major obstruction to fusion is the Coulomb repulsion between nuclei. Since the attractive nuclear force that can fuse nuclei together is short ranged, the repulsion of like positive charges must be overcome to get nuclei close enough to induce fusion. Figure shows an approximate graph of the potential energy between two nuclei as a function of the distance between their centers. The graph is analogous to a hill with a well in its center. A ball rolled from the right must have enough kinetic energy to get over the hump before it falls into the deeper well with a net gain in energy. So it is with fusion. If the nuclei are given enough kinetic energy to overcome the electric potential energy due to repulsion, then they can combine, release energy, and fall into a deep well. One way to accomplish this is to heat fusion fuel to high temperatures so that the kinetic energy of thermal motion is sufficient to get the nuclei together.
You might think that, in the core of our Sun, nuclei are coming into contact and fusing. However, in fact, temperatures on the order of \(10^8 K\) are needed to actually get the nuclei in contact, exceeding the core temperature of the Sun. Quantum mechanical tunneling is what makes fusion in the Sun possible, and tunneling is an important process in most other practical applications of fusion, too. Since the probability of tunneling is extremely sensitive to barrier height and width, increasing the temperature greatly increases the rate of fusion. The closer reactants get to one another, the more likely they are to fuse (see Figure ). Thus most fusion in the Sun and other stars takes place at their centers, where temperatures are highest. Moreover, high temperature is needed for thermonuclear power to be a practical source of energy.
The Sun produces energy by fusing protons or hydrogen nuclei \(^1H\) (by far the Sun’s most abundant nuclide) into helium nuclei \(^4He\). The principal sequence of fusion reactions forms what is called the proton-proton cycle :
\[^1H + ^1H \rightarrow ^2H + e^+ + \nu_e \, \space (0.42 \, MeV)\]
\[^1H + ^2H \rightarrow ^3He + \gamma \, \space (5.49 \, MeV)\]
\[^3He + ^3He \rightarrow ^4He + ^1H + ^1H \, (12.86 \, MeV)\]
where \(e^+\) stands for a positron and \(\nu_e\) is an electron neutrino. (The energy in parentheses is released by the reaction.) Note that the first two reactions must occur twice for the third to be possible, so that the cycle consumes six protons (\(^1H\)) but gives back two. Furthermore, the two positrons produced will find two electrons and annihilate to form four more \(\gamma\) rays, for a total of six. The overall effect of the cycle is thus
\[2e^- + 4^1H \rightarrow ^4He + 2\nu_e + 6\gamma \, \space (26.7 \, MeV)\]
where the 26.7 MeV includes the annihilation energy of the positrons and electrons and is distributed among all the reaction products. The solar interior is dense, and the reactions occur deep in the Sun where temperatures are highest. It takes about 32,000 years for the energy to diffuse to the surface and radiate away. However, the neutrinos escape the Sun in less than two seconds, carrying their energy with them, because they interact so weakly that the Sun is transparent to them. Negative feedback in the Sun acts as a thermostat to regulate the overall energy output. For instance, if the interior of the Sun becomes hotter than normal, the reaction rate increases, producing energy that expands the interior. This cools it and lowers the reaction rate. Conversely, if the interior becomes too cool, it contracts, increasing the temperature and reaction rate (see Figure ). Stars like the Sun are stable for billions of years, until a significant fraction of their hydrogen has been depleted. What happens then is discussed in Introduction to Frontiers of Physics .
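The 26.7 MeV released per cycle can be checked from atomic masses; because the atomic masses of \(^1H\) and \(^4He\) include the electrons, the mass difference automatically accounts for the positron annihilation energy. A minimal sketch:

```python
# Energy released per proton-proton cycle, from atomic masses.
# Using atomic masses of 1H and 4He automatically includes the electron and
# positron-annihilation contributions, so the result is the full 26.7 MeV.
u_to_MeV = 931.494          # energy equivalent of 1 u

m_H1 = 1.007825             # atomic mass of 1H in u
m_He4 = 4.002602            # atomic mass of 4He in u

delta_m = 4 * m_H1 - m_He4  # mass destroyed per cycle, in u
print(f"Energy released: {delta_m * u_to_MeV:.1f} MeV")  # ~26.7 MeV
```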
Theories of the proton-proton cycle (and other energy-producing cycles in stars) were pioneered by the German-born, American physicist Hans Bethe (1906–2005), starting in 1938. He was awarded the 1967 Nobel Prize in physics for this work, and he has made many other contributions to physics and society. Neutrinos produced in these cycles escape so readily that they provide us an excellent means to test these theories and study stellar interiors. Detectors have been constructed and operated for more than four decades now to measure solar neutrinos (see Figure ). Although solar neutrinos are detected and neutrinos were observed from Supernova 1987A ( Figure ), too few solar neutrinos were observed to be consistent with predictions of solar energy production. After many years, this solar neutrino problem was resolved with a blend of theory and experiment that showed that the neutrino does indeed have mass. It was also found that there are three types of neutrinos, each associated with a different type of nuclear decay.
The proton-proton cycle is not a practical source of energy on Earth, in spite of the great abundance of hydrogen (\(^1H\)). The reaction \(^1H + ^1H \rightarrow ^2H + e^+ + \nu_e\) has a very low probability of occurring. (This is why our Sun will last for about ten billion years.) However, a number of other fusion reactions are easier to induce. Among them are:
\[^2H + ^2H \rightarrow ^3H + ^1H \, \space (4.03 \, MeV)\]
\[^2H + ^2H \rightarrow ^3He + n \, \space (3.27 \, MeV)\]
\[^2H + ^3H \rightarrow ^4He + n \, \space (17.59 \, MeV)\]
\[^2H + ^2H \rightarrow ^4He + \gamma \, \space (23.85 \, MeV)\]
Deuterium (\(^2H\)) is about 0.015% of natural hydrogen, so there is an immense amount of it in sea water alone. In addition to an abundance of deuterium fuel, these fusion reactions produce large energies per reaction (in parentheses), but they do not produce much radioactive waste. Tritium (\(^3H\)) is radioactive, but it is consumed as a fuel (the reaction \(^2H + ^3H \rightarrow ^4He + n\)), and the neutrons and \(\gamma\)s can be shielded. The neutrons produced can also be used to create more energy and fuel in reactions like
\[n + ^3He \rightarrow ^4He + \gamma \, \space (20.58 \, MeV)\] and
\[n + ^1H \rightarrow ^2H + \gamma \, \space (2.22 \, MeV).\]
Note that these last two reactions, and \(^2H + ^2H \rightarrow ^4He + \gamma\), put most of their energy output into the \(\gamma\) ray, and such energy is difficult to utilize.
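The energies in parentheses come from the same mass-difference bookkeeping used elsewhere in this chapter. As one check, the 17.59 MeV of the deuterium-tritium reaction can be recovered from tabulated atomic masses:

```python
# Q-value of the D-T reaction 2H + 3H -> 4He + n, from atomic masses.
u_to_MeV = 931.494
m_H2 = 2.014102   # deuterium
m_H3 = 3.016049   # tritium
m_He4 = 4.002602  # helium-4
m_n = 1.008665    # neutron

delta_m = (m_H2 + m_H3) - (m_He4 + m_n)   # mass destroyed per reaction, in u
print(f"Q = {delta_m * u_to_MeV:.2f} MeV")  # ~17.59 MeV
```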
The three keys to practical fusion energy generation are to achieve the temperatures necessary to make the reactions likely, to raise the density of the fuel, and to confine it long enough to produce large amounts of energy. These three factors—temperature, density, and time—complement one another, and so a deficiency in one can be compensated for by the others. Ignition is defined to occur when the reactions produce enough energy to be self-sustaining after external energy input is cut off. This goal, which must be reached before commercial plants can be a reality, has not been achieved. Another milestone, called break-even , occurs when the fusion power produced equals the heating power input. Break-even has nearly been reached and gives hope that ignition and commercial plants may become a reality in a few decades.
Two techniques have shown considerable promise. The first of these is called magnetic confinement and uses the property that charged particles have difficulty crossing magnetic field lines. The tokamak, shown in Figure , has shown particular promise. The tokamak’s toroidal coil confines charged particles into a circular path with a helical twist due to the circulating ions themselves. In 1995, the Tokamak Fusion Test Reactor at Princeton in the US achieved world-record plasma temperatures as high as 500 million degrees Celsius. This facility operated between 1982 and 1997. A joint international effort is underway in France to build a tokamak-type reactor that will be the stepping stone to commercial power. ITER, as it is called, will be a full-scale device that aims to demonstrate the feasibility of fusion energy. It will generate 500 MW of power for extended periods of time and will achieve break-even conditions. It will study plasmas in conditions similar to those expected in a fusion power plant. Completion is scheduled for 2018.
The second promising technique aims multiple lasers at tiny fuel pellets filled with a mixture of deuterium and tritium. Huge power input heats the fuel, evaporating the confining pellet and crushing the fuel to high density with the expanding hot plasma produced. This technique is called inertial confinement , because the fuel’s inertia prevents it from escaping before significant fusion can take place. Higher densities have been reached than with tokamaks, but with smaller confinement times. In 2009, the Lawrence Livermore Laboratory (CA) completed a laser fusion device with 192 ultraviolet laser beams that are focused upon a D-T pellet (see Figure ).
Example \(\PageIndex{1}\): Calculating Energy and Power from Fusion
(a) Calculate the energy released by the fusion of a 1.00-kg mixture of deuterium and tritium, which produces helium. There are equal numbers of deuterium and tritium nuclei in the mixture.
(b) If this takes place continuously over a period of a year, what is the average power output?
Strategy
According to \(^2H + ^3H \rightarrow ^4He + n\), the energy per reaction is 17.59 MeV. To find the total energy released, we must find the number of deuterium and tritium atoms in a kilogram. Deuterium has an atomic mass of about 2 and tritium has an atomic mass of about 3, for a total of about 5 g per mole of reactants or about 200 mol in 1.00 kg. To get a more precise figure, we will use the atomic masses from Appendix A. The power output is best expressed in watts, and so the energy output needs to be calculated in joules and then divided by the number of seconds in a year.
Solution for (a)
The atomic mass of deuterium (\(^2H\)) is 2.014102 u, while that of tritium (\(^3H\)) is 3.016049 u, for a total of 5.032151 u per reaction. So a mole of reactants has a mass of 5.03 g, and in 1.00 kg there are \((1000 \, g)/(5.03 \, g/mol) = 198.8\) mol of reactants. The number of reactions that take place is therefore
\[(198.8 \, mol)(6.02 \times 10^{23} mol^{-1}) = 1.20 \times 10^{26} \, reactions.\]
The total energy output is the number of reactions times the energy per reaction:
\[E = (1.20 \times 10^{26} \, reactions)(17.59 \, MeV/reaction)(1.602 \times 10^{-13} \, J/MeV)\]
\[= 3.37 \times 10^{14} \, J.\]
Solution for (b)
Power is energy per unit time. One year has \(3.16 \times 10^7 \, s\), so
\[P = \dfrac{E}{t} = \dfrac{3.37 \times 10^{14} \, J}{3.16 \times 10^7 \, s}\]
\[= 1.07 \times 10^7 \, W = 10.7 \, MW.\]
Discussion
By now we expect nuclear processes to yield large amounts of energy, and we are not disappointed here. The energy output of \(3.37 \times 10^{14} \, J\) from fusing 1.00 kg of deuterium and tritium is equivalent to 2.6 million gallons of gasoline and about eight times the energy output of the bomb that destroyed Hiroshima. Yet the average backyard swimming pool has about 6 kg of deuterium in it, so that fuel is plentiful if it can be utilized in a controlled manner. The average power output over a year is more than 10 MW, impressive but a bit small for a commercial power plant. About 32 times this power output would allow generation of 100 MW of electricity, assuming an efficiency of one-third in converting the fusion energy to electrical energy.
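The steps of this example translate directly into a short calculation; a minimal sketch reproducing parts (a) and (b):

```python
# Energy and average power from fusing 1.00 kg of an equal D-T mixture.
N_A = 6.02e23                  # Avogadro's number
molar_mass_g = 5.032151        # g per mole of (2H + 3H) reactant pairs
energy_per_reaction_MeV = 17.59
MeV_to_J = 1.602e-13
seconds_per_year = 3.16e7

moles = 1000.0 / molar_mass_g              # moles of reactant pairs in 1.00 kg
reactions = moles * N_A
energy_J = reactions * energy_per_reaction_MeV * MeV_to_J
power_W = energy_J / seconds_per_year
print(f"E = {energy_J:.2e} J, average power = {power_W/1e6:.1f} MW")
# ~3.37e14 J and ~10.7 MW, as found above.
```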
Summary
- Nuclear fusion is a reaction in which two nuclei are combined to form a larger nucleus. It releases energy when light nuclei are fused to form medium-mass nuclei.
- Fusion is the source of energy in stars, with the proton-proton cycle, \[^1H + ^1H \rightarrow ^2H + e^+ + \nu_e \, \space (0.42 \, MeV)\] \[^1H + ^2H \rightarrow ^3He + \gamma \, \space (5.49 \, MeV)\] \[^3He + ^3He \rightarrow ^4He + ^1H + ^1H \, \space (12.86 \, MeV)\] being the principal sequence of energy-producing reactions in our Sun.
- The overall effect of the proton-proton cycle is \[2e^- + 4^1H \rightarrow ^4He + 2\nu_e + 6 \gamma \, \space (26.7 \, MeV),\] where the 26.7 MeV includes the energy of the positrons emitted and annihilated.
- Attempts to utilize controlled fusion as an energy source on Earth are based on deuterium and tritium, with reactions such as \(^2H + ^3H \rightarrow ^4He + n\) playing important roles.
- Ignition is the condition under which controlled fusion is self-sustaining; it has not yet been achieved. Break-even, in which the fusion energy output is as great as the external energy input, has nearly been achieved.
- Magnetic confinement and inertial confinement are the two methods being developed for heating fuel to sufficiently high temperatures, at sufficient density, and for sufficiently long times to achieve ignition. The first method uses magnetic fields and the second method uses the momentum of impinging laser beams for confinement.
Glossary
- break-even
- when fusion power produced equals the heating power input
- ignition
- when a fusion reaction produces enough energy to be self-sustaining after external energy input is cut off
- inertial confinement
- a technique that aims multiple lasers at tiny fuel pellets evaporating and crushing them to high density
- magnetic confinement
- a technique in which charged particles are trapped in a small region because of difficulty in crossing magnetic field lines
- nuclear fusion
- a reaction in which two nuclei are combined, or fused, to form a larger nucleus
- proton-proton cycle
- the combined reactions \(^1H + ^1H \rightarrow ^2H + e^+ + \nu_e\), \(^1H + ^2H \rightarrow ^3He + \gamma\), and \(^3He + ^3He \rightarrow ^4He + ^1H + ^1H\)
32.6: Fission
Learning Objectives
By the end of this section, you will be able to:
- Define nuclear fission.
- Discuss how fission fuel reacts and describe what it produces.
- Describe controlled and uncontrolled chain reactions.
Nuclear fission is a reaction in which a nucleus is split (or fissured). Controlled fission is a reality, whereas controlled fusion is a hope for the future. Hundreds of nuclear fission power plants around the world attest to the fact that controlled fission is practical and, at least in the short term, economical, as seen in Figure \(\PageIndex{1}\). Whereas nuclear power was of little interest for decades following TMI and Chernobyl (and now Fukushima Daiichi), growing concerns over global warming have brought nuclear power back on the table as a viable energy alternative. By the end of 2009, there were 442 reactors operating in 30 countries, providing 15% of the world’s electricity. France provides over 75% of its electricity with nuclear power, while the US has 104 operating reactors providing 20% of its electricity. Australia and New Zealand have none. China is building nuclear power plants at the rate of one start every month.
Fission is the opposite of fusion and releases energy only when heavy nuclei are split. As noted previously, energy is released if the products of a nuclear reaction have a greater binding energy per nucleon (\(BE/A\)) than the parent nuclei. Figure \(\PageIndex{2}\) shows that \(BE/A\) is greater for medium-mass nuclei than heavy nuclei, implying that when a heavy nucleus is split, the products have less mass per nucleon, so that mass is destroyed and energy is released in the reaction. The amount of energy per fission reaction can be large, even by nuclear standards. The graph in Figure \(\PageIndex{2}\) shows \(BE/A\) to be about 7.6 MeV/nucleon for the heaviest nuclei (\(A\) about 240), while \(BE/A\) is about 8.6 MeV/nucleon for nuclei having \(A\) about 120. Thus, if a heavy nucleus splits in half, then about 1 MeV per nucleon, or approximately 240 MeV per fission, is released. This is about 10 times the energy per fusion reaction, and about 100 times the energy of the average \(\alpha\), \(\beta\), or \(\gamma\) decay.
Example \(\PageIndex{1}\): Calculating Energy Released by Fission
Calculate the energy released in the following spontaneous fission reaction:
\[\ce{^{238}U} \rightarrow \ce{^{95}Sr} + \ce{^{140}Xe} + 3n \nonumber\]
given the atomic masses to be \(m(^{238}U) = 238.050784 \, u, \, m(^{95}Sr) = 94.919388 \, u\), \(m(^{140}Xe) = 139.921610 \, u\), and \(m(n) = 1.008665 \, u\).
Strategy
As always, the energy released is equal to the mass destroyed times \(c^2\), so we must find the difference in mass between the parent \(^{238}U\) and the fission products.
Solution
The products have a total mass of
\[\begin{align} m_{products} &= 94.919388 \, u + 139.921610 \, u + 3(1.008665 \, u) \nonumber\\[5pt] &= 237.866993 \, u. \nonumber\end{align} \nonumber\]
The mass lost is the mass of \(^{238}U\) minus \(m_{products}\), or
\[\begin{align*} \Delta m &= 238.050784 \, u - 237.866993 \, u \nonumber\\[5pt] &= 0.183791 \, u\nonumber\end{align*}\]
so the energy released is
\[\begin{align*} E &= (\Delta m)c^2 \nonumber \\[5pt] &= (0.183791 \, u)\dfrac{931.5 \, MeV/c^2}{u}c^2 \nonumber \\[5pt] &= 171.2 \, MeV. \nonumber\end{align*}\]
Discussion
A number of important things arise in this example. The 171-MeV energy released is large, but a little less than the earlier estimated 240 MeV. This is because this fission reaction produces neutrons and does not split the nucleus into two equal parts. Fission of a given nuclide, such as \(^{238}U\), does not always produce the same products. Fission is a statistical process in which an entire range of products are produced with various probabilities. Most fission produces neutrons, although the number varies with each fission. This is an extremely important aspect of fission, because neutrons can induce more fission , enabling self-sustaining chain reactions.
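The mass bookkeeping in this example is easy to reproduce; a minimal sketch:

```python
# Energy released in the spontaneous fission 238U -> 95Sr + 140Xe + 3n.
u_to_MeV = 931.5
m_U238 = 238.050784
m_Sr95 = 94.919388
m_Xe140 = 139.921610
m_n = 1.008665

m_products = m_Sr95 + m_Xe140 + 3 * m_n
delta_m = m_U238 - m_products
print(f"mass destroyed = {delta_m:.6f} u, E = {delta_m * u_to_MeV:.1f} MeV")
# ~0.183791 u and ~171.2 MeV, matching the worked example.
```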
Spontaneous fission can occur, but this is usually not the most common decay mode for a given nuclide. For example, \(^{238}U\) can spontaneously fission, but it decays mostly by \(\alpha\) emission. Neutron-induced fission is crucial as seen in Figure \(\PageIndex{2}\). Being chargeless, even low-energy neutrons can strike a nucleus and be absorbed once they feel the attractive nuclear force. Large nuclei are described by a liquid drop model with surface tension and oscillation modes, because the large number of nucleons act like atoms in a drop. The neutron is attracted and thus deposits energy, causing the nucleus to deform as a liquid drop. If stretched enough, the nucleus narrows in the middle. The number of nucleons in contact and the strength of the nuclear force binding the nucleus together are reduced. Coulomb repulsion between the two ends then succeeds in fissioning the nucleus, which pops like a water drop into two large pieces and a few neutrons. Neutron-induced fission can be written as
\[n + ^AX \rightarrow FF_1 + FF_2 + xn,\]
where \(FF_1\) and \(FF_2\) are the two daughter nuclei, called fission fragments , and \(x\) is the number of neutrons produced. Most often, the masses of the fission fragments are not the same. Most of the released energy goes into the kinetic energy of the fission fragments, with the remainder going into the neutrons and excited states of the fragments. Since neutrons can induce fission, a self-sustaining chain reaction is possible, provided more than one neutron is produced on average — that is, if \(x > 1\) in \(n + ^AX \rightarrow FF_1 + FF_2 + xn\). This can also be seen in Figure \(\PageIndex{3}\).
An example of a typical neutron-induced fission reaction is
\[\ce{n} + \ce{_{92}^{235}U} \rightarrow \ce{_{56}^{142}Ba} + \ce{_{36}^{91}Kr} + \ce{3n}.\]
Note that in this equation, the total charge remains the same (is conserved): \(92 + 0 = 56 + 36\). Also, as far as whole numbers are concerned, the mass is constant: \(1 + 235 = 142 + 91 + 3\). This is not true when we consider the masses out to 6 or 7 significant places, as in the previous example.
Not every neutron produced by fission induces fission. Some neutrons escape the fissionable material, while others interact with a nucleus without making it fission. We can enhance the number of fissions produced by neutrons by having a large amount of fissionable material. The minimum amount necessary for self-sustained fission of a given nuclide is called its critical mass . Some nuclides, such as \(^{239}Pu\), produce more neutrons per fission than others, such as \(^{235}U\). Additionally, some nuclides are easier to make fission than others. In particular, \(^{235}U\) and \(^{239}Pu\) are easier to fission than the much more abundant \(^{238}U\). Both factors affect critical mass, which is smallest for \(^{239}Pu\).
The reason \(^{235}U\) and \(^{239}Pu\) are easier to fission than \(^{238}U\) is that the nuclear force is more attractive for an even number of neutrons in a nucleus than for an odd number. Consider that \(_{92}^{235}U_{143}\) has 143 neutrons, and \(_{94}^{239}Pu_{145}\) has 145 neutrons, whereas \(_{92}^{238}U_{146}\) has 146. When a neutron encounters a nucleus with an odd number of neutrons, the nuclear force is more attractive, because the additional neutron will make the number even. About 2 MeV more energy is deposited in the resulting nucleus than would be the case if the number of neutrons was already even. This extra energy produces greater deformation, making fission more likely. Thus, \(^{235}U\) and \(^{239}Pu\) are superior fission fuels. The isotope \(^{235}U\) is only 0.72% of natural uranium, while \(^{238}U\) is 99.27%, and \(^{239}Pu\) does not exist in nature. Australia has the largest deposits of uranium in the world, standing at 28% of the total. This is followed by Kazakhstan and Canada. The US has only 3% of global reserves.
Most fission reactors utilize \(^{235}U\), which is separated from \(^{238}U\) at some expense. This is called enrichment. The most common separation method is gaseous diffusion of uranium hexafluoride \((UF_6)\) through membranes. Since \(^{235}U\) has less mass than \(^{238}U\), its \(UF_6\) molecules have higher average velocity at the same temperature and diffuse faster. Another interesting characteristic of \(^{235}U\) is that it preferentially absorbs very slow-moving neutrons (with energies a fraction of an eV), whereas fission reactions produce fast neutrons with energies on the order of an MeV. To make a self-sustained fission reactor with \(^{235}U\), it is thus necessary to slow down (“thermalize”) the neutrons. Water is very effective, since neutrons collide with protons in water molecules and lose energy. Figure \(\PageIndex{4}\) shows a schematic of a reactor design, called the pressurized water reactor.
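The diffusion-rate advantage of the lighter molecules is tiny. A rough sketch of the ideal single-stage separation factor, assuming effusion-like behavior so that the mean molecular speed scales as \(1/\sqrt{m}\) (approximate molar masses used):

```python
# Ideal single-stage separation factor for gaseous diffusion of UF6.
# At the same temperature, lighter molecules have higher mean speeds,
# scaling as 1/sqrt(molar mass) for ideal effusion.
from math import sqrt

m_F = 18.998                   # fluorine molar mass, g/mol (approximate)
m_U235F6 = 235.04 + 6 * m_F    # ~349 g/mol
m_U238F6 = 238.05 + 6 * m_F    # ~352 g/mol

alpha = sqrt(m_U238F6 / m_U235F6)   # speed ratio, light molecule over heavy
print(f"single-stage separation factor ~ {alpha:.4f}")
# ~1.004: each stage enriches only slightly, so many stages must be cascaded.
```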
Control rods containing nuclides that very strongly absorb neutrons are used to adjust neutron flux. To produce large power, reactors contain hundreds to thousands of critical masses, and the chain reaction easily becomes self-sustaining, a condition called criticality . Neutron flux should be carefully regulated to avoid an exponential increase in fissions, a condition called supercriticality . Control rods help prevent overheating, perhaps even a meltdown or explosive disassembly. The water that is used to thermalize neutrons, necessary to get them to induce fission in \(^{235}U\), and achieve criticality, provides a negative feedback for temperature increases. In case the reactor overheats and boils the water to steam or is breached, the absence of water kills the chain reaction. Considerable heat, however, can still be generated by the reactor’s radioactive fission products. Other safety features, thus, need to be incorporated in the event of a loss of coolant accident, including auxiliary cooling water and pumps.
Example \(\PageIndex{3}\): Calculating Energy from a Kilogram of Fissionable Fuel
Calculate the amount of energy produced by the fission of 1.00 kg of \(\ce{^{235}U}\), given that the average fission of \(\ce{^{235}U}\) produces 200 MeV.
Strategy
The total energy produced is the number of \(^{235}U\) atoms times the given energy per \(^{235}U\) fission. We should therefore find the number of \(^{235}U\) atoms in 1.00 kg.
Solution
The number of \(^{235}U\) atoms in 1.00 kg is Avogadro’s number times the number of moles. One mole of \(^{235}U\) has a mass of 235.04 g; thus, there are \((1000 \, g)/(235.04 \, g/mol) = 4.25 \, mol.\) The number of \(^{235}U\) atoms is therefore,
\[(4.25 \, mol)(6.02 \times 10^{23} \, ^{235}U/mol) = 2.56 \times 10^{24} \, ^{235}U. \nonumber\]
So the total energy released is
\[\begin{align} E &= (2.56 \times 10^{24} \, ^{235}U) \left(\dfrac{200 \, MeV}{^{235}U} \right) \left(\dfrac{1.60 \times 10^{-13} \, J}{MeV}\right) \nonumber \\[5pt] &= 8.21 \times 10^{13} \, J. \nonumber \end{align}\nonumber\]
Discussion
This is another impressively large amount of energy, equivalent to about 14,000 barrels of crude oil or 600,000 gallons of gasoline. But it is only one-fourth the energy produced by the fusion of a kilogram mixture of deuterium and tritium. Even though each fission reaction yields about ten times the energy of a fusion reaction, the energy per kilogram of fission fuel is less, because there are far fewer moles per kilogram of the heavy nuclides. Fission fuel is also much more scarce than fusion fuel, and less than 1% of uranium (the \(^{235}U\)) is readily usable.
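A minimal sketch reproducing this estimate:

```python
# Energy from fissioning 1.00 kg of 235U at ~200 MeV per fission.
N_A = 6.02e23
molar_mass_g = 235.04
energy_per_fission_MeV = 200.0
MeV_to_J = 1.60e-13

atoms = (1000.0 / molar_mass_g) * N_A          # 235U atoms in 1.00 kg
energy_J = atoms * energy_per_fission_MeV * MeV_to_J
print(f"E = {energy_J:.2e} J")   # ~8.2e13 J, about one-fourth of the D-T fusion figure
```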
One nuclide already mentioned is \(^{239}Pu\) which has a 24,120-y half-life and does not exist in nature. Plutonium-239 is manufactured from \(^{238}U\) in reactors, and it provides an opportunity to utilize the other 99% of natural uranium as an energy source. The following reaction sequence, called breeding , produces \(^{239}Pu\). Breeding begins with neutron capture by \(^{238}U\).
\[\ce{^{238}U + n \rightarrow ^{239}U + \gamma}.\]
Uranium-239 then \(\beta^-\) decays:
\[\ce{^{239}U \rightarrow ^{239}Np + \beta^- + \bar{\nu}_e } \,(t_{1/2} = 23 \, min).\]
Neptunium-239 also \(\beta^-\) decays:
\[\ce{^{239}Np \rightarrow ^{239}Pu + \beta^- + \bar{\nu}_e } \,(t_{1/2} = 2.4 \, d).\]
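The radioactive decay law \(N = N_0 (1/2)^{t/t_{1/2}}\), applied with the half-lives quoted above, shows that both beta decays are fast on reactor time scales; a minimal sketch with illustrative times:

```python
# Fraction of freshly produced 239U and 239Np remaining after a given time,
# using N = N0 * (1/2)**(t / t_half) with the half-lives quoted above.
def fraction_remaining(t, t_half):
    return 0.5 ** (t / t_half)

t_half_U239_min = 23.0   # minutes
t_half_Np239_d = 2.4     # days

print(f"239U left after 4 hours:  {fraction_remaining(4 * 60, t_half_U239_min):.4f}")
print(f"239Np left after 30 days: {fraction_remaining(30, t_half_Np239_d):.6f}")
# Essentially all the captured neutrons end up as 239Pu within weeks.
```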
Plutonium-239 builds up in reactor fuel at a rate that depends on the probability of neutron capture by \(^{238}U\) (all reactor fuel contains more \(^{238}U\) than \(^{235}U\)). Reactors designed specifically to make plutonium are called breeder reactors . They seem to be inherently more hazardous than conventional reactors, but it remains unknown whether their hazards can be made economically acceptable. The four reactors at Chernobyl, including the one that was destroyed, were built to breed plutonium and produce electricity. These reactors had a design that was significantly different from the pressurized water reactor illustrated above.
Plutonium-239 has advantages over \(^{235}U\) as a reactor fuel: it produces more neutrons per fission on average, and it is easier for a thermal neutron to cause it to fission. Because it is chemically different from uranium, it can be separated from spent reactor fuel chemically rather than by isotope enrichment. Its higher neutron yield and easier fission also give \(^{239}Pu\) a particularly small critical mass, an advantage for nuclear weapons.
PHET EXPLORATIONS: NUCLEAR FISSION
Start a chain reaction , or introduce non-radioactive isotopes to prevent one. Control energy production in a nuclear reactor!
Summary
- Nuclear fission is a reaction in which a nucleus is split.
- Fission releases energy when heavy nuclei are split into medium-mass nuclei.
- Self-sustained fission is possible, because neutron-induced fission also produces neutrons that can induce other fissions, \(n + ^AX \rightarrow FF_1 + FF_2 + xn\), where \(FF_1\) and \(FF_2\) are the two daughter nuclei, or fission fragments, and x is the number of neutrons produced.
- A minimum mass, called the critical mass, should be present to achieve criticality.
- More than a critical mass can produce supercriticality.
- The production of new or different isotopes (especially \(^{239}Pu\)) by nuclear transformation is called breeding, and reactors designed for this purpose are called breeder reactors.
Glossary
- breeder reactors
- reactors that are designed specifically to make plutonium
- breeding
- reaction process that produces \(^{239}Pu\)
- criticality
- condition in which a chain reaction easily becomes self-sustaining
- critical mass
- minimum amount necessary for self-sustained fission of a given nuclide
- fission fragments
- the daughter nuclei produced when a nucleus fissions
- liquid drop model
- a model of the nucleus (used only to understand some of its features) in which the nucleons act like atoms in a drop
- nuclear fission
- reaction in which a nucleus splits
- neutron-induced fission
- fission that is initiated after the absorption of a neutron
- supercriticality
- an exponential increase in fissions
32.7: Nuclear Weapons
Learning Objectives
By the end of this section, you will be able to:
- Discuss different types of fission and thermonuclear bombs.
- Explain the ill effects of nuclear explosion.
The world was in turmoil when fission was discovered in 1938. The discovery of fission, made by two German physicists, Otto Hahn and Fritz Strassmann, was quickly verified by two Jewish refugees from Nazi Germany, Lise Meitner and her nephew Otto Frisch. Fermi, among others, soon found that not only did neutrons induce fission, but more neutrons were produced during fission. The possibility of a self-sustained chain reaction was immediately recognized by leading scientists the world over. The enormous energy known to be in nuclei, but considered inaccessible, now seemed to be available on a large scale.
Within months after the announcement of the discovery of fission, Adolf Hitler banned the export of uranium from newly occupied Czechoslovakia. It seemed that the military value of uranium had been recognized in Nazi Germany, and that a serious effort to build a nuclear bomb had begun.
Alarmed scientists, many of whom had fled Nazi Germany, decided to take action. None was more famous or revered than Einstein. It was felt that his help was needed to get the American government to make a serious effort at nuclear weapons as a matter of survival. Leo Szilard, a Hungarian physicist who had escaped to the United States, took a draft of a letter to Einstein, who, although a pacifist, signed the final version. The letter was addressed to President Franklin Roosevelt and warned of the German potential to build extremely powerful bombs of a new type. It was sent in August of 1939, just before the German invasion of Poland that marked the start of World War II.
It was not until December 6, 1941, the day before the Japanese attack on Pearl Harbor, that the United States made a massive commitment to building a nuclear bomb. The top secret Manhattan Project was a crash program aimed at beating the Germans. It was carried out in remote locations, such as Los Alamos, New Mexico, whenever possible, and eventually came to cost billions of dollars and employ the efforts of more than 100,000 people. J. Robert Oppenheimer (1904–1967), whose talent and ambitions made him ideal, was chosen to head the project. The first major step was made by Enrico Fermi and his group in December 1942, when they achieved the first self-sustained nuclear chain reaction. This first “atomic pile”, built in a squash court at the University of Chicago, used carbon blocks to thermalize neutrons. It not only proved that the chain reaction was possible, it began the era of nuclear reactors. Glenn Seaborg, an American chemist and physicist, received the Nobel Prize in chemistry in 1951 for the discovery of several transuranic elements, including plutonium. Carbon-moderated reactors are relatively inexpensive and simple in design and have been used for breeding plutonium, such as at Chernobyl, where the last such reactor operated until 2000.
Plutonium was recognized as easier to fission with neutrons and, hence, a superior fission material very early in the Manhattan Project. Plutonium availability was uncertain, and so a uranium bomb was developed simultaneously. Figure shows a gun-type bomb, which takes two subcritical uranium masses and blows them together. To get an appreciable yield, the critical mass must be held together by the explosive charges inside the cannon barrel for a few microseconds. Since the buildup of the uranium chain reaction is relatively slow, the device that holds the critical mass together can be relatively simple. Because the rate of spontaneous fission is low, a neutron source is triggered at the same time the critical mass is assembled.
Plutonium’s special properties necessitated a more sophisticated critical mass assembly, shown schematically in Figure. A spherical mass of plutonium is surrounded by shape charges (high explosives that release most of their blast in one direction) that implode the plutonium, crushing it into a smaller volume to form a critical mass. The implosion technique is faster and more effective, because it compresses three-dimensionally rather than one-dimensionally as in the gun-type bomb. Again, a neutron source must be triggered at just the correct time to initiate the chain reaction.
Owing to its complexity, the plutonium bomb needed to be tested before there could be any attempt to use it. On July 16, 1945, the test named Trinity was conducted in the isolated Alamogordo Desert about 200 miles south of Los Alamos (see Figure). A new age had begun. The yield of this device was about 10 kilotons (kT), the equivalent of 5000 of the largest conventional bombs.
Although Germany surrendered on May 7, 1945, Japan had been steadfastly refusing to surrender for many months, at the cost of heavy casualties. Allied invasion plans estimated a million casualties of their own and untold losses of Japanese lives. The bomb was viewed as a way to end the war. The first was a uranium bomb dropped on Hiroshima on August 6. Its yield of about 15 kT destroyed the city and killed an estimated 80,000 people, with 100,000 more being seriously injured (see Figure). The second was a plutonium bomb dropped on Nagasaki only three days later, on August 9. Its 20 kT yield killed at least 50,000 people, fewer than at Hiroshima because of the hilly terrain and because the bomb was detonated a few kilometers off target. The Japanese were told that one bomb a week would be dropped until they surrendered unconditionally, which they did on August 14. In actuality, the United States had only enough plutonium for one more, as yet unassembled, bomb.
Knowing that fusion produces several times more energy per kilogram of fuel than fission, some scientists pushed the idea of a fusion bomb from very early on. Calling this bomb the Super, they realized that it could have another advantage over fission: high-energy neutrons aid fusion, whereas they are relatively ineffective at inducing \(^{239}Pu\) fission. Thus the fusion bomb could be virtually unlimited in energy release. The first such bomb was detonated by the United States on October 31, 1952, at Eniwetok Atoll with a yield of 10 megatons (MT), about 670 times that of the fission bomb that destroyed Hiroshima. The Soviets followed with a fusion device of their own in August 1953, and a weapons race, beyond the aim of this text to discuss, continued until the end of the Cold War.
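The factor of about 670 quoted here follows directly from the two yields given in this section; a minimal check, using only numbers from the text:

```python
# Ratio of the first fusion device's yield to that of the Hiroshima bomb,
# using the yields quoted in this section.
hiroshima_kT = 15.0       # kT (quoted above)
fusion_MT = 10.0          # MT (quoted above)

print(f"ratio ≈ {fusion_MT * 1000 / hiroshima_kT:.0f}")   # ≈ 670
```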
Figure shows a simple diagram of how a thermonuclear bomb is constructed. A fission bomb is exploded next to fusion fuel in the solid form of lithium deuteride. Before the shock wave blows it apart, \(γ\) rays heat and compress the fuel, and neutrons create tritium through the reaction \(n+^6Li→^3H+^4He\). Additional fusion and fission fuels are enclosed in a dense shell of \(^{238}U\). The shell reflects some of the neutrons back into the fuel to enhance its fusion, but at high internal temperatures fast neutrons are created that also cause the plentiful and inexpensive \(^{238}U\) to fission, part of what allows thermonuclear bombs to be so large.
The energy yield and the types of energy produced by nuclear bombs can be varied. Energy yields in current arsenals range from about 0.1 kT to 20 MT, although the Soviets once detonated a device of roughly 50 MT, the largest ever tested. Nuclear bombs differ from conventional explosives in more than size. Figure shows the approximate fraction of energy output in various forms for conventional explosives and for two types of nuclear bombs. Nuclear bombs put a much larger fraction of their output into thermal energy than do conventional bombs, which tend to concentrate the energy in blast. Another difference is the immediate and residual radiation energy from nuclear weapons. This can be adjusted to put more energy into radiation (the so-called neutron bomb) so that the bomb can be used to irradiate advancing troops without killing friendly troops with blast and heat.
At its peak in 1986, the combined arsenals of the United States and the Soviet Union totaled about 60,000 nuclear warheads. In addition, the British, French, and Chinese each have several hundred bombs of various sizes, and a few other countries have a small number. Nuclear weapons are generally divided into two categories. Strategic nuclear weapons are those intended for military targets, such as bases and missile complexes, and for moderate to large cities. There were about 20,000 strategic weapons in 1988. Tactical weapons are intended for use in smaller battles. Since the end of the Cold War and the collapse of the Soviet Union in 1991, most of the 32,000 tactical weapons (including cruise missiles, artillery shells, land mines, torpedoes, depth charges, and backpacks) have been demobilized, and parts of the strategic weapon systems are being dismantled, with warheads and missiles disassembled. Under the Treaty of Moscow of 2002, Russia and the United States are required to reduce their strategic nuclear arsenals to about 2000 warheads each.
A few small countries have built or are capable of building nuclear bombs, as are some terrorist groups. Two things are needed—a minimum level of technical expertise and sufficient fissionable material. The first is easy. Fissionable material is controlled but is also available. There are international agreements and organizations that attempt to control nuclear proliferation, but it is increasingly difficult given the availability of fissionable material and the small amount needed for a crude bomb. The production of fissionable fuel itself is technologically difficult. However, the presence of large amounts of such material worldwide, though in the hands of a few, makes control and accountability crucial.
Summary
- There are two types of nuclear weapons—fission bombs use fission alone, whereas thermonuclear bombs use fission to ignite fusion.
- Both types of weapons produce huge numbers of nuclear reactions in a very short time.
- Energy yields are measured in kilotons or megatons of equivalent conventional explosives and range from 0.1 kT to more than 20 MT.
- Nuclear bombs are characterized by far more thermal output and nuclear radiation output than conventional explosives.
32.E: Medical Applications of Nuclear Physics (Exercises)
Conceptual Questions
32.1: Medical Imaging and Diagnostics
1. In terms of radiation dose, what is the major difference between medical diagnostic uses of radiation and medical therapeutic uses?
2. One of the methods used to limit radiation dose to the patient in medical imaging is to employ isotopes with short half-lives. How would this limit the dose?
32.2: Biological Effects of Ionizing Radiation
3. Isotopes that emit \(\displaystyle α\) radiation are relatively safe outside the body and exceptionally hazardous inside. Yet those that emit \(\displaystyle γ\) radiation are hazardous outside and inside. Explain why.
4. Why is radon more closely associated with inducing lung cancer than other types of cancer?
5. The RBE for low-energy \(\displaystyle βs\) is 1.7, whereas that for higher-energy \(\displaystyle βs\) is only 1. Explain why, considering how the range of radiation depends on its energy.
6. Which methods of radiation protection were used in the device shown in the first photo in Figure? Which were used in the situation shown in the second photo?
(a) This x-ray fluorescence machine is one of the thousands used in shoe stores to produce images of feet as a check on the fit of shoes. They are unshielded and remain on as long as the feet are in them, producing doses much greater than medical images. Children were fascinated with them. These machines were used in shoe stores until laws preventing such unwarranted radiation exposure were enacted in the 1950s. (credit: Andrew Kuchling ) (b) Now that we know the effects of exposure to radioactive material, safety is a priority. (credit: U.S. Navy)
7. What radioisotope could be a problem in homes built of cinder blocks made from uranium mine tailings? (This is true of homes and schools in certain regions near uranium mines.)
8. Are some types of cancer more sensitive to radiation than others? If so, what makes them more sensitive?
9. Suppose a person swallows some radioactive material by accident. What information is needed to be able to assess possible damage?
32.3: Therapeutic Uses of Ionizing Radiation
10. Radiotherapy is more likely to be used to treat cancer in elderly patients than in young ones. Explain why. Why is radiotherapy used to treat young people at all?
32.4: Food Irradiation
11. Does food irradiation leave the food radioactive? To what extent is the food altered chemically for low and high doses in food irradiation?
12. Compare a low dose of radiation to a human with a low dose of radiation used in food treatment.
13. Suppose one food irradiation plant uses a \(\displaystyle ^{137}Cs\) source while another uses an equal activity of \(\displaystyle ^{60}Co\). Assuming equal fractions of the \(\displaystyle γ\) rays from the sources are absorbed, why is more time needed to get the same dose using the \(\displaystyle ^{137}Cs\) source?
32.5: Fusion
14. Why does the fusion of light nuclei into heavier nuclei release energy?
15. Energy input is required to fuse medium-mass nuclei, such as iron or cobalt, into more massive nuclei. Explain why.
16. In considering potential fusion reactions, what is the advantage of the reaction \(\displaystyle ^2H+^3H→^4He+n\) over the reaction \(\displaystyle ^2H+^2H→^3He+n\)?
17. Give reasons justifying the contention made in the text that energy from the fusion reaction \(\displaystyle ^2H+^2H→^4He+γ\) is relatively difficult to capture and utilize.
32.6: Fission
19. Explain why the fission of heavy nuclei releases energy. Similarly, why is it that energy input is required to fission light nuclei?
20. Explain, in terms of conservation of momentum and energy, why collisions of neutrons with protons will thermalize neutrons better than collisions with oxygen.
21. The ruins of the Chernobyl reactor are enclosed in a huge concrete structure built around it after the accident. Some rain penetrates the building in winter, and radioactivity from the building increases. What does this imply is happening inside?
22. Since the uranium or plutonium nucleus fissions into several fission fragments whose mass distribution covers a wide range of pieces, would you expect more residual radioactivity from fission than fusion? Explain.
23. The core of a nuclear reactor generates a large amount of thermal energy from the decay of fission products, even when the power-producing fission chain reaction is turned off. Would this residual heat be greatest after the reactor has run for a long time or short time? What if the reactor has been shut down for months?
24. How can a nuclear reactor contain many critical masses and not go supercritical? What methods are used to control the fission in the reactor?
25. Why can heavy nuclei with odd numbers of neutrons be induced to fission with thermal neutrons, whereas those with even numbers of neutrons require more energy input to induce fission?
26. Why is a conventional fission nuclear reactor not able to explode as a bomb?
32.7: Nuclear Weapons
27. What are some of the reasons that plutonium rather than uranium is used in all fission bombs and as the trigger in all fusion bombs?
28. Use the laws of conservation of momentum and energy to explain how a shape charge can direct most of the energy released in an explosion in a specific direction. (Note that this is similar to the situation in guns and cannons—most of the energy goes into the bullet.)
29. How does the lithium deuteride in the thermonuclear bomb shown in Figure supply tritium (\(\displaystyle ^3H\)) as well as deuterium (\(\displaystyle ^2H\))?
30. Fallout from nuclear weapons tests in the atmosphere is mainly \(\displaystyle ^{90}Sr\) and \(\displaystyle ^{137}Cs\), which have 28.6- and 32.2-y half-lives, respectively. Atmospheric tests were terminated in most countries in 1963, although China only did so in 1980. It has been found that environmental activities of these two isotopes are decreasing faster than their half-lives. Why might this be?
Problems & Exercises
32.1: Medical Imaging and Diagnostics
31. A neutron generator uses an \(\displaystyle α\) source, such as radium, to bombard beryllium, inducing the reaction \(\displaystyle ^4He+^9Be→^{12}C+n\). Such neutron sources are called RaBe sources, or PuBe sources if they use plutonium to get the \(\displaystyle α\)s. Calculate the energy output of the reaction in MeV.
Solution
5.701 MeV
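A quick numerical cross-check of this answer is sketched below; the atomic masses used are standard reference values assumed here, not copied from this book's Appendix A.

```python
# Q value of 4He + 9Be -> 12C + n, in MeV, from (assumed) standard atomic masses.
u_to_MeV = 931.5

m_He4, m_Be9, m_C12, m_n = 4.002602, 9.012182, 12.000000, 1.008665  # u
Q = (m_He4 + m_Be9 - m_C12 - m_n) * u_to_MeV
print(f"Q ≈ {Q:.3f} MeV")   # ≈ 5.70 MeV, consistent with the answer above
```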
32. Neutrons from a source (perhaps the one discussed in the preceding problem) bombard natural molybdenum, which is 24 percent \(\displaystyle ^{98}Mo\). What is the energy output of the reaction \(\displaystyle ^{98}Mo+n→^{99}Mo+γ\)? The mass of \(\displaystyle ^{98}Mo\) is given in Appendix A: Atomic Masses, and that of \(\displaystyle ^{99}Mo\) is 98.907711 u.
33. The purpose of producing \(\displaystyle ^{99}Mo\) (usually by neutron activation of natural molybdenum, as in the preceding problem) is to produce \(\displaystyle ^{99m}Tc\). Using the rules, verify that the \(\displaystyle β^−\) decay of \(\displaystyle ^{99}Mo\) produces \(\displaystyle ^{99}Tc\). (Most \(\displaystyle ^{99}Tc\) nuclei produced in this decay are left in a metastable excited state denoted \(\displaystyle ^{99m}Tc\).)
Solution
\(\displaystyle ^{99}_{42}Mo_{57}→^{99}_{43}Tc_{56}+β^−+\bar{v_e}\)
34. (a) Two annihilation \(\displaystyle γ\) rays in a PET scan originate at the same point and travel to detectors on either side of the patient. If the point of origin is 9.00 cm closer to one of the detectors, what is the difference in arrival times of the photons? (This could be used to give position information, but the time difference is small enough to make it difficult.)
(b) How accurately would you need to be able to measure arrival time differences to get a position resolution of 1.00 mm?
35. Table indicates that 7.50 mCi of \(\displaystyle ^{99m}Tc\) is used in a brain scan. What is the mass of technetium?
Solution
\(\displaystyle 1.43×10^{−9}g\)
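One way to see where this number comes from is the relation \(A = λN\); the sketch below assumes the standard 6.01-h half-life and roughly 99 g/mol molar mass of \(^{99m}Tc\), neither of which is given on this page.

```python
import math

# Number of Tc-99m nuclei needed for a 7.50-mCi activity, then their mass.
A = 7.50e-3 * 3.70e10              # decays/s (1 Ci = 3.70e10 Bq)
lam = math.log(2) / (6.01 * 3600)  # decay constant, assuming a 6.01-h half-life

N = A / lam
print(f"mass ≈ {N * 99 / 6.022e23:.2e} g")   # ≈ 1.4e-9 g
```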
36. The activities of \(\displaystyle ^{131}I\) and \(\displaystyle ^{123}I\) used in thyroid scans are given in Table to be 50 and \(\displaystyle 70 μCi\), respectively. Find and compare the masses of \(\displaystyle ^{131}I\) and \(\displaystyle ^{123}I\) in such scans, given their respective half-lives are 8.04 d and 13.2 h. The masses are so small that the radioiodine is usually mixed with stable iodine as a carrier to ensure normal chemistry and distribution in the body.
37. (a) Neutron activation of sodium, which is 100% \(\displaystyle ^{23}Na\), produces \(\displaystyle ^{24}Na\), which is used in some heart scans, as seen in Table. The equation for the reaction is \(\displaystyle ^{23}Na+n→^{24}Na+γ\). Find its energy output, given the mass of \(\displaystyle ^{24}Na\) is 23.990962 u.
(b) What mass of \(\displaystyle ^{24}Na\) produces the needed 5.0-mCi activity, given its half-life is 15.0 h?
Solution
(a) 6.958 MeV
(b) \(\displaystyle 5.7×10^{−10}g\)
32.2: Biological Effects of Ionizing Radiation
38. What is the dose in mSv for:
(a) a 0.1 Gy x-ray?
(b) 2.5 mGy of neutron exposure to the eye?
(c) 1.5 mGy of α exposure?
Solution
(a) 100 mSv
(b) 80 mSv
(c) ~30 mSv
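These conversions are simply dose in Gy times the relative biological effectiveness (RBE); the RBE values in the sketch below (1 for x rays, 32 for neutrons to the eye, 20 for alphas) are typical textbook values and are assumptions here.

```python
# Dose in Sv = dose in Gy × RBE, for the three cases of problem 38.
cases = {"x ray": (0.1, 1), "neutrons to the eye": (2.5e-3, 32), "alpha": (1.5e-3, 20)}

for name, (dose_Gy, rbe) in cases.items():
    print(f"{name}: {dose_Gy * rbe * 1e3:.0f} mSv")   # 100, 80, 30 mSv
```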
39. Find the radiation dose in Gy for:
(a) A 10-mSv fluoroscopic x-ray series.
(b) 50 mSv of skin exposure by an \(\displaystyle α\) emitter.
(c) 160 mSv of \(\displaystyle β^–\) and \(\displaystyle γ\) rays from the \(\displaystyle ^{40}K\) in your body.
40. How many Gy of exposure is needed to give a cancerous tumor a dose of 40 Sv if it is exposed to α activity?
Solution
~2 Gy
41. What is the dose in Sv in a cancer treatment that exposes the patient to 200 Gy of \(\displaystyle γ\) rays?
42. One half the \(\displaystyle γ\) rays from \(\displaystyle ^{99m}Tc\) are absorbed by a 0.170-mm-thick lead shielding. Half of the \(\displaystyle γ\) rays that pass through the first layer of lead are absorbed in a second layer of equal thickness. What thickness of lead will absorb all but one in 1000 of these \(\displaystyle γ\) rays?
Solution
1.69 mm
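A short sketch of the reasoning: each 0.170-mm half-value layer cuts the flux in half, so we need enough halvings to reach one part in 1000.

```python
import math

# Number of half-value layers needed for a transmission of 1/1000,
# then the corresponding lead thickness.
n = math.log(1000, 2)            # (1/2)^n = 1/1000  ->  n = log2(1000)
print(f"thickness ≈ {n * 0.170:.2f} mm")   # ≈ 1.69 mm
```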
43. A plumber at a nuclear power plant receives a whole-body dose of 30 mSv in 15 minutes while repairing a crucial valve. Find the radiation-induced yearly risk of death from cancer and the chance of genetic defect from this maximum allowable exposure.
44. In the 1980s, the term picowave was used to describe food irradiation in order to overcome public resistance by playing on the well-known safety of microwave radiation. Find the energy in MeV of a photon having a wavelength of a picometer.
Solution
1.24 MeV
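A minimal check of this value with \(E = hc/λ\):

```python
# Energy of a 1-pm photon.
h, c = 6.63e-34, 3.00e8          # J·s, m/s
E_J = h * c / 1.0e-12            # J
print(f"E ≈ {E_J / 1.602e-13:.2f} MeV")   # ≈ 1.24 MeV
```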
45. Find the mass of \(\displaystyle ^{239}Pu\) that has an activity of \(\displaystyle 1.00 μCi\).
32.3: Therapeutic Uses of Ionizing Radiation
46. A beam of 168-MeV nitrogen nuclei is used for cancer therapy. If this beam is directed onto a 0.200-kg tumor and gives it a 2.00-Sv dose, how many nitrogen nuclei were stopped? (Use an RBE of 20 for heavy ions.)
Solution
\(\displaystyle 7.44×10^8\)
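The chain of unit conversions behind this answer can be sketched as follows, using the RBE of 20 stated in the problem.

```python
# Problem 46: convert the 2.00-Sv dose to Gy, find the absorbed energy,
# and divide by the energy one 168-MeV nitrogen ion deposits.
MeV_to_J = 1.602e-13

energy_J = (2.00 / 20) * 0.200            # Gy × kg = J absorbed in the tumor
print(f"N ≈ {energy_J / (168 * MeV_to_J):.2e}")   # ≈ 7.4e8 nuclei
```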
47. (a) If the average molecular mass of compounds in food is 50.0 g, how many molecules are there in 1.00 kg of food?
(b) How many ion pairs are created in 1.00 kg of food, if it is exposed to 1000 Sv and it takes 32.0 eV to create an ion pair?
(c) Find the ratio of ion pairs to molecules.
(d) If these ion pairs recombine into a distribution of 2000 new compounds, how many parts per billion is each?
48. Calculate the dose in Sv to the chest of a patient given an x-ray under the following conditions. The x-ray beam intensity is \(\displaystyle 1.50 W/m^2\), the area of the chest exposed is \(\displaystyle 0.0750m^2\), 35.0% of the x-rays are absorbed in 20.0 kg of tissue, and the exposure time is 0.250 s.
Solution
\(\displaystyle 4.92×10^{–4}Sv\)
49. (a) A cancer patient is exposed to γ rays from a 5000-Ci \(\displaystyle ^{60}Co\) transillumination unit for 32.0 s. The \(\displaystyle γ\) rays are collimated in such a manner that only 1.00% of them strike the patient. Of those, 20.0% are absorbed in a tumor having a mass of 1.50 kg. What is the dose in rem to the tumor, if the average \(\displaystyle γ\) energy per decay is 1.25 MeV? None of the \(\displaystyle β\) s from the decay reach the patient.
(b) Is the dose consistent with stated therapeutic doses?
50. What is the mass of 60Co in a cancer therapy transillumination unit containing 5.00 kCi of 60Co?
Solution
4.43 g
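As with the technetium problem above, this follows from \(A = λN\); the 5.27-y half-life of \(^{60}Co\) used below is a standard value assumed here.

```python
import math

# Mass of Co-60 with an activity of 5.00 kCi.
A = 5.00e3 * 3.70e10                     # decays/s
lam = math.log(2) / (5.27 * 3.156e7)     # 1/s, assuming a 5.27-y half-life

N = A / lam
print(f"mass ≈ {N * 60 / 6.022e23:.1f} g")   # ≈ 4.4 g, matching 4.43 g to rounding
```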
51. Large amounts of \(\displaystyle ^{65}Zn\) are produced in copper exposed to accelerator beams. While machining contaminated copper, a physicist ingests \(\displaystyle 50.0 μCi\) of \(\displaystyle ^{65}Zn\). Each \(\displaystyle ^{65}Zn\) decay emits an average \(\displaystyle γ\)-ray energy of 0.550 MeV, 40.0% of which is absorbed in the scientist’s 75.0-kg body. What dose in mSv is caused by this in one day?
52. Naturally occurring \(\displaystyle ^{40}K\) is listed as responsible for 16 mrem/y of background radiation. Calculate the mass of \(\displaystyle ^{40}K\) that must be inside the 55-kg body of a woman to produce this dose. Each \(\displaystyle ^{40}K\) decay emits a 1.32-MeV β, and 50% of the energy is absorbed inside the body.
Solution
0.010 g
53. (a) Background radiation due to \(\displaystyle ^{226}Ra\) averages only 0.01 mSv/y, but it can range upward depending on where a person lives. Find the mass of \(\displaystyle ^{226}Ra\) in the 80.0-kg body of a man who receives a dose of 2.50-mSv/y from it, noting that each \(\displaystyle ^{226}Ra\) decay emits a 4.80-MeV α particle. You may neglect dose due to daughters and assume a constant amount, evenly distributed due to balanced ingestion and bodily elimination.
(b) Is it surprising that such a small mass could cause a measurable radiation dose? Explain.
54. The annual radiation dose from \(\displaystyle ^{14}C\) in our bodies is 0.01 mSv/y. Each \(\displaystyle ^{14}C\) decay emits a \(\displaystyle β^–\) averaging 0.0750 MeV. Taking the fraction of \(\displaystyle ^{14}C\) to be \(\displaystyle 1.3×10^{–12}N\) of normal \(\displaystyle ^{12}C\), and assuming the body is 13% carbon, estimate the fraction of the decay energy absorbed. (The rest escapes, exposing those close to you.)
Solution
95%
55. If everyone in Australia received an extra 0.05 mSv per year of radiation, what would be the increase in the number of cancer deaths per year? (Assume that time had elapsed for the effects to become apparent.) Assume that there are \(\displaystyle 200×10^{−4}\) deaths per Sv of radiation per year. What percent of the actual number of cancer deaths recorded is this?
32.5: Fusion
56. Verify that the total number of nucleons, total charge, and electron family number are conserved for each of the fusion reactions in the proton-proton cycle in
\(\displaystyle ^1H+^1H→^2H+e^++v_e\),
\(\displaystyle ^1H+^2H→^3He+γ\),
and
\(\displaystyle ^3He+^3He→^4He+^1H+^1H\).
(List the value of each of the conserved quantities before and after each of the reactions.)
Solution
(a) \(\displaystyle A=1+1=2, Z=1+1=1+1, efn=0=−1+1\)
(b) \(\displaystyle A=1+2=3, Z=1+1=2, efn=0=0\)
(c) \(\displaystyle A=3+3=4+1+1, Z=2+2=2+1+1, efn=0=0\)
57. Calculate the energy output in each of the fusion reactions in the proton-proton cycle, and verify the values given in the above summary.
58. Show that the total energy released in the proton-proton cycle is 26.7 MeV, considering the overall effect in \(\displaystyle ^1H+^1H→^2H+e^++v_e, ^1H+^2H→^3He+γ\), and \(\displaystyle ^3He+^3He→^4He+^1H+^1H\) and being certain to include the annihilation energy.
Solution
\(\displaystyle E=(m_i−m_f)c^2=[4m(^1H)−m(^4He)]c^2=[4(1.007825)−4.002603](931.5 MeV)=26.73 MeV\)
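A one-line numerical check of this expression:

```python
# Energy released by the overall proton-proton cycle, from the atomic masses
# quoted in the solution above (atomic masses already include the electrons,
# so the positron annihilation energy is accounted for automatically).
print(f"{(4 * 1.007825 - 4.002603) * 931.5:.2f} MeV")   # 26.73 MeV
```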
59. Verify by listing the number of nucleons, total charge, and electron family number before and after the cycle that these quantities are conserved in the overall proton-proton cycle in \(\displaystyle 2e^−+4^1H→^4He+2v_e+6γ\).
60. The energy produced by the fusion of a 1.00-kg mixture of deuterium and tritium was found in Example Calculating Energy and Power from Fusion. Approximately how many kilograms would be required to supply the annual energy use in the United States?
Solution
\(\displaystyle 3.12×10^5kg\) (about 200 tons)
61. Tritium is naturally rare, but can be produced by the reaction \(\displaystyle n+^2H→^3H+γ\). How much energy in MeV is released in this neutron capture?
62. Two fusion reactions mentioned in the text are
\(\displaystyle n+^3He→^4He+γ\)
and
\(\displaystyle n+^1H→^2H+γ\).
Both reactions release energy, but the second also creates more fuel. Confirm that the energies produced in the reactions are 20.58 and 2.22 MeV, respectively. Comment on which product nuclide is most tightly bound, \(\displaystyle ^4He\) or \(\displaystyle ^2H\).
Solution
\(\displaystyle E=(m_i−m_f)c^2\)
\(\displaystyle E_1=(1.008665+3.016030−4.002603)(931.5 MeV)=20.58 MeV\)
\(\displaystyle E_2=(1.008665+1.007825−2.014102)(931.5 MeV)=2.224 MeV\)
\(\displaystyle ^4He\) is more tightly bound, since this reaction gives off more energy per nucleon.
63. (a) Calculate the number of grams of deuterium in an 80,000-L swimming pool, given deuterium is 0.0150% of natural hydrogen.
(b) Find the energy released in joules if this deuterium is fused via the reaction \(\displaystyle ^2H+^2H→^3He+n\).
(c) Could the neutrons be used to create more energy?
(d) Discuss the amount of this type of energy in a swimming pool as compared to that in, say, a gallon of gasoline, also taking into consideration that water is far more abundant.
64. How many kilograms of water are needed to obtain the 198.8 mol of deuterium, assuming that deuterium is 0.01500% (by number) of natural hydrogen?
Solution
\(\displaystyle 1.19×10^4kg\)
65. The power output of the Sun is \(\displaystyle 4×10^{26}W\).
(a) If 90% of this is supplied by the proton-proton cycle, how many protons are consumed per second?
(b) How many neutrinos per second should there be per square meter at the Earth from this process? This huge number is indicative of how rarely a neutrino interacts, since large detectors observe very few per day.
66. Another set of reactions that result in the fusing of hydrogen into helium in the Sun and especially in hotter stars is called the carbon cycle. It is
\(\displaystyle ^{12}C+^1H→^{13}N+γ\),
\(\displaystyle ^{13}N→^{13}C+e^++v_e\),
\(\displaystyle ^{13}C+^1H→^{14}N+γ\),
\(\displaystyle ^{14}N+^1H→^{15}O+γ\),
\(\displaystyle ^{15}O→^{15}N+e^++v_e\),
\(\displaystyle ^{15}N+^1H→^{12}C+^4He\).
Write down the overall effect of the carbon cycle (as was done for the proton-proton cycle in \(\displaystyle 2e^−+4^1H→^4He+2v_e+6γ\)). Note the number of protons (\(\displaystyle ^1H\)) required and assume that the positrons (\(\displaystyle e^+\)) annihilate electrons to form more \(\displaystyle γ\) rays.
Solution
\(\displaystyle 2e^−+4^1H→^4He+7γ+2v_e\)
67. (a) Find the total energy released in MeV in each carbon cycle (elaborated in the above problem) including the annihilation energy.
(b) How does this compare with the proton-proton cycle output?
68. Verify that the total number of nucleons, total charge, and electron family number are conserved for each of the fusion reactions in the carbon cycle given in the above problem. (List the value of each of the conserved quantities before and after each of the reactions.)
Solution
(a) \(\displaystyle A=12+1=13, Z=6+1=7, efn=0=0\)
(b) \(\displaystyle A=13=13, Z=7=6+1, efn=0=−1+1\)
(c) \(\displaystyle A=13+1=14, Z=6+1=7, efn=0=0\)
(d) \(\displaystyle A=14+1=15, Z=7+1=8, efn=0=0\)
(e) \(\displaystyle A=15=15, Z=8=7+1, efn=0=−1+1\)
(f) \(\displaystyle A=15+1=12+4, Z=7+1=6+2, efn=0=0\)
69. Integrated Concepts
The laser system tested for inertial confinement can produce a 100-kJ pulse only 1.00 ns in duration.
(a) What is the power output of the laser system during the brief pulse?
(b) How many photons are in the pulse, given their wavelength is \(\displaystyle 1.06 µm\)?
(c) What is the total momentum of all these photons?
(d) How does the total photon momentum compare with that of a single 1.00 MeV deuterium nucleus?
70. Integrated Concepts
Find the amount of energy given to the \(\displaystyle ^4He\) nucleus and to the \(\displaystyle γ\) ray in the reaction \(\displaystyle n+^3He→^4He+γ\), using the conservation of momentum principle and taking the reactants to be initially at rest. This should confirm the contention that most of the energy goes to the γ ray.
Solution
\(\displaystyle E_γ=20.6 MeV\)
\(\displaystyle E_{^4He}=5.68×10^{−2}MeV\)
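The split quoted here can be sketched from momentum conservation alone: the photon momentum is \(E_γ/c\), so the recoiling \(^4He\) gets roughly \(E_γ^2/(2Mc^2)\). The helium rest energy used below is a standard value assumed here.

```python
# Energy split in n + 3He -> 4He + gamma for reactants initially at rest.
Q = 20.58            # MeV released (from problem 62)
M_He4_c2 = 3727.4    # MeV, rest energy of 4He (assumed standard value)

KE_He = Q**2 / (2 * M_He4_c2)
print(f"KE(4He) ≈ {KE_He:.4f} MeV, E_gamma ≈ {Q - KE_He:.1f} MeV")
# ≈ 0.057 MeV to the nucleus; essentially all of the energy goes to the gamma ray
```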
71. Integrated Concepts
(a) What temperature gas would have atoms moving fast enough to bring two \(\displaystyle ^3He\) nuclei into contact? Note that, because both are moving, the average kinetic energy only needs to be half the electric potential energy of these doubly charged nuclei when just in contact with one another.
(b) Does this high temperature imply practical difficulties for doing this in controlled fusion?
72. Integrated Concepts
(a) Estimate the years that the deuterium fuel in the oceans could supply the energy needs of the world. Assume world energy consumption to be ten times that of the United States which is \(\displaystyle 8×10^{19} J/y\) and that the deuterium in the oceans could be converted to energy with an efficiency of 32%. You must estimate or look up the amount of water in the oceans and take the deuterium content to be 0.015% of natural hydrogen to find the mass of deuterium available. Note that approximate energy yield of deuterium is \(\displaystyle 3.37×10^{14} J/kg\).
(b) Comment on how much time this is by any human measure. (It is not an unreasonable result, only an impressive one.)
Solution
(a) \(\displaystyle 3×10^9y\)
(b) This is approximately half the lifetime of the Earth.
32.6: Fission
73. (a) Calculate the energy released in the neutron-induced fission (similar to the spontaneous fission in Example)
\(\displaystyle n+^{238}U→^{96}Sr+^{140}Xe+3n,\)
given \(\displaystyle m(^{96}Sr)=95.921750 u\) and \(\displaystyle m(^{140}Xe)=139.92164 u.\)
(b) This result is about 6 MeV greater than the result for spontaneous fission. Why?
(c) Confirm that the total number of nucleons and total charge are conserved in this reaction.
Solution
(a) 177.1 MeV
(b) Because the gain of an external neutron yields about 6 MeV, which is the average \(\displaystyle BE/A\) for heavy nuclei.
(c) \(\displaystyle A=1+238=96+140+1+1+1, Z=92=38+54, efn=0=0\)
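A sketch of part (a); the fragment masses are the ones given in the problem, while the \(^{238}U\) and neutron masses are standard values assumed here.

```python
# Energy released in n + 238U -> 96Sr + 140Xe + 3n.
u_to_MeV = 931.5
m_U238, m_n = 238.050788, 1.008665        # u (assumed standard values)
m_Sr96, m_Xe140 = 95.921750, 139.92164    # u (given in the problem)

E = (m_U238 + m_n - m_Sr96 - m_Xe140 - 3 * m_n) * u_to_MeV
print(f"E ≈ {E:.0f} MeV")   # ≈ 177 MeV, consistent with the answer above
```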
74. (a) Calculate the energy released in the neutron-induced fission reaction
\(\displaystyle n+^{235}U→^{92}Kr+^{142}Ba+2n,\)
given \(\displaystyle m(^{92}Kr)=91.926269 u\) and \(\displaystyle m(^{142}Ba)=141.916361u.\)
(b) Confirm that the total number of nucleons and total charge are conserved in this reaction.
75. (a) Calculate the energy released in the neutron-induced fission reaction
\(\displaystyle n+^{239}Pu→^{96}Sr+^{140}Ba+4n,\)
given \(\displaystyle m(^{96}Sr)=95.921750 u\) and \(\displaystyle m(^{140}Ba)=139.910581 u.\)
(b) Confirm that the total number of nucleons and total charge are conserved in this reaction.
Solution
(a) 180.6 MeV
(b) \(\displaystyle A=1+239=96+140+1+1+1+1,Z=94=38+56,efn=0=0\)
76. Confirm that each of the reactions listed for plutonium breeding just following Example conserves the total number of nucleons, the total charge, and electron family number.
77. Breeding plutonium produces energy even before any plutonium is fissioned. (The primary purpose of the four nuclear reactors at Chernobyl was breeding plutonium for weapons. Electrical power was a by-product used by the civilian population.) Calculate the energy produced in each of the reactions listed for plutonium breeding just following Example. The pertinent masses are \(\displaystyle m(^{239}U)=239.054289 u, m(^{239}Np)=239.052932 u,\) and \(\displaystyle m(^{239}Pu)=239.052157 u.\)
Solution
\(\displaystyle ^{238}U+n→^{239}U+γ\) (4.81 MeV)
\(\displaystyle ^{239}U→^{239}Np+β^−+\bar{v_e}\) (1.26 MeV)
\(\displaystyle ^{239}Np→^{239}Pu+β^−+\bar{v_e}\) (0.72 MeV)
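A sketch of these three Q values; the \(^{238}U\) and neutron masses are standard values assumed here, and for the two \(β^−\) decays the atomic-mass difference already accounts for the emitted electron.

```python
# Q values for the three plutonium-breeding reactions of problem 77.
u_to_MeV = 931.5
m = {"238U": 238.050788, "n": 1.008665,                              # assumed
     "239U": 239.054289, "239Np": 239.052932, "239Pu": 239.052157}   # given

print(f"{(m['238U'] + m['n'] - m['239U']) * u_to_MeV:.2f} MeV")  # ≈ 4.81 MeV
print(f"{(m['239U'] - m['239Np']) * u_to_MeV:.2f} MeV")          # ≈ 1.26 MeV
print(f"{(m['239Np'] - m['239Pu']) * u_to_MeV:.2f} MeV")         # ≈ 0.72 MeV
```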
78. The naturally occurring radioactive isotope \(\displaystyle ^{232}Th\) does not make good fission fuel, because it has an even number of neutrons; however, it can be bred into a suitable fuel (much as \(\displaystyle ^{238}U\) is bred into \(\displaystyle ^{239}Pu\)).
(a) What are \(\displaystyle Z\) and \(\displaystyle N\) for \(\displaystyle ^{232}Th\)?
(b) Write the reaction equation for neutron capture by \(\displaystyle ^{232}Th\) and identify the nuclide \(\displaystyle ^AX\) produced in \(\displaystyle n+^{232}Th→^AX+γ\).
(c) The product nucleus β− decays, as does its daughter. Write the decay equations for each, and identify the final nucleus.
(d) Confirm that the final nucleus has an odd number of neutrons, making it a better fission fuel.
(e) Look up the half-life of the final nucleus to see if it lives long enough to be a useful fuel.
79. The electrical power output of a large nuclear reactor facility is 900 MW. It has a 35.0% efficiency in converting nuclear power to electrical.
(a) What is the thermal nuclear power output in megawatts?
(b) How many \(\displaystyle ^{235}U\) nuclei fission each second, assuming the average fission produces 200 MeV?
(c) What mass of \(\displaystyle ^{235}U\) is fissioned in one year of full-power operation?
Solution
(a) \(\displaystyle 2.57×10^3MW\)
(b) \(\displaystyle 8.03×10^{19}\)fission/s
(c) 991 kg
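The three parts chain together as in the sketch below, using the 200 MeV per fission stated in the problem and 235 g/mol for \(^{235}U\).

```python
# Problem 79: thermal power, fission rate, and fuel mass per year.
MeV_to_J = 1.602e-13
year_s = 3.156e7

P_thermal = 900e6 / 0.350                            # W
fissions_per_s = P_thermal / (200 * MeV_to_J)
mass_kg = fissions_per_s * year_s * 235 / 6.022e23 / 1000

print(f"{P_thermal/1e6:.0f} MW, {fissions_per_s:.2e} /s, {mass_kg:.0f} kg")
# ≈ 2570 MW, 8.0e19 fissions/s, ≈ 990 kg per year
```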
80. A large power reactor that has been in operation for some months is turned off, but residual activity in the core still produces 150 MW of power. If the average energy per decay of the fission products is 1.00 MeV, what is the core activity in curies?
32.7: Nuclear Weapons
81. Find the mass converted into energy by a 12.0-kT bomb.
Solution
0.56 g
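A minimal sketch of \(E = mc^2\) for this problem, using the standard equivalence 1 kT ≈ \(4.2×10^{12}\) J (an assumed conversion, not given on this page):

```python
# Mass converted into energy by a 12.0-kT explosion.
E = 12.0 * 4.2e12           # J
print(f"m = {E / (3.00e8)**2 * 1e3:.2f} g")   # 0.56 g
```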
82. What mass is converted into energy by a 1.00-MT bomb?
83. Fusion bombs use neutrons from their fission trigger to create tritium fuel in the reaction \(\displaystyle n+^6Li→^3H+^4He\). What is the energy released by this reaction in MeV?
Solution
4.781 MeV
84. It is estimated that the total explosive yield of all the nuclear bombs in existence currently is about 4,000 MT.
(a) Convert this amount of energy to kilowatt-hours, noting that \(\displaystyle 1 kW⋅h=3.60×10^6J\).
(b) What would the monetary value of this energy be if it could be converted to electricity costing 10 cents per kW·h?
85. A radiation-enhanced nuclear weapon (or neutron bomb) can have a smaller total yield and still produce more prompt radiation than a conventional nuclear bomb. This allows the use of neutron bombs to kill nearby advancing enemy forces with radiation without blowing up your own forces with the blast. For a 0.500-kT radiation-enhanced weapon and a 1.00-kT conventional nuclear bomb: (a) Compare the blast yields. (b) Compare the prompt radiation yields.
Solution
(a) Blast yields \(\displaystyle 2.1×10^{12}J\) to \(\displaystyle 8.4×10^{11}J\), or 2.5 to 1, conventional to radiation enhanced.
(b) Prompt radiation yields \(\displaystyle 6.3×10^{11}J\) to \(\displaystyle 2.1×10^{11}J\), or 3 to 1, radiation enhanced to conventional.
86. (a) How many \(\displaystyle ^{239}Pu\) nuclei must fission to produce a 20.0-kT yield, assuming 200 MeV per fission?
(b) What is the mass of this much \(\displaystyle ^{239}Pu\)?
87. Assume one-fourth of the yield of a typical 320-kT strategic bomb comes from fission reactions averaging 200 MeV and the remainder from fusion reactions averaging 20 MeV.
(a) Calculate the number of fissions and the approximate mass of uranium and plutonium fissioned, taking the average atomic mass to be 238.
(b) Find the number of fusions and calculate the approximate mass of fusion fuel, assuming an average total atomic mass of the two nuclei in each reaction to be 5.
(c) Considering the masses found, does it seem reasonable that some missiles could carry 10 warheads? Discuss, noting that the nuclear fuel is only a part of the mass of a warhead.
Solution
(a) \(\displaystyle 1.1×10^{25}\) fissions , 4.4 kg
(b) \(\displaystyle 3.2×10^{26}\) fusions , 2.7 kg
(c) The nuclear fuel totals only 6 kg, so it is quite reasonable that some missiles carry 10 warheads. The mass of the fuel would only be 60 kg and therefore the mass of the 10 warheads, weighing about 10 times the nuclear fuel, would be only 1500 lbs. If the fuel for the missiles weighs 5 times the total weight of the warheads, the missile would weigh about 9000 lbs or 4.5 tons. This is not an unreasonable weight for a missile.
88. This problem gives some idea of the magnitude of the energy yield of a small tactical bomb. Assume that half the energy of a 1.00-kT nuclear depth charge set off under an aircraft carrier goes into lifting it out of the water—that is, into gravitational potential energy. How high is the carrier lifted if its mass is 90,000 tons?
89. It is estimated that weapons tests in the atmosphere have deposited approximately 9 MCi of \(\displaystyle ^{90}Sr\) on the surface of the earth. Find the mass of this amount of \(\displaystyle ^{90}Sr\).
Solution
\(\displaystyle 7×10^4g\)
90. A 1.00-MT bomb exploded a few kilometers above the ground deposits 25.0% of its energy into radiant heat.
(a) Find the calories per \(\displaystyle cm^2\) at a distance of 10.0 km by assuming a uniform distribution over a spherical surface of that radius.
(b) If this heat falls on a person’s body, what temperature increase does it cause in the affected tissue, assuming it is absorbed in a layer 1.00-cm deep?
91. Integrated Concepts
One scheme to put nuclear weapons to nonmilitary use is to explode them underground in a geologically stable region and extract the geothermal energy for electricity production. There was a total yield of about 4,000 MT in the combined arsenals in 2006. If 1.00 MT per day could be converted to electricity with an efficiency of 10.0%:
(a) What would the average electrical power output be?
(b) How many years would the arsenal last at this rate?
Solution
(a) \(\displaystyle 4.86×10^9W\)
(b) 11.0 y
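The arithmetic behind these two answers, again using 1 MT ≈ \(4.2×10^{15}\) J as an assumed conversion:

```python
# Problem 91: average electrical power and lifetime of the arsenal.
P_avg = 1.00 * 4.2e15 * 0.100 / 86400     # W, 1 MT/day converted at 10% efficiency
years = 4000 / 365.25                     # 4000 MT consumed at 1 MT per day

print(f"(a) {P_avg:.2e} W  (b) {years:.1f} y")   # ≈ 4.9e9 W and ≈ 11 y
```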
Contributors and Attributions
-
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
33: Particle Physics
Particle physics (or high energy physics) studies the nature of the particles that constitute matter (particles with mass) and radiation (massless particles). Although the word "particle" can refer to various types of very small objects (e.g., protons, gas particles, or even household dust), "particle physics" usually investigates the irreducibly smallest detectable particles and the irreducibly fundamental force fields necessary to explain them.
-
- 33.0: Prelude to Particle Physics
- In its study, we have found a relatively small number of atoms with systematic properties that explained a tremendous range of phenomena. Nuclear physics is concerned with the nuclei of atoms and their substructures. Here, a smaller number of components—the proton and neutron—make up all nuclei. Exploring the systematic behavior of their interactions has revealed even more about matter, forces, and energy.
-
- 33.1: The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited
- Particle physics as we know it today began with the ideas of Hideki Yukawa in 1935. Yukawa was interested in the strong nuclear force in particular and found an ingenious way to explain its short range. His idea is a blend of particles, forces, relativity, and quantum mechanics that is applicable to all forces. Yukawa proposed that force is transmitted by the exchange of particles (called carrier particles). The field consists of these carrier particles.
-
- 33.2: The Four Basic Forces
- There are only four distinct basic forces in all of nature. This is a remarkably small number considering the myriad phenomena they explain. Particle physics is intimately tied to these four forces. Certain fundamental particles, called carrier particles, carry these forces, and all particles can be classified according to which of the four forces they feel.
-
- 33.3: Accelerators Create Matter from Energy
- The fundamental process in creating previously unknown particles is to accelerate known particles, such as protons or electrons, and direct a beam of them toward a target. If the energy of the incoming particles is large enough, new matter is sometimes created in the collision. Limitations are placed on what can occur by known conservation laws, such as conservation of mass-energy, momentum, and charge. Even more interesting are the unknown limitations provided by nature.
-
- 33.4: Particles, Patterns, and Conservation Laws
- After World War II, accelerators energetic enough to create these particles were built. Not only were predicted and known particles created, but many unexpected particles were observed. Initially called elementary particles, their numbers proliferated to dozens and then hundreds, and the term “particle zoo” became the physicist’s lament at the lack of simplicity. But patterns were observed in the particle zoo that led to simplifying ideas such as quarks, as we shall soon see.
-
- 33.5: Quarks - Is That All There Is?
- Quarks have been mentioned at various points in this text as fundamental building blocks and members of the exclusive club of truly elementary particles. Note that an elementary or fundamental particle has no substructure (it is not made of other particles) and has no finite size other than its wavelength. This does not mean that fundamental particles are stable—some decay, while others do not. Keep in mind that all leptons seem to be fundamental, whereas no hadrons are fundamental.
-
- 33.6: GUTs - The Unification of Forces
- The search for a correct theory linking the four fundamental forces, called the Grand Unified Theory (GUT), is explored in this section in the realm of particle physics. Frontiers of Physics expands the story in making a connection with cosmology, on the opposite end of the distance scale.
Thumbnail: In this Feynman diagram, an electron and a positron annihilate, producing a photon (represented by the blue sine wave) that becomes a quark–antiquark pair, after which the antiquark radiates a gluon (represented by the green helix). (CC-SA-BY 2.5; Joel Holdsworth).
33.0: Prelude to Particle Physics
Following ideas remarkably similar to those of the ancient Greeks, we continue to look for smaller and smaller structures in nature, hoping ultimately to find and understand the most fundamental building blocks that exist. Atomic physics deals with the smallest units of elements and compounds. In its study, we have found a relatively small number of atoms with systematic properties that explained a tremendous range of phenomena. Nuclear physics is concerned with the nuclei of atoms and their substructures. Here, a smaller number of components—the proton and neutron—make up all nuclei. Exploring the systematic behavior of their interactions has revealed even more about matter, forces, and energy.
Particle physics deals with the substructures of atoms and nuclei and is particularly aimed at finding those truly fundamental particles that have no further substructure. Just as in atomic and nuclear physics, we have found a complex array of particles and properties with systematic characteristics analogous to the periodic table and the chart of nuclides. An underlying structure is apparent, and there is some reason to think that we are finding particles that have no substructure. Of course, we have been in similar situations before. For example, atoms were once thought to be the ultimate substructure. Perhaps we will find deeper and deeper structures and never come to an ultimate substructure. We may never really know, as indicated in Figure.
This chapter covers the basics of particle physics as we know it today. An amazing convergence of topics is evolving in particle physics. We find that some particles are intimately related to forces, and that nature on the smallest scale may have its greatest influence on the large-scale character of the universe. It is an adventure exceeding the best science fiction because it is not only fantastic, it is real.
Summary
- Particle physics is the study of and the quest for those truly fundamental particles having no substructure.
Glossary
- particle physics
- the study of and the quest for those truly fundamental particles having no substructure
|
libretexts
|
2025-03-17T19:53:49.760518
| 2016-07-24T09:01:29 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/33%3A_Particle_Physics/33.00%3A_Prelude_to_Particle_Physics",
"book_url": "https://commons.libretexts.org/book/phys-1419",
"title": "33.0: Prelude to Particle Physics",
"author": "OpenStax"
}
|
https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/33%3A_Particle_Physics/33.01%3A_The_Yukawa_Particle_and_the_Heisenberg_Uncertainty_Principle_Revisited
|
33.1: The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited
Learning Objectives
By the end of this section, you will be able to:
- Define Yukawa particle.
- State the Heisenberg uncertainty principle.
- Describe pion.
- Estimate the mass of a pion.
- Explain meson.
Particle physics as we know it today began with the ideas of Hideki Yukawa in 1935. Physicists had long been concerned with how forces are transmitted, finding the concept of fields, such as electric and magnetic fields, to be very useful. A field surrounds an object and carries the force exerted by the object through space. Yukawa was interested in the strong nuclear force in particular and found an ingenious way to explain its short range. His idea is a blend of particles, forces, relativity, and quantum mechanics that is applicable to all forces. Yukawa proposed that force is transmitted by the exchange of particles (called carrier particles). The field consists of these carrier particles.
Specifically for the strong nuclear force, Yukawa proposed that a previously unknown particle, now called a pion , is exchanged between nucleons, transmitting the force between them. Figure \(\PageIndex{1}\) illustrates how a pion would carry a force between a proton and a neutron. The pion has mass and can only be created by violating the conservation of mass-energy. This is allowed by the Heisenberg uncertainty principle if it occurs for a sufficiently short period of time. As discussed in Probability: The Heisenberg Uncertainty Principle the Heisenberg uncertainty principle relates the uncertainties \(ΔE\) in energy and \(Δt\) in time by
\[ΔEΔt≥\frac{h}{4π}\]
where \(h\) is Planck’s constant. Therefore, conservation of mass-energy can be violated by an amount \(ΔE\) for a time \(Δt≈\frac{h}{4πΔE}\) in which time no process can detect the violation. This allows the temporary creation of a particle of mass m, where \(ΔE=mc^2\). The larger the mass and the greater the \(ΔE\), the shorter is the time it can exist. This means the range of the force is limited, because the particle can only travel a limited distance in a finite amount of time. In fact, the maximum distance is \(d≈cΔt\), where \(c\) is the speed of light. The pion must then be captured and, thus, cannot be directly observed because that would amount to a permanent violation of mass-energy conservation. Such particles (like the pion above) are called virtual particles , because they cannot be directly observed but their effects can be directly observed. Realizing all this, Yukawa used the information on the range of the strong nuclear force to estimate the mass of the pion, the particle that carries it. The steps of his reasoning are approximately retraced in the following worked example:
Example \(\PageIndex{1}\): Calculating the Mass of a Pion
Taking the range of the strong nuclear force to be about 1 fermi (\(10^{−15}\)m), calculate the approximate mass of the pion carrying the force, assuming it moves at nearly the speed of light.
Strategy
The calculation is approximate because of the assumptions made about the range of the force and the speed of the pion, but also because a more accurate calculation would require the sophisticated mathematics of quantum mechanics. Here, we use the Heisenberg uncertainty principle in the simple form stated above, as developed in Probability: The Heisenberg Uncertainty Principle . First, we must calculate the time \(Δt\) that the pion exists, given that the distance it travels at nearly the speed of light is about 1 fermi. Then, the Heisenberg uncertainty principle can be solved for the energy \(ΔE\), and from that the mass of the pion can be determined. We will use the units of \(MeV/c^2\) for mass, which are convenient since we are often considering converting mass to energy and vice versa.
Solution
The distance the pion travels is \(d≈cΔt\), and so the time during which it exists is approximately
\[Δt≈\frac{d}{c}=\dfrac{10^{−15}m}{3.0×10^8m/s}≈3.3×10^{−24}s. \nonumber\]
Now, solving the Heisenberg uncertainty principle for \(ΔE\) gives
\[ΔE≈\frac{h}{4πΔt}≈\dfrac{6.63×10^{−34}J⋅s}{4π(3.3×10^{−24}s)}. \nonumber\]
Solving this and converting the energy to MeV gives
\[ΔE≈(1.6×10^{−11}J)\frac{1MeV}{1.6×10^{−13}J}=100\,MeV. \nonumber\]
Mass is related to energy by \(ΔE=mc^2\), so that the mass of the pion is \(m=ΔE/c^2\), or
\[m≈100MeV/c^2. \nonumber\]
Discussion
This is about 200 times the mass of an electron and about one-tenth the mass of a nucleon. No such particles were known at the time Yukawa made his bold proposal.
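The estimate in this example is easy to retrace numerically; the sketch below uses only the 1-fm range, the speed of light, and Planck's constant.

```python
import math

# Retrace of the pion-mass estimate: lifetime from the range, then the
# uncertainty-principle energy, converted to MeV.
h, c = 6.63e-34, 3.0e8      # J·s, m/s
d = 1.0e-15                 # m, range of the strong nuclear force

dt = d / c                                  # ≈ 3.3e-24 s
dE_MeV = h / (4 * math.pi * dt) / 1.6e-13
print(f"Δt ≈ {dt:.1e} s, ΔE ≈ {dE_MeV:.0f} MeV")   # ≈ 99 MeV, i.e. about 100 MeV/c²
```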
Yukawa’s proposal of particle exchange as the method of force transfer is intriguing. But how can we verify his proposal if we cannot observe the virtual pion directly? If sufficient energy is in a nucleus, it would be possible to free the pion—that is, to create its mass from external energy input. This can be accomplished by collisions of energetic particles with nuclei, but energies greater than 100 MeV are required to conserve both energy and momentum. In 1947, pions were observed in cosmic-ray experiments, which exploit the small natural flux of high-energy protons from space that can collide with nuclei. Soon afterward, accelerators of sufficient energy were creating pions in the laboratory under controlled conditions. Three pions were discovered, two with charge and one neutral, and given the symbols \(π^+\), \(π^−\), and \(π^0\), respectively. The masses of \(π^+\) and \(π^−\) are identical at \(139.6\, MeV/c^2\), whereas \(π^0\) has a mass of \(135.0\, MeV/c^2\). These masses are close to the predicted value of \(100\,MeV/c^2\) and, since they are intermediate between electron and nucleon masses, the particles are given the name meson (now an entire class of particles, as we shall see in Particles, Patterns, and Conservation Laws).
The pions, or \(π\)-mesons as they are also called, have masses close to those predicted and feel the strong nuclear force. Another previously unknown particle, now called the muon, was discovered during cosmic-ray experiments in 1936 (one of its discoverers, Seth Neddermeyer, also originated the idea of implosion for plutonium bombs). Since the mass of a muon is around \(106\, MeV/c^2\), at first it was thought to be the particle predicted by Yukawa. But it was soon realized that muons do not feel the strong nuclear force and could not be Yukawa’s particle. Their role was unknown, causing the respected physicist I. I. Rabi to comment, “Who ordered that?” This remains a valid question today. We have discovered hundreds of subatomic particles; the roles of some are only partially understood. But there are various patterns and relations to forces that have led to profound insights into nature’s secrets.
Summary
- Yukawa’s idea of virtual particle exchange as the carrier of forces is crucial, with virtual particles being formed in temporary violation of the conservation of mass-energy as allowed by the Heisenberg uncertainty principle.
Glossary
- pion
- particle exchanged between nucleons, transmitting the force between them
- virtual particles
- particles which cannot be directly observed but their effects can be directly observed
- meson
- particle whose mass is intermediate between the electron and nucleon masses
|
33.2: The Four Basic Forces
Learning Objectives
By the end of this section, you will be able to:
- State the four basic forces.
- Explain the Feynman diagram for the exchange of a virtual photon between two positive charges.
- Define QED.
- Describe the Feynman diagram for the exchange of a virtual pion between a proton and a neutron.
As first discussed in Problem-Solving Strategies and mentioned at various points in the text since then, there are only four distinct basic forces in all of nature. This is a remarkably small number considering the myriad phenomena they explain. Particle physics is intimately tied to these four forces. Certain fundamental particles, called carrier particles, carry these forces, and all particles can be classified according to which of the four forces they feel. The table given below summarizes important characteristics of the four basic forces.
| Force | Approximate relative strength | Range | +/− | Carrier particle |
|---|---|---|---|---|
| Gravity | \(10^{−38}\) | ∞ | + only | Graviton (conjectured) |
| Electromagnetic | \(10^{−2}\) | ∞ | +/− | Photon (observed) |
| Weak force | \(10^{−13}\) | < \(10^{−18}\) m | +/− | \(W^+, W^−, Z^0\) (observed) |
| Strong force | 1 | < \(10^{−15}\) m | +/− | Gluons (conjectured) |
Although these four forces are distinct and differ greatly from one another under all but the most extreme circumstances, we can see similarities among them. (In GUTs: the Unification of Forces , we will discuss how the four forces may be different manifestations of a single unified force.) Perhaps the most important characteristic among the forces is that they are all transmitted by the exchange of a carrier particle, exactly like what Yukawa had in mind for the strong nuclear force. Each carrier particle is a virtual particle—it cannot be directly observed while transmitting the force. Figure \(\PageIndex{1}\) shows the exchange of a virtual photon between two positive charges. The photon cannot be directly observed in its passage, because this would disrupt it and alter the force.
Figure \(\PageIndex{1}\) shows a way of graphing the exchange of a virtual photon between two positive charges. This graph of time versus position is called a Feynman diagram , after the brilliant American physicist Richard Feynman (1918–1988) who developed it.
Figure \(\PageIndex{2}\) is a Feynman diagram for the exchange of a virtual pion between a proton and a neutron, representing the same interaction discussed in The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited. Feynman diagrams are not only a useful tool for visualizing interactions at the quantum mechanical level, but they are also used to calculate details of interactions, such as their strengths and probability of occurring. Feynman was one of the theorists who developed the field of quantum electrodynamics (QED), which is the quantum mechanics of electromagnetism. QED has been spectacularly successful in describing electromagnetic interactions on the submicroscopic scale. Feynman was an inspiring teacher, had a colorful personality, and made a profound impact on generations of physicists. He shared the 1965 Nobel Prize with Julian Schwinger and S. I. Tomonaga for work in QED with its deep implications for particle physics.
Why is it that particles called gluons are listed as the carrier particles for the strong nuclear force when, in The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited , we saw that pions apparently carry that force? The answer is that pions are exchanged but they have a substructure and, as we explore it, we find that the strong force is actually related to the indirectly observed but more fundamental gluons . In fact, all the carrier particles are thought to be fundamental in the sense that they have no substructure. Another similarity among carrier particles is that they are all bosons (first mentioned in Patterns in Spectra Reveal More Quantization ), having integral intrinsic spins.
There is a relationship between the mass of the carrier particle and the range of the force. The photon is massless, so a virtual photon can be created with arbitrarily little energy; the Heisenberg uncertainty principle then allows it to exist long enough to travel an unlimited distance. Thus, the range of the electromagnetic force is infinite. This is also true for gravity. It is infinite in range because its carrier particle, the graviton, has zero rest mass. (Gravity is the most difficult of the four forces to understand on a quantum scale because it affects the space and time in which the others act. But gravity is so weak that its effects are extremely difficult to observe quantum mechanically. We shall explore it further in General Relativity and Quantum Gravity.) The \(W^+\), \(W^−\), and \(Z^0\) particles that carry the weak nuclear force have mass, accounting for the very short range of this force. In fact, the \(W^+\), \(W^−\), and \(Z^0\) are about 1000 times more massive than pions, consistent with the fact that the range of the weak nuclear force is about 1/1000 that of the strong nuclear force. Gluons are actually massless, but since they act inside massive carrier particles like pions, the strong nuclear force is also short ranged.
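The pion estimate above can be inverted to connect a carrier particle’s mass to the range of its force. Here is a minimal sketch under the same assumed conventions (\(Δt \approx \hbar/(2mc^2)\) and \(d \approx cΔt\)); the results are order-of-magnitude only.

```python
# Rough range of a force from the rest energy of its carrier particle,
# d ~ c * Delta_t with Delta_t ~ hbar / (2 m c^2). Same assumed factor of 2 as
# in the pion estimate; treat the outputs as order-of-magnitude values.

hbar_c_MeV_fm = 197.3  # hbar * c in MeV * fm

def estimated_range_m(rest_energy_MeV):
    """Rough range (in meters) of a force carried by a particle of given rest energy."""
    range_fm = hbar_c_MeV_fm / (2.0 * rest_energy_MeV)
    return range_fm * 1e-15  # 1 fm = 1e-15 m

print(f"Pion (139.6 MeV):    {estimated_range_m(139.6):.1e} m")    # ~1e-15 m, strong force
print(f"W boson (80.39 GeV): {estimated_range_m(80.39e3):.1e} m")  # ~1e-18 m, weak force
```

The two outputs reproduce the ranges quoted in the table for the strong and weak nuclear forces.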
The relative strengths of the forces given in the Table are those for the most common situations. When particles are brought very close together, the relative strengths change, and they may become identical at extremely close range. As we shall see in GUTs: the Unification of Forces , carrier particles may be altered by the energy required to bring particles very close together—in such a manner that they become identical.
Summary
- The four basic forces and their carrier particles are summarized in the Table .
- Feynman diagrams are graphs of time versus position and are highly useful pictorial representations of particle processes.
- The theory of electromagnetism on the particle scale is called quantum electrodynamics (QED).
Glossary
- Feynman diagram
- a graph of time versus position that describes the exchange of virtual particles between subatomic particles
- gluons
- exchange particles, analogous to the exchange of photons that gives rise to the electromagnetic force between two charged particles
- quantum electrodynamics
- the theory of electromagnetism on the particle scale
33.3: Accelerators Create Matter from Energy
Learning Objectives
By the end of this section, you will be able to:
- State the principle of a cyclotron.
- Explain the principle of a synchrotron.
- Describe the voltage needed by an accelerator between accelerating tubes.
- State Fermilab’s accelerator principle.
Before looking at all the particles we now know about, let us examine some of the machines that created them. The fundamental process in creating previously unknown particles is to accelerate known particles, such as protons or electrons, and direct a beam of them toward a target. Collisions with target nuclei provide a wealth of information, such as information obtained by Rutherford using energetic helium nuclei from natural \(α\) radiation. But if the energy of the incoming particles is large enough, new matter is sometimes created in the collision. The more energy input or \(ΔE\), the more matter \(m\) can be created, since \(m=ΔE/c^2\). Limitations are placed on what can occur by known conservation laws, such as conservation of mass-energy, momentum, and charge. Even more interesting are the unknown limitations provided by nature. Some expected reactions do occur, while others do not, and still other unexpected reactions may appear. New laws are revealed, and the vast majority of what we know about particle physics has come from accelerator laboratories. It is the particle physicist’s favorite indoor sport, which is partly inspired by theory.
Early Accelerators
An early accelerator is a relatively simple, large-scale version of the electron gun. The Van de Graaff (named after the American physicist Robert J. Van de Graaff), which you have likely seen in physics demonstrations, is a small version of the ones used for nuclear research since their invention for that purpose in 1932 (Figure \(\PageIndex{1}\)). These machines are electrostatic, creating potentials as great as 50 MV, and are used to accelerate a variety of nuclei for a range of experiments. Energies produced by Van de Graaffs are insufficient to produce new particles, but they have been instrumental in exploring several aspects of the nucleus.
Another, equally famous, early accelerator is the cyclotron , invented in 1930 by the American physicist, E. O. Lawrence (1901–1958). For a visual representation with more detail, see Figure \(\PageIndex{2}\). Cyclotrons use fixed-frequency alternating electric fields to accelerate particles. The particles spiral outward in a magnetic field, making increasingly larger radius orbits during acceleration. This clever arrangement allows the successive addition of electric potential energy and so greater particle energies are possible than in a Van de Graaff. Lawrence was involved in many early discoveries and in the promotion of physics programs in American universities. He was awarded the 1939 Nobel Prize in Physics for the cyclotron and nuclear activations, and he has an element and two major laboratories named for him.
A synchrotron is a version of a cyclotron in which the frequency of the alternating voltage and the magnetic field strength are increased as the beam particles are accelerated. Particles are made to travel the same distance in a shorter time with each cycle in fixed-radius orbits. A ring of magnets and accelerating tubes, as shown in Figure \(\PageIndex{2}\), are the major components of synchrotrons. Accelerating voltages are synchronized (i.e., occur at the same time) with the particles to accelerate them, hence the name. Magnetic field strength is increased to keep the orbital radius constant as energy increases. High-energy particles require strong magnetic fields to steer them, so superconducting magnets are commonly employed. Still limited by achievable magnetic field strengths, synchrotrons need to be very large at very high energies, since the radius of a high-energy particle’s orbit is very large. Radiation caused by a magnetic field accelerating a charged particle perpendicular to its velocity is called synchrotron radiation in honor of its importance in these machines. Synchrotron radiation has a characteristic spectrum and polarization, and can be recognized in cosmic rays, implying large-scale magnetic fields acting on energetic and charged particles in deep space. Synchrotron radiation produced by accelerators is sometimes used as a source of intense energetic electromagnetic radiation for research purposes.
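The statement that the magnetic field must grow with energy at fixed radius can be made concrete with the bending relation \(r = p/(qB)\). The sketch below assumes a highly relativistic proton (so \(p \approx E/c\)) and an illustrative ring radius of 1 km; both numbers are assumptions, not the parameters of any particular machine.

```python
# Magnetic field needed to hold a highly relativistic proton on a circle of
# radius r: B = p / (q r), with p ~ E / c. The 1 km radius is an assumed,
# illustrative value.

q_e = 1.602e-19   # proton charge (C)
c = 2.998e8       # speed of light (m/s)
radius_m = 1.0e3  # assumed ring radius (m)

def field_needed_T(energy_eV, radius_m):
    """Magnetic field (in teslas) to bend a highly relativistic proton of total energy E."""
    p = energy_eV * q_e / c       # momentum in kg*m/s, using p ~ E/c
    return p / (q_e * radius_m)   # B = p / (q r)

for E_eV in (100e9, 1e12):  # 100 GeV and 1 TeV
    print(f"E = {E_eV/1e9:.0f} GeV -> B ~ {field_needed_T(E_eV, radius_m):.2f} T")
```

Doubling the energy at fixed radius doubles the required field, which is why very high energies call for superconducting magnets or very large rings.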
Modern Behemoths and Colliding Beams
Physicists have built ever-larger machines, first to reduce the wavelength of the probe and obtain greater detail, then to put greater energy into collisions to create new particles. Each major energy increase brought new information, sometimes producing spectacular progress, motivating the next step. One major innovation was driven by the desire to create more massive particles. Since momentum needs to be conserved in a collision, the particles created by a beam hitting a stationary target should recoil. This means that part of the energy input goes into recoil kinetic energy, significantly limiting the fraction of the beam energy that can be converted into new particles. One solution to this problem is to have head-on collisions between particles moving in opposite directions. Colliding beams are made to meet head-on at points where massive detectors are located. Since the total incoming momentum is zero, it is possible to create particles with momenta and kinetic energies near zero. Particles with masses equivalent to twice the beam energy can thus be created. Another innovation is to create the antimatter counterpart of the beam particle, which thus has the opposite charge and circulates in the opposite direction in the same beam pipe. For a schematic representation, see Figure \(\PageIndex{3}\).
Detectors capable of finding the new particles in the spray of material that emerges from colliding beams are as impressive as the accelerators. While the Fermilab Tevatron had proton and antiproton beam energies of about 1 TeV, so that it could create particles up to 2 \(TeV/c^2\), the Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) has achieved beam energies of 3.5 TeV, so that it has a 7-TeV collision energy; CERN hopes to double the beam energy in 2014. The now-canceled Superconducting Super Collider was being constructed in Texas with a design energy of 20 TeV to give a 40-TeV collision energy. It was to be an oval 30 km in diameter. Its cost as well as the politics of international research funding led to its demise.
Figure \(\PageIndex{4}\): This schematic shows the two rings of Fermilab’s accelerator and the scheme for colliding protons and antiprotons (not to scale).
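The advantage of colliding beams described above can be made quantitative with the relativistic invariant mass, which is not derived in this chapter but gives the energy available to create new particles. The sketch below compares a 1-TeV proton beam (roughly the Tevatron scale mentioned above) striking a stationary proton with two such beams colliding head-on.

```python
# Energy available for particle creation: fixed target versus colliding beams.
# Uses the relativistic invariant mass for a proton beam on a proton target
# (a standard result, not derived in this chapter). Energies in GeV.

from math import sqrt

m_p = 0.938  # proton rest energy in GeV

def available_energy_fixed_target(E_beam):
    """Center-of-mass energy for a beam of total energy E_beam hitting a proton at rest."""
    return sqrt(2.0 * E_beam * m_p + 2.0 * m_p**2)

def available_energy_collider(E_beam):
    """Center-of-mass energy for two equal beams colliding head-on."""
    return 2.0 * E_beam

E = 1000.0  # a 1-TeV beam
print(f"Fixed target:    ~{available_energy_fixed_target(E):.0f} GeV available")  # ~43 GeV
print(f"Colliding beams: {available_energy_collider(E):.0f} GeV available")       # 2000 GeV
```

Almost all of the fixed-target beam energy goes into recoil kinetic energy, which is exactly the limitation colliding beams remove.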
In addition to the large synchrotrons that produce colliding beams of protons and antiprotons, there are other large electron-positron accelerators. The oldest of these was a straight-line or linear accelerator, called the Stanford Linear Accelerator (SLAC), which accelerated particles up to 50 GeV as seen in Figure \(\PageIndex{5}\). Positrons created by the accelerator were brought to the same energy and collided with electrons in specially designed detectors. Linear accelerators use accelerating tubes similar to those in synchrotrons, but aligned in a straight line. This helps eliminate synchrotron radiation losses, which are particularly severe for electrons made to follow curved paths. CERN had an electron-positron collider appropriately called the Large Electron-Positron Collider (LEP), which accelerated particles to 100 GeV and created a collision energy of 200 GeV. It was 8.5 km in diameter, while the SLAC machine was 3.2 km long.
Figure \(\PageIndex{5}\): The Stanford Linear Accelerator was 3.2 km long and had the capability of colliding electron and positron beams. SLAC was also used to probe nucleons by scattering extremely short wavelength electrons from them. This produced the first convincing evidence of a quark structure inside nucleons in an experiment analogous to those performed by Rutherford long ago.
Example \(\PageIndex{1}\):Calculating the Voltage Needed by the Accelerator Between Accelerating Tubes
A linear accelerator designed to produce a beam of 800-MeV protons has 2000 accelerating tubes. What average voltage must be applied between tubes (i.e., across each accelerating gap) to achieve the desired energy?
Strategy
The energy given to the proton in each gap between tubes is \(PE_{elec}=qV\) where \(q\) is the proton’s charge and \(V\) is the potential difference (voltage) across the gap. Since \(q=q_e=1.6×10^{−19}C\) and \(1 eV=(1 V)(1.6×10^{−19}C)\), the proton gains 1 eV in energy for each volt across the gap that it passes through. The AC voltage applied to the tubes is timed so that it adds to the energy in each gap. The effective voltage is the sum of the gap voltages and equals 800 MV to give each proton an energy of 800 MeV.
Solution
There are 2000 gaps and the sum of the voltages across them is 800 MV; thus,
\(V_{gap}=\frac{800 MV}{2000}=400 kV\).
Discussion
A voltage of this magnitude is not difficult to achieve in a vacuum. Much larger gap voltages would be required for higher energy, such as those at the 50-GeV SLAC facility. Synchrotrons are aided by the circular path of the accelerated particles, which can orbit many times, effectively multiplying the number of accelerations by the number of orbits. This makes it possible to reach energies greater than 1 TeV.
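The same bookkeeping generalizes to other linear accelerators. A minimal sketch follows; the 100,000-gap figure used for the 50-GeV case is an assumed, illustrative number, not SLAC’s actual design.

```python
# Average accelerating voltage per gap for a singly charged particle:
# each gap adds q_e * V_gap of energy, so V_gap = (total energy) / (number of gaps).

def gap_voltage_MV(total_energy_MeV, n_gaps):
    """Average voltage per gap, in MV, for a singly charged particle."""
    return total_energy_MeV / n_gaps

print(f"{gap_voltage_MV(800, 2000) * 1000:.0f} kV per gap")        # 400 kV, as in the example
print(f"{gap_voltage_MV(50_000, 100_000) * 1000:.0f} kV per gap")  # 50 GeV with an assumed 100,000 gaps
```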
Summary
- A variety of particle accelerators have been used to explore the nature of subatomic particles and to test predictions of particle theories.
- Modern accelerators used in particle physics are either large synchrotrons or linear accelerators.
- The use of colliding beams makes much greater energy available for the creation of particles, and collisions between matter and antimatter allow a greater range of final products.
Glossary
- colliding beams
- head-on collisions between particles moving in opposite directions
- cyclotron
- accelerator that uses fixed-frequency alternating electric fields and fixed magnets to accelerate particles in a circular spiral path
- linear accelerator
- accelerator that accelerates particles in a straight line
- synchrotron
- a version of a cyclotron in which the frequency of the alternating voltage and the magnetic field strength are increased as the beam particles are accelerated
- synchrotron radiation
- radiation caused by a magnetic field accelerating a charged particle perpendicular to its velocity
- Van de Graaff
- early accelerator: simple, large-scale version of the electron gun
33.4: Particles, Patterns, and Conservation Laws
Learning Objectives
By the end of this section, you will be able to:
- Define matter and antimatter.
- Outline the differences between hadrons and leptons.
- State the differences between mesons and baryons.
In the early 1930s only a small number of subatomic particles were known to exist—the proton, neutron, electron, photon and, indirectly, the neutrino. Nature seemed relatively simple in some ways, but mysterious in others. Why, for example, should the particle that carries positive charge be almost 2000 times as massive as the one carrying negative charge? Why does a neutral particle like the neutron have a magnetic moment? Does this imply an internal structure with a distribution of moving charges? Why is it that the electron seems to have no size other than its wavelength, while the proton and neutron are about 1 fermi in size? So, while the number of known particles was small and they explained a great deal of atomic and nuclear phenomena, there were many unexplained phenomena and hints of further substructures.
Things soon became more complicated, both in theory and in the prediction and discovery of new particles. In 1928, the British physicist P.A.M. Dirac (Figure \(\PageIndex{1}\)) developed a highly successful relativistic quantum theory that laid the foundations of quantum electrodynamics (QED). His theory, for example, explained electron spin and magnetic moment in a natural way. But Dirac’s theory also predicted negative energy states for free electrons. By 1931, Dirac, along with Oppenheimer, realized this was a prediction of positively charged electrons (or positrons). In 1932, American physicist Carl Anderson discovered the positron in cosmic ray studies. The positron, or \(e^+\), is the same particle as emitted in \(β^+\) decay and was the first antimatter that was discovered. In 1935, Yukawa predicted pions as the carriers of the strong nuclear force, and they were eventually discovered. Muons were discovered in cosmic ray experiments in 1937, and they seemed to be heavy, unstable versions of electrons and positrons. After World War II, accelerators energetic enough to create these particles were built. Not only were predicted and known particles created, but many unexpected particles were observed. Initially called elementary particles, their numbers proliferated to dozens and then hundreds, and the term “particle zoo” became the physicist’s lament at the lack of simplicity. But patterns were observed in the particle zoo that led to simplifying ideas such as quarks, as we shall soon see.
Matter and Antimatter
The positron was only the first example of antimatter. Every particle in nature has an antimatter counterpart, although some particles, like the photon, are their own antiparticles. Antimatter has charge opposite to that of matter (for example, the positron is positive while the electron is negative) but is nearly identical otherwise, having the same mass, intrinsic spin, half-life, and so on. When a particle and its antimatter counterpart interact, they annihilate one another, usually totally converting their masses to pure energy in the form of photons as seen in Figure \(\PageIndex{2}\).
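A quick arithmetic illustration: an electron and a positron essentially at rest annihilate into photons whose total energy equals the combined rest energy \(2m_ec^2\).

```python
# Total photon energy from electron-positron annihilation at rest: E = 2 * m_e * c^2.

m_e_c2_MeV = 0.511  # electron rest energy in MeV

total_photon_energy_MeV = 2 * m_e_c2_MeV
print(f"e+ e- annihilation at rest releases {total_photon_energy_MeV:.3f} MeV")  # 1.022 MeV
# Momentum conservation requires at least two photons; here each carries 0.511 MeV.
```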
Neutral particles, such as neutrons, have neutral antimatter counterparts, which also annihilate when they interact. Certain neutral particles are their own antiparticle and live correspondingly short lives. For example, the neutral pion \(π^0\) is its own antiparticle and has a half-life about \(10^{−8}\) times that of \(π^+\) and \(π^−\), which are each other’s antiparticles. Without exception, nature is symmetric—all particles have antimatter counterparts. For example, antiprotons and antineutrons were first created in accelerator experiments in 1956 and the antiproton is negative. Antihydrogen atoms, consisting of an antiproton and antielectron, were observed in 1995 at CERN, too. It is possible to contain large-scale antimatter particles such as antiprotons by using electromagnetic traps that confine the particles within a magnetic field so that they don't annihilate with other particles. However, particles of the same charge repel each other, so the more particles that are contained in a trap, the more energy is needed to power the magnetic field that contains them. It is not currently possible to store a significant quantity of antiprotons. At any rate, we now see that negative charge is associated with both low-mass (electrons) and high-mass particles (antiprotons) and the apparent asymmetry is not there. But this knowledge does raise another question—why is there such a predominance of matter and so little antimatter? Possible explanations emerge later in this and the next chapter.
Hadrons and Leptons
Particles can also be revealingly grouped according to what forces they feel between them. All particles (even those that are massless) are affected by gravity, since gravity affects the space and time in which particles exist. All charged particles are affected by the electromagnetic force, as are neutral particles that have an internal distribution of charge (such as the neutron with its magnetic moment). Special names are given to particles that feel the strong and weak nuclear forces. Hadrons are particles that feel the strong nuclear force, whereas leptons are particles that do not. The proton, neutron, and the pions are examples of hadrons. The electron, positron, muons, and neutrinos are examples of leptons, the name meaning low mass. Leptons feel the weak nuclear force. In fact, all particles feel the weak nuclear force. This means that hadrons are distinguished by being able to feel both the strong and weak nuclear forces.
Table \(\PageIndex{1}\) lists the characteristics of some of the most important subatomic particles, including the directly observed carrier particles for the electromagnetic and weak nuclear forces, all leptons, and some hadrons. Several hints related to an underlying substructure emerge from an examination of these particle characteristics. Note that the carrier particles are called gauge bosons . First mentioned in Patterns in Spectra Reveal More Quantization , a boson is a particle with zero or an integer value of intrinsic spin (such as s=0, 1, 2, ...), whereas a fermion is a particle with a half-integer value of intrinsic spin (s=1/2,3/2,...). Fermions obey the Pauli exclusion principle whereas bosons do not. All the known and conjectured carrier particles are bosons.
| Category | Particle Name | Symbol | Antiparticle | Rest Mass | B | \(L_e\) | \(L_μ\) | \(L_τ\) | S | Lifetime (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| Gauge bosons | Photon | \(γ\) | Self | 0 | 0 | 0 | 0 | 0 | 0 | Stable |
| | W | \(W^+\) | \(W^−\) | \(80.39×10^3\) | 0 | 0 | 0 | 0 | 0 | \(1.6×10^{−25}\) |
| | Z | \(Z^0\) | Self | \(91.19×10^3\) | 0 | 0 | 0 | 0 | 0 | \(1.32×10^{−25}\) |
| Leptons | Electron | \(e^−\) | \(e^+\) | 0.511 | 0 | \(±1\) | 0 | 0 | 0 | Stable |
| | Neutrino (\(e\)) | \(ν_e\) | \(\bar{ν}_e\) | 0 (< 7.0 eV) | 0 | \(±1\) | 0 | 0 | 0 | Stable |
| | Muon | \(μ^−\) | \(μ^+\) | 105.7 | 0 | 0 | \(±1\) | 0 | 0 | \(2.20×10^{−6}\) |
| | Neutrino (\(μ\)) | \(ν_μ\) | \(\bar{ν}_μ\) | 0 (< 0.27) | 0 | 0 | \(±1\) | 0 | 0 | Stable |
| | Tau | \(τ^−\) | \(τ^+\) | 1777 | 0 | 0 | 0 | \(±1\) | 0 | \(2.91×10^{−13}\) |
| | Neutrino (\(τ\)) | \(ν_τ\) | \(\bar{ν}_τ\) | 0 (< 31) | 0 | 0 | 0 | \(±1\) | 0 | Stable |
| Hadrons (selected) | | | | | | | | | | |
| Mesons | Pion | \(π^+\) | \(π^−\) | 139.6 | 0 | 0 | 0 | 0 | 0 | \(2.60×10^{−8}\) |
| | | \(π^0\) | Self | 135.0 | 0 | 0 | 0 | 0 | 0 | \(8.4×10^{−17}\) |
| | Kaon | \(K^+\) | \(K^−\) | 493.7 | 0 | 0 | 0 | 0 | \(±1\) | \(1.24×10^{−8}\) |
| | | \(K^0\) | \(\bar{K}^0\) | 497.6 | 0 | 0 | 0 | 0 | \(±1\) | \(0.90×10^{−10}\) |
| | Eta | \(η^0\) | Self | 547.9 | 0 | 0 | 0 | 0 | 0 | \(2.53×10^{−19}\) |
| | (many other mesons known) | | | | | | | | | |
| Baryons | Proton | \(p\) | \(\bar{p}\) | 938.3 | \(±1\) | 0 | 0 | 0 | 0 | Stable |
| | Neutron | \(n\) | \(\bar{n}\) | 939.6 | \(±1\) | 0 | 0 | 0 | 0 | 882 |
| | Lambda | \(Λ^0\) | \(\bar{Λ}^0\) | 1115.7 | \(±1\) | 0 | 0 | 0 | \(∓1\) | \(2.63×10^{−10}\) |
| | Sigma | \(Σ^+\) | \(\bar{Σ}^−\) | 1189.4 | \(±1\) | 0 | 0 | 0 | \(∓1\) | \(0.80×10^{−10}\) |
| | | \(Σ^0\) | \(\bar{Σ}^0\) | 1192.6 | \(±1\) | 0 | 0 | 0 | \(∓1\) | \(7.4×10^{−20}\) |
| | | \(Σ^−\) | \(\bar{Σ}^+\) | 1197.4 | \(±1\) | 0 | 0 | 0 | \(∓1\) | \(1.48×10^{−10}\) |
| | Xi | \(Ξ^0\) | \(\bar{Ξ}^0\) | 1314.9 | \(±1\) | 0 | 0 | 0 | \(∓2\) | \(2.90×10^{−10}\) |
| | | \(Ξ^−\) | \(\bar{Ξ}^+\) | 1321.7 | \(±1\) | 0 | 0 | 0 | \(∓2\) | \(1.64×10^{−10}\) |
| | Omega | \(Ω^−\) | \(\bar{Ω}^+\) | 1672.5 | \(±1\) | 0 | 0 | 0 | \(∓3\) | \(0.82×10^{−10}\) |
| | (many other baryons known) | | | | | | | | | |

Rest mass is in units of \(MeV/c^2\). The upper of the ± or ∓ signs applies to the particle and the lower to its antiparticle.
All known leptons are listed in the table given above. There are only six leptons (and their antiparticles), and they seem to be fundamental in that they have no apparent underlying structure. Leptons have no discernible size other than their wavelength, so that we know they are pointlike down to about \(10^{−18}\,m\). The leptons fall into three families, implying three conservation laws for three quantum numbers. One of these was known from \(β\) decay, where the existence of the electron’s neutrino implied that a new quantum number, called the electron family number \(L_e\), is conserved. Thus, in \(β^−\) decay, an electron’s antineutrino \(\bar{v_e}\) must be created with \(L_e=−1\) when an electron with \(L_e=+1\) is created, so that the total remains 0 as it was before decay.
Once the muon was discovered in cosmic rays, its decay mode was found to be
\[μ^−→e^−+\bar{v_e}+v_μ\]
which implied another “family” and associated conservation principle. The particle \(ν_μ\) is a muon’s neutrino, and it is created to conserve muon family number \(L_μ\). So muons are leptons with a family of their own, and conservation of total \(L_μ\) also seems to be obeyed in many experiments.
More recently, a third lepton family was discovered when τ particles were created and observed to decay in a manner similar to muons. One principal decay mode is
\[τ^−→μ^−+\bar{v_μ}+v_τ\]
Conservation of total \(L_τ\) seems to be another law obeyed in many experiments. In fact, particle experiments have found that lepton family number is not universally conserved, due to neutrino “oscillations,” or transformations of neutrinos from one family type to another.
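Lepton family number conservation is just bookkeeping, and it is easy to automate. The sketch below assigns \((L_e, L_μ, L_τ)\) to a handful of particles (a hand-built list for illustration, not a complete table) and checks the two decays written above.

```python
# Check lepton family numbers (L_e, L_mu, L_tau) before and after a decay.
# The dictionary is a small hand-built illustration; antiparticles carry opposite signs.

L = {
    "e-": (1, 0, 0),    "anti-nu_e": (-1, 0, 0),
    "mu-": (0, 1, 0),   "nu_mu": (0, 1, 0),   "anti-nu_mu": (0, -1, 0),
    "tau-": (0, 0, 1),  "nu_tau": (0, 0, 1),
}

def totals(particles):
    """Sum (L_e, L_mu, L_tau) over a list of particle names."""
    return tuple(sum(L[p][i] for p in particles) for i in range(3))

# mu- -> e- + anti-nu_e + nu_mu : totals match, so family numbers are conserved
print(totals(["mu-"]), totals(["e-", "anti-nu_e", "nu_mu"]))      # (0, 1, 0) (0, 1, 0)
# tau- -> mu- + anti-nu_mu + nu_tau
print(totals(["tau-"]), totals(["mu-", "anti-nu_mu", "nu_tau"]))  # (0, 0, 1) (0, 0, 1)
```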
Mesons and Baryons
Now, note that the hadrons in the table given above are divided into two subgroups, called mesons (originally for medium mass) and baryons (the name originally meaning large mass). The division between mesons and baryons is actually based on their observed decay modes and is not strictly associated with their masses. Mesons are hadrons that can decay to leptons and leave no hadrons, which implies that mesons are not conserved in number. Baryons are hadrons that always decay to another baryon. A new physical quantity called baryon number \(B\) seems to always be conserved in nature and is listed for the various particles in the table given above. Mesons and leptons have \(B=0\) so that they can decay to other particles with \(B=0\). But baryons have \(B=+1\) if they are matter, and \(B=−1\) if they are antimatter. The conservation of total baryon number is a more general rule than first noted in nuclear physics, where it was observed that the total number of nucleons was always conserved in nuclear reactions and decays. That rule in nuclear physics is just one consequence of the conservation of the total baryon number.
Forces, Reactions, and Reaction Rates
The forces that act between particles regulate how they interact with other particles. For example, pions feel the strong force and do not penetrate as far in matter as do muons, which do not feel the strong force. (This was the way those who discovered the muon knew it could not be the particle that carries the strong force—its penetration or range was too great for it to be feeling the strong force.) Similarly, reactions that create other particles, like cosmic rays interacting with nuclei in the atmosphere, have greater probability if they are caused by the strong force than if they are caused by the weak force. Such knowledge has been useful to physicists while analyzing the particles produced by various accelerators.
The forces experienced by particles also govern how unstable particles decay. For example, the stronger the force responsible, the faster the decay and the shorter the lifetime. An example of a nuclear decay via the strong force is \(^8Be→α+α\) with a lifetime of about \(10^{−16}s\). The neutron is a good example of decay via the weak force. The process \(n→p+e^−+\bar{v_e}\) has a longer lifetime of 882 s. The weak force causes this decay, as it does all \(β\) decay. An important clue that the weak force is responsible for \(β\) decay is the creation of leptons, such as \(e^−\) and \(\bar{v_e}\). None would be created if the strong force was responsible, just as no leptons are created in the decay of \(^8Be\). The systematics of particle lifetimes is a little simpler than nuclear lifetimes when hundreds of particles are examined (not just the ones in the table given above). Particles that decay via the weak force have lifetimes mostly in the range of \(10^{−16}\) to \(10^{−12}\) s, whereas those that decay via the strong force have lifetimes mostly in the range of \(10^{−16}\) to \(10^{−23}\) s. Turning this around, if we measure the lifetime of a particle, we can tell if it decays via the weak or strong force.
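The lifetime ranges quoted above suggest a rough rule of thumb, sketched below. The boundary is approximate (the two ranges overlap near \(10^{−16}\) s), and a real classification also looks at the decay products and at whether quantities such as strangeness change.

```python
# Rough guess of the force responsible for a decay, based only on the lifetime
# ranges quoted in the text. Indicative only; the ranges overlap near 1e-16 s.

def likely_decay_force(lifetime_s):
    """Guess which force drives a decay from its lifetime in seconds."""
    if lifetime_s < 1e-18:
        return "probably strong"
    return "probably weak"

examples = {"a short-lived resonance": 1e-22, "Lambda^0": 2.63e-10, "neutron": 882.0}
for name, tau in examples.items():
    print(f"{name} (lifetime {tau:.2e} s): {likely_decay_force(tau)}")
```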
Yet another quantum number emerges from decay lifetimes and patterns. Note that the particles \(Λ,Σ,Ξ,\) and \(Ω\) decay with lifetimes on the order of \(10^{−10}\) s (the exception is \(Σ^0\), whose short lifetime is explained by its particular quark substructure.), implying that their decay is caused by the weak force alone, although they are hadrons and feel the strong force. The decay modes of these particles also show patterns—in particular, certain decays that should be possible within all the known conservation laws do not occur. Whenever something is possible in physics, it will happen. If something does not happen, it is forbidden by a rule. All this seemed strange to those studying these particles when they were first discovered, so they named a new quantum number strangeness , given the symbol \(S\) in the table given above. The values of strangeness assigned to various particles are based on the decay systematics. It is found that strangeness is conserved by the strong force , which governs the production of most of these particles in accelerator experiments. However, strangeness is not conserved by the weak force . This conclusion is reached from the fact that particles that have long lifetimes decay via the weak force and do not conserve strangeness. All of this also has implications for the carrier particles, since they transmit forces and are thus involved in these decays.
Example \(\PageIndex{1}\): Calculating Quantum Numbers in Two Decays
- The most common decay mode of the \(Ξ^−\) particle is \(Ξ^−→Λ^0+π^−\). Using the quantum numbers in the table given above, show that strangeness changes by 1, baryon number and charge are conserved, and lepton family numbers are unaffected.
- Is the decay \(K^+→μ^++ν_μ\) allowed, given the quantum numbers in the table given above?
Strategy
In part (a), the conservation laws can be examined by adding the quantum numbers of the decay products and comparing them with the parent particle. In part (b), the same procedure can reveal if a conservation law is broken or not.
Solution for (a)
Before the decay, the \(Ξ^−\) has strangeness \(S=−2\). After the decay, the total strangeness is \(–1\) for the \(Λ^0\), plus 0 for the \(π^−\). Thus, total strangeness has gone from –2 to –1 or a change of +1. Baryon number for the \(Ξ^−\) is \(B=+1\) before the decay, and after the decay the \(Λ^0\) has \(B=+1\) and the \(π^−\) has \(B=0\) so that the total baryon number remains \(+1\). Charge is \(–1\) before the decay, and the total charge after is also \(0−1=−1\). Lepton numbers for all the particles are zero, and so lepton numbers are conserved.
Discussion for (a)
The \(Ξ^−\) decay is caused by the weak interaction, since strangeness changes, and it is consistent with the relatively long \(1.64×10^{−10}-s\) lifetime of the \(Ξ^−\).
Solution for (b)
The decay \(K^+→μ^++ν_μ\) is allowed if charge, baryon number, mass-energy, and lepton numbers are conserved. Strangeness can change due to the weak interaction. Charge is conserved, since the \(K^+\) and the \(μ^+\) each have charge \(+1\) and the \(ν_μ\) is neutral. Baryon number is conserved, since all particles have \(B=0\). Mass-energy is conserved in the sense that the \(K^+\) has a greater mass than the products, so that the decay can be spontaneous. Lepton family numbers are conserved at 0 for the electron and tau family for all particles. The muon family number is \(L_μ=0\) before and \(L_μ=−1+1=0\) after. Strangeness changes from \(+1\) before to 0 + 0 after, for an allowed change of 1. The decay is allowed by all these measures.
Discussion for (b)
This decay is not only allowed by our reckoning, it is, in fact, the primary decay mode of the \(K^+\) meson and is caused by the weak force, consistent with the long \(1.24×10^{−8}-s\) lifetime.
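The bookkeeping in this example can also be scripted. The sketch below stores charge (in units of \(q_e\), read off the particle symbols), baryon number, and strangeness (from the table above) for just the particles involved, and repeats the two checks.

```python
# Sum charge Q (in units of q_e), baryon number B, and strangeness S over the decay
# products and compare with the parent. Values are a small hand-built illustration.

Q_B_S = {
    "Xi-": (-1, 1, -2), "Lambda0": (0, 1, -1), "pi-": (-1, 0, 0),
    "K+": (1, 0, 1), "mu+": (1, 0, 0), "nu_mu": (0, 0, 0),
}

def totals(particles):
    """Sum (Q, B, S) over a list of particle names."""
    return tuple(sum(Q_B_S[p][i] for p in particles) for i in range(3))

for parent, products in [("Xi-", ["Lambda0", "pi-"]), ("K+", ["mu+", "nu_mu"])]:
    before, after = totals([parent]), totals(products)
    print(f"{parent} -> {' + '.join(products)}: (Q, B, S) {before} -> {after}, "
          f"strangeness change {after[2] - before[2]}")
```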
There are hundreds of particles, all hadrons, not listed in Table \(\PageIndex{1}\), most of which have shorter lifetimes. The systematics of those particle lifetimes, their production probabilities, and decay products are completely consistent with the conservation laws noted for lepton families, baryon number, and strangeness, but they also imply other quantum numbers and conservation laws. There are a finite, and in fact relatively small, number of these conserved quantities, however, implying a finite set of substructures. Additionally, some of these short-lived particles resemble the excited states of other particles, implying an internal structure. All of this jigsaw puzzle can be tied together and explained relatively simply by the existence of fundamental substructures. Leptons seem to be fundamental structures. Hadrons seem to have a substructure called quarks. Quarks: Is That All There Is? explores the basics of the underlying quark building blocks.
Summary
- All particles of matter have an antimatter counterpart that has the opposite charge and certain other quantum numbers as seen in Table . These matter-antimatter pairs are otherwise very similar but will annihilate when brought together. Known particles can be divided into three major groups—leptons, hadrons, and carrier particles (gauge bosons).
- Leptons do not feel the strong nuclear force and are further divided into three groups—electron family designated by electron family number \(L_e\); muon family designated by muon family number \(L_μ\); and tau family designated by tau family number \(L_τ\). The family numbers are not universally conserved due to neutrino oscillations.
- Hadrons are particles that feel the strong nuclear force and are divided into baryons, with the baryon family number \(B\) being conserved, and mesons.
Glossary
- boson
- particle with zero or an integer value of intrinsic spin
- baryons
- hadrons that always decay to another baryon
- baryon number
- a conserved physical quantity that is zero for mesons and leptons and ±1 for baryons and antibaryons, respectively
- conservation of total baryon number
- a general rule based on the observation that the total number of nucleons was always conserved in nuclear reactions and decays
- conservation of total electron family number
- a general rule stating that the total electron family number stays the same through an interaction
- conservation of total muon family number
- a general rule stating that the total muon family number stays the same through an interaction
- electron family number
- the number ±1 that is assigned to all members of the electron family, or the number 0 that is assigned to all particles not in the electron family
- fermion
- particle with a half-integer value of intrinsic spin
- gauge boson
- particle that carries one of the four forces
- hadrons
- particles that feel the strong nuclear force
- leptons
- particles that do not feel the strong nuclear force
- meson
- hadrons that can decay to leptons and leave no hadrons
- muon family number
- the number ±1 that is assigned to all members of the muon family, or the number 0 that is assigned to all particles not in the muon family
- strangeness
- a physical quantity assigned to various particles based on decay systematics
- tau family number
- the number ±1 that is assigned to all members of the tau family, or the number 0 that is assigned to all particles not in the tau family
33.5: Quarks - Is That All There Is?
Learning Objectives
By the end of this section, you will be able to:
- Define fundamental particle.
- Describe quark and antiquark.
- List the flavors of quark.
- Outline the quark composition of hadrons.
- Determine quantum numbers from quark composition.
Quarks have been mentioned at various points in this text as fundamental building blocks and members of the exclusive club of truly elementary particles. Note that an elementary or fundamental particle has no substructure (it is not made of other particles) and has no finite size other than its wavelength. This does not mean that fundamental particles are stable—some decay, while others do not. Keep in mind that all leptons seem to be fundamental, whereas no hadrons are fundamental. There is strong evidence that quarks are the fundamental building blocks of hadrons as seen in Figure \(\PageIndex{1}\). Quarks are the second group of fundamental particles (leptons are the first). The third and perhaps final group of fundamental particles is the carrier particles for the four basic forces. Leptons, quarks, and carrier particles may be all there is. In this module we will discuss the quark substructure of hadrons and its relationship to forces as well as indicate some remaining questions and problems.
Conception of Quarks
Quarks were first proposed independently by American physicists Murray Gell-Mann and George Zweig in 1963. Their quaint name was taken by Gell-Mann from a James Joyce novel—Gell-Mann was also largely responsible for the concept and name of strangeness. (Whimsical names are common in particle physics, reflecting the personalities of modern physicists.) Originally, three quark types—or flavors—were proposed to account for the then-known mesons and baryons. These quark flavors are named up (\(u\)), down (\(d\)), and strange (\(s\)). All quarks have half-integral spin and are thus fermions. All mesons have integral spin while all baryons have half-integral spin. Therefore, mesons should be made up of an even number of quarks while baryons need to be made up of an odd number of quarks. Figure \(\PageIndex{1}\) shows the quark substructure of the proton, neutron, and two pions. The most radical proposal by Gell-Mann and Zweig is the fractional charges of quarks, which are \(±(\frac{2}{3})q_e\) and \(∓(\frac{1}{3})q_e\), whereas all directly observed particles have charges that are integral multiples of \(q_e\). Note that the fractional charge of a quark does not violate the observation that \(q_e\) is the smallest unit of charge ever seen in isolation, because a free quark cannot exist. Table \(\PageIndex{1}\) lists characteristics of the six quark flavors that are now thought to exist. Discoveries made since 1963 have required extra quark flavors, which are divided into three families quite analogous to leptons.
| Name | Symbol | Antiparticle | Spin | Charge | B | S | c | b | t | Mass \((GeV/c^2)\) |
|---|---|---|---|---|---|---|---|---|---|---|
| Up | u | \(\bar{u}\) | 1/2 | \(±\frac{2}{3}q_e\) | \(±\frac{1}{3}\) | 0 | 0 | 0 | 0 | 0.005 |
| Down | d | \(\bar{d}\) | 1/2 | \(∓\frac{1}{3}q_e\) | \(±\frac{1}{3}\) | 0 | 0 | 0 | 0 | 0.008 |
| Strange | s | \(\bar{s}\) | 1/2 | \(∓\frac{1}{3}q_e\) | \(±\frac{1}{3}\) | \(∓1\) | 0 | 0 | 0 | 0.50 |
| Charmed | c | \(\bar{c}\) | 1/2 | \(±\frac{2}{3}q_e\) | \(±\frac{1}{3}\) | 0 | \(±1\) | 0 | 0 | 1.6 |
| Bottom | b | \(\bar{b}\) | 1/2 | \(∓\frac{1}{3}q_e\) | \(±\frac{1}{3}\) | 0 | 0 | \(∓1\) | 0 | 5 |
| Top | t | \(\bar{t}\) | 1/2 | \(±\frac{2}{3}q_e\) | \(±\frac{1}{3}\) | 0 | 0 | 0 | \(±1\) | 173 |
How Does it Work?
To understand how these quark substructures work, let us specifically examine the proton, neutron, and the two pions pictured in Figure \(\PageIndex{1}\) before moving on to more general considerations. First, the proton p is composed of the three quarks uud , so that its total charge is
\[+\left(\frac{2}{3}\right)q_e+\left(\frac{2}{3}\right)q_e−\left(\frac{1}{3}\right)q_e=q_e\]
as expected. With the spins aligned as in the figure, the proton’s intrinsic spin is
\[+\left(\dfrac{1}{2}\right)+\left(\dfrac{1}{2}\right)−\left(\dfrac{1}{2}\right)=\left(\dfrac{1}{2}\right)\]
also as expected. Note that the spins of the up quarks are aligned, so that they would be in the same state except that they have different colors (another quantum number to be elaborated upon a little later). Quarks obey the Pauli exclusion principle . Similar comments apply to the neutron n , which is composed of the three quarks \(udd\). Note also that the neutron is made of charges that add to zero but move internally, producing its well-known magnetic moment. When the neutron \(β^−\) decays, it does so by changing the flavor of one of its quarks. Writing neutron \(β^−\) decay in terms of quarks,
\[n→p+β^−+\bar{v_e}\]
becomes
\[udd→uud+β^−+\bar{v_e}\]
We see that this is equivalent to a down quark changing flavor to become an up quark:
\[d→u+β^−+\bar{v_e}\]

This is an example of the general fact that the weak nuclear force can change the flavor of a quark. By general, we mean that any quark can be converted to any other (change flavor) by the weak nuclear force. Not only can we get \(d→u\), we can also get \(u→d\). Furthermore, the strange quark can be changed by the weak force, too, making \(s→u\) and \(s→d\) possible. This explains the violation of the conservation of strangeness by the weak force noted in the preceding section. Another general fact is that the strong nuclear force cannot change the flavor of a quark.
Again, from Figure \(\PageIndex{1}\), we see that the \(π^+\) meson (one of the three pions) is composed of an up quark plus an antidown quark, or \(u\bar{d}\). Its total charge is thus \(+(\frac{2}{3})q_e+(\frac{1}{3})q_e=q_e\), as expected. Its baryon number is 0, since it has a quark and an antiquark with baryon numbers \(+(\frac{1}{3})−(\frac{1}{3})=0\). The \(π^+\) half-life is relatively long since, although it is composed of matter and antimatter, the quarks are different flavors and the weak force should cause the decay by changing the flavor of one into that of the other. The spins of the \(u\) and \(\bar{d}\) quarks are antiparallel, enabling the pion to have spin zero, as observed experimentally. Finally, the \(π^−\) meson shown in Figure \(\PageIndex{1}\) is the antiparticle of the \(π^+\) meson, and it is composed of the corresponding quark antiparticles. That is, the \(π^+\) meson is \(u\bar{d}\), while the \(π^−\) meson is \(\bar{u}d\). These two pions annihilate each other quickly, because their constituent quarks are each other’s antiparticles.
Quark Rules
Two general rules for combining quarks to form hadrons are:
- Baryons are composed of three quarks, and antibaryons are composed of three antiquarks.
- Mesons are combinations of a quark and an antiquark.
One of the clever things about this scheme is that only integral charges result, even though the quarks have fractional charge.
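That claim is easy to verify by brute force for the three original flavors. The sketch below forms every three-quark and quark-antiquark combination of \(u\), \(d\), and \(s\) and confirms that the total charge is always an integer multiple of \(q_e\).

```python
# Verify that the two combination rules give only integral charges (in units of q_e)
# for the original three flavors u, d, s.

from itertools import combinations_with_replacement, product
from fractions import Fraction

quark_charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

# Baryons: any three quarks
for combo in combinations_with_replacement(quark_charge, 3):
    q = sum(quark_charge[f] for f in combo)
    assert q.denominator == 1, combo   # integral charge
# Mesons: any quark plus any antiquark (antiquark charge is the negative)
for f, fbar in product(quark_charge, repeat=2):
    q = quark_charge[f] - quark_charge[fbar]
    assert q.denominator == 1, (f, fbar)

print("Every qqq and q-qbar combination of u, d, s has integral charge.")
```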
All Combinations are Possible
All quark combinations are possible and Table \(\PageIndex{2}\) lists some of these combinations. When Gell-Mann and Zweig proposed the original three quark flavors, particles corresponding to all combinations of those three had not been observed. The pattern was there, but it was incomplete—much as had been the case in the periodic table of the elements and the chart of nuclides. The \(Ω^−\) particle, in particular, had not been discovered but was predicted by quark theory. Its combination of three strange quarks, \(sss\), gives it a strangeness of \(−3\) and other predictable characteristics, such as spin, charge, approximate mass, and lifetime. If the quark picture is complete, the \(Ω^−\) should exist. It was first observed in 1964 at Brookhaven National Laboratory and had the predicted characteristics as seen in Figure \(\PageIndex{2}\). The discovery of the \(Ω^−\) was convincing indirect evidence for the existence of the three original quark flavors and boosted theoretical and experimental efforts to further explore particle physics in terms of quarks.
| Particle | Quark Composition | Particle | Quark Composition |
|---|---|---|---|
| Mesons | | Baryons | |
| \(π^+\) | \(u\bar{d}\) | \(p\) | \(uud\) |
| \(π^−\) | \(\bar{u}d\) | \(n\) | \(udd\) |
| \(π^0\), mixture | \(u\bar{u}\), \(d\bar{d}\) | \(Δ^0\) | \(udd\) |
| \(η^0\), mixture | \(u\bar{u}\), \(d\bar{d}\) | \(Δ^+\) | \(uud\) |
| \(K^0\) | \(d\bar{s}\) | \(Δ^−\) | \(ddd\) |
| \(\bar{K}^0\) | \(\bar{d}s\) | \(Δ^{++}\) | \(uuu\) |
| \(K^+\) | \(u\bar{s}\) | \(Λ^0\) | \(uds\) |
| \(K^−\) | \(\bar{u}s\) | \(Σ^0\) | \(uds\) |
| \(J/ψ\) | \(c\bar{c}\) | \(Σ^+\) | \(uus\) |
| \(ϒ\) | \(b\bar{b}\) | \(Σ^−\) | \(dds\) |
| | | \(Ξ^0\) | \(uss\) |
| | | \(Ξ^−\) | \(dss\) |
| | | \(Ω^−\) | \(sss\) |
PATTERNS AND PUZZLES: ATOMS, NUCLEI, AND QUARKS
Patterns in the properties of atoms allowed the periodic table to be developed. From it, previously unknown elements were predicted and observed. Similarly, patterns were observed in the properties of nuclei, leading to the chart of nuclides and successful predictions of previously unknown nuclides. Now with particle physics, patterns imply a quark substructure that, if taken literally, predicts previously unknown particles. These have now been observed in another triumph of underlying unity.
Example \(\PageIndex{1}\): Quantum Numbers From Quark Composition
Verify the quantum numbers given for the \(Ξ^0\) particle by adding the quantum numbers for its quark composition as given in Table \(\PageIndex{2}\).
Strategy
The composition of the \(Ξ^0\) is given as \(uss\) in Table \(\PageIndex{2}\). The quantum numbers for the constituent quarks are given in Table \(\PageIndex{1}\). We will not consider spin, because that is not given for the \(Ξ^0\). But we can check on charge and the other quantum numbers given for the quarks.
Solution
The total charge of \(uss\) is \(+(\frac{2}{3})q_e−(\frac{1}{3})q_e−(\frac{1}{3})q_e=0\), which is correct for the \(Ξ^0\). The baryon number is \(+(\frac{1}{3})+(\frac{1}{3})+(\frac{1}{3})=1\), also correct since the \(Ξ^0\) is a matter baryon and has \(B=1\), as listed in the table in Particles, Patterns, and Conservation Laws. Its strangeness is \(S=0−1−1=−2\), also as expected from that table. Its charm, bottomness, and topness are 0, as are its lepton family numbers (it is not a lepton).
Discussion
This procedure is similar to what the inventors of the quark hypothesis did when checking to see if their solution to the puzzle of particle patterns was correct. They also checked to see if all combinations were known, thereby predicting the previously unobserved \(Ω^−\) as the completion of a pattern.
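The same check is easy to script; the quark properties below are taken from the quark table earlier in this section, with charges in units of \(q_e\).

```python
# Quantum numbers of the Xi^0 from its quark composition uss:
# sum the charge, baryon number, and strangeness of the constituent quarks.

from fractions import Fraction

quark = {  # flavor: (charge in units of q_e, baryon number, strangeness)
    "u": (Fraction(2, 3), Fraction(1, 3), 0),
    "s": (Fraction(-1, 3), Fraction(1, 3), -1),
}

composition = ["u", "s", "s"]  # the Xi^0
charge = sum(quark[f][0] for f in composition)
baryon = sum(quark[f][1] for f in composition)
strangeness = sum(quark[f][2] for f in composition)

print(f"Xi^0 (uss): Q = {charge}, B = {baryon}, S = {strangeness}")  # Q = 0, B = 1, S = -2
```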
Now, Let Us Talk About Direct Evidence
At first, physicists expected that, with sufficient energy, we should be able to free quarks and observe them directly. This has not proved possible. There is still no direct observation of a fractional charge or any isolated quark. When large energies are put into collisions, other particles are created—but no quarks emerge. There is nearly direct evidence for quarks that is quite compelling. By 1967, experiments at SLAC scattering 20-GeV electrons from protons had produced results like Rutherford had obtained for the nucleus nearly 60 years earlier . The SLAC scattering experiments showed unambiguously that there were three pointlike (meaning they had sizes considerably smaller than the probe’s wavelength) charges inside the proton as seen in Figure \(\PageIndex{3}\). This evidence made all but the most skeptical admit that there was validity to the quark substructure of hadrons.
More recent and higher-energy experiments have produced jets of particles in collisions, highly suggestive of three quarks in a nucleon. Since the quarks are very tightly bound, energy put into separating them pulls them only so far apart before it starts being converted into other particles. More energy produces more particles, not a separation of quarks. Conservation of momentum requires that the particles come out in jets along the three paths in which the quarks were being pulled. Note that there are only three jets, and that other characteristics of the particles are consistent with the three-quark substructure.
Quarks Have Their Ups and Downs
The quark model actually lost some of its early popularity because the original model with three quarks had to be modified. The up and down quarks seemed to compose normal matter as seen in Table \(\PageIndex{1}\), while the single strange quark explained strangeness. Why didn’t it have a counterpart? A fourth quark flavor called charm (c) was proposed as the counterpart of the strange quark to make things symmetric—there would be two normal quarks (u and d) and two exotic quarks (s and c). Furthermore, at that time only four leptons were known, two normal and two exotic. It was attractive that there would be four quarks and four leptons. The problem was that no known particles contained a charmed quark. Suddenly, in November of 1974, two groups (one headed by C. C. Ting at Brookhaven National Laboratory and the other by Burton Richter at SLAC) independently and nearly simultaneously discovered a new meson with characteristics that made it clear that its substructure is \(c\bar{c}\). It was called \(J\) by one group and psi (\(ψ\)) by the other and now is known as the \(J/ψ\) meson. Since then, numerous particles have been discovered containing the charmed quark, consistent in every way with the quark model. The discovery of the \(J/ψ\) meson had such a rejuvenating effect on quark theory that it is now called the November Revolution. Ting and Richter shared the 1976 Nobel Prize.
History quickly repeated itself. In 1975, the tau (\(τ\)) was discovered, and a third family of leptons emerged, as seen in the lepton table in Particles, Patterns, and Conservation Laws. Theorists quickly proposed two more quark flavors called top (\(t\)) or truth and bottom (\(b\)) or beauty to keep the number of quarks the same as the number of leptons. And in 1977, the upsilon (\(Υ\)) meson was discovered and shown to be composed of a bottom and an antibottom quark or \(b\bar{b}\), quite analogous to the \(J/ψ\) being \(c\bar{c}\) as seen in Table \(\PageIndex{2}\). Being composed of a single quark flavor, these mesons are sometimes called bare charm and bare bottom and reveal the characteristics of their quarks most clearly. Other mesons containing bottom quarks have since been observed. In 1995, two groups at Fermilab confirmed the top quark’s existence, completing the picture of six quarks listed in Table \(\PageIndex{1}\). Each successive quark discovery—first \(c\), then \(b\), and finally \(t\)—has required higher energy because each has higher mass. Quark masses in Table \(\PageIndex{1}\) are only approximately known, because they are not directly observed. They must be inferred from the masses of the particles they combine to form.
What’s Color got to do with it?—A Whiter Shade of Pale
As mentioned and shown in Figure \(\PageIndex{1}\), quarks carry another quantum number, which we call color. Of course, it is not the color we sense with visible light, but its properties are analogous to those of three primary and three secondary colors. Specifically, a quark can have one of three color values we call red (\(R\)), green (\(G\)), and blue (\(B\)) in analogy to those primary visible colors. Antiquarks have three values we call antired or cyan (\(\bar{R}\)), antigreen or magenta (\(\bar{G}\)), and antiblue or yellow (\(\bar{B}\)) in analogy to those secondary visible colors. The reason for these names is that when certain visual colors are combined, the eye sees white. The analogy of the colors combining to white is used to explain why baryons are made of three quarks, why mesons are a quark and an antiquark, and why we cannot isolate a single quark. The force between the quarks is such that their combined colors produce white. This is illustrated in Figure \(\PageIndex{5}\). A baryon must have one of each primary color or RGB, which produces white. A meson must have a primary color and its anticolor, also producing white.
Why must hadrons be white? The color scheme is intentionally devised to explain why baryons have three quarks and mesons have a quark and an antiquark. Quark color is thought to be similar to charge, but with more values. An ion, by analogy, exerts much stronger forces than a neutral molecule. When the color of a combination of quarks is white, it is like a neutral atom. The forces a white particle exerts are like the polarization forces in molecules, but in hadrons these leftovers are the strong nuclear force. When a combination of quarks has color other than white, it exerts extremely large forces—even larger than the strong force—and perhaps cannot be stable or permanently separated. This is part of the theory of quark confinement , which explains how quarks can exist and yet never be isolated or directly observed. Finally, an extra quantum number with three values (like those we assign to color) is necessary for quarks to obey the Pauli exclusion principle. Particles such as the \(Ω^−\), which is composed of three strange quarks, \(sss\), and the \(Δ^{++}\), which is three up quarks, uuu , can exist because the quarks have different colors and do not have the same quantum numbers. Color is consistent with all observations and is now widely accepted. Quark theory including color is called quantum chromodynamics (QCD), also named by Gell-Mann.
The Three Families of Fundamental Particles
Fundamental particles are thought to be one of three types—leptons, quarks, or carrier particles. Each of those three types is further divided into three analogous families as illustrated in Figure \(\PageIndex{6}\). We have examined leptons and quarks in some detail. Each has six members (and their six antiparticles) divided into three analogous families. The first family is normal matter, of which most things are composed. The second is exotic, and the third more exotic and more massive than the second. The only stable particles are in the first family, which also has unstable members.
Always searching for symmetry and similarity, physicists have also divided the carrier particles into three families, omitting the graviton. Gravity is special among the four forces in that it affects the space and time in which the other forces exist and is proving most difficult to include in a Theory of Everything or TOE (to stub the pretension of such a theory). Gravity is thus often set apart. It is not certain that there is meaning in the groupings shown in Figure \(\PageIndex{6}\), but the analogies are tempting. In the past, we have been able to make significant advances by looking for analogies and patterns, and this is an example of one under current scrutiny. There are connections between the families of leptons, in that the \(τ\) decays into the \(μ\) and the \(μ\) into the \(e\). Similarly for quarks, the higher families eventually decay into the lowest, leaving only \(u\) and \(d\) quarks. We have long sought connections between the forces in nature. Since these are carried by particles, we will explore connections between gluons, \(W^±\) and \(Z^0\), and photons as part of the search for unification of forces discussed in GUTs: The Unification of Forces.
Summary
- Hadrons are thought to be composed of quarks, with baryons having three quarks and mesons having a quark and an antiquark.
- The characteristics of the six quarks and their antiquark counterparts are given in Table \(\PageIndex{1}\), and the quark compositions of certain hadrons are given in Table \(\PageIndex{2}\).
- Indirect evidence for quarks is very strong, explaining all known hadrons and their quantum numbers, such as strangeness, charm, topness, and bottomness.
- Quarks come in six flavors and three colors and occur only in combinations that produce white.
- Fundamental particles have no further substructure, not even a size beyond their de Broglie wavelength.
- There are three types of fundamental particles—leptons, quarks, and carrier particles. Each type is divided into three analogous families as indicated in Figure \(\PageIndex{6}\).
Footnotes
1 The lower of the \(±\) symbols are the values for antiquarks.
2 \(B\) is baryon number, \( S\) is strangeness, \(c\) is charm, \(b\) is bottomness, \(t\) is topness.
3 Values are approximate, are not directly observable, and vary with model.
4 These two mesons are different mixtures, but each is its own antiparticle, as indicated by its quark composition.
5 These two mesons are different mixtures, but each is its own antiparticle, as indicated by its quark composition.
6 These two mesons are different mixtures, but each is its own antiparticle, as indicated by its quark composition.
7 Antibaryons have the antiquarks of their counterparts. The antiproton \(\bar{p}\) is \(\bar{u}\bar{u}d\), for example.
8 Baryons composed of the same quarks are different states of the same particle. For example, the \(Δ^+\) is an excited state of the proton.
Glossary
- bottom
- a quark flavor
- charm
- a quark flavor, which is the counterpart of the strange quark
- color
- a three-valued quantum number carried by quarks (red, green, or blue), analogous to the primary visible colors
- down
- the second-lightest of all quarks
- flavors
- quark type
- fundamental particle
- particle with no substructure
- quantum chromodynamics
- quark theory including color
- quark
- an elementary particle and a fundamental constituent of matter
- strange
- the third lightest of all quarks
- theory of quark confinement
- explains how quarks can exist and yet never be isolated or directly observed
- top
- a quark flavor
- up
- the lightest of all quarks
33.6: GUTs - The Unification of Forces
Learning Objectives
By the end of this section, you will be able to:
- State the grand unified theory.
- Explain the electroweak theory.
- Define gluons.
- Describe the principle of quantum chromodynamics.
- Define the standard model.
Present quests to show that the four basic forces are different manifestations of a single unified force follow a long tradition. In the 19th century, the distinct electric and magnetic forces were shown to be intimately connected and are now collectively called the electromagnetic force. More recently, the weak nuclear force has been shown to be connected to the electromagnetic force in a manner suggesting that a theory may be constructed in which all four forces are unified. Certainly, there are similarities in how forces are transmitted by the exchange of carrier particles, and the carrier particles themselves (the gauge bosons discussed in the preceding sections) are also similar in important ways. The analogy to the unification of electric and magnetic forces is quite good—the four forces are distinct under normal circumstances, but there are hints of connections even on the atomic scale, and there may be conditions under which the forces are intimately related and even indistinguishable. The search for a correct theory linking the forces, called the Grand Unified Theory (GUT), is explored in this section in the realm of particle physics. Frontiers of Physics expands the story in making a connection with cosmology, on the opposite end of the distance scale.
Figure \(\PageIndex{1}\) is a Feynman diagram showing how the weak nuclear force is transmitted by the carrier particle \(Z^0\), similar to the diagrams shown earlier for the electromagnetic and strong nuclear forces. In the 1960s, a gauge theory, called electroweak theory, was developed by Steven Weinberg, Sheldon Glashow, and Abdus Salam; it proposes that the electromagnetic and weak forces are identical at sufficiently high energies. One of its predictions, in addition to describing both electromagnetic and weak force phenomena, was the existence of the \(W^+, W^−\), and \(Z^0\) carrier particles. Not only were three particles having spin 1 predicted, the mass of the \(W^+\) and \(W^−\) was predicted to be 81 \(GeV/c^2\), and that of the \(Z^0\) was predicted to be 90 \(GeV/c^2\). (Their masses had to be about 1000 times that of the pion, or about 100 \(GeV/c^2\), since the range of the weak force is about 1000 times less than that of the strong force carried by virtual pions.) In 1983, these carrier particles were observed at CERN with the predicted characteristics, including masses having the predicted values. This was another triumph of particle theory and experimental effort, resulting in the 1984 Nobel Prize to the experiment’s group leaders Carlo Rubbia and Simon van der Meer. Theorists Weinberg, Glashow, and Salam had already been honored with the 1979 Nobel Prize for other aspects of electroweak theory.
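The parenthetical mass estimate above follows from the uncertainty-principle relation between a carrier particle's rest energy and the range of the force it mediates, roughly \(mc^2 \approx \hbar c / d\). The Python sketch below reproduces that order-of-magnitude argument; the range values are rough assumed inputs, not measured quantities.

```python
# Order-of-magnitude sketch of the range argument above: a carrier's rest
# energy is roughly hbar*c divided by the range of the force it mediates.
# The pion range below is an assumed round number, not a measured input.
hbar_c_MeV_fm = 197.3                      # hbar*c in MeV*fm

range_strong_fm = 1.4                      # assumed pion (strong force) range
range_weak_fm = range_strong_fm / 1000.0   # text: weak range ~1000 times shorter

m_strong_carrier = hbar_c_MeV_fm / range_strong_fm   # ~140 MeV/c^2, pion-like
m_weak_carrier = hbar_c_MeV_fm / range_weak_fm       # ~1.4e5 MeV/c^2

print(f"strong-force carrier ~ {m_strong_carrier:.0f} MeV/c^2")
print(f"weak-force carrier  ~ {m_weak_carrier / 1000:.0f} GeV/c^2")
```

Scaling the pion-range estimate down by a factor of 1000 lands near 100 \(GeV/c^2\), consistent with the predicted \(W\) and \(Z\) masses.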
Although the weak nuclear force is very short ranged (\(< 10^{–18} m\)), its effects on atomic levels can be measured given the extreme precision of modern techniques. Since electrons spend some time in the nucleus, their energies are affected, and spectra can even indicate new aspects of the weak force, such as the possibility of other carrier particles. So systems many orders of magnitude larger than the range of the weak force supply evidence of electroweak unification in addition to evidence found at the particle scale.
Gluons (\(g\)) are the proposed carrier particles for the strong nuclear force, although they are not directly observed. Like quarks, gluons may be confined to systems having a total color of white. Less is known about gluons than about the carriers of the weak force, and certainly less than about the photon, which carries the electromagnetic force. QCD theory calls for eight gluons, all massless and all spin 1. Six of the gluons carry a color and an anticolor, while two do not carry color, as illustrated in Figure \(\PageIndex{2a}\). There is indirect evidence of the existence of gluons in nucleons. When high-energy electrons are scattered from nucleons and evidence of quarks is seen, the momenta of the quarks are smaller than they would be if there were no gluons. That means that the gluons carrying force between quarks also carry some momentum, inferred from the already indirect quark momentum measurements. At any rate, the gluons carry color charge and can change the colors of quarks when exchanged, as seen in Figure \(\PageIndex{2b}\). In the figure, a red down quark interacts with a green strange quark by sending it a gluon. That gluon carries red away from the down quark and leaves it green, because it is an \(R\bar{G}\) (red-antigreen) gluon. (Taking antigreen away leaves you green.) Its antigreenness kills the green in the strange quark, and its redness turns the quark red.
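To make the color bookkeeping in Figure \(\PageIndex{2b}\) concrete, here is a toy sketch of my own that simply tracks the color labels -- it is not a QCD calculation: emitting an \(R\bar{G}\) gluon turns a red quark green, and absorbing that gluon turns a green quark red.

```python
# Toy bookkeeping of the color labels in the exchange described above;
# this only tracks labels, it is not a QCD calculation.
CANCELS = {"Rbar": "R", "Gbar": "G", "Bbar": "B"}   # color each anticolor cancels

def emit(quark_color, gluon):
    """Quark emits a (color, anticolor) gluon that carries its color away."""
    color, anticolor = gluon
    assert color == quark_color, "gluon must carry the emitting quark's color"
    return CANCELS[anticolor]    # quark is left with the color named by the anticolor

def absorb(quark_color, gluon):
    """Quark absorbs a gluon whose anticolor cancels the quark's color."""
    color, anticolor = gluon
    assert CANCELS[anticolor] == quark_color, "anticolor must cancel the quark's color"
    return color                 # quark takes on the gluon's color

gluon = ("R", "Gbar")            # the red-antigreen gluon of the figure
print(emit("R", gluon))          # red down quark -> G (green)
print(absorb("G", gluon))        # green strange quark -> R (red)
```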
The strong force is complicated, since the observable particles that feel the strong force (hadrons) contain multiple quarks. Figure \(\PageIndex{3}\) shows the quark and gluon details of pion exchange between a proton and a neutron, as illustrated earlier in the chapter. The quarks within the proton and neutron move along together exchanging gluons, until the proton and neutron get close together. As the \(u\) quark leaves the proton, a gluon creates a pair of virtual particles, a \(d\) quark and a \(\bar{d}\) antiquark. The \(d\) quark stays behind and the proton turns into a neutron, while the \(u\) and \(\bar{d}\) move together as a \(π^+\) (consistent with the \(u\bar{d}\) composition given for the \(π^+\) in the previous section). The \(\bar{d}\) annihilates a \(d\) quark in the neutron, the \(u\) joins the neutron, and the neutron becomes a proton. A pion is exchanged and a force is transmitted.
It is beyond the scope of this text to go into more detail on the types of quark and gluon interactions that underlie the observable particles, but the theory ( quantum chromodynamics or QCD) is very self-consistent. So successful have QCD and the electroweak theory been that, taken together, they are called the Standard Model . Advances in knowledge are expected to modify, but not overthrow, the Standard Model of particle physics and forces.
MAKING CONNECTIONS: UNIFICATION OF FORCES
Grand Unified Theory (GUT) is successful in describing the four forces as distinct under normal circumstances, but connected in fundamental ways. Experiments have verified that the weak and electromagnetic force become identical at very small distances and provide the GUT description of the carrier particles for the forces. GUT predicts that the other forces become identical under conditions so extreme that they cannot be tested in the laboratory, although there may be lingering evidence of them in the evolution of the universe. GUT is also successful in describing a system of carrier particles for all four forces, but there is much to be done, particularly in the realm of gravity.
How can forces be unified? They are definitely distinct under most circumstances, for example, being carried by different particles and having greatly different strengths. But experiments show that at extremely small distances, the strengths of the forces begin to become more similar. In fact, electroweak theory’s prediction of the \(W^+, W^−\), and \(Z^0\) carrier particles was based on the strengths of the two forces being identical at extremely small distances as seen in Figure \(\PageIndex{4}\). As discussed in the case of the creation of virtual particles for extremely short times, the small distances or short ranges correspond to the large masses of the carrier particles and the correspondingly large energies needed to create them. Thus, the energy scale on the horizontal axis of Figure \(\PageIndex{4}\) corresponds to smaller and smaller distances, with 100 GeV corresponding to approximately \(10^{−18}m\), for example. At that distance, the strengths of the EM and weak forces are the same. To test physics at that distance, energies of about 100 GeV must be put into the system, and that is sufficient to create and release the \(W^+, W^−\), and \(Z^0\) carrier particles. At those and higher energies, the masses of the carrier particles become less and less relevant, and the \(Z^0\) in particular resembles the massless, chargeless, spin 1 photon. In fact, there is enough energy when things are pushed to even smaller distances to transform the \(W^+, W^−\), and \(Z^0\) into massless carrier particles more similar to photons and gluons. These have not been observed experimentally, but there is a prediction of an associated particle called the Higgs boson. The mass of this particle is not predicted with nearly the certainty with which the masses of the \(W^+, W^−\), and \(Z^0\) particles were predicted, but it was hoped that the Higgs boson could be observed at the now-canceled Superconducting Super Collider (SSC). Ongoing experiments at the Large Hadron Collider at CERN have presented some evidence for a Higgs boson with a mass of 125 GeV, and there is a possibility of a direct discovery during 2012. The existence of this more massive particle would give validity to the theory that the carrier particles are identical under certain circumstances.
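The statement that 100 GeV corresponds to roughly \(10^{−18}m\) can be checked with the same distance-energy correspondence used above, \(d \approx \hbar c / E\). A minimal check, assuming that relation and standard constants:

```python
# Quick check of the 100 GeV <-> 1e-18 m correspondence, assuming d ~ hbar*c / E.
hbar_c_eV_m = 197.3e6 * 1e-15    # hbar*c expressed in eV*m (197.3 MeV*fm)
E_eV = 100e9                     # 100 GeV

d = hbar_c_eV_m / E_eV
print(f"d ~ {d:.1e} m")          # roughly 2e-18 m
```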
The small distances and high energies at which the electroweak force becomes identical with the strong nuclear force are not reachable with any conceivable human-built accelerator. At energies of about \(10^{14}GeV\) (16,000 J per particle), distances of about \(10^{−30}m\) can be probed. Such energies are needed to test theory directly, but these are about \(10^{10}\) times higher than the energy the proposed giant SSC would have reached, and the distances are about \(10^{12}\) times smaller than any structure we have direct knowledge of. This would be the realm of various GUTs, of which there are many, since there is no constraining evidence at these energies and distances. Past experience has shown that any time you probe so many orders of magnitude further (here, about \(10^{12}\)), you find the unexpected. Even more extreme are the energies and distances at which gravity is thought to unify with the other forces in a TOE. Most speculative and least constrained by experiment are TOEs, one of which is called Superstring theory. Superstrings are entities on the order of \(10^{−35}m\) in scale that act like one-dimensional oscillating strings; they are also proposed to underlie all particles, forces, and space itself.
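The quoted GUT-scale numbers are easy to verify with standard constants. The short check below (my own arithmetic, not from the text) converts \(10^{14}\) GeV to joules and estimates the corresponding probe distance from \(d \approx \hbar c / E\).

```python
# Arithmetic check of the GUT-scale numbers quoted above.
eV = 1.602e-19                   # joules per electron volt
hbar_c_eV_m = 197.3e6 * 1e-15    # hbar*c in eV*m

E_GUT_eV = 1e14 * 1e9            # 1e14 GeV expressed in eV
print(f"energy per particle ~ {E_GUT_eV * eV:.1e} J")          # ~1.6e4 J (16,000 J)
print(f"probe distance     ~ {hbar_c_eV_m / E_GUT_eV:.0e} m")  # ~2e-30 m
```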
At the energy of GUTs, the carrier particles of the weak force would become massless and identical to gluons. If that happens, then both lepton and baryon conservation would be violated. We do not see such violations, because we do not encounter such energies. However, there is a tiny probability that, at ordinary energies, the virtual particles that violate the conservation of baryon number may exist for extremely small amounts of time (corresponding to very small ranges). All GUTs thus predict that the proton should be unstable, but would decay with an extremely long lifetime of about \(10^{31}y\). The predicted decay mode is
\(p→π^0+e^+\), (proposed proton decay)
which violates both conservation of baryon number and electron family number. Although \(10^{31}y\) is an extremely long time (about \(10^{21}\) times the age of the universe), there are a lot of protons, and detectors have been constructed to look for the proposed decay mode as seen in Figure \(\PageIndex{5}\). It is somewhat comforting that proton decay has not been detected, and its experimental lifetime is now greater than \(5×10^{32}y\). This does not prove GUTs wrong, but it does place greater constraints on the theories, benefiting theorists in many ways.
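An order-of-magnitude estimate of my own construction shows why such detectors must be enormous: to expect even one decay per year from protons with a \(10^{31}\)-y lifetime, roughly \(10^{31}\) protons must be watched. The sketch below counts the water needed to contain that many protons; detector efficiency and longer candidate lifetimes push the required mass up further.

```python
# Order-of-magnitude estimate: protons needed to expect one decay per year
# if the lifetime is 1e31 years, and the water mass that contains them.
lifetime_y = 1e31
protons_needed = lifetime_y * 1.0          # expected decays/yr ~ N / lifetime

avogadro = 6.022e23
protons_per_kg_water = (1000.0 / 18.0) * avogadro * 10   # 10 protons per H2O molecule

mass_kg = protons_needed / protons_per_kg_water
print(f"~{mass_kg:.0e} kg of water")       # ~3e4 kg, tens of tonnes, before detector efficiency
```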
From looking increasingly inward at smaller details for direct evidence of electroweak theory and GUTs, we turn around and look to the universe for evidence of the unification of forces. In the 1920s, the expansion of the universe was discovered. Thinking backward in time, the universe must once have been very small, dense, and extremely hot. At a tiny fraction of a second after the fabled Big Bang, forces would have been unified and may have left their fingerprint on the existing universe. This, one of the most exciting forefronts of physics, is the subject of Frontiers of Physics.
Summary
- Attempts to show unification of the four forces are called Grand Unified Theories (GUTs) and have been partially successful, with connections proven between EM and weak forces in electroweak theory.
- The strong force is carried by eight proposed particles called gluons, which are intimately connected to a quantum number called color—their governing theory is thus called quantum chromodynamics (QCD). Taken together, QCD and the electroweak theory are widely accepted as the Standard Model of particle physics.
- Unification of the strong force is expected at such high energies that it cannot be directly tested, but it may have observable consequences in the as-yet unobserved decay of the proton and topics to be discussed in the next chapter. Although unification of forces is generally anticipated, much remains to be done to prove its validity.
Glossary
- electroweak theory
- theory showing connections between EM and weak forces
- grand unified theory
- theory that shows unification of the strong and electroweak forces
- gluons
- eight proposed particles which carry the strong force
- Higgs boson
- a massive particle that, if observed, would give validity to the theory that carrier particles are identical under certain circumstances
- quantum chromodynamics
- the governing theory of the strong force, connecting the quantum number color to the gluons that carry it
- standard model
- combination of quantum chromodynamics and electroweak theory
- superstring theory
- a theory of everything based on vibrating strings some \(10^{−35}m\) in length
33.E: Particle Physics (Exercises)
Conceptual Questions
33.3: Accelerators Create Matter from Energy
1. The total energy in the beam of an accelerator is far greater than the energy of the individual beam particles. Why isn’t this total energy available to create a single extremely massive particle?
2. Synchrotron radiation takes energy from an accelerator beam and is related to acceleration. Why would you expect the problem to be more severe for electron accelerators than proton accelerators?
3. What two major limitations prevent us from building high-energy accelerators that are physically small?
4. What are the advantages of colliding-beam accelerators? What are the disadvantages?
33.4: Particles, Patterns, and Conservation Laws
5. Large quantities of antimatter isolated from normal matter should behave exactly like normal matter. An antiatom, for example, composed of positrons, antiprotons, and antineutrons should have the same atomic spectrum as its matter counterpart. Would you be able to tell it is antimatter by its emission of antiphotons? Explain briefly.
6. Massless particles are not only neutral, they are chargeless (unlike the neutron). Why is this so?
7. Massless particles must travel at the speed of light, while others cannot reach this speed. Why are all massless particles stable? If evidence is found that neutrinos spontaneously decay into other particles, would this imply they have mass?
8. When a star erupts in a supernova explosion, huge numbers of electron neutrinos are formed in nuclear reactions. Such neutrinos from the 1987A supernova in the relatively nearby Magellanic Cloud were observed within hours of the initial brightening, indicating they traveled to earth at approximately the speed of light. Explain how this data can be used to set an upper limit on the mass of the neutrino, noting that if the mass is small the neutrinos could travel very close to the speed of light and have a reasonable energy (on the order of MeV).
9. Theorists have had spectacular success in predicting previously unknown particles. Considering past theoretical triumphs, why should we bother to perform experiments?
10. What lifetime do you expect for an antineutron isolated from normal matter?
11. Why does the \(\displaystyle η^0\) meson have such a short lifetime compared to most other mesons?
12. (a) Is a hadron always a baryon?
(b) Is a baryon always a hadron?
(c) Can an unstable baryon decay into a meson, leaving no other baryon?
13. Explain how conservation of baryon number is responsible for conservation of total atomic mass (total number of nucleons) in nuclear decay and reactions.
33.5: Quarks: Is That All There Is?
14. The quark flavor change \(\displaystyle d→u\) takes place in \(\displaystyle β^−\) decay. Does this mean that the reverse quark flavor change \(\displaystyle u→d\) takes place in \(\displaystyle β^+\) decay? Justify your response by writing the decay in terms of the quark constituents, noting that it looks as if a proton is converted into a neutron in β+ decay.
15. Explain how the weak force can change strangeness by changing quark flavor.
16. Beta decay is caused by the weak force, as are all reactions in which strangeness changes. Does this imply that the weak force can change quark flavor? Explain.
17. Why is it easier to see the properties of the c, b, and t quarks in mesons having compositions such as \(\displaystyle c\bar{c}\), \(\displaystyle b\bar{b}\), or \(\displaystyle t\bar{t}\) rather than in baryons having a mixture of quarks, such as \(\displaystyle udb\)?
18. How can quarks, which are fermions, combine to form bosons? Why must an even number combine to form a boson? Give one example by stating the quark substructure of a boson.
19. What evidence is cited to support the contention that the gluon force between quarks is greater than the strong nuclear force between hadrons? How is this related to color? Is it also related to quark confinement?
20. Discuss how we know that \(\displaystyle π\)-mesons (\(\displaystyle π^+, π^−, π^0\)) are not fundamental particles and are not the basic carriers of the strong force.
21. An antibaryon has three antiquarks with colors \(\displaystyle \bar{R}\bar{G}\bar{B}\). What is its color?
22. Suppose leptons are created in a reaction. Does this imply the weak force is acting? (for example, consider \(\displaystyle β\) decay.)
23. How can the lifetime of a particle indicate that its decay is caused by the strong nuclear force? How can a change in strangeness imply which force is responsible for a reaction? What does a change in quark flavor imply about the force that is responsible?
24. (a) Do all particles having strangeness also have at least one strange quark in them?
(b) Do all hadrons with a strange quark also have nonzero strangeness?
25. The sigma-zero particle decays mostly via the reaction \(\displaystyle \sum{}^0→Λ^0+γ\). Explain how this decay and the respective quark compositions imply that the \(\displaystyle \sum{}^0\) is an excited state of the \(\displaystyle Λ^0\).
26. What do the quark compositions and other quantum numbers imply about the relationships between the \(\displaystyle Δ^+\) and the proton? The \(\displaystyle Δ^0\) and the neutron?
27. Discuss the similarities and differences between the photon and the \(\displaystyle Z^0\) in terms of particle properties, including forces felt.
28. Identify evidence for electroweak unification.
29. The quarks in a particle are confined, meaning individual quarks cannot be directly observed. Are gluons confined as well? Explain.
33.6: Grand Unified Theories
30. If a GUT is proven, and the four forces are unified, it will still be correct to say that the orbit of the moon is determined by the gravitational force. Explain why.
31. If the Higgs boson is discovered and found to have mass, will it be considered the ultimate carrier of the weak force? Explain your response.
32. Gluons and the photon are massless. Does this imply that the \(\displaystyle W^+, W^−\), and \(\displaystyle Z^0\) are the ultimate carriers of the weak force?
Problems & Exercises
33.1: The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited
33. A virtual particle having an approximate mass of \(\displaystyle 10^{14}GeV/c^2\) may be associated with the unification of the strong and electroweak forces. For what length of time could this virtual particle exist (in temporary violation of the conservation of mass-energy as allowed by the Heisenberg uncertainty principle)?
Solution
\(\displaystyle 3×10^{−39}s\)
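A quick numerical check of this result, assuming the estimate \(\displaystyle Δt≈ħ/(2ΔE)\) with \(\displaystyle ΔE\) set equal to the particle's rest energy (the constants below are standard values):

```python
# Numerical check, assuming dt ~ hbar / (2 dE) with dE equal to the rest energy.
hbar = 1.055e-34                 # J*s
eV = 1.602e-19                   # J per eV
dE = 1e14 * 1e9 * eV             # 1e14 GeV in joules
print(f"{hbar / (2 * dE):.1e} s")   # ~3e-39 s
```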
34. Calculate the mass in \(\displaystyle GeV/c^2\) of a virtual carrier particle that has a range limited to \(\displaystyle 10^{−30}\) m by the Heisenberg uncertainty principle. Such a particle might be involved in the unification of the strong and electroweak forces.
35. Another component of the strong nuclear force is transmitted by the exchange of virtual K-mesons. Taking K-mesons to have an average mass of \(\displaystyle 495MeV/c^2\), what is the approximate range of this component of the strong force?
Solution
\(\displaystyle 1.99×10^{−16}m(0.2fm)\)
33.2: The Four Basic Forces
36. (a) Find the ratio of the strengths of the weak and electromagnetic forces under ordinary circumstances.
(b) What does that ratio become under circumstances in which the forces are unified?
Solution
(a) \(\displaystyle 10^{−11}\) to 1, weak to EM
(b) 1 to 1
37. The ratio of the strong to the weak force and the ratio of the strong force to the electromagnetic force become 1 under circumstances where they are unified. What are the ratios of the strong force to those two forces under normal circumstances?
33.3: Accelerators Create Matter from Energy
38. At full energy, protons in the 2.00-km-diameter Fermilab synchrotron travel at nearly the speed of light, since their energy is about 1000 times their rest mass energy.
(a) How long does it take for a proton to complete one trip around?
(b) How many times per second will it pass through the target area?
Solution
(a) \(\displaystyle 2.09×10^{−5}s\)
(b) \(\displaystyle 4.77×10^4Hz\)
39. Suppose a \(\displaystyle W^−\) created in a bubble chamber lives for \(\displaystyle 5.00×10^{−25}s\). What distance does it move in this time if it is traveling at 0.900 c? Since this distance is too short to make a track, the presence of the \(\displaystyle W^−\) must be inferred from its decay products. Note that the time is longer than the given \(\displaystyle W^−\) lifetime, which can be due to the statistical nature of decay or time dilation.
40. What length track does a \(\displaystyle π^+\) traveling at 0.100 c leave in a bubble chamber if it is created there and lives for \(\displaystyle 2.60×10^{−8}s\)? (Those moving faster or living longer may escape the detector before decaying.)
Solution
78.0 cm
41. The 3.20-km-long SLAC produces a beam of 50.0-GeV electrons. If there are 15,000 accelerating tubes, what average voltage must be across the gaps between them to achieve this energy?
42. Because of energy loss due to synchrotron radiation in the LHC at CERN, only 5.00 MeV is added to the energy of each proton during each revolution around the main ring. How many revolutions are needed to produce 7.00-TeV (7000 GeV) protons, if they are injected with an initial energy of 8.00 GeV?
Solution
\(\displaystyle 1.40×10^6\)
43. A proton and an antiproton collide head-on, with each having a kinetic energy of 7.00 TeV (such as in the LHC at CERN). How much collision energy is available, taking into account the annihilation of the two masses? (Note that this is not significantly greater than the extremely relativistic kinetic energy.)
44. When an electron and positron collide at the SLAC facility, they each have 50.0 GeV kinetic energies. What is the total collision energy available, taking into account the annihilation energy? Note that the annihilation energy is insignificant, because the electrons are highly relativistic.
Solution
100 GeV
33.4: Particles, Patterns, and Conservation Laws
45. The \(\displaystyle π^0\) is its own antiparticle and decays in the following manner: \(\displaystyle π^0→γ+γ\). What is the energy of each \(\displaystyle γ\) ray if the \(\displaystyle π^0\) is at rest when it decays?
Solution
67.5 MeV
46. The primary decay mode for the negative pion is \(\displaystyle π^−→μ^−+\bar{ν_μ}\). What is the energy release in MeV in this decay?
47. The mass of a theoretical particle that may be associated with the unification of the electroweak and strong forces is \(\displaystyle 10^{14}GeV/c^2\).
(a) How many proton masses is this?
(b) How many electron masses is this? (This indicates how extremely relativistic the accelerator would have to be in order to make the particle, and how large the relativistic quantity \(\displaystyle γ\) would have to be.)
Solution
(a) \(\displaystyle 1×10^{14}\)
(b) \(\displaystyle 2×10^{17}\)
48. The decay mode of the negative muon is \(\displaystyle μ^−→e^−+\bar{ν_e}+ν_μ\).
(a) Find the energy released in MeV.
(b) Verify that charge and lepton family numbers are conserved.
49. The decay mode of the positive tau is \(\displaystyle τ^+→μ^++ν_μ+\bar{ν_τ}\).
(a) What energy is released?
(b) Verify that charge and lepton family numbers are conserved.
(c) The \(\displaystyle τ^+\) is the antiparticle of the \(\displaystyle τ^−\).Verify that all the decay products of the \(\displaystyle τ^+\) are the antiparticles of those in the decay of the \(\displaystyle τ^−\) given in the text.
Solution
(a) 1671 MeV
(b) \(\displaystyle Q=1, Q'=1+0+0=1; L_τ=−1, L'_τ=−1; L_μ=0, L'_μ=−1+1=0\)
(c) \(\displaystyle τ^−→μ^−+ν_μ+\bar{ν_τ}\); the \(\displaystyle μ^−\) is the antiparticle of \(\displaystyle μ^+\), the \(\displaystyle ν_μ\) of \(\displaystyle \bar{ν_μ}\), and the \(\displaystyle \bar{ν_τ}\) of \(\displaystyle ν_τ\)
50. The principal decay mode of the sigma zero is \(\displaystyle \sum{}^0→Λ^0+γ\).
(a) What energy is released?
(b) Considering the quark structure of the two baryons, does it appear that the \(\displaystyle \sum{}^0\) is an excited state of the \(\displaystyle Λ^0\)?
(c) Verify that strangeness, charge, and baryon number are conserved in the decay.
(d) Considering the preceding and the short lifetime, can the weak force be responsible? State why or why not.
51. (a) What is the uncertainty in the energy released in the decay of a \(\displaystyle π^0\) due to its short lifetime?
(b) What fraction of the decay energy is this, noting that the decay mode is \(\displaystyle π^0→γ+γ\) (so that all the \(\displaystyle π^0\) mass is destroyed)?
Solution
(a) 3.9 eV
(b) \(\displaystyle 2.9×10^{−8}\)
52. (a) What is the uncertainty in the energy released in the decay of a \(\displaystyle τ^−\) due to its short lifetime?
(b) Is the uncertainty in this energy greater than or less than the uncertainty in the mass of the tau neutrino? Discuss the source of the uncertainty.
33.5: Quarks: Is That All There Is?
53. (a) Verify from its quark composition that the \(\displaystyle Δ^+\) particle could be an excited state of the proton.
(b) There is a spread of about 100 MeV in the decay energy of the \(\displaystyle Δ^+\), interpreted as uncertainty due to its short lifetime. What is its approximate lifetime?
(c) Does its decay proceed via the strong or weak force?
Solution
(a) The \(\displaystyle uud\) composition is the same as for a proton.
(b) \(\displaystyle 3.3×10^{−24}s\)
(c) Strong (short lifetime)
54. Accelerators such as the Triangle Universities Meson Facility (TRIUMF) in British Columbia produce secondary beams of pions by having an intense primary proton beam strike a target. Such “meson factories” have been used for many years to study the interaction of pions with nuclei and, hence, the strong nuclear force. One reaction that occurs is \(\displaystyle π^++p→Δ^{++}→π^++p\), where the \(\displaystyle Δ^{++}\) is a very short-lived particle. The graph in Figure shows the probability of this reaction as a function of energy. The width of the bump is the uncertainty in energy due to the short lifetime of the \(\displaystyle Δ^{++}\).
(a) Find this lifetime.
(b) Verify from the quark composition of the particles that this reaction annihilates and then re-creates a \(\displaystyle d\) quark and a \(\displaystyle \bar{d}\) antiquark by writing the reaction and decay in terms of quarks.
(c) Draw a Feynman diagram of the production and decay of the \(\displaystyle Δ^{++}\) showing the individual quarks involved.
This graph shows the probability of an interaction between a \(\displaystyle π^+\) and a proton as a function of energy. The bump is interpreted as a very short lived particle called a \(\displaystyle Δ^{++}\). The approximately 100-MeV width of the bump is due to the short lifetime of the \(\displaystyle Δ^{++}\).
55. The reaction \(\displaystyle π^++p→Δ^{++}\) (described in the preceding problem) takes place via the strong force.
(a) What is the baryon number of the \(\displaystyle Δ^{++}\) particle?
(b) Draw a Feynman diagram of the reaction showing the individual quarks involved.
Solution
a) \(\displaystyle Δ^{++}(uuu);B=\frac{1}{3}+\frac{1}{3}+\frac{1}{3}=1\)
b)
56. One of the decay modes of the omega minus is \(\displaystyle Ω^−→Ξ^0+π^−\).
(a) What is the change in strangeness?
(b) Verify that baryon number and charge are conserved, while lepton numbers are unaffected.
(c) Write the equation in terms of the constituent quarks, indicating that the weak force is responsible.
57. Repeat the previous problem for the decay mode \(\displaystyle Ω^−→Λ^0+K^−\).
Solution
(a) \(\displaystyle +1\)
(b) \(\displaystyle B=1=1+0, Z=−1=0+(−1)\), all lepton numbers are 0 before and after
(c) \(\displaystyle (sss)→(uds)+(\bar{u}s)\)
58. One decay mode for the eta-zero meson is \(\displaystyle η^0→γ+γ\).
(a) Find the energy released.
(b) What is the uncertainty in the energy due to the short lifetime?
(c) Write the decay in terms of the constituent quarks.
(d) Verify that baryon number, lepton numbers, and charge are conserved.
59. One decay mode for the eta-zero meson is \(\displaystyle η^0→π^0+π^0\).
(a) Write the decay in terms of the quark constituents.
(b) How much energy is released?
(c) What is the ultimate release of energy, given the decay mode for the pi zero is \(\displaystyle π^0→γ+γ\)?
Solution
(a) \(\displaystyle (u\bar{u}+d\bar{d})→(u\bar{u}+d\bar{d})+(u\bar{u}+d\bar{d})\)
(b) 277.9 MeV
(c) 547.9 MeV
60. Is the decay \(\displaystyle n→e^++e^−\) possible considering the appropriate conservation laws? State why or why not.
61. Is the decay \(\displaystyle μ^−→e^−+ν_e+ν_μ\) possible considering the appropriate conservation laws? State why or why not.
Solution
No. \(\displaystyle Charge=−1\) is conserved. \(\displaystyle L_{e_i}=0≠L_{e_f}=2\) is not conserved. \(\displaystyle L_μ=1\) is conserved.
62. (a) Is the decay \(\displaystyle Λ^0→n+π^0\) possible considering the appropriate conservation laws? State why or why not.
(b) Write the decay in terms of the quark constituents of the particles.
63. (a) Is the decay \(\displaystyle \sum{}^−→n+π^−\) possible considering the appropriate conservation laws? State why or why not. (b) Write the decay in terms of the quark constituents of the particles.
Solution
(a) Yes. \(\displaystyle Z=−1=0+(−1), B=1=1+0\), all lepton family numbers are 0 before and after; the decay is spontaneous since the initial mass is greater than the final mass.
(b) \(\displaystyle dds→udd+\bar{u}d\)
64. The only combination of quark colors that produces a white baryon is RGB. Identify all the color combinations that can produce a white meson.
65. (a) Three quarks form a baryon. How many combinations of the six known quarks are there if all combinations are possible?
(b) This number is less than the number of known baryons. Explain why.
Solution
(a) 216
(b) There are more baryons observed because we have the 6 antiquarks and various mixtures of quarks (as for the π-meson) as well.
66. (a) Show that the conjectured decay of the proton, \(\displaystyle p→π^0+e^+\), violates conservation of baryon number and conservation of lepton number.
(b) What is the analogous decay process for the antiproton?
67. Verify the quantum numbers given for the \(\displaystyle Ω^+\) in the text by adding the quantum numbers for its quark constituents as inferred from the table of quark characteristics.
Solution
\(\displaystyle Ω^+(\bar{s}\bar{s}\bar{s})\)
\(\displaystyle B=−\frac{1}{3}−\frac{1}{3}−\frac{1}{3}=−1,\)
\(\displaystyle L_e,μ,τ=0+0+0=0,\)
\(\displaystyle Q=\frac{1}{3}+\frac{1}{3}+\frac{1}{3}=1,\)
\(\displaystyle S=1+1+1=3.\)
68. Verify the quantum numbers given for the proton and neutron in the text by adding the quantum numbers for their quark constituents as given in the table of quark characteristics.
69. (a) How much energy would be released if the proton did decay via the conjectured reaction \(\displaystyle p→π^0+e^+\)?
(b) Given that the \(\displaystyle π^0\) decays to two \(\displaystyle γ\) s and that the \(\displaystyle e^+\) will find an electron to annihilate, what total energy is ultimately produced in proton decay?
(c) Why is this energy greater than the proton’s total mass (converted to energy)?
Solution
(a) 803 MeV
(b) 938.8 MeV
(c) The annihilation energy of an extra electron is included in the total energy.
70. (a) Find the charge, baryon number, strangeness, charm, and bottomness of the \(\displaystyle J/Ψ\) particle from its quark composition.
(b) Do the same for the Υ particle.
71. There are particles called D -mesons. One of them is the \(\displaystyle D^+\) meson, which has a single positive charge and a baryon number of zero, also the value of its strangeness, topness, and bottomness. It has a charm of \(\displaystyle +1\). What is its quark configuration?
Solution
\(\displaystyle c\bar{d}\)
72. There are particles called bottom mesons or B -mesons. One of them is the \(\displaystyle B^−\) meson, which has a single negative charge; its baryon number is zero, as are its strangeness, charm, and topness. It has a bottomness of \(\displaystyle −1\). What is its quark configuration?
73. (a) What particle has the quark composition \(\displaystyle \bar{u}\bar{u}\bar{d}\)?
(b) What should its decay mode be?
Solution
a) The antiproton
b) \(\displaystyle \bar{p}→π^0+e^−\)
74. (a) Show that all combinations of three quarks produce integral charges. Thus baryons must have integral charge.
(b) Show that all combinations of a quark and an antiquark produce only integral charges. Thus mesons must have integral charge.
33.6: Grand Unified Theories
75. Integrated Concepts
The intensity of cosmic ray radiation decreases rapidly with increasing energy, but there are occasionally extremely energetic cosmic rays that create a shower of radiation from all the particles they create by striking a nucleus in the atmosphere as seen in the figure given below. Suppose a cosmic ray particle having an energy of \(\displaystyle 10^{10}GeV\) converts its energy into particles with masses averaging \(\displaystyle 200MeV/c^2\).
(a) How many particles are created?
(b) If the particles rain down on a \(\displaystyle 1.00-km^2\) area, how many particles are there per square meter?
An extremely energetic cosmic ray creates a shower of particles on earth. The energy of these rare cosmic rays can approach a joule (about \(\displaystyle 10^{10}GeV\)) and, after multiple collisions, huge numbers of particles are created from this energy. Cosmic ray showers have been observed to extend over many square kilometers.
Solution
(a) \(\displaystyle 5×10^{10}\)
(b) \(\displaystyle 5×10^4particles/m^2\)
76. Integrated Concepts
Assuming conservation of momentum, what is the energy of each \(\displaystyle γ\) ray produced in the decay of a neutral pion at rest, in the reaction \(\displaystyle π^0→γ+γ\)?
77. Integrated Concepts
What is the wavelength of a 50-GeV electron, which is produced at SLAC? This provides an idea of the limit to the detail it can probe.
Solution
\(\displaystyle 2.5×10^{−17}m\)
78. Integrated Concepts
(a) Calculate the relativistic quantity \(\displaystyle γ=\frac{1}{\sqrt{1−v^2/c^2}}\) for 1.00-TeV protons produced at Fermilab.
(b) If such a proton created a \(\displaystyle π^+\) having the same speed, how long would its life be in the laboratory?
(c) How far could it travel in this time?
79. Integrated Concepts
The primary decay mode for the negative pion is \(\displaystyle π^−→μ^−+\bar{ν_μ}\).
(a) What is the energy release in MeV in this decay?
(b) Using conservation of momentum, how much energy does each of the decay products receive, given the \(\displaystyle π^−\) is at rest when it decays? You may assume the muon antineutrino is massless and has momentum \(\displaystyle p=E/c\), just like a photon.
Solution
(a) 33.9 MeV
(b) Muon antineutrino 29.8 MeV, muon 4.1 MeV (kinetic energy)
80. Integrated Concepts
Plans for an accelerator that produces a secondary beam of K-mesons to scatter from nuclei, for the purpose of studying the strong force, call for them to have a kinetic energy of 500 MeV.
(a) What would the relativistic quantity \(\displaystyle γ=\frac{1}{\sqrt{1−v^2/c^2}}\) be for these particles?
(b) How long would their average lifetime be in the laboratory?
(c) How far could they travel in this time?
81. Integrated Concepts
Suppose you are designing a proton decay experiment and you can detect 50 percent of the proton decays in a tank of water.
(a) How many kilograms of water would you need to see one decay per month, assuming a lifetime of \(\displaystyle 10^{31}y\)?
(b) How many cubic meters of water is this?
(c) If the actual lifetime is \(\displaystyle 10^{33}y\), how long would you have to wait on an average to see a single proton decay?
Solution
(a) \(\displaystyle 7.2×10^5kg\)
(b) \(\displaystyle 7.2×10^2m^3\)
(c) 100 months
82. Integrated Concepts
In supernovas, neutrinos are produced in huge amounts. They were detected from the 1987A supernova in the Magellanic Cloud, which is about 120,000 light years away from the Earth (relatively close to our Milky Way galaxy). If neutrinos have a mass, they cannot travel at the speed of light, but if their mass is small, they can get close.
(a) Suppose a neutrino with a \(\displaystyle 7-eV/c^2\) mass has a kinetic energy of 700 keV. Find the relativistic quantity \(\displaystyle γ=\frac{1}{\sqrt{1−v^2/c^2}}\) for it.
(b) If the neutrino leaves the 1987A supernova at the same time as a photon and both travel to Earth, how much sooner does the photon arrive? This is not a large time difference, given that it is impossible to know which neutrino left with which photon and the poor efficiency of the neutrino detectors. Thus, the fact that neutrinos were observed within hours of the brightening of the supernova only places an upper limit on the neutrino’s mass. (Hint: You may need to use a series expansion to find v for the neutrino, since its γ is so large.)
83. Construct Your Own Problem
Consider an ultrahigh-energy cosmic ray entering the Earth’s atmosphere (some have energies approaching a joule). Construct a problem in which you calculate the energy of the particle based on the number of particles in an observed cosmic ray shower. Among the things to consider are the average mass of the shower particles, the average number per square meter, and the extent (number of square meters covered) of the shower. Express the energy in eV and joules.
84. Construct Your Own Problem
Consider a detector needed to observe the proposed, but extremely rare, decay of an electron. Construct a problem in which you calculate the amount of matter needed in the detector to be able to observe the decay, assuming that it has a signature that is clearly identifiable. Among the things to consider are the estimated half life (long for rare events), and the number of decays per unit time that you wish to observe, as well as the number of electrons in the detector substance.
Contributors and Attributions
-
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
34: Frontiers of Physics
Frontiers are exciting. There is mystery, surprise, adventure, and discovery. The satisfaction of finding the answer to a question is made keener by the fact that the answer always leads to a new question. The picture of nature becomes more complete, yet nature retains its sense of mystery and never loses its ability to awe us. The view of physics is beautiful looking both backward and forward in time. What marvelous patterns we have discovered. How clever nature seems in its rules and connections. How awesome. And we continue looking ever deeper and ever further, probing the basic structure of matter, energy, space, and time and wondering about the scope of the universe, its beginnings and future.
Thumbnail: Lattice analogy of the deformation of spacetime caused by a planetary mass. Image used with permission (CC BY-SA 3.0; Mysid).
34.0: Prelude to Frontiers of Physics
Frontiers are exciting. There is mystery, surprise, adventure, and discovery. The satisfaction of finding the answer to a question is made keener by the fact that the answer always leads to a new question. The picture of nature becomes more complete, yet nature retains its sense of mystery and never loses its ability to awe us. The view of physics is beautiful looking both backward and forward in time. What marvelous patterns we have discovered. How clever nature seems in its rules and connections. How awesome. And we continue looking ever deeper and ever further, probing the basic structure of matter, energy, space, and time and wondering about the scope of the universe, its beginnings and future.
You are now in a wonderful position to explore the forefronts of physics, both the new discoveries and the unanswered questions. With the concepts, qualitative and quantitative, the problem-solving skills, the feeling for connections among topics, and all the rest you have mastered, you can more deeply appreciate and enjoy the brief treatments that follow. Years from now you will still enjoy the quest with an insight all the greater for your efforts.
34.1: Cosmology and Particle Physics
Learning Objectives
By the end of this section, you will be able to:
- Discuss the expansion of the universe.
- Explain the Big Bang.
Look at the sky on some clear night when you are away from city lights. There you will see thousands of individual stars and a faint glowing background of millions more. The Milky Way, as it has been called since ancient times, is an arm of our galaxy of stars -- the word galaxy coming from the Greek word galaxias, meaning milky. We know a great deal about our Milky Way galaxy and of the billions of other galaxies beyond its fringes. But they still provoke wonder and awe (see Figure \(\PageIndex{1}\)). And there are still many questions to be answered. Most remarkable when we view the universe on the large scale is that once again explanations of its character and evolution are tied to the very small scale. Particle physics and the questions being asked about the very small scales may also have their answers in the very large scales.
As has been noted in numerous Things Great and Small vignettes, this is not the first time the large has been explained by the small and vice versa. Newton realized that the nature of gravity on Earth that pulls an apple to the ground could explain the motion of the moon and planets so much farther away. Minute atoms and molecules explain the chemistry of substances on a much larger scale. Decays of tiny nuclei explain the hot interior of the Earth. Fusion of nuclei likewise explains the energy of stars. Today, the patterns in particle physics seem to be explaining the evolution and character of the universe. And the nature of the universe has implications for unexplored regions of particle physics.
Cosmology is the study of the character and evolution of the universe. What are the major characteristics of the universe as we know them today? First, there are approximately \(10^{11}\) galaxies in the observable part of the universe. An average galaxy contains more than \(10^{11}\) stars, with our Milky Way galaxy being larger than average, both in its number of stars and its dimensions. Ours is a spiral-shaped galaxy with a diameter of about 100,000 light years and a thickness of about 2000 light years in the arms with a central bulge about 10,000 light years across. The Sun lies about 30,000 light years from the center near the galactic plane. There are significant clouds of gas, and there is a halo of less-dense regions of stars surrounding the main body. (See Figure \(\PageIndex{2}\)) Evidence strongly suggests the existence of a large amount of additional matter in galaxies that does not produce light -- the mysterious dark matter we shall later discuss.
Distances are great even within our galaxy and are measured in light years (the distance traveled by light in one year). The average distance between galaxies is on the order of a million light years, but it varies greatly with galaxies forming clusters such as shown in Figure \(\PageIndex{1}\). The Magellanic Clouds, for example, are small galaxies close to our own, some 160,000 light years from Earth. The Andromeda galaxy is a large spiral galaxy like ours and lies 2 million light years away. It is just visible to the naked eye as an extended glow in the Andromeda constellation. Andromeda is the closest large galaxy in our local group, and we can see some individual stars in it with our larger telescopes. The most distant known galaxy is 14 billion light years from Earth -- a truly incredible distance. (See Figure \(\PageIndex{3}\).)
Consider the fact that the light we receive from these vast distances has been on its way to us for a long time. In fact, the time in years is the same as the distance in light years. For example, the Andromeda galaxy is 2 million light years away, so that the light now reaching us left it 2 million years ago. If we could be there now, Andromeda would be different. Similarly, light from the most distant galaxy left it 14 billion years ago. We have an incredible view of the past when looking great distances. We can try to see if the universe was different then -- if distant galaxies are more tightly packed or have younger-looking stars, for example, than closer galaxies, in which case there has been an evolution in time. But the problem is that the uncertainties in our data are great. Cosmology is almost typified by these large uncertainties, so that we must be especially cautious in drawing conclusions. One consequence is that there are more questions than answers, and so there are many competing theories. Another consequence is that any hard data produce a major result. Discoveries of some importance are being made on a regular basis, the hallmark of a field in its golden age.
Perhaps the most important characteristic of the universe is that all galaxies except those in our local cluster seem to be moving away from us at speeds proportional to their distance from our galaxy. It looks as if a gigantic explosion, universally called the Big Bang , threw matter out some billions of years ago. This amazing conclusion is based on the pioneering work of Edwin Hubble (1889–1953), the American astronomer. In the 1920s, Hubble first demonstrated conclusively that other galaxies, many previously called nebulae or clouds of stars, were outside our own. He then found that all but the closest galaxies have a red shift in their hydrogen spectra that is proportional to their distance. The explanation is that there is a cosmological red shift due to the expansion of space itself. The photon wavelength is stretched in transit from the source to the observer. Double the distance, and the red shift is doubled. While this cosmological red shift is often called a Doppler shift, it is not -- space itself is expanding. There is no center of expansion in the universe. All observers see themselves as stationary; the other objects in space appear to be moving away from them. Hubble was directly responsible for discovering that the universe was much larger than had previously been imagined and that it had this amazing characteristic of rapid expansion.
Universal expansion on the scale of galactic clusters (that is, galaxies at smaller distances are not uniformly receding from one another) is an integral part of modern cosmology. For galaxies farther away than about 50 Mly (50 million light years), the expansion is uniform with variations due to local motions of galaxies within clusters. A representative recession velocity \(v\) can be obtained from the simple formula
\[v = H_{0}d, \label{34.2.1}\]
where \(d\) is the distance to the galaxy and \(H_{0}\) is the Hubble constant. The Hubble constant is a central concept in cosmology. Its value is determined by taking the slope of a graph of velocity versus distance, obtained from red shift measurements, such as shown in Figure \(\PageIndex{4}\). We shall use an approximate value of \(H_{0} = 20 (km/s)/Mly\). Thus, \(v = H_{0}d\) is an average behavior for all but the closest galaxies. For example, a galaxy 100 Mly away (as determined by its size and brightness) typically moves away from us at a speed of \(v = \left( 20 (km/s)/Mly \right) \left( 100 Mly \right) = 2000 km/s\). There can be variations in this speed due to so-called local motions or interactions with neighboring galaxies. Conversely, if a galaxy is found to be moving away from us at a speed of 100,000 km/s based on its red shift, it is at a distance \(d = v/H_{0} = \left( 100,000 km/s \right) / \left( 20 (km/s)/Mly \right) = 5000 Mly = 5 Gly\) or \(5 \times 10^{9} ly\). This last calculation is approximate, because it assumes the expansion rate was the same 5 billion years ago as now. Similar calculations based on Hubble's original measurements overturned the notion that the universe is in a steady state.
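Both worked examples above amount to multiplying or dividing by the Hubble constant; the short sketch below simply packages them as functions, using the approximate \(H_{0} = 20 (km/s)/Mly\) adopted in the text.

```python
# Minimal sketch of the two worked examples above, using the text's H0.
H0 = 20.0                                    # (km/s) per Mly

def recession_velocity(d_Mly):
    """v = H0 * d, in km/s."""
    return H0 * d_Mly

def distance_from_velocity(v_km_s):
    """d = v / H0, in Mly."""
    return v_km_s / H0

print(recession_velocity(100))               # 2000 km/s for a galaxy 100 Mly away
print(distance_from_velocity(100000))        # 5000 Mly (5 Gly) for v = 100,000 km/s
```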
One of the most intriguing developments recently has been the discovery that the expansion of the universe may be faster now than in the past, rather than slowing due to gravity as expected. Various groups have been looking, in particular, at supernovas in moderately distant galaxies (less than 1 Gly) to get improved distance measurements. Those distances are larger than expected for the observed galactic red shifts, implying the expansion was slower when that light was emitted. This has cosmological consequences that are discussed in " Dark Matter and Closure ." The first results, published in 1999, are only the beginning of emerging data, with astronomy now entering a data-rich era.
Figure \(\PageIndex{5}\) shows how the recession of galaxies looks like the remnants of a gigantic explosion, the famous Big Bang. Extrapolating backward in time, the Big Bang would have occurred between 13 and 15 billion years ago when all matter would have been at a point. Questions instantly arise. What caused the explosion? What happened before the Big Bang? Was there a before, or did time start then? Will the universe expand forever, or will gravity reverse it into a Big Crunch? And is there other evidence of the Big Bang besides the well-documented red shifts?
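A rough consistency check of this extrapolation (my own arithmetic, not from the text): since every galaxy at distance \(d\) recedes at \(v = H_{0}d\), the time since all of them were together is about \(d/v = 1/H_{0}\), independent of which galaxy you pick.

```python
# Rough age estimate from the expansion rate: t ~ 1/H0.
H0 = 20.0                          # (km/s) per Mly
km_per_Mly = 9.46e12 * 1e6         # kilometres in one million light years
seconds_per_year = 3.156e7

t_seconds = km_per_Mly / H0        # time for any galaxy to reach its present distance
print(f"{t_seconds / seconds_per_year:.1e} years")   # ~1.5e10 y, about 15 billion years
```

The result, roughly 15 billion years, brackets the 13 to 15 billion year range quoted above; a more careful estimate must account for how the expansion rate has changed over time.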
The Russian-born American physicist George Gamow (1904–1968) was among the first to note that, if there was a Big Bang, the remnants of the primordial fireball should still be evident and should be blackbody radiation. Since the radiation from this fireball has been traveling to us since shortly after the Big Bang, its wavelengths should be greatly stretched. It will look as if the fireball has cooled in the billions of years since the Big Bang. Gamow and collaborators predicted in the late 1940s that there should be blackbody radiation from the explosion filling space with a characteristic temperature of about 7 K. Such blackbody radiation would have its peak intensity in the microwave part of the spectrum. (See Figure \(\PageIndex{6}\).) In 1964, Arno Penzias and Robert Wilson, two American scientists working with Bell Telephone Laboratories on a low-noise radio antenna, detected the radiation and eventually recognized it for what it is.
Figure \(\PageIndex{6}\)b shows the spectrum of this microwave radiation that permeates space and is of cosmic origin. It is the most perfect blackbody spectrum known, and the temperature of the fireball remnant is determined from it to be \(2.725 \pm 0.002 K\). The detection of what is now called the cosmic microwave background radiation (CMBR) was so important (generally considered as important as Hubble’s detection that the galactic red shift is proportional to distance) that virtually every scientist has accepted the expansion of the universe as fact. Penzias and Wilson shared the 1978 Nobel Prize in Physics for their discovery.
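The claim that this radiation peaks in the microwave region can be checked with Wien’s displacement law, \(\lambda_{max} = b/T\), a standard blackbody result that is assumed here rather than derived in this section. A minimal Python sketch:

```python
# Peak wavelength of blackbody radiation from Wien's displacement law, lambda_max = b / T.
# The constant b = 2.898e-3 m*K is standard; the law itself is assumed, not derived here.
b = 2.898e-3  # m*K

def peak_wavelength_mm(T_kelvin):
    """Wavelength of peak blackbody intensity, in millimeters."""
    return b / T_kelvin * 1e3

print(peak_wavelength_mm(7.0))    # ~0.41 mm for Gamow's predicted ~7 K fireball
print(peak_wavelength_mm(2.725))  # ~1.06 mm for the measured CMBR temperature
# Both wavelengths lie in the microwave part of the spectrum.
```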
Making Connections: Cosmology and Particle Physics
There are many connections of cosmology -- by definition involving physics on the largest scale -- with particle physics -- by definition physics on the smallest scale. Among these are the dominance of matter over antimatter, the nearly perfect uniformity of the cosmic microwave background, and the mere existence of galaxies.
Matter versus antimatter
We know from direct observation that antimatter is rare. The Earth and the solar system are nearly pure matter. Space probes and cosmic rays give direct evidence -- the landing of the Viking probes on Mars would have been spectacular explosions of mutual annihilation energy if Mars were antimatter. We also know that most of the universe is dominated by matter. This is proven by the lack of annihilation radiation coming to us from space, particularly the relative absence of 0.511-MeV \(\gamma\) rays created by the mutual annihilation of electrons and positrons. It seemed possible that there could be entire solar systems or galaxies made of antimatter in perfect symmetry with our matter-dominated systems. But the interactions between stars and galaxies would sometimes bring matter and antimatter together in large amounts. The annihilation radiation they would produce is simply not observed. Antimatter in nature is created in particle collisions and in \(\beta ^{+}\) decays, but only in small amounts that quickly annihilate, leaving almost pure matter surviving.
Particle physics seems symmetric in matter and antimatter. Why isn’t the cosmos? The answer is that particle physics is not quite perfectly symmetric in this regard. The decay of one of the neutral \(K\)-mesons, for example, preferentially creates more matter than antimatter. This is caused by a fundamental small asymmetry in the basic forces. This small asymmetry produced slightly more matter than antimatter in the early universe. If there was only one part in \(10^{9}\) more matter (a small asymmetry), the rest would annihilate pair for pair, leaving nearly pure matter to form the stars and galaxies we see today. So the vast number of stars we observe may be only a tiny remnant of the original matter created in the Big Bang. Here at last we see a very real and important asymmetry in nature. Rather than be disturbed by an asymmetry, most physicists are impressed by how small it is. Furthermore, if the universe were completely symmetric, the mutual annihilation would be more complete, leaving far less matter to form us and the universe we know.
How can something so old have so few wrinkles?
A troubling aspect of cosmic microwave background radiation (CMBR) was soon recognized. True, the CMBR verified the Big Bang, had the correct temperature, and had a blackbody spectrum as expected. But the CMBR was too smooth -- it looked identical in every direction. Galaxies and other similar entities could not be formed without the existence of fluctuations in the primordial stages of the universe and so there should be hot and cool spots in the CMBR, nicknamed wrinkles, corresponding to dense and sparse regions of gas caused by turbulence or early fluctuations. Over time, dense regions would contract under gravity and form stars and galaxies. Why aren’t the fluctuations there? (This is a good example of an answer producing more questions.) Furthermore, galaxies are observed very far from us, so that they formed very long ago. The problem was to explain how galaxies could form so early and so quickly after the Big Bang if its remnant fingerprint is perfectly smooth. The answer is that if you look very closely, the CMBR is not perfectly smooth, only extremely smooth.
A satellite called the Cosmic Background Explorer (COBE) carried an instrument that made very sensitive and accurate measurements of the CMBR. In April of 1992, there was extraordinary publicity of COBE’s first results -- there were small fluctuations in the CMBR. Further measurements were carried out by experiments including NASA’s Wilkinson Microwave Anisotropy Probe (WMAP), which launched in 2001. Data from WMAP provided a much more detailed picture of the CMBR fluctuations. (See Figure \(\PageIndex{6}\).) These amount to temperature fluctuations of only \(200 \mu K\) out of 2.7 K, better than one part in 1000. The WMAP experiment was followed up by the European Space Agency’s Planck Surveyor, which launched in 2009.
Let us now examine the various stages of the overall evolution of the universe from the Big Bang to the present, illustrated in Figure \(\PageIndex{8}\). Note that scientific notation is used to encompass the many orders of magnitude in time, energy, temperature, and size of the universe. Going back in time, the two lines approach but do not cross (there is no zero on an exponential scale). Rather, they extend indefinitely in ever-smaller time intervals to some infinitesimal point.
Going back in time is equivalent to what would happen if expansion stopped and gravity pulled all the galaxies together, compressing and heating all matter. At a time long ago, the temperature and density were too high for stars and galaxies to exist. Before then, there was a time when the temperature was too great for atoms to exist. And farther back yet, there was a time when the temperature and density were so great that nuclei could not exist. Even farther back in time, the temperature was so high that average kinetic energy was great enough to create short-lived particles, and the density was high enough to make this likely. When we extrapolate back to the point of \(W^{\pm}\) and \(Z^{0}\) production (thermal energies reaching 1 TeV, or a temperature of about \(10^{15} K\)), we reach the limits of what we know directly about particle physics. This is at a time about \(10^{-12}s\) after the Big Bang. While \(10^{-12}s\) may seem to be negligibly close to the instant of creation, it is not. There are important stages before this time that are tied to the unification of forces. At those stages, the universe was at extremely high energies and average particle separations were smaller than we can achieve with accelerators. What happened in the early stages before \(10^{-22} s\) is crucial to all later stages and is possibly discerned by observing present conditions in the universe. One of these is the smoothness of the CMBR.
Names are given to early stages representing key conditions. The stage before \(10^{-11} s\) back to \(10^{-34} s\) is called the electroweak epoch , because the electromagnetic and weak forces become identical for energies above about 100 GeV. As discussed earlier, theorists expect that the strong force becomes identical to and thus unified with the electroweak force at energies of about \(10^{14} GeV\). The average particle energy would be this great at \(10^{-34} s\) after the Big Bang, if there are no surprises in the unknown physics at energies above about 1 TeV. At the immense energy of \(10^{14} GeV\) (corresponding to a temperature of about \(10^{26} K\)), the \(W^{\pm}\) and \(Z^{0}\) carrier particles would be transformed into massless gauge bosons to accomplish the unification. Before \(10^{-34} s\) back to about \(10^{-43} s\), we have Grand Unification in the GUT epoch , in which all forces except gravity are identical. At \(10^{-43} s\), the average energy reaches the immense \(10^{19} GeV\) needed to unify gravity with the other forces in TOE, the Theory of Everything. Before that time is the TOE epoch , but we have almost no idea as to the nature of the universe then, since we have no workable theory of quantum gravity. We call the hypothetical unified force superforce .
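The temperatures quoted for these epochs follow from the rough correspondence between average thermal energy and temperature, \(E \approx kT\). The sketch below is an order-of-magnitude estimate (an assumption, not a calculation taken from the text); it reproduces the quoted temperatures to within about a factor of ten.

```python
# Rough conversion between average particle energy and temperature, T ~ E / k_B.
k_B = 8.617e-5  # Boltzmann constant in eV/K

def temperature_K(energy_eV):
    """Temperature whose characteristic thermal energy k_B*T equals the given energy."""
    return energy_eV / k_B

print(f"{temperature_K(100e9):.1e} K")       # ~1e15 K at the 100-GeV electroweak scale
print(f"{temperature_K(1e14 * 1e9):.1e} K")  # ~1e27 K at the 1e14-GeV GUT scale
print(f"{temperature_K(1e19 * 1e9):.1e} K")  # ~1e32 K at the 1e19-GeV TOE scale
```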
Now let us imagine starting at TOE and moving forward in time to see what type of universe is created from various events along the way. As temperatures and average energies decrease with expansion, the universe reaches the stage where average particle separations are large enough to see differences between the strong and electroweak forces (at about \(10^{-35} s\)). After this time, the forces become distinct in almost all interactions -- they are no longer unified. This transition is an example of spontaneous symmetry breaking , in which conditions spontaneously evolved to a point where the forces were no longer unified, breaking that symmetry. This is analogous to a phase transition in the universe, and a clever proposal by American physicist Alan Guth in the early 1980s ties it to the smoothness of the CMBR. Guth proposed that spontaneous symmetry breaking (like a phase transition during cooling of normal matter) released an immense amount of energy that caused the universe to expand extremely rapidly for the brief time from \(10^{-35} s\) to about \(10^{-32} s\). This expansion may have been by an incredible factor of \(10^{50}\) or more in the size of the universe and is thus called the inflationary scenario . One result of this inflation is that it would stretch the wrinkles in the universe nearly flat, leaving an extremely smooth CMBR. While speculative, there is as yet no other plausible explanation for the smoothness of the CMBR. Unless the CMBR is not really cosmic but local in origin, the distances between regions of similar temperatures are too great for any coordination to have caused them, since any coordination mechanism must travel at the speed of light. Again, particle physics and cosmology are intimately entwined. There is little hope that we may be able to test the inflationary scenario directly, since it occurs at energies near \(10^{14} GeV\), vastly greater than the limits of modern accelerators. But the idea is so attractive that it is incorporated into most cosmological theories.
Characteristics of the present universe may help us determine the validity of this intriguing idea. Additionally, the recent indications that the universe’s expansion rate may be increasing (see " Dark Matter and Closure ") could even imply that we are in another inflationary epoch.
It is important to note that, if conditions such as those found in the early universe could be created in the laboratory, we would see the unification of forces directly today. The forces have not changed in time, but the average energy and separation of particles in the universe have. As discussed in " The Four Basic Forces ," the four basic forces in nature are distinct under most circumstances found today. The early universe and its remnants provide evidence from times when they were unified under most circumstances.
Summary
- Cosmology is the study of the character and evolution of the universe.
- The two most important features of the universe are the cosmological red shifts of its galaxies being proportional to distance and its cosmic microwave background (CMBR). Both support the notion that there was a gigantic explosion, known as the Big Bang, that created the universe.
- Galaxies farther away than our local group have, on average, a recessional velocity given by \(v = H_{0}d\), where \(d\) is the distance to the galaxy and \(H_{0}\) is the Hubble constant, taken to have the average value \(H_{0} = 20 (km/s)/Mly\).
- Explanations of the large-scale characteristics of the universe are intimately tied to particle physics.
- The dominance of matter over antimatter and the smoothness of the CMBR are two characteristics that are tied to particle physics.
- The epochs of the universe are known back to very shortly after the Big Bang, based on known laws of physics.
- The earliest epochs are tied to the unification of forces, with the electroweak epoch being partially understood, the GUT epoch being speculative, and the TOE epoch being highly speculative since it involves an unknown single superforce.
- The transition from GUT to electroweak is called spontaneous symmetry breaking. It released energy that caused the inflationary scenario, which in turn explains the smoothness of the CMBR.
Glossary
- Big Bang
- a gigantic explosion that threw out matter some 13 to 15 billion years ago
- cosmic microwave background
- the spectrum of microwave radiation of cosmic origin
- cosmological red shift
- the photon wavelength is stretched in transit from the source to the observer because of the expansion of space itself
- cosmology
- the study of the character and evolution of the universe
- electroweak epoch
- the stage before \(10^{-11}\) s back to \(10^{-34}\) s after the Big Bang
- GUT epoch
- the time period from \(10^{-43}\) s to \(10^{-34}\) s after the Big Bang, when Grand Unification Theory, in which all forces except gravity are identical, governed the universe
- Hubble constant
- a central concept in cosmology whose value is determined by taking the slope of a graph of velocity versus distance, obtained from red shift measurements
- inflationary scenario
- the rapid expansion of the universe by an incredible factor of \(10^{50}\) or more for the brief time from \(10^{-35}\) s to about \(10^{-32}\) s
- spontaneous symmetry breaking
- the transition from GUT to electroweak where the forces were no longer unified
- superforce
- hypothetical unified force in TOE epoch
- TOE epoch
- the period before \(10^{-43}\) s after the Big Bang
34.2: General Relativity and Quantum Gravity
Learning Objectives
By the end of this section, you will be able to:
- Explain the effect of gravity on light.
- Discuss black holes.
- Explain quantum gravity.
When we talk of black holes or the unification of forces, we are actually discussing aspects of general relativity and quantum gravity. We know from " Special Relativity " that relativity is the study of how different observers measure the same event, particularly if they move relative to one another. Einstein’s theory of general relativity describes all types of relative motion including accelerated motion and the effects of gravity. General relativity encompasses special relativity and classical relativity in situations where acceleration is zero and relative velocity is small compared with the speed of light. Many aspects of general relativity have been verified experimentally, some of which are better than science fiction in that they are bizarre but true. Quantum gravity is the theory that deals with particle exchange of gravitons as the mechanism for the force, and with extreme conditions where quantum mechanics and general relativity must both be used. A good theory of quantum gravity does not yet exist, but one will be needed to understand how all four forces may be unified. If we are successful, the theory of quantum gravity will encompass all others, from classical physics to relativity to quantum mechanics -- truly a Theory of Everything (TOE).
General Relativity
Einstein first considered the case of no observer acceleration when he developed the revolutionary special theory of relativity, publishing his first work on it in 1905. By 1916, he had laid the foundation of general relativity, again almost on his own. Much of what Einstein did to develop his ideas was to mentally analyze certain carefully and clearly defined situations -- that is, to perform a thought experiment . Figure \(\PageIndex{1}\) illustrates a thought experiment like the ones that convinced Einstein that light must fall in a gravitational field. Think about what a person feels in an elevator that is accelerated upward. It is identical to being in a stationary elevator in a gravitational field. The feet of a person are pressed against the floor, and objects released from hand fall with identical accelerations. In fact, it is not possible, without looking outside, to know what is happening -- acceleration upward or gravity. This led Einstein to correctly postulate that acceleration and gravity will produce identical effects in all situations. So, if acceleration affects light, then gravity will, too. Figure \(\PageIndex{1}\) shows the effect of acceleration on a beam of light shone horizontally at one wall. Since the accelerated elevator moves up during the time light travels across the elevator, the beam of light strikes low, seeming to the person to bend down. (Normally a tiny effect, since the speed of light is so great.) The same effect must occur due to gravity, Einstein reasoned, since there is no way to tell the effects of gravity acting downward from acceleration of the elevator upward. Thus gravity affects the path of light, even though we think of gravity as acting between masses, while photons are massless.
Einstein’s theory of general relativity got its first verification in 1919 when starlight passing near the Sun was observed during a solar eclipse. (See Figure \(\PageIndex{3}\).) During an eclipse, the sky is darkened and we can briefly see stars. Those in a line of sight nearest the Sun should have a shift in their apparent positions. Not only was this shift observed, but it agreed with Einstein’s predictions well within experimental uncertainties. This discovery created a scientific and public sensation. Einstein was now a folk hero as well as a very great scientist. The bending of light by matter is equivalent to a bending of space itself, with light following the curve. This is another radical change in our concept of space and time. It is also further evidence that anything with mass or energy (even massless photons, which carry energy) is affected by gravity.
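General relativity predicts a deflection angle \(\alpha = 4GM/ \left( c^{2}b \right)\) for light passing a mass \(M\) at impact parameter \(b\). The formula is not derived in this text, but the sketch below shows that it gives the roughly 1.75 arc seconds of bending at the Sun’s limb confirmed by the 1919 eclipse measurements.

```python
import math

# Deflection of starlight grazing the Sun: alpha = 4 G M / (c^2 b).
# This general-relativistic formula is standard but not derived in this text.
G = 6.674e-11      # m^3 / (kg s^2)
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg
R_sun = 6.963e8    # m, impact parameter for a ray grazing the solar limb

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = alpha_rad * (180 / math.pi) * 3600
print(f"{alpha_arcsec:.2f} arc seconds")  # about 1.75", as observed in 1919
```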
There are several current forefront efforts related to general relativity. One is the observation and analysis of gravitational lensing of light. Another is analysis of the definitive proof of the existence of black holes. Searches are also under way for the direct observation of gravitational waves, or moving wrinkles in space. Theoretical efforts are aimed as well at the possibility of time travel and wormholes into other parts of space due to black holes.
Gravitational lensing
As you can see in Figure \(\PageIndex{2}\), light is bent toward a mass, producing an effect much like a converging lens (large masses are needed to produce observable effects). On a galactic scale, the light from a distant galaxy could be “lensed” into several images when passing close by another galaxy on its way to Earth. Einstein predicted this effect, but he considered it unlikely that we would ever observe it. A number of cases of this effect have now been observed; one is shown in Figure \(\PageIndex{3}\). This effect is a much larger scale verification of general relativity. But such gravitational lensing is also useful in verifying that the red shift is proportional to distance. The red shift of the intervening galaxy is always less than that of the one being lensed, and each image of the lensed galaxy has the same red shift. This verification supplies more evidence that red shift is proportional to distance. Confidence that the multiple images are not different objects is bolstered by the observations that if one image varies in brightness over time, the others also vary in the same manner.
Black holes
Black holes are objects having such large gravitational fields that things can fall in, but nothing, not even light, can escape. Bodies, like the Earth or the Sun, have what is called an escape velocity . If an object moves straight up from the body, starting at the escape velocity, it will just be able to escape the gravity of the body. The greater the acceleration of gravity on the body, the greater is the escape velocity. As long ago as the late 1700s, it was proposed that if the escape velocity is greater than the speed of light, then light cannot escape. Pierre-Simon Laplace (1749–1827), the French astronomer and mathematician, even incorporated this idea of a dark star into his writings. But the idea was dropped after Young’s double slit experiment showed light to be a wave. For some time, light was thought not to have particle characteristics and, thus, could not be acted upon by gravity. The idea of a black hole was very quickly reincarnated in 1916 after Einstein’s theory of general relativity was published. It is now thought that black holes can form in the supernova collapse of a massive star, forming an object perhaps 10 km across and having a mass greater than that of our Sun. It is interesting that several prominent physicists who worked on the concept, including Einstein, firmly believed that nature would find a way to prohibit such objects.
Black holes are difficult to observe directly, because they are small and no light comes directly from them. In fact, no light comes from inside the event horizon , which is defined to be at a distance from the object at which the escape velocity is exactly the speed of light. The radius of the event horizon is known as the Schwarzschild radius \(R_{S}\) and is given by
\[R_{S} = \frac{2GM}{c^{2}}, \label{1}\]
where \(G\) is the universal gravitational constant, \(M\) is the mass of the body, and \(c\) is the speed of light. The event horizon is the edge of the black hole and \(R_{S}\) is its radius (that is, the size of a black hole is twice \(R_{S}\)). Since \(G\) is small and \(c^{2}\) is large, you can see that black holes are extremely small, only a few kilometers for masses a little greater than the Sun’s. The object itself is inside the event horizon.
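A minimal sketch evaluating \(R_{S} = \frac{2GM}{c^{2}}\) for a few masses follows; the million-solar-mass entry is just an illustrative round number for a supermassive black hole, not a value taken from the text.

```python
# Schwarzschild radius R_S = 2 G M / c^2 for several masses.
G = 6.674e-11     # m^3 / (kg s^2)
c = 2.998e8       # m/s
M_sun = 1.989e30  # kg

def schwarzschild_radius_km(mass_kg):
    """Event-horizon radius in kilometers for the given mass."""
    return 2 * G * mass_kg / c**2 / 1e3

print(f"{schwarzschild_radius_km(M_sun):.1f} km")        # ~3 km for one solar mass
print(f"{schwarzschild_radius_km(10 * M_sun):.1f} km")   # ~30 km for a 10-solar-mass black hole
print(f"{schwarzschild_radius_km(1e6 * M_sun):.2e} km")  # illustrative supermassive example
```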
Physics near a black hole is fascinating. Gravity increases so rapidly that, as you approach a black hole, the tidal effects tear matter apart, with matter closer to the hole being pulled in with much more force than that only slightly farther away. This can pull a companion star apart and heat inflowing gases to the point of producing X rays. (See Figure \(\PageIndex{4}\).) We have observed X rays from certain binary star systems that are consistent with such a picture. This is not quite proof of black holes, because the X rays could also be caused by matter falling onto a neutron star. These objects were first discovered in 1967 by the British astrophysicists Jocelyn Bell and Anthony Hewish. A neutron star is literally a star composed of neutrons. Neutron stars are formed by the collapse of a star’s core in a supernova, during which electrons and protons are forced together to form neutrons (the reverse of neutron \(\beta\) decay). Neutron stars are slightly larger than a black hole of the same mass and will not collapse further because of resistance by the strong force. However, neutron stars cannot have a mass greater than about three solar masses or they must collapse to a black hole. With recent improvements in our ability to resolve small details, such as with the orbiting Chandra X-ray Observatory, it has become possible to measure the masses of X-ray-emitting objects by observing the motion of companion stars and other matter in their vicinity. What has emerged is a plethora of X-ray-emitting objects too massive to be neutron stars. This evidence is considered conclusive and the existence of black holes is widely accepted. These black holes are concentrated near galactic centers.
We also have evidence that supermassive black holes may exist at the cores of many galaxies, including the Milky Way. Such a black hole might have a mass millions or even billions of times that of the Sun, and it would probably have formed when matter first coalesced into a galaxy billions of years ago. Supporting this is the fact that very distant galaxies are more likely to have abnormally energetic cores. Some of the moderately distant galaxies, and hence among the younger, are known as quasars and emit as much or more energy than a normal galaxy but from a region less than a light year across. Quasar energy outputs may vary in times less than a year, so that the energy-emitting region must be less than a light year across. The best explanation of quasars is that they are young galaxies with a supermassive black hole forming at their core, and that they become less energetic over billions of years. In closer superactive galaxies, we observe tremendous amounts of energy being emitted from very small regions of space, consistent with stars falling into a black hole at the rate of one or more a month. The Hubble Space Telescope (1994) observed an accretion disk in the galaxy M87 rotating rapidly around a region of extreme energy emission (Figure \(\PageIndex{5}\)). A jet of material being ejected perpendicular to the plane of rotation gives further evidence of a supermassive black hole as the engine.
Gravitational Waves
If a massive object distorts the space around it, like the foot of a water bug on the surface of a pond, then movement of the massive object should create waves in space like those on a pond. Gravitational waves are mass-created distortions in space that propagate at the speed of light and are predicted by general relativity. Since gravity is by far the weakest force, extreme conditions are needed to generate significant gravitational waves. Gravity near binary neutron star systems is so great that significant gravitational wave energy is radiated as the two neutron stars orbit one another. American astronomers Joseph Taylor and Russell Hulse measured changes in the orbit of such a binary neutron star system. They found its orbit to change precisely as predicted by general relativity, a strong indication of gravitational waves, and were awarded the 1993 Nobel Prize. But direct detection of gravitational waves on Earth would be conclusive. For many years, various attempts have been made to detect gravitational waves by observing vibrations induced in matter distorted by these waves. American physicist Joseph Weber pioneered this field in the 1960s, but no conclusive events have been observed. (No gravity wave detectors were in operation at the time of the 1987A supernova, unfortunately.) There are now several ambitious systems of gravitational wave detectors in use around the world. These include the LIGO (Laser Interferometer Gravitational Wave Observatory) system with two laser interferometer detectors, one in the state of Washington and another in Louisiana (Figure \(\PageIndex{6}\)) and the VIRGO facility in Italy, which has a single detector.
Quantum Gravity
Black holes radiate
Quantum gravity is important in those situations where gravity is so extremely strong that it has effects on the quantum scale, where the other forces are ordinarily much stronger. The early universe was such a place, but black holes are another. The first significant connection between gravity and quantum effects was made by the Russian physicist Yakov Zel’dovich in 1971, and other significant advances followed from the British physicist Stephen Hawking (Figure \(\PageIndex{7}\)). These two showed that black holes could radiate away energy by quantum effects just outside the event horizon (nothing can escape from inside the event horizon).
Black holes are, thus, expected to radiate energy and shrink to nothing, although extremely slowly for most black holes. The mechanism is the creation of a particle-antiparticle pair from energy in the extremely strong gravitational field near the event horizon. One member of the pair falls into the hole and the other escapes, conserving momentum (Figure \(\PageIndex{8}\)). When a black hole loses energy and, hence, rest mass, its event horizon shrinks, creating an even greater gravitational field. This increases the rate of pair production so that the process grows exponentially until the black hole is nuclear in size. A final burst of particles and \(\gamma\) rays ensues. This is an extremely slow process for black holes about the mass of the Sun (produced by supernovas) or larger ones (like those thought to be at galactic centers), taking on the order of \(10^{67}\) years or longer! Smaller black holes would evaporate faster, but they are only speculated to exist as remnants of the Big Bang. Searches for characteristic \(\gamma\)-ray bursts have produced events attributable to more mundane objects like neutron stars accreting matter.
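The \(10^{67}\)-year figure can be checked with the standard order-of-magnitude expression for the Hawking evaporation time, \(t \approx 5120 \pi G^{2} M^{3} / \left( \hbar c^{4} \right)\); the formula is assumed here rather than derived in this text.

```python
import math

# Order-of-magnitude Hawking evaporation time, t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4).
G = 6.674e-11     # m^3 / (kg s^2)
c = 2.998e8       # m/s
hbar = 1.055e-34  # J s
M_sun = 1.989e30  # kg
year = 3.156e7    # s

def evaporation_time_years(mass_kg):
    """Rough lifetime of an isolated black hole of the given mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / year

print(f"{evaporation_time_years(M_sun):.1e} years")  # ~2e67 yr, consistent with the estimate above
```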
Wormholes and time travel
The subject of time travel captures the imagination. Theoretical physicists, such as the American Kip Thorne, have treated the subject seriously, looking into the possibility that falling into a black hole could result in popping up in another time and place -- a trip through a so-called wormhole. Time travel and wormholes appear in innumerable science fiction dramatizations, but the consensus is that time travel is not possible in theory. While still debated, it appears that quantum gravity effects inside a black hole prevent time travel due to the creation of particle pairs. Direct evidence is elusive.
The shortest times
Theoretical studies indicate that, at extremely high energies and correspondingly early in the universe, quantum fluctuations may make time intervals meaningful only down to some finite time limit. Early work indicated that this might be the case for times as long as \(10^{-43}s\), the time at which all forces were unified. If so, then it would be meaningless to consider the universe at times earlier than this. Subsequent studies indicate that the crucial time may be as short as \(10^{-95}s\). But the point remains -- quantum gravity seems to imply that there is no such thing as a vanishingly short time. Time may, in fact, be grainy with no meaning to time intervals shorter than some tiny but finite size.
The future of quantum gravity
Not only is quantum gravity in its infancy, but no one knows how to get started on a theory of gravitons and unification of forces. The energies at which TOE should be valid may be so high (at least \(10^{19} GeV\)) and the necessary particle separation so small (less than \(10^{-35} m\)) that only indirect evidence can provide clues. For some time, the common lament of theoretical physicists was one so familiar to struggling students -- how do you even get started? But Hawking and others have made a start, and the approach many theorists have taken is called Superstring theory, the topic of "Superstrings."
Summary
- Einstein’s theory of general relativity includes accelerated frames and, thus, encompasses special relativity and gravity. Created by use of careful thought experiments, it has been repeatedly verified by real experiments.
- One direct result of this behavior of nature is the gravitational lensing of light by massive objects, such as galaxies, also seen in the microlensing of light by smaller bodies in our galaxy.
- Another prediction is the existence of black holes, objects for which the escape velocity is greater than the speed of light and from which nothing can escape.
- The event horizon is the distance from the object at which the escape velocity equals the speed of light \(c\). It is called the Schwarzschild radius \(R_{S}\) and is given by \(R_{S} = \frac{2GM}{c^{2}}\) where \(G\) is the universal gravitational constant, and \(M\) is the mass of the body.
- Physics is unknown inside the event horizon, and the possibility of wormholes and time travel are being studied.
- Candidates for black holes may power the extremely energetic emissions of quasars, distant objects that seem to be early stages of galactic evolution.
- Neutron stars are stellar remnants, having the density of a nucleus, that hint that black holes could form from supernovas, too.
- Gravitational waves are wrinkles in space, predicted by general relativity but not yet observed, caused by changes in very massive objects.
- Quantum gravity is an incompletely developed theory that strives to include general relativity, quantum mechanics, and unification of forces (thus, a TOE).
- One unconfirmed connection between general relativity and quantum mechanics is the prediction of characteristic radiation from just outside black holes.
Glossary
- black holes
- objects having such large gravitational fields that things can fall in, but nothing, not even light, can escape
- general relativity
- Einstein’s theory that describes all types of relative motion including accelerated motion and the effects of gravity
- gravitational waves
- mass-created distortions in space that propagate at the speed of light and that are predicted by general relativity
- escape velocity
- takeoff velocity when kinetic energy just cancels gravitational potential energy
- event horizon
- the distance from the object at which the escape velocity is exactly the speed of light
- neutron stars
- literally a star composed of neutrons
- Schwarzschild radius
- the radius of the event horizon
- thought experiment
- mental analysis of certain carefully and clearly defined situations to develop an idea
- quasars
- the moderately distant galaxies that emit as much or more energy than a normal galaxy
- Quantum gravity
- the theory that deals with particle exchange of gravitons as the mechanism for the force
34.3: Superstrings
Learning Objectives
By the end of this section, you will be able to:
- Define Superstring theory.
- Explain the relationship between Superstring theory and the Big Bang.
Superstring theory is an attempt to unify gravity with the other three forces and, thus, must contain quantum gravity. The main tenet of Superstring theory is that fundamental particles, including the graviton that carries the gravitational force, act like one-dimensional vibrating strings. Since gravity affects the time and space in which all else exists, Superstring theory is an attempt at a Theory of Everything (TOE). Each independent quantum number is thought of as a separate dimension in some super space (analogous to the fact that the familiar dimensions of space are independent of one another) and is represented by a different type of Superstring. As the universe evolved after the Big Bang and forces became distinct (spontaneous symmetry breaking), some of the dimensions of superspace are imagined to have curled up and become unnoticed.
Forces are expected to be unified only at extremely high energies and at particle separations on the order of \(10^{-35} m\). This could mean that Superstrings must have dimensions or wavelengths of this size or smaller. Just as quantum gravity may imply that there are no time intervals shorter than some finite value, it also implies that there may be no sizes smaller than some tiny but finite value. That may be about \(10^{-35} m\). If so, and if Superstring theory can explain all it strives to, then the structures of Superstrings are at the lower limit of the smallest possible size and can have no further substructure. This would be the ultimate answer to the question the ancient Greeks considered. There is a finite lower limit to space.
Not only is Superstring theory in its infancy, it deals with dimensions about 17 orders of magnitude smaller than the \(10^{-18} m\) details that we have been able to observe directly. It is thus relatively unconstrained by experiment, and there are a host of theoretical possibilities to choose from. This has led theorists to make choices subjectively (as always) on what is the most elegant theory, with less hope than usual that experiment will guide them. It has also led to speculation of alternate universes, with their Big Bangs creating each new universe with a random set of rules. These speculations may not be tested even in principle, since an alternate universe is by definition unattainable. It is something like exploring a self-consistent field of mathematics, with its axioms and rules of logic that are not consistent with nature. Such endeavors have often given insight to mathematicians and scientists alike and occasionally have been directly related to the description of new discoveries.
Summary
- Superstring theory holds that fundamental particles are one-dimensional vibrations analogous to those on strings and is an attempt at a theory of quantum gravity.
Glossary
- Superstring theory
- a theory to unify gravity with the other three forces in which the fundamental particles are considered to act like one-dimensional vibrating strings
34.4: Dark Matter and Closure
Learning Objectives
By the end of this section, you will be able to:
- Discuss the existence of dark matter.
- Explain neutrino oscillations and their consequences.
One of the most exciting problems in physics today is the fact that there is far more matter in the universe than we can see. The motion of stars in galaxies and the motion of galaxies in clusters imply that there is about 10 times as much mass as in the luminous objects we can see. The indirectly observed non-luminous matter is called dark matter . Why is dark matter a problem? For one thing, we do not know what it is. It may well be 90% of all matter in the universe, yet there is a possibility that it is of a completely unknown form -- a stunning discovery if verified. Dark matter has implications for particle physics. It may be possible that neutrinos actually have small masses or that there are completely unknown types of particles. Dark matter also has implications for cosmology, since there may be enough dark matter to stop the expansion of the universe. That is another problem related to dark matter -- we do not know how much there is. We keep finding evidence for more matter in the universe, and we have an idea of how much it would take to eventually stop the expansion of the universe, but whether there is enough is still unknown.
Evidence
The first clues that there is more matter than meets the eye came from the Swiss-born American astronomer Fritz Zwicky in the 1930s; some initial work was also done by the American astronomer Vera Rubin. Zwicky measured the velocities of stars orbiting the galaxy, using the relativistic Doppler shift of their spectra (see Figure \(\PageIndex{1}\)(a)). He found that velocity varied with distance from the center of the galaxy, as graphed in Figure \(\PageIndex{1}\)(b). If the mass of the galaxy was concentrated in its center, as are its luminous stars, the velocities should decrease as one over the square root of the distance from the center (\(v \propto 1/\sqrt{r}\)). Instead, the velocity curve is almost flat, implying that there is a tremendous amount of matter in the galactic halo. Although not immediately recognized for its significance, such measurements have now been made for many galaxies, with similar results. Further, studies of galactic clusters have also indicated that galaxies have a mass distribution greater than that obtained from their brightness (proportional to the number of stars), which also extends into large halos surrounding the luminous parts of galaxies. Observations at other EM wavelengths, such as radio waves and X rays, have similarly confirmed the existence of dark matter. Take, for example, X rays in the relatively dark space between galaxies, which indicate the presence of previously unobserved hot, ionized gas (see Figure \(\PageIndex{1}\)(c)).
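The sketch below contrasts the Keplerian expectation \(v = \sqrt{GM/r}\) for mass concentrated at the center with the nearly flat curves actually measured; the \(10^{11}\)-solar-mass central mass is a hypothetical round number chosen only for illustration.

```python
import math

# Keplerian rotation speed v = sqrt(G M / r) for a galaxy whose luminous mass is assumed
# to sit at the center. The mass and radii below are illustrative values, not data from the text.
G = 6.674e-11             # m^3 / (kg s^2)
M_sun = 1.989e30          # kg
kpc = 3.086e19            # m
M_central = 1e11 * M_sun  # assumed central (luminous) mass

def keplerian_speed_km_s(r_kpc):
    """Orbital speed in km/s at radius r_kpc if all mass were at the center."""
    return math.sqrt(G * M_central / (r_kpc * kpc)) / 1e3

for r in (5, 10, 20, 40):  # galactocentric radii in kpc
    print(f"r = {r:2d} kpc: Keplerian v = {keplerian_speed_km_s(r):5.0f} km/s")
# The predicted speed falls as 1/sqrt(r), whereas observed rotation curves stay roughly flat,
# implying mass that keeps growing with radius -- the dark matter halo.
```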
Theoretical Yearnings for Closure
Is the universe open or closed? That is, will the universe expand forever or will it stop, perhaps to contract? This, until recently, was a question of whether there is enough gravitation to stop the expansion of the universe. In the past few years, it has become a question of the combination of gravitation and what is called the cosmological constant . The cosmological constant was invented by Einstein to prohibit the expansion or contraction of the universe. At the time he developed general relativity, Einstein considered that an illogical possibility. The cosmological constant was discarded after Hubble discovered the expansion, but has been re-invoked in recent years.
Gravitational attraction between galaxies is slowing the expansion of the universe, but the amount of slowing down is not known directly. In fact, the cosmological constant can counteract gravity’s effect. As recent measurements indicate, the universe is expanding faster now than in the past -- perhaps a “modern inflationary era” in which the dark energy is thought to be causing the expansion of the present-day universe to accelerate. If the expansion rate were affected by gravity alone, we should be able to see that the expansion rate between distant galaxies was once greater than it is now. However, measurements show it was less than now. We can, however, calculate the amount of slowing based on the average density of matter we observe directly. Here we have a definite answer -- there is far less visible matter than needed to stop expansion. The critical density \(\rho_{c}\) is defined to be the density needed to just halt universal expansion in a universe with no cosmological constant. It is estimated to be about
\[\rho_{c} \approx 10^{-26} kg/m^{3}.\]
However, this estimate of \(\rho_{c}\) is only good to about a factor of two, due to uncertainties in the expansion rate of the universe. The critical density is equivalent to an average of only a few nucleons per cubic meter, remarkably small and indicative of how truly empty intergalactic space is. Luminous matter seems to account for roughly \(0.5 \%\) to \(2 \%\) of the critical density, far less than that needed for closure. Taking into account the amount of dark matter we detect indirectly and all other types of indirectly observed normal matter, there is only \(10 \%\) to \(40 \%\) of what is needed for closure. If we are able to refine the measurements of expansion rates now and in the past, we will have our answer regarding the curvature of space and we will determine a value for the cosmological constant to justify this observation. Finally, the most recent measurements of the CMBR have implications for the cosmological constant, so it is not simply a device concocted for a single purpose.
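The estimate of \(\rho_{c}\) can be reproduced from the standard Friedmann-model expression \(\rho_{c} = 3H_{0}^{2} / \left( 8 \pi G \right)\), which is not derived in this text; the sketch below also converts the result into nucleons per cubic meter.

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G), using H0 = 20 (km/s)/Mly as in this chapter.
G = 6.674e-11    # m^3 / (kg s^2)
Mly = 9.461e21   # meters in one million light years
H0 = 20e3 / Mly  # Hubble constant converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"{rho_c:.1e} kg/m^3")                   # ~1e-26 kg/m^3, matching the estimate above
print(f"{rho_c / 1.67e-27:.1f} nucleons/m^3")  # equivalent to only a few nucleons per cubic meter
```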
After the recent experimental discovery of the cosmological constant, most researchers feel that the universe should be just barely open. Since matter can be thought to curve the space around it, we call an open universe negatively curved . This means that you can in principle travel an unlimited distance in any direction. A universe that is closed is called positively curved. This means that if you travel far enough in any direction, you will return to your starting point, analogous to circumnavigating the Earth. In between these two is a flat (zero curvature) universe. The recent discovery of the cosmological constant has shown the universe is very close to flat, and will expand forever. Why do theorists feel the universe is flat? Flatness is a part of the inflationary scenario that helps explain the flatness of the microwave background. In fact, since general relativity implies that matter creates the space in which it exists, there is a special symmetry to a flat universe.
What Is the Dark Matter We See Indirectly?
There is no doubt that dark matter exists, but its form and the amount in existence are two facts that are still being studied vigorously. As always, we seek to explain new observations in terms of known principles. However, as more discoveries are made, it is becoming more and more difficult to explain dark matter as a known type of matter.
One of the possibilities for normal matter is being explored using the Hubble Space Telescope and employing the lensing effect of gravity on light (Figure \(\PageIndex{2}\)). Stars glow because of nuclear fusion in them, but planets are visible primarily by reflected light. Jupiter, for example, is too small to ignite fusion in its core and become a star, but we can see sunlight reflected from it, since we are relatively close. If Jupiter orbited another star, we would not be able to see it directly. The question is open as to how many planets or other bodies smaller than about 1/1000 the mass of the Sun there are. If such bodies pass between us and a star, they will not block the star’s light, being too small, but they will form a gravitational lens, as discussed in General Relativity and Quantum Gravity .
In a process called microlensing , light from the star is focused and the star appears to brighten in a characteristic manner. Searches for dark matter in this form concentrate on galactic halos because of the huge amount of mass that seems to be there. Such microlensing objects are thus called massive compact halo objects , or MACHOs. To date, a few MACHOs have been observed, but not predominantly in galactic halos, nor in the numbers needed to explain dark matter.
MACHOs are among the most conventional of unseen objects proposed to explain dark matter. Others being actively pursued are red dwarfs, which are small dim stars, but too few have been seen so far, even with the Hubble Telescope, to be of significance. Old remnants of stars called white dwarfs are also under consideration, since they contain about a solar mass, but are as small as the Earth and may dim to the point that we ordinarily do not observe them. While white dwarfs are known, old dim ones are not. Yet another possibility is the existence of large numbers of smaller than stellar mass black holes left from the Big Bang -- here evidence is entirely absent.
There is a very real possibility that dark matter is composed of the known neutrinos, which may have small, but finite, masses. As discussed earlier, neutrinos are thought to be massless, but we only have upper limits on their masses, rather than knowing they are exactly zero. So far, these upper limits come from difficult measurements of total energy emitted in the decays and reactions in which neutrinos are involved. There is an amusing possibility of proving that neutrinos have mass in a completely different way.
We have noted in Particles, Patterns, and Conservation Laws that there are three flavors of neutrinos (\(\nu_{e}\), \(\nu_{\mu}\), and \(\nu_{\tau}\)) and that the weak interaction could change quark flavor. It should also change neutrino flavor -- that is, any type of neutrino could change spontaneously into any other, a process called neutrino oscillations . However, this can occur only if neutrinos have a mass. Why? Crudely, because if neutrinos are massless, they must travel at the speed of light and time will not pass for them, so that they cannot change without an interaction. In 1999, results began to be published containing convincing evidence that neutrino oscillations do occur. Using the Super-Kamiokande detector in Japan, the oscillations have been observed and are being verified and further explored at present at the same facility and others.
Neutrino oscillations may also explain the low number of observed solar neutrinos. Detectors for observing solar neutrinos are specifically designed to detect electron neutrinos \(\nu_{e}\) produced in huge numbers by fusion in the Sun. A large fraction of electron neutrinos \(\nu_{e}\) may be changing flavor to muon neutrinos \(\nu_{\mu}\) on their way out of the Sun, possibly enhanced by specific interactions, reducing the flux of electron neutrinos to observed levels. There is also a discrepancy in observations of neutrinos produced in cosmic ray showers. While these showers of radiation produced by extremely energetic cosmic rays should contain twice as many \(\nu_{\mu}\)'s as \(\nu_{e}\)'s, their numbers are nearly equal. This may be explained by neutrino oscillations from muon flavor to electron flavor. Massive neutrinos are a particularly appealing possibility for explaining dark matter, since their existence is consistent with a large body of known information and explains more than dark matter. The question is not settled at this writing.
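For two neutrino flavors, the standard oscillation probability is \(P = \sin^{2} \left( 2\theta \right) \sin^{2} \left( 1.267 \, \Delta m^{2} L / E \right)\), with \(\Delta m^{2}\) in \(eV^{2}\), \(L\) in km, and \(E\) in GeV. This formula and the sample parameters in the sketch below are illustrative assumptions that go beyond the text, but they show why a nonzero mass difference is required for any flavor change.

```python
import math

# Two-flavor neutrino oscillation probability:
# P = sin^2(2*theta) * sin^2(1.267 * dm2 * L / E), dm2 in eV^2, L in km, E in GeV.
# The mixing angle and dm2 values below are illustrative, not taken from the text.
def oscillation_probability(theta_rad, dm2_eV2, L_km, E_GeV):
    return math.sin(2 * theta_rad)**2 * math.sin(1.267 * dm2_eV2 * L_km / E_GeV)**2

print(oscillation_probability(math.pi / 4, 2.5e-3, 500, 1.0))  # large flavor-change probability
print(oscillation_probability(math.pi / 4, 0.0,    500, 1.0))  # exactly zero if the masses are equal (e.g., both zero)
```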
The most radical proposal to explain dark matter is that it consists of previously unknown leptons (sometimes obtusely referred to as non-baryonic matter). These are called weakly interacting massive particles , or WIMPs , and would also be chargeless, thus interacting negligibly with normal matter, except through gravitation. One proposed group of WIMPs would have masses several orders of magnitude greater than nucleons and are sometimes called neutralinos . Others are called axions and would have masses about \(10^{-10}\) that of an electron mass. Both neutralinos and axions would be gravitationally attached to galaxies, but because they are chargeless and only feel the weak force, they would be in a halo rather than interact and coalesce into spirals, and so on, like normal matter (Figure \(\PageIndex{3}\)).
Some particle theorists have built WIMPs into their unified force theories and into the inflationary scenario of the evolution of the universe so popular today. These particles would have been produced in just the correct numbers to make the universe flat, shortly after the Big Bang. The proposal is radical in the sense that it invokes entirely new forms of matter, in fact two entirely new forms, in order to explain dark matter and other phenomena. WIMPs have the extra burden of automatically being very difficult to observe directly. This is somewhat analogous to quark confinement, which guarantees that quarks are there, but they can never be seen directly. One of the primary goals of the LHC at CERN, however, is to produce and detect WIMPs. At any rate, before WIMPs are accepted as the best explanation, all other possibilities utilizing known phenomena will have to be shown inferior. Should that occur, we will be in the unanticipated position of admitting that, to date, all we know is only 10% of what exists. A far cry from the days when people firmly believed themselves to be not only the center of the universe, but also the reason for its existence.
Summary
- Dark matter is non-luminous matter detected in and around galaxies and galactic clusters.
- It may be 10 times the mass of the luminous matter in the universe, and its amount may determine whether the universe is open or closed (expands forever or eventually stops).
- The determining factor is the critical density of the universe and the cosmological constant, a theoretical construct intimately related to the expansion and closure of the universe.
- The critical density \(\rho_{c}\) is the density needed to just halt universal expansion. It is estimated to be approximately \(10^{-26} kg/m^{3}\).
- An open universe is negatively curved, a closed universe is positively curved, whereas a universe with exactly the critical density is flat.
- Dark matter’s composition is a major mystery, but it may be due to the suspected mass of neutrinos or a completely unknown type of leptonic matter.
- If neutrinos have mass, they will change families, a process known as neutrino oscillations, for which there is growing evidence.
Glossary
- axions
- a type of WIMPs having masses about \(10^{-10}\) of an electron mass
- cosmological constant
- a theoretical construct intimately related to the expansion and closure of the universe
- critical density
- the density of matter needed to just halt universal expansion
- dark matter
- indirectly observed non-luminous matter
- flat (zero curvature) universe
- a universe that is infinite but not curved
- microlensing
- a process in which light from a distant star is focused and the star appears to brighten in a characteristic manner, when a small body (smaller than about 1/1000 the mass of the Sun) passes between us and the star
- MACHOs
- massive compact halo objects; microlensing objects of huge mass
- neutrino oscillations
- a process in which any type of neutrino could change spontaneously into any other
- neutralinos
- a type of WIMPs having masses several orders of magnitude greater than nucleon masses
- negatively curved
- an open universe that expands forever
- positively curved
- a universe that is closed and eventually contracts
- WIMPs
- weakly interacting massive particles; chargeless leptons (non-baryonic matter) interacting negligibly with normal matter
34.5: Complexity and Chaos
Learning Objectives
By the end of this section, you will be able to:
- Explain complex systems.
- Discuss chaotic behavior of different systems.
Much of what impresses us about physics is related to the underlying connections and basic simplicity of the laws we have discovered. The language of physics is precise and well defined because many basic systems we study are simple enough that we can perform controlled experiments and discover unambiguous relationships. Our most spectacular successes, such as the prediction of previously unobserved particles, come from the simple underlying patterns we have been able to recognize. But there are systems of interest to physicists that are inherently complex. The simple laws of physics apply, of course, but complex systems may reveal patterns that simple systems do not. The emerging field of complexity is devoted to the study of complex systems, including those outside the traditional bounds of physics. Of particular interest is the ability of complex systems to adapt and evolve.
What are some examples of complex adaptive systems? One is the primordial ocean. When the oceans first formed, they were a random mix of elements and compounds that obeyed the laws of physics and chemistry. In a relatively short geological time (about 500 million years), life had emerged. Laboratory simulations indicate that the emergence of life was far too fast to have come from random combinations of compounds, even if driven by lightning and heat. There must be an underlying ability of the complex system to organize itself, resulting in the self-replication we recognize as life. Living entities, even at the unicellular level, are highly organized and systematic. Systems of living organisms are themselves complex adaptive systems. The grandest of these evolved into the biological system we have today, leaving traces in the geological record of steps taken along the way.
Complexity as a discipline examines complex systems, how they adapt and evolve, looking for similarities with other complex adaptive systems. Can, for example, parallels be drawn between biological evolution and the evolution of economic systems ? Economic systems do emerge quickly, they show tendencies for self-organization, they are complex (in the number and types of transactions), and they adapt and evolve. Biological systems do all the same types of things. There are other examples of complex adaptive systems being studied for fundamental similarities. Cultures show signs of adaptation and evolution. The comparison of different cultural evolutions may bear fruit as well as comparisons to biological evolution. Science also is a complex system of human interactions, like culture and economics, that adapts to new information and political pressure, and evolves, usually becoming more organized rather than less. Those who study creative thinking also see parallels with complex systems. Humans sometimes organize almost random pieces of information, often subconsciously while doing other things, and come up with brilliant creative insights. The development of language is another complex adaptive system that may show similar tendencies. Artificial intelligence is an overt attempt to devise an adaptive system that will self-organize and evolve in the same manner as an intelligent living being learns. These are a few of the broad range of topics being studied by those who investigate complexity. There are now institutes, journals, and meetings, as well as popularizations of the emerging topic of complexity.
In traditional physics, the discipline of complexity may yield insights in certain areas. Thermodynamics treats systems on the average, while statistical mechanics deals in some detail with complex systems of atoms and molecules in random thermal motion. Yet there is organization, adaptation, and evolution in those complex systems. Non-equilibrium phenomena, such as heat transfer and phase changes, are characteristically complex in detail, and new approaches to them may evolve from complexity as a discipline. Crystal growth is another example of self-organization spontaneously emerging in a complex system. Alloys are also inherently complex mixtures that show certain simple characteristics implying some self-organization. The organization of iron atoms into magnetic domains as they cool is another. Perhaps insights into these difficult areas will emerge from complexity. But at the minimum, the discipline of complexity is another example of human effort to understand and organize the universe around us, partly rooted in the discipline of physics.
A predecessor to complexity is the topic of chaos, which has been widely publicized and has become a discipline of its own. It is also based partly in physics and treats broad classes of phenomena from many disciplines. Chaos is a word used to describe systems whose outcomes are extremely sensitive to initial conditions. The orbit of the planet Pluto, for example, may be chaotic in that it can change tremendously due to small interactions with other planets. This makes its long-term behavior impossible to predict with precision, just as we cannot tell precisely where a decaying Earth satellite will land or how many pieces it will break into. But the discipline of chaos has found ways to deal with such systems and has been applied to apparently unrelated systems. For example, the heartbeat of people with certain types of potentially lethal arrhythmias seems to be chaotic, and this knowledge may allow more sophisticated monitoring and recognition of the need for intervention.
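This sensitivity to initial conditions is easy to demonstrate numerically. The short Python sketch below is an added illustration, not part of the original text: it iterates the logistic map \(x \rightarrow rx(1-x)\) with \(r = 4\) (a standard textbook example of a chaotic map, chosen here only for simplicity) from two starting values that differ by one part in a billion, and within a few dozen steps the two trajectories bear no resemblance to each other.

```python
# Added illustration: sensitivity to initial conditions in the logistic map,
# x -> r * x * (1 - x), which is chaotic for r = 4.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map and return the list of values, starting at x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)   # initial difference of 1e-9

for step in (0, 10, 20, 30, 40, 50):
    # The gap grows roughly exponentially until it is as large as the values themselves.
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.2e}")
```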
Chaos is related to complexity. Some chaotic systems are also inherently complex: turbulent vortices in a fluid, for example, are both complex and chaotic, whereas a double pendulum is chaotic but simple. Both are unpredictable in a way that other systems are not. But there can be organization in chaos, and it can also be quantified. Examples of chaotic systems are beautiful fractal patterns such as in Figure \(\PageIndex{1}\). Some chaotic systems exhibit self-organization, a type of stable chaos. The orbits of the planets in our solar system, for example, may be chaotic (we are not certain yet). But they are definitely organized and systematic, with a simple formula describing the orbital radii of the first eight planets and the asteroid belt. Large-scale vortices in Jupiter’s atmosphere are chaotic, but the Great Red Spot is a stable self-organization of rotational energy (Figure \(\PageIndex{2}\)). The Great Red Spot has been in existence for at least 400 years and is a complex self-adaptive system.
The emerging field of complexity, like the now almost traditional field of chaos, is partly rooted in physics. Both attempt to see similar systematics in a very broad range of phenomena and, hence, generate a better understanding of them. Time will tell what impact these fields have on more traditional areas of physics as well as on the other disciplines they relate to.
Summary
- Complexity is an emerging field, rooted primarily in physics, that considers complex adaptive systems and their evolution, including self-organization.
- Complexity has applications in physics and many other disciplines, such as biological evolution.
- Chaos is a field that studies systems whose properties depend extremely sensitively on some variables and whose evolution is impossible to predict.
- Chaotic systems may be simple or complex.
- Studies of chaos have led to methods for understanding and predicting certain chaotic behaviors.
Glossary
- complexity
- an emerging field devoted to the study of complex systems
- chaos
- word used to describe systems the outcomes of which are extremely sensitive to initial conditions
|
libretexts
|
2025-03-17T19:53:50.981186
| 2016-07-24T09:18:34 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/34%3A_Frontiers_of_Physics/34.05%3A_Complexity_and_Chaos",
"book_url": "https://commons.libretexts.org/book/phys-1419",
"title": "34.5: Complexity and Chaos",
"author": "OpenStax"
}
|
https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/34%3A_Frontiers_of_Physics/34.06%3A_High-temperature_Superconductors
|
34.6: High-temperature Superconductors
Learning Objectives
By the end of this section, you will be able to:
- Identify superconductors and their uses.
- Discuss the need for a high-\(T_{c}\) superconductor.
Superconductors are materials with a resistivity of zero. They are familiar to the general public because of their practical applications and have been mentioned at a number of points in the text. Because the resistance of a piece of superconductor is zero, there are no heat losses for currents through them; they are used in magnets needing high currents, such as in MRI machines, and could cut energy losses in power transmission. But most superconductors must be cooled to temperatures only a few kelvin above absolute zero, a costly procedure limiting their practical applications. In the past decade, tremendous advances have been made in producing materials that become superconductors at relatively high temperatures. There is hope that room temperature superconductors may someday be manufactured.
Superconductivity was discovered accidentally in 1911 by the Dutch physicist H. Kamerlingh Onnes (1853–1926) when he used liquid helium to cool mercury. Onnes had been the first person to liquefy helium a few years earlier and was surprised to observe the resistivity of a mediocre conductor like mercury drop to zero at a temperature of 4.2 K. We define the temperature at which and below which a material becomes a superconductor to be its critical temperature, denoted by \(T_{c}\). (See Figure \(\PageIndex{1}\).) Progress in understanding how and why a material becomes a superconductor was relatively slow, with the first workable theory coming in 1957. Certain other elements were also found to become superconductors, but all had \(T_{c}\)'s less than \(10 K\), temperatures that are expensive to maintain. Although Onnes received a Nobel prize in 1913, it was primarily for his work with liquid helium.
In 1986, a breakthrough was announced -- a ceramic compound was found to have an unprecedented \(T_{c}\) of 35 K. It looked as if much higher critical temperatures could be possible, and by early 1988 another ceramic (this one containing thallium, calcium, barium, copper, and oxygen) had been found to have \(T_{c} = 125 K\) (Figure \(\PageIndex{2}\)). The economic potential of perfect conductors saving electric energy is immense for \(T_{c}\)'s above \(77 K\), since that is the temperature of liquid nitrogen. Although liquid helium has a boiling point of \(4 K\) and can be used to make materials superconducting, it costs about $5 per liter. Liquid nitrogen boils at \(77 K\), but only costs about $0.30 per liter. There was general euphoria at the discovery of these complex ceramic superconductors, but this soon subsided with the sobering difficulty of forming them into usable wires. The first commercial use of a high-temperature superconductor is in an electronic filter for cellular phones. High-temperature superconductors are used in experimental apparatus, and they are actively being researched, particularly in thin-film applications.
The search is on for even higher \(T_{c}\) superconductors, many of them complex and exotic copper oxide ceramics, sometimes including strontium, mercury, or yttrium as well as barium, calcium, and other elements. Room temperature (about \(293 K\)) would be ideal, but any temperature close to room temperature would be relatively cheap to produce and maintain. There are persistent reports of \(T_{c}\)'s over \(200 K\) and some in the vicinity of \(270 K\). Unfortunately, these observations are not routinely reproducible, with samples losing their superconducting nature once heated and recooled (cycled) a few times (see Figure \(\PageIndex{3}\)). They are now called USOs or unidentified superconducting objects, out of frustration and the refusal of some samples to show high \(T_{c}\) even though produced in the same manner as others. Reproducibility is crucial to discovery, and researchers are justifiably reluctant to claim the breakthrough they all seek. Time will tell whether USOs are real or an experimental quirk.
The theory of ordinary superconductors is difficult, involving quantum effects for widely separated electrons traveling through a material. Electrons couple in a manner that allows them to get through the material without losing energy to it, making it a superconductor. High-\(T_{c}\) superconductors are more difficult to understand theoretically, but theorists seem to be closing in on a workable theory. The difficulty of understanding how electrons can sneak through materials without losing energy in collisions is even greater at higher temperatures, where vibrating atoms should get in the way. Discoverers of high \(T_{c}\) may feel something analogous to what a politician once said upon an unexpected election victory -- "I wonder what we did right?"
Summary
- High-temperature superconductors are materials that become superconducting at temperatures well above a few kelvin.
- The critical temperature \(T_{c}\) is the temperature below which a material is superconducting.
- Some high-temperature superconductors have verified \(T_{c}\)'s above \(125 K\), and there are reports of \(T_{c}\)'s as high as \(250 K\).
Glossary
- Superconductors
- materials with resistivity of zero
- critical temperature
- the temperature at which and below which a material becomes a superconductor
|
libretexts
|
2025-03-17T19:53:51.045156
| 2016-07-24T09:19:13 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/34%3A_Frontiers_of_Physics/34.06%3A_High-temperature_Superconductors",
"book_url": "https://commons.libretexts.org/book/phys-1419",
"title": "34.6: High-temperature Superconductors",
"author": "OpenStax"
}
|
https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/34%3A_Frontiers_of_Physics/34.07%3A_Some_Questions_We_Know_to_Ask
|
34.7: Some Questions We Know to Ask
Learning Objectives
By the end of this section, you will be able to:
- Identify sample questions to be asked on the largest scales.
- Identify sample questions to be asked on the intermediate scale.
- Identify sample questions to be asked on the smallest scales.
Throughout the text we have noted how essential it is to be curious and to ask questions in order to first understand what is known, and then to go a little farther. Some questions may go unanswered for centuries; others may not have answers, but some bear delicious fruit. Part of discovery is knowing which questions to ask. You have to know something before you can even phrase a decent question. As you may have noticed, the mere act of asking a question can give you the answer. The following questions are a sample of those physicists now know to ask and are representative of the forefronts of physics. Although these questions are important, they will be replaced by others if answers are found to them. The fun continues.
On the Largest Scale
- Is the universe open or closed? Theorists would like it to be just barely closed and evidence is building toward that conclusion. Recent measurements in the expansion rate of the universe and in CMBR support a flat universe. There is a connection to small-scale physics in the type and number of particles that may contribute to closing the universe.
- What is dark matter? It is definitely there, but we really do not know what it is. Conventional possibilities are being ruled out, but one of them still may explain it. The answer could reveal whole new realms of physics and the disturbing possibility that most of what is out there is unknown to us, a completely different form of matter.
- How do galaxies form? They have existed since very early in the evolution of the universe, and it remains difficult to understand how they evolved so quickly. The recent finer measurements of fluctuations in the CMBR may yet allow us to explain galaxy formation.
- What is the nature of various-mass black holes? Only recently have we become confident that many black hole candidates cannot be explained by other, less exotic possibilities. But we still do not know much about how they form, what their role in the history of galactic evolution has been, and the nature of space in their vicinity. However, so many black holes are now known that correlations between black hole mass and galactic nuclei characteristics are being studied.
- What is the mechanism for the energy output of quasars? These distant and extraordinarily energetic objects now seem to be early stages of galactic evolution with a supermassive black hole devouring material. Connections are now being made with galaxies having energetic cores, and there is evidence consistent with supermassive black holes, consuming less material, at the centers of older galaxies. New instruments are allowing us to see deeper into our own galaxy for evidence of our own massive black hole.
- Where do the \(\gamma\) bursts come from? We see bursts of \(\gamma\) rays coming from all directions in space, indicating the sources are very distant objects rather than something associated with our own galaxy. Some \(\gamma\) bursts finally are being correlated with known sources so that the possibility they may originate in binary neutron star interactions or black holes eating a companion neutron star can be explored.
On the Intermediate Scale
- How do phase transitions take place on the microscopic scale? We know a lot about phase transitions, such as water freezing, but the details of how they occur molecule by molecule are not well understood. Similar questions about specific heat a century ago led to early quantum mechanics. It is also an example of a complex adaptive system that may yield insights into other self-organizing systems.
- Is there a way to deal with nonlinear phenomena that reveals underlying connections? Nonlinear phenomena lack a direct or linear proportionality that makes analysis and understanding a little easier. There are implications for nonlinear optics and broader topics such as chaos.
- How do high-\(T_{c}\) superconductors become resistanceless at such high temperatures? Understanding how they work may help make them more practical or may result in surprises as unexpected as the discovery of superconductivity itself.
- There are magnetic effects in materials we do not understand -- how do they work? Although beyond the scope of this text, there is a great deal to learn in condensed matter physics (the physics of solids and liquids). We may find surprises analogous to lasing, the quantum Hall effect, and the quantization of magnetic flux. Complexity may play a role here, too.
On the Smallest Scale
- Are quarks and leptons fundamental, or do they have a substructure? The higher-energy accelerators that have just been completed or are under construction may supply some answers, but there will also be input from cosmology and other systematics.
- Why do leptons have integral charge while quarks have fractional charge? If both are fundamental and analogous as thought, this question deserves an answer. It is obviously related to the previous question.
- Why are there three families of quarks and leptons? First, does this imply some relationship? Second, why three and only three families?
- Are all forces truly equal (unified) under certain circumstances? They don’t have to be equal just because we want them to be. The answer may have to be obtained indirectly because of the extreme energy at which we think they are unified.
- Are there other fundamental forces? There was a flurry of activity with claims of a fifth and even a sixth force a few years ago. Interest has subsided, since those forces have not been detected consistently. Moreover, the proposed forces have strengths similar to gravity, making them extraordinarily difficult to detect in the presence of stronger forces. But the question remains: if there are no other forces, we need to ask why there are only four and why these four.
- Is the proton stable? We have discussed this in some detail, but the question is related to fundamental aspects of the unification of forces. We may never know from experiment that the proton is stable, only that it is very long lived.
- Are there magnetic monopoles? Many particle theories call for very massive individual north- and south-pole particles -- magnetic monopoles. If they exist, why are they so different in mass and elusiveness from electric charges, and if they do not exist, why not?
- Do neutrinos have mass? Definitive evidence has emerged for neutrinos having mass. The implications are significant, as discussed in this chapter. There are effects on the closure of the universe and on the patterns in particle physics.
- What are the systematic characteristics of high-\(Z\) nuclei? All elements with \(Z = 118\) or less (with the exception of 115 and 117) have now been discovered. It has long been conjectured that there may be an island of relative stability near \(Z = 114\), and the study of the most recently discovered nuclei will contribute to our understanding of nuclear forces.
These lists of questions are not meant to be complete or consistently important -- you can no doubt add to them yourself. There are also important questions in topics not broached in this text, such as certain particle symmetries, that are of current interest to physicists. Hopefully, the point is clear that no matter how much we learn, there always seems to be more to know. Although we are fortunate to have the hard-won wisdom of those who preceded us, we can look forward to new enlightenment, undoubtedly sprinkled with surprise.
Summary
- On the largest scale, the questions which can be asked may be about dark matter, dark energy, black holes, quasars, and other aspects of the universe.
- On the intermediate scale, we can query about gravity, phase transitions, nonlinear phenomena, high-\(T_{c}\) superconductors, and magnetic effects on materials.
- On the smallest scale, questions may be about quarks and leptons, fundamental forces, stability of protons, and existence of monopoles.
|
libretexts
|
2025-03-17T19:53:51.112028
| 2016-07-24T09:19:50 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/34%3A_Frontiers_of_Physics/34.07%3A_Some_Questions_We_Know_to_Ask",
"book_url": "https://commons.libretexts.org/book/phys-1419",
"title": "34.7: Some Questions We Know to Ask",
"author": "OpenStax"
}
|
https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/34%3A_Frontiers_of_Physics/34.E%3A_Frontiers_of_Physics_(Exercises)
|
34.E: Frontiers of Physics (Exercises)
Conceptual Questions
34.1: Cosmology and Particle Physics
1. Explain why it only appears that we are at the center of expansion of the universe and why an observer in another galaxy would see the same relative motion of all but the closest galaxies away from her.
2. If there is no observable edge to the universe, can we determine where its center of expansion is? Explain.
3. If the universe is infinite, does it have a center? Discuss.
4. Another known cause of red shift in light is the source being in a high gravitational field. Discuss how this can be eliminated as the source of galactic red shifts, given that the shifts are proportional to distance and not to the size of the galaxy.
5. If some unknown cause of red shift—such as light becoming “tired” from traveling long distances through empty space—is discovered, what effect would there be on cosmology?
6. Olbers’s paradox poses an interesting question: If the universe is infinite, then any line of sight should eventually fall on a star’s surface. Why then is the sky dark at night? Discuss the commonly accepted evolution of the universe as a solution to this paradox.
7. If the cosmic microwave background radiation (CMBR) is the remnant of the Big Bang’s fireball, we expect to see hot and cold regions in it. What are two causes of these wrinkles in the CMBR? Are the observed temperature variations greater or less than originally expected?
8. The decay of one type of \(\displaystyle K\)-meson is cited as evidence that nature favors matter over antimatter. Since mesons are composed of a quark and an antiquark, is it surprising that they would preferentially decay to one type over another? Is this an asymmetry in nature? Is the predominance of matter over antimatter an asymmetry?
9. Distances to local galaxies are determined by measuring the brightness of stars, called Cepheid variables, that can be observed individually and that have absolute brightnesses at a standard distance that are well known. Explain how the measured brightness would vary with distance as compared with the absolute brightness.
10. Distances to very remote galaxies are estimated based on their apparent type, which indicates the number of stars in the galaxy, and their measured brightness. Explain how the measured brightness would vary with distance. Would there be any correction necessary to compensate for the red shift of the galaxy (all distant galaxies have significant red shifts)? Discuss possible causes of uncertainties in these measurements.
11. If the smallest meaningful time interval is greater than zero, will the lines in Figure ever meet?
34.2: General Relativity and Quantum Gravity
12. Quantum gravity, if developed, would be an improvement on both general relativity and quantum mechanics, but more mathematically difficult. Under what circumstances would it be necessary to use quantum gravity? Similarly, under what circumstances could general relativity be used? When could special relativity, quantum mechanics, or classical physics be used?
13. Does observed gravitational lensing correspond to a converging or diverging lens? Explain briefly.
14. Suppose you measure the red shifts of all the images produced by gravitational lensing, such as in Figure. You find that the central image has a red shift less than the outer images, and those all have the same red shift. Discuss how this not only shows that the images are of the same object, but also implies that the red shift is not affected by taking different paths through space. Does it imply that cosmological red shifts are not caused by traveling through space (light getting tired, perhaps)?
15. What are gravitational waves, and have they yet been observed either directly or indirectly?
16. Is the event horizon of a black hole the actual physical surface of the object?
17. Suppose black holes radiate their mass away and the lifetime of a black hole created by a supernova is about \(\displaystyle 10^{67}\) years. How does this lifetime compare with the accepted age of the universe? Is it surprising that we do not observe the predicted characteristic radiation?
34.4: Dark Matter and Closure
18. Discuss the possibility that star velocities at the edges of galaxies being greater than expected is due to unknown properties of gravity rather than to the existence of dark matter. Would this mean, for example, that gravity is greater or smaller than expected at large distances? Are there other tests that could be made of gravity at large distances, such as observing the motions of neighboring galaxies?
19. How does relativistic time dilation prohibit neutrino oscillations if they are massless?
20. If neutrino oscillations do occur, will they violate conservation of the various lepton family numbers \(\displaystyle (L_e, L_μ,\) and \(\displaystyle L_τ)\)? Will neutrino oscillations violate conservation of the total number of leptons?
21. Lacking direct evidence of WIMPs as dark matter, why must we eliminate all other possible explanations based on the known forms of matter before we invoke their existence?
34.5: Complexity and Chaos
22. Must a complex system be adaptive to be of interest in the field of complexity? Give an example to support your answer.
23. State a necessary condition for a system to be chaotic.
34.6: High-temperature Superconductors
24. What is critical temperature \(\displaystyle T_c\)? Do all materials have a critical temperature? Explain why or why not.
25. Explain how good thermal contact with liquid nitrogen can keep objects at a temperature of 77 K (liquid nitrogen’s boiling point at atmospheric pressure).
26. Not only is liquid nitrogen a cheaper coolant than liquid helium, its boiling point is higher (77 K vs. 4.2 K). How does higher temperature help lower the cost of cooling a material? Explain in terms of the rate of heat transfer being related to the temperature difference between the sample and its surroundings.
34.7: Some Questions We Know to Ask
27. For experimental evidence, particularly of previously unobserved phenomena, to be taken seriously it must be reproducible or of sufficiently high quality that a single observation is meaningful. Supernova 1987A is not reproducible. How do we know observations of it were valid? The fifth force is not broadly accepted. Is this due to lack of reproducibility or poor-quality experiments (or both)? Discuss why forefront experiments are more subject to observational problems than those involving established phenomena.
28. Discuss whether you think there are limits to what humans can understand about the laws of physics. Support your arguments.
Problems & Exercises
34.1: Cosmology and Particle Physics
29. Find the approximate mass of the luminous matter in the Milky Way galaxy, given it has approximately \(\displaystyle 10^{11}\) stars of average mass 1.5 times that of our Sun.
Solution
\(\displaystyle 3×10^{41}kg\)
30. Find the approximate mass of the dark and luminous matter in the Milky Way galaxy. Assume the luminous matter is due to approximately \(\displaystyle 10^{11}\) stars of average mass 1.5 times that of our Sun, and take the dark matter to be 10 times as massive as the luminous matter.
31. (a) Estimate the mass of the luminous matter in the known universe, given there are \(\displaystyle 10^{11}\) galaxies, each containing \(\displaystyle 10^{11}\) stars of average mass 1.5 times that of our Sun.
(b) How many protons (the most abundant nuclide) are there in this mass?
(c) Estimate the total number of particles in the observable universe by multiplying the answer to (b) by two, since there is an electron for each proton, and then by \(\displaystyle 10^9\), since there are far more particles (such as photons and neutrinos) in space than in luminous matter.
Solution
(a) \(\displaystyle 3×10^{52}kg\)
(b) \(\displaystyle 2×10^{79}\)
(c) \(\displaystyle 4×10^{88}\)
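As a quick numerical cross-check of the three estimates above (added here, not part of the original solution), the arithmetic can be done in a few lines of Python using standard values for the solar and proton masses.

```python
# Order-of-magnitude cross-check of parts (a)-(c).
M_sun = 1.99e30      # kg, solar mass
m_p   = 1.67e-27     # kg, proton mass

luminous_mass = 1e11 * 1e11 * 1.5 * M_sun   # (a) 10^11 galaxies x 10^11 stars each
n_protons     = luminous_mass / m_p         # (b) protons in that mass
n_particles   = n_protons * 2 * 1e9         # (c) x2 for electrons, x10^9 for photons/neutrinos

print(f"(a) {luminous_mass:.1e} kg")        # about 3 x 10^52 kg
print(f"(b) {n_protons:.1e}")               # about 2 x 10^79
print(f"(c) {n_particles:.1e}")             # about 4 x 10^88
```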
32. If a galaxy is 500 Mly away from us, how fast do we expect it to be moving and in what direction?
33. On average, how far away are galaxies that are moving away from us at 2.0% of the speed of light?
Solution
0.30 Gly
34. Our solar system orbits the center of the Milky Way galaxy. Assuming a circular orbit 30,000 ly in radius and an orbital speed of 250 km/s, how many years does it take for one revolution? Note that this is approximate, assuming constant speed and circular orbit, but it is representative of the time for our system and local stars to make one revolution around the galaxy.
35. (a) What is the approximate speed relative to us of a galaxy near the edge of the known universe, some 10 Gly away?
(b) What fraction of the speed of light is this? Note that we have observed galaxies moving away from us at greater than \(\displaystyle 0.9c\).
Solution
(a) \(\displaystyle 2.0×10^5km/s\)
(b) 0.67c
36. (a) Calculate the approximate age of the universe from the average value of the Hubble constant, \(\displaystyle H_0=20km/s⋅Mly\). To do this, calculate the time it would take to travel 1 Mly at a constant expansion rate of 20 km/s.
(b) If deceleration is taken into account, would the actual age of the universe be greater or less than that found here? Explain.
37. Assuming a circular orbit for the Sun about the center of the Milky Way galaxy, calculate its orbital speed using the following information: The mass of the galaxy is equivalent to a single mass \(\displaystyle 1.5×10^{11}\) times that of the Sun (or \(\displaystyle 3×10^{41}kg\)), located 30,000 ly away.
Solution
\(\displaystyle 2.7×10^5m/s\)
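This result follows from setting the gravitational attraction equal to the centripetal force, giving \(v = \sqrt{GM/r}\). A short Python check (added for illustration, using standard values for \(G\) and the light year) reproduces the answer.

```python
# Cross-check of the Sun's orbital speed about the galactic center, v = sqrt(G M / r).
from math import sqrt

G  = 6.674e-11        # N m^2 / kg^2
M  = 3.0e41           # kg, effective mass interior to the orbit (given in the problem)
ly = 9.461e15         # m per light year
r  = 30000 * ly       # m, orbital radius

v = sqrt(G * M / r)
print(f"v = {v:.2e} m/s")   # about 2.7 x 10^5 m/s
```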
38. (a) What is the approximate force of gravity on a 70-kg person due to the Andromeda galaxy, assuming its total mass is \(\displaystyle 10^{13}\) times that of our Sun and acts like a single mass 2 Mly away?
(b) What is the ratio of this force to the person’s weight? Note that Andromeda is the closest large galaxy.
39. Andromeda galaxy is the closest large galaxy and is visible to the naked eye. Estimate its brightness relative to the Sun, assuming it has luminosity \(\displaystyle 10^{12}\) times that of the Sun and lies 2 Mly away.
Solution
\(\displaystyle 6×10^{−11}\) (an overestimate, since some of the light from Andromeda is blocked by gas and dust within that galaxy)
40. (a) A particle and its antiparticle are at rest relative to an observer and annihilate (completely destroying both masses), creating two γ rays of equal energy. What is the characteristic γ-ray energy you would look for if searching for evidence of proton-antiproton annihilation? (The fact that such radiation is rarely observed is evidence that there is very little antimatter in the universe.)
(b) How does this compare with the 0.511-MeV energy associated with electron-positron annihilation?
41. The average particle energy needed to observe unification of forces is estimated to be \(\displaystyle 10^{19}GeV.\)
(a) What is the rest mass in kilograms of a particle that has a rest mass of \(\displaystyle 10^{19}GeV/c^2\)?
(b) How many times the mass of a hydrogen atom is this?
Solution
(a) \(\displaystyle 2×10^{−8}kg\)
(b) \(\displaystyle 1×10^{19}\)
42. The peak intensity of the CMBR occurs at a wavelength of 1.1 mm.
(a) What is the energy in eV of a 1.1-mm photon?
(b) There are approximately \(\displaystyle 10^9\) photons for each massive particle in deep space. Calculate the energy of \(\displaystyle 10^9\) such photons.
(c) If the average massive particle in space has a mass half that of a proton, what energy would be created by converting its mass to energy?
(d) Does this imply that space is “matter dominated”? Explain briefly.
43. (a) What Hubble constant corresponds to an approximate age of the universe of \(\displaystyle 10^{10}y\)? To get an approximate value, assume the expansion rate is constant and calculate the speed at which two galaxies must move apart to be separated by 1 Mly (present average galactic separation) in a time of \(\displaystyle 10^{10}y\).
(b) Similarly, what Hubble constant corresponds to a universe approximately \(\displaystyle 2×10^{10}\) y old?
Solution
(a) 30km/s⋅Mly
(b) 15km/s⋅Mly
44. Show that the velocity of a star orbiting its galaxy in a circular orbit is inversely proportional to the square root of its orbital radius, assuming the mass of the stars inside its orbit acts like a single mass at the center of the galaxy. You may use an equation from a previous chapter to support your conclusion, but you must justify its use and define all terms used.
45. The core of a star collapses during a supernova, forming a neutron star. Angular momentum of the core is conserved, and so the neutron star spins rapidly. If the initial core radius is \(\displaystyle 5.0×10^5km\) and it collapses to 10.0 km, find the neutron star’s angular velocity in revolutions per second, given the core’s angular velocity was originally 1 revolution per 30.0 days.
Solution
960 rev/s
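A brief Python check (added for illustration) confirms the spin-up. Treating the core as a uniform sphere, \(I \propto r^{2}\) for fixed mass, so conservation of angular momentum gives \(\omega_{2} = \omega_{1}\left(r_{1}/r_{2}\right)^{2}\).

```python
# Cross-check: angular momentum conservation for a collapsing uniform sphere.
r1 = 5.0e5                        # km, initial core radius
r2 = 10.0                         # km, final neutron-star radius
w1 = 1.0 / (30.0 * 24 * 3600)     # rev/s (1 revolution per 30.0 days)

w2 = w1 * (r1 / r2) ** 2
print(f"{w2:.2e} rev/s")          # about 960 rev/s
```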
46. Using data from the previous problem, find the increase in rotational kinetic energy, given the core’s mass is 1.3 times that of our Sun. Where does this increase in kinetic energy come from?
47. Distances to the nearest stars (up to 500 ly away) can be measured by a technique called parallax, as shown in Figure. What are the angles \(\displaystyle θ_1\) and \(\displaystyle θ_2\) relative to the plane of the Earth’s orbit for a star 4.0 ly directly above the Sun?
Solution
\(\displaystyle 89.999773º\) (many digits are needed to show the difference from \(\displaystyle 90º\))
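A short Python check of this angle (added here; the light-year and astronomical-unit values are standard) uses \(\tan\theta = d/(1\,\text{AU})\) with \(d = 4.0\) ly measured perpendicular to the plane of Earth's orbit.

```python
# Cross-check of the parallax angle for a star 4.0 ly directly above the Sun.
from math import atan, degrees

ly = 9.461e15     # m per light year
AU = 1.496e11     # m, Earth-Sun distance

theta = degrees(atan(4.0 * ly / AU))
print(f"theta = {theta:.5f} degrees")   # 89.99977 degrees, essentially 90 degrees
```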
48. (a) Use the Heisenberg uncertainty principle to calculate the uncertainty in energy for a corresponding time interval of \(\displaystyle 10^{−43}s\).
(b) Compare this energy with the \(\displaystyle 10^{19}GeV\) unification-of-forces energy and discuss why they are similar.
49. Construct Your Own Problem
Consider a star moving in a circular orbit at the edge of a galaxy. Construct a problem in which you calculate the mass of that galaxy in kg and in multiples of the solar mass based on the velocity of the star and its distance from the center of the galaxy.
Distances to nearby stars are measured using triangulation, also called the parallax method. The angle of line of sight to the star is measured at intervals six months apart, and the distance is calculated by using the known diameter of the Earth’s orbit. This can be done for stars up to about 500 ly away.
34.2: General Relativity and Quantum Gravity
50. What is the Schwarzschild radius of a black hole that has a mass eight times that of our Sun? Note that stars must be more massive than the Sun to form black holes as a result of a supernova.
Solution
23.6 km
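The Schwarzschild radius is \(R_{s} = 2GM/c^{2}\), which makes this a one-line check in Python (added for illustration; the constants are standard values).

```python
# Cross-check of the Schwarzschild radius for an 8-solar-mass black hole.
G     = 6.674e-11     # N m^2 / kg^2
c     = 2.998e8       # m/s
M_sun = 1.99e30       # kg

R_s = 2 * G * (8 * M_sun) / c**2
print(f"R_s = {R_s / 1000:.1f} km")   # 23.6 km
```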
51. Black holes with masses smaller than those formed in supernovas may have been created in the Big Bang. Calculate the radius of one that has a mass equal to the Earth’s.
52. Supermassive black holes are thought to exist at the center of many galaxies.
(a) What is the radius of such an object if it has a mass of \(\displaystyle 10^9\) Suns?
(b) What is this radius in light years?
Solution
(a) \(\displaystyle 2.95×10^{12}m\)
(b) \(\displaystyle 3.12×10^{−4}ly\)
53. Construct Your Own Problem
Consider a supermassive black hole near the center of a galaxy. Calculate the radius of such an object based on its mass. You must consider how much mass is reasonable for these large objects, and which is now nearly directly observed. (Information on black holes posted on the Web by NASA and other agencies is reliable, for example.)
34.3: Superstrings
54. The characteristic length of entities in Superstring theory is approximately \(\displaystyle 10^{−35}m\).
(a) Find the energy in GeV of a photon of this wavelength.
(b) Compare this with the average particle energy of \(\displaystyle 10^{19}GeV\) needed for unification of forces.
Solution
(a) \(\displaystyle 1×10^{20}GeV\)
(b) 10 times greater
34.4: Dark Matter and Closure
55. If the dark matter in the Milky Way were composed entirely of MACHOs (evidence shows it is not), approximately how many would there have to be? Assume the average mass of a MACHO is 1/1000 that of the Sun, and that dark matter has a mass 10 times that of the luminous Milky Way galaxy with its \(\displaystyle 10^{11}\) stars of average mass 1.5 times the Sun’s mass.
Solution
\(\displaystyle 1.5×10^{15}\)
56. The critical mass density needed to just halt the expansion of the universe is approximately \(\displaystyle 10^{−26}kg/m^3\).
(a) Convert this to \(\displaystyle eV/c^2⋅m^3\).
(b) Find the number of neutrinos per cubic meter needed to close the universe if their average mass is \(\displaystyle 7eV/c^2\) and they have negligible kinetic energies.
57. Assume the average density of the universe is 0.1 of the critical density needed for closure. What is the average number of protons per cubic meter, assuming the universe is composed mostly of hydrogen?
Solution
\(\displaystyle 0.6m^{−3}\)
68. To get an idea of how empty deep space is on the average, perform the following calculations:
(a) Find the volume our Sun would occupy if it had an average density equal to the critical density of \(\displaystyle 10^{−26}kg/m^3\) thought necessary to halt the expansion of the universe.
(b) Find the radius of a sphere of this volume in light years.
(c) What would this radius be if the density were that of luminous matter, which is approximately 5% that of the critical density?
(d) Compare the radius found in part (c) with the 4-ly average separation of stars in the arms of the Milky Way.
34.6: High-temperature Superconductors
69. A section of superconducting wire carries a current of 100 A and requires 1.00 L of liquid nitrogen per hour to keep it below its critical temperature. For it to be economically advantageous to use a superconducting wire, the cost of cooling the wire must be less than the cost of energy lost to heat in the wire. Assume that the cost of liquid nitrogen is $0.30 per liter, and that electric energy costs $0.10 per kW·h. What is the resistance of a normal wire that costs as much in wasted electric energy as the cost of liquid nitrogen for the superconductor?
Solution
0.30 Ω
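The break-even condition is that the hourly cost of liquid nitrogen equals the hourly cost of the electric energy dissipated as heat, \(P = I^{2}R\). A short Python sketch (added for illustration, using only the numbers given in the problem) reproduces the answer.

```python
# Cross-check of the break-even resistance for the superconducting wire.
I            = 100.0    # A, current in the wire
ln2_per_hour = 1.00     # L of liquid nitrogen used per hour
ln2_cost     = 0.30     # $ per litre of liquid nitrogen
elec_cost    = 0.10     # $ per kW.h of electric energy

P_kw = (ln2_per_hour * ln2_cost) / elec_cost   # break-even dissipated power in kW
R    = (P_kw * 1000.0) / I**2                  # R = P / I^2, in ohms
print(f"R = {R:.2f} ohm")                      # 0.30 ohm
```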
Contributors and Attributions
-
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0) .
|
libretexts
|
2025-03-17T19:53:51.284106
| 2017-12-28T17:49:39 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/34%3A_Frontiers_of_Physics/34.E%3A_Frontiers_of_Physics_(Exercises)",
"book_url": "https://commons.libretexts.org/book/phys-1419",
"title": "34.E: Frontiers of Physics (Exercises)",
"author": null
}
|
https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math
|
1: Trade Math
BC Cook Articulation Committee (BCcampus)
Learning Objectives
- Demonstrate the use of the metric and imperial/U.S. measuring systems
- Convert and adjust measurements, recipes, and formulas
1.1: Units of Measurement
1.2: Temperature
1.3: Converting Within the Metric System
1.4: Imperial and U.S. Systems of Measurement
1.5: Converting and Adjusting Recipes and Formulas
|
libretexts
|
2025-03-17T19:53:53.538584
| 2021-09-21T06:41:23 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math",
"book_url": "https://commons.libretexts.org/book/workforce-17914",
"title": "1: Trade Math",
"author": "BC Cook Articulation Committee"
}
|
https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math/1.01%3A_Units_of_Measurement
|
1.1: Units of Measurement
Canadian cooks should feel comfortable working in three different measurement systems. Two of these systems (U.S. and imperial) are closely related, while the third (S.I., more commonly called metric) is different from the other two.
Although the metric system was introduced in Canada a number of years ago, the food industry and home cooks still rely heavily on equipment and cookbooks imported from the United States. In addition, because we used imperial measurements in Canada for the sale of liquids, some industry recipes will call for imperial measurements rather than U.S. liquid measurements.
The imperial and U.S. measuring systems evolved out of the system used in Europe prior to the 20th century. Although both the imperial and U.S. systems use the same terminology, there are slight differences in actual measurements that you must account for, particularly with volume.
The easiest way to work with the three systems is to have different sets of measuring devices: one for the metric system, one for the imperial system, and one for the U.S. system. Alternatively, you could have one set of devices that have measurements for all three systems indicated. U.S. measuring instruments can be used with slight adjustments for imperial measuring.
It is not good practice to use two systems of measurement when preparing a recipe. Working between two systems of measurement in a recipe may result in inaccuracies that could affect the final product’s taste, yield, consistency, and appearance. To ensure a consistent and successful result, a good practice is to convert the recipe into one standard system of measurement.
The S.I. (Metric) System: Types, Units, and Symbols
All measuring systems have basic units for length, mass (weight), capacity (volume), and temperature. The basic units for the metric system are shown in Table 1.
| Type of Measurement | Unit | Symbol |
|---|---|---|
| length (distance) | metre | m |
| mass (weight) | gram | g |
| capacity (volume) | litre | L |
| temperature | degrees Celsius | °C |
Note that the abbreviation or symbol of the unit is not followed by a period and that all the abbreviations are lowercase letters except for litre, which is usually written with a capital L.
In the metric system, the basic units are turned into larger or smaller measurements by using a prefix that carries a specific meaning as shown in Table 2. The most commonly used prefixes are kilo (k), centi (c), and milli (m).
| Prefix | Symbol | Meaning |
|---|---|---|
| kilo | k | 1000 |
| hecto | h | 100 |
| deca | da | 10 |
| deci | d | 1/10 or 0.1 |
| centi | c | 1/100 or 0.01 |
| milli | m | 1/1000 or 0.001 |
When you read a measurement in the metric system, it is fairly easy to translate the measurement into a number of the basic units. For example, 5 kg (five kilograms) is the same as 5 × 1000 (the meaning of kilo) grams or 5000 grams. Or 2 mL (two millilitres) is the same as 2 × 0.001 (the meaning of milli) litres or 0.002 litres. This process is discussed further in the section on converting below.
The most commonly used measurements in commercial kitchens are mass (weight), capacity (volume), and temperature.
Units of Length (Distance)
The basic unit of length or distance in the metric system is the metre. The most frequently used units of length used in the Canadian food industry are the centimetre and millimetre. The units of length in the metric system are shown in Table 3.
| Unit | Abbreviation | Length (Distance) |
|---|---|---|
| kilometre | km | 1000 metres |
| hectometre | hm | 100 metres |
| decametre | dam | 10 metres |
| metre | m | 1 metre |
| decimetre | dm | 0.1 metres |
| centimetre | cm | 0.01 metres |
| millimetre | mm | 0.001 metres |
Units of Mass (Weight)
The basic unit of mass or weight in the metric system is the gram. The most frequently used units of mass or weight used in the Canadian food industry are the gram and kilogram. The units of mass in the metric system are shown in Table 4.
| Unit | Abbreviation | Mass (Weight) |
|---|---|---|
| tonne | t | 1000 kilograms |
| kilogram | kg | 1000 grams |
| hectogram | hg | 100 grams |
| decagram | dag | 10 grams |
| gram | g | 1 gram |
| decigram | dg | 0.1 g |
| centigram | cg | 0.01 g |
| milligram | mg | 0.001 g |
Units of Capacity (Volume)
The basic unit of volume or capacity is the litre. The most commonly used units in cooking are the litre and the millilitre. The units of volume in the metric system are shown in Table 5.
| Unit | Abbreviation | Volume |
|---|---|---|
| kilolitre | kL | 1000 L |
| hectolitre | hL | 100 L |
| decalitre | daL | 10 L |
| litre | L | 1 L |
| decilitre | dL | 0.1 L |
| centilitre | cL | 0.01 L |
| millilitre | mL | 0.001 L |
Occasionally, you will encounter a unit of volume called cubic measurement (sometimes used to express the volume of solids or the capacity of containers), and the units will be expressed as “cc” or cm³ (cubic centimetre). Cubic centimetres are the same as millilitres. That is, 1 cc = 1 cm³ = 1 mL.
In the metric system, 1 mL (cc) of water weighs 1 gram. We will explore this later when discussing the difference between measuring by weight and by volume.
|
libretexts
|
2025-03-17T19:53:53.622778
| 2021-09-21T06:41:26 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math/1.01%3A_Units_of_Measurement",
"book_url": "https://commons.libretexts.org/book/workforce-17914",
"title": "1.1: Units of Measurement",
"author": "BC Cook Articulation Committee"
}
|
https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math/1.02%3A_Temperature
|
1.2: Temperature
The metric units for temperature are degrees Celsius (°C). There are no other units used. Temperature is one area where you may find it necessary to convert from Celsius to Fahrenheit and vice versa, as you probably do not have two ovens or stoves at your disposal. However, many modern stoves and ovens, as well as most thermometers, have both Celsius and Fahrenheit temperatures marked on their controls.
Note: There are many “apps” available for converting measurements. These can easily be downloaded onto a smartphone or tablet and used in the kitchen.
|
libretexts
|
2025-03-17T19:53:53.678180
| 2021-09-21T06:41:26 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math/1.02%3A_Temperature",
"book_url": "https://commons.libretexts.org/book/workforce-17914",
"title": "1.2: Temperature",
"author": "BC Cook Articulation Committee"
}
|
https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math/1.03%3A_Converting_Within_the_Metric_System
|
1.3: Converting Within the Metric System
To convert from one unit to another within the metric system usually means moving a decimal point. If you can remember what the prefixes mean, you can convert within the metric system relatively easily by simply multiplying or dividing the number by the value of the prefix.
The most common metric measurements used in the food service industry are kilograms, grams, litres, and millilitres.
Examples of How to Convert Between Measurements
Example 1
Convert 26.75 kg to g.
First, write the question with the meaning of the prefix inserted. In this example, k is the prefix, and k means 1000, so:
26.75 kg = 26.75 × (1000) g = 26 750 g
Notice that there is no comma used in the answer 26 750 g. In the metric system, large numbers are separated every three digits by a space, not a comma.
Example 2
Convert 0.2 L to mL.
Again, write the question with the meaning of the prefix inserted. In this example, m is the prefix, and m means 0.001, so:
0.2 L = _____ (0.001) L
To find the blank (the value of the millilitres), divide the left-hand number by the right-hand number.
0.2 L ÷ 0.001 L = 200
This means 0.2 L = 200 mL.
Notice that there is a zero (0) before (to the left of) the decimal point. When writing decimal numbers that are smaller than 1 in the metric system, it is customary to place a zero to the left of the decimal point. Thus .6 in the metric system is written 0.6.
If you are working with two prefixes, you can convert in much the same way as above.
Example 3
Find the number of dL in 12.2 mL.
The prefixes are d, which means 0.1, and m , which means 0.001. Insert the values of the prefixes into the conversion.
_____ dL = 12.2 mL
_____ (0.1) L = 12.2 (0.001) L
_____ (0.1) L = 0.0122 L
To find the value of the blank, divide the right-hand number by the left-hand number.
0.0122 L ÷ 0.1 L = 0.122
This means that 12.2 mL = 0.122 dL.
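The method in these examples amounts to multiplying by the value of the starting prefix and dividing by the value of the target prefix. The small Python sketch below is an added illustration of that rule (the function name and dictionary are not from the text); it reproduces the three examples above.

```python
# Added illustration: convert within the metric system using the prefix values of Table 2.
PREFIX = {"k": 1000, "h": 100, "da": 10, "": 1, "d": 0.1, "c": 0.01, "m": 0.001}

def convert_metric(value, from_prefix, to_prefix):
    """Convert a quantity between two prefixes of the same base unit (g, L, or m)."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(f"{convert_metric(26.75, 'k', ''):g} g")   # Example 1: 26.75 kg = 26 750 g
print(f"{convert_metric(0.2, '', 'm'):g} mL")    # Example 2: 0.2 L = 200 mL
print(f"{convert_metric(12.2, 'm', 'd'):g} dL")  # Example 3: 12.2 mL = 0.122 dL
```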
|
libretexts
|
2025-03-17T19:53:53.738598
| 2021-09-21T06:41:27 |
{
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math/1.03%3A_Converting_Within_the_Metric_System",
"book_url": "https://commons.libretexts.org/book/workforce-17914",
"title": "1.3: Converting Within the Metric System",
"author": "BC Cook Articulation Committee"
}
|
https://workforce.libretexts.org/Bookshelves/Food_Production_Service_and_Culinary_Arts/Basic_Kitchen_and_Food_Service_Management_(BC_Campus)/01%3A_Trade_Math/1.04%3A_Imperial_and_U.S._Systems_of_Measurement
|
1.4: Imperial and U.S. Systems of Measurement
Canada used the U.S. and imperial systems of measurement until 1971, when the S.I. or metric system was declared the official measuring system for Canada. The metric system is now in use in most of the world, with the United States being the major exception. However, “declaring” and “truly adopting” are not always the same.
Because of Canada’s strong ties to the United States, a lot of our food products come from across the border, and many Canadian producers also sell in the U.S. market. This is one of the main reasons Canadians need to know how to work in both systems. Most Canadian packages include both Canadian and U.S. or imperial measurements on the label, and many suppliers still quote prices in cost per pound instead of cost per kilogram.
The most commonly used units of measurement in the U.S. and imperial systems are shown in Table 6.
| Type of Measurement | Unit | Abbreviation |
|---|---|---|
| Weight | Pound | lb. or # |
| Weight | Ounce | oz. |
| Volume | Gallon | gal. |
| Volume | Quart | qt. |
| Volume | Pint | pt. |
| Volume | Cup | c. |
| Volume | Fluid ounce | fl.oz. or oz. |
| Volume | Tablespoon | Tbsp. or tbsp. |
| Volume | Teaspoon | tsp. |
| Length | Mile | mi. |
| Length | Yard | yd. |
| Length | Foot | ft. or ′ |
| Length | Inch | in. or ″ |
Differences between the U.S. and Imperial Systems
The only difference between the imperial system and the U.S. system is in volume measurements. Not only are the numbers of ounces in pints, quarts, and gallons larger in the imperial system, but the size of one fluid ounce is also different, as shown in Table 7.
| Unit of Measurement | Imperial System | Metric Equivalent | U.S. System | Metric Equivalent |
|---|---|---|---|---|
| 1 ounce | 1 (fluid) oz. | 28.41 mL | 1 (fluid) oz. | 29.57 mL |
| 1 gill | 5 (fluid) oz. | 142.07 mL | Not commonly used | |
| 1 cup | Not commonly used | 8 (fluid) oz. | 236.59 mL | |
| 1 pint | 20 (fluid) oz. | 568.26 mL | 16 (fluid) oz. | 473.18 mL |
| 1 quart | 40 (fluid) oz. | 1.137 L | 32 (fluid) oz. | 946.36 mL |
| 1 gallon | 160 (fluid) oz. | 4.546 L | 128 (fluid) oz. | 3.785 L |
Where you will notice this most is with any liquid products manufactured in Canada; these products will show the metric conversion using imperial measurement, but any products originating in the United States will show the conversion using U.S. measurements. For example, if you compare 12 fl. oz. bottles or cans of soft drinks or beer, you will see that American brands contain 355 mL (12 fl. oz. U.S.) and Canadian brands contain 341 mL (12 fl. oz. imperial).
If you are using a recipe written in cups and ounces, always verify the source of your recipe to determine if it has been written using the U.S. or imperial system of measurement. The difference in volume measurements can be quite noticeable when producing large quantities.
If the recipe is from the United States, use U.S. measurements for the conversion; if the recipe originated in the United Kingdom, Australia, or any other country that was once part of the British Empire, use imperial measurements for the conversion.
Converting between Units in the Imperial and U.S. Systems
On occasion, you may need to convert between the various units of volume and between units of volume and units of weight in the U.S. system. To do this, you must know the equivalents for each of the units as shown in Table 8.
| Types of Measurement | Conversion |
|---|---|
| Weight | 1 pound = 16 ounces |
| Volume (U.S.) | 1 gallon = 4 quarts or 128 (fluid) ounces |
| Volume (U.S.) | 1 quart = 2 pints or 4 cups or 32 (fluid) ounces |
| Volume (U.S.) | 1 pint = 2 cups or 16 (fluid) ounces |
| Volume (U.S.) | 1 cup= 8 (fluid) ounces or 16 tablespoons |
| Volume (U.S.) | 1 (fluid) ounce = 2 tablespoons |
| Volume (U.S.) | 1 tablespoon = 3 teaspoons |
| Volume (imperial) | 1 gallon = 4 quarts or 160 (fluid) ounces |
| Volume (imperial) | 1 quart = 2 pints or 40 (fluid) ounces |
| Volume (imperial) | 1 pint = 20 (fluid) ounces |
| Volume (imperial) | 1 gill = 5 (fluid) ounces or 10 tablespoons |
| Volume (imperial) | 1 (fluid) ounce = 2 tablespoons |
| Volume (imperial) | 1 tablespoon = 3 teaspoons |
| Length | 1 mile = 1760 yards |
| Length | 1 yard = 3 feet |
| Length | 1 foot = 12 inches |
To convert from one unit to another, you either divide or multiply, depending on whether you are converting a smaller unit to a larger one, or a larger unit or to a smaller one.
Converting Smaller to Larger Units
To convert from a smaller to a larger unit, you need to divide. For example, to convert 6 tsp. to tablespoons, divide the 6 by the number of teaspoons in one tablespoon, which is 3 (see Table 8).
6 tsp = __ tbsp.
6 ÷ 3 = 2
6 tsp. = 2 tbsp.
To convert ounces to cups, you need to divide by 8 since there are 8 oz. in 1 cup. For example, if you need to convert 24 oz. to cups, you have to divide 24 by 8.
24 oz. = __ cups
24 ÷ 8 = 3
24 oz. = 3 cups
Converting Larger to Smaller Units
To change larger units to smaller, you have to multiply the larger unit by the number of smaller units in that unit. For example, to convert quarts to pints, you have to multiply the number of quarts by 2 because there are 2 pts. in 1 qt. Therefore, to convert 6 qts. to pints you need to multiply:
6 qts. = __ pts.
6 × 2 = 12
6 qts. = 12 pts.
To convert cups to tablespoons, you need to multiply by 16 since there are 16 tbsp. in 1 cup.
3/4 cup = __ tbsp.
16 × 3/4 = 12
3/4 cup = 12 tbsp.
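Because Table 8 expresses every U.S. volume unit in terms of smaller ones, a conversion can also be done in a single step by writing both units in teaspoons, as in the short Python sketch below (an added illustration; the function name and factor table are not from the text).

```python
# Added illustration: U.S. volume conversions via a common base unit (teaspoons).
TSP_PER_UNIT = {"tsp": 1, "tbsp": 3, "fl oz": 6, "cup": 48,
                "pint": 96, "quart": 192, "gallon": 768}

def convert_us_volume(value, from_unit, to_unit):
    """Convert between U.S. volume units by expressing both in teaspoons."""
    return value * TSP_PER_UNIT[from_unit] / TSP_PER_UNIT[to_unit]

print(convert_us_volume(6, "tsp", "tbsp"))     # 2.0  (6 tsp = 2 tbsp)
print(convert_us_volume(24, "fl oz", "cup"))   # 3.0  (24 oz = 3 cups)
print(convert_us_volume(6, "quart", "pint"))   # 12.0 (6 qts = 12 pts)
print(convert_us_volume(0.75, "cup", "tbsp"))  # 12.0 (3/4 cup = 12 tbsp)
```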
Converting between Metric and Imperial/U.S. Measurement Systems
You can convert between metric and imperial or U.S. measurements with straightforward multiplication or division if you know the conversion ratios. Tables 9.1 and 9.2 are a good reference, but there are also many online converters or apps available to make this task easier.
| When you know | Divide by | To get |
|---|---|---|
| millilitres | 4.93 | teaspoons |
| millilitres | 14.79 | tablespoons |
| millilitres | 28.41 | fluid ounces (imperial) |
| millilitres | 29.57 | fluid ounces (U.S.) |
| millilitres | 236.59 | cups |
| litres | 0.236 | cups |
| millilitres | 473.18 | pints (U.S.) |
| litres | 0.473 | pints (U.S.) |
| millilitres | 568.26 | pints (imperial) |
| litres | 0.568 | pints (imperial) |
| millilitres | 946.36 | quarts (U.S.) |
| litres | 0.946 | quarts (U.S.) |
| millilitres | 1137 | quarts (imperial) |
| litres | 1.137 | quarts (imperial) |
| litres | 3.785 | gallons (U.S.) |
| litres | 4.546 | gallons (imperial) |
| grams | 28.35 | ounces |
| grams | 454 | pounds |
| kilograms | 0.454 | pounds |
| centimetres | 2.54 | inches |
| millimetres | 25.4 | inches |
| Celsius (Centigrade) | multiply by 1.8 and add 32 | Fahrenheit |
| When you know | Multiply by | To get |
|---|---|---|
| teaspoons | 4.93 | millilitres |
| tablespoons | 14.79 | millilitres |
| fluid ounces (imperial) | 28.41 | millilitres |
| fluid ounces (U.S.) | 29.57 | millilitres |
| cups | 236.59 | millilitres |
| cups | 0.236 | litres |
| pints (U.S.) | 473.18 | millilitres |
| pints (U.S.) | 0.473 | litres |
| pints (imperial) | 568.26 | millilitres |
| pints (imperial) | 0.568 | litres |
| quarts (U.S.) | 946.36 | millilitres |
| quarts (U.S.) | 0.946 | litres |
| quarts (imperial) | 1137 | millilitres |
| quarts (imperial) | 1.137 | litres |
| gallons (U.S.) | 3.785 | litres |
| gallons (imperial) | 4.546 | litres |
| ounces | 28.35 | grams |
| pounds | 454 | grams |
| pounds | 0.454 | kilograms |
| inches | 2.54 | centimetres |
| inches | 25.4 | millimetres |
| Fahrenheit | subtract 32 and divide by 1.8 | Celsius (Centigrade) |
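The same pattern applies when crossing between systems: multiply by the ratio to go into metric, divide by it to come back. The short sketch below uses a handful of ratios from the tables above; the dictionary and the unit labels are assumptions made for illustration.

```python
# Millilitres (for volume) or grams (for weight) per one U.S./imperial unit,
# taken from the conversion tables above.
ML_OR_G_PER_UNIT = {
    "tsp": 4.93, "tbsp": 14.79, "fl_oz_us": 29.57, "cup": 236.59,
    "pint_us": 473.18, "quart_us": 946.36, "oz": 28.35, "lb": 454,
}

def to_metric(amount, unit):
    """U.S./imperial unit -> millilitres or grams: multiply by the ratio."""
    return amount * ML_OR_G_PER_UNIT[unit]

def from_metric(metric_amount, unit):
    """Millilitres or grams -> U.S./imperial unit: divide by the ratio."""
    return metric_amount / ML_OR_G_PER_UNIT[unit]

print(round(to_metric(2, "cup"), 1))     # 2 cups -> 473.2 mL
print(round(from_metric(454, "lb"), 2))  # 454 g  -> 1.0 lb
```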
Table 10 lists the most common U.S. measurements and metric units of measure, and their equivalents used in professional kitchens. Table 11 presents the conversion factors.
| Measurement type | Unit | Equivalent |
|---|---|---|
| Length | 1 inch | 25.4 millimetres |
| Length | 1 centimetre | 0.39 inches |
| Length | 1 metre | 39.4 inches |
| Volume | 1 fluid ounce (U.S.) | 29.57 millilitres |
| Volume | 1 cup | 237 millilitres |
| Volume | 1 quart | 946 millilitres |
| Volume | 1 millilitre | 0.034 fluid ounces |
| Volume | 1 litre | 33.8 fluid ounces |
| Weight | 1 ounce | 28.35 grams |
| Weight | 1 pound | 454 grams |
| Weight | 1 gram | 0.035 ounce |
| Weight | 1 kilogram | 2.205 pounds |
| Measurement type | To convert | Multiply by | Result |
|---|---|---|---|
| Length | Inches to millimeters | 25.4 | 1 inch = 25.4 mm |
| Length | Inches to centimetres | 2.54 | 1 inch = 2.54 cm |
| Length | Millimetres to inches | 0.03937 | 1 mm = 0.03937 in. |
| Length | Centimetres to inches | 0.3937 | 1 cm = 0.3937 in. |
| Length | Metres to inches | 39.3701 | 1m = 39.37 in. |
| Volume | Quarts to litres | 0.946 | 1 qt. = 0.946 L |
| Volume | Litres to fluid ounces (U.S.) | 33.8 | 1 L = 33.8 oz. |
| Volume | Quarts to millilitres | 946 | 1 qt. = 946 mL |
| Volume | Millilitres to ounces | 0.0338 | 1 mL = 0.0338 oz. |
| Volume | Litres to quarts | 1.05625 | 1 L = 1.05625 qt. |
| Weight | Ounces to grams | 28.35 | 1 oz. = 28.35 g |
| Weight | Grams to ounces | 0.03527 | 1 g = 0.03527 oz. |
| Weight | Kilograms to pounds | 2.2046 | 1 kg = 2.2046 lb. |
Soft Conversions
Many times, cooks will use what are called “soft conversions” rather than exact conversions, especially in small batch recipes where a slight variation can be tolerated, as it is often difficult to measure very fine quantities using liquid measures. This is a shortcut that can be used if you are faced with only a set of metric measuring tools and a U.S. recipe (or vice versa). Table 12 lists the common soft conversions.
| Metric | U.S. Measurements |
|---|---|
| 1 millilitre | 1/4 teaspoon |
| 2 millilitres | 1/2 teaspoon |
| 5 millilitres | 1 teaspoon |
| 15 millilitres | 1 tablespoon |
| 30 millilitres | 1 fluid ounce |
| 250 millilitres | 1 cup |
| 500 millilitres | 1 pint |
| 1 litre | 1 quart |
| 4 litres | 1 gallon |
Types of Measurements Used in the Kitchen
There are three types of measurements used to measure ingredients and to serve portions in the restaurant trade. Measurement can be by volume, by weight, or by count.
Recipes may have all three types of measurement. A recipe may call for 3 eggs (measurement by count), 250 mL of milk (measurement by volume), and 0.5 kg of cheese (measurement by weight).
There are formal and informal rules governing which type of measurement should be used. There are also specific procedures to ensure that the measuring is done accurately and consistently.
Number or Count
Number measurement is only used when an accurate measurement is not critical and the items to be used are understood to be close in size.
For example, “3 eggs” is a common measurement called for in recipes, not just because 3 is easy to count but also because eggs are graded to specific sizes. Most recipes call for large eggs unless stated otherwise.
Numbers are also used if the final product is countable. For example, 24 premade tart shells would be called for if the final product is to be 24 filled tart shells.
Volume
Volume measurement is usually used with liquids or fluids because such items are awkward to weigh. It is also used for dry ingredients in home cooking, but it is less often used for dry measurement in the industry.
Volume is often the measure used when portioning sizes of finished product. For example, portion scoops are used to dole out vegetables, potato salad, and sandwich fillings to keep serving size consistent. Ladles of an exact size are used to portion out soups and sauces.
Often scoops and ladles used for portioning are sized by number. On a scoop, such a number refers to the number of full scoops needed to fill a volume of one litre or one quart. Ladles are sized in millilitres or ounces.
Weight
Weight is the most accurate way to measure ingredients or portions. When proportions of ingredients are critical, their measurements are always given in weights. This is particularly true in baking where it is common to list all ingredients by weight, including eggs (which, as mentioned earlier, in almost all other applications are called for by count). Whether measuring solids or liquids, measuring by weight is more reliable and consistent.
Weighing is a bit more time consuming and requires the use of scales, but it pays off in accuracy. Digital portion scales are most commonly used in industry and come in various sizes to measure weights up to 5 kg (11 lbs.). This is adequate for most recipes, although larger operations may require scales with a larger capacity.
Weight is more accurate than volume because it takes into account factors such as density, moisture, and temperature that affect the volume of ingredients. For example, 250 mL (1 cup) of brown sugar (measured by volume) could change drastically depending on whether it is loosely or tightly packed in the vessel. On the other hand, 500 grams (17.63 oz.) of brown sugar will always be 500 grams (17.63 oz.).
Even flour, which one might think is very consistent, varies from location to location, so the amount of liquid needed to reach the same consistency with a given volume of flour will have to be adjusted.
Another common mistake is interchanging between volume and weight. The only ingredient that will have the same volume and weight consistently is water: 1 L of water = 1 kg of water.
No other ingredient can be measured interchangeably, because every ingredient has a different density. The ratio of an ingredient's density to that of water is called its specific gravity. Water has a specific gravity of 1.0. Liquids that are lighter than water (such as oils, which float on water) have a specific gravity of less than 1.0; those that are heavier than water and will sink, such as molasses, have a specific gravity greater than 1.0. Unless you are measuring water, remember not to substitute a volume measure for a weight measure, or vice versa.
Example 4
1 L water = 1 kg water
1 L water + 1 L canola oil = 2 L of water and oil mixture (volume)
1 L water + 1 L canola oil = 1.92 kg (weight)
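The weight of a given volume can be estimated by multiplying by specific gravity, which is what makes water the only one-to-one case. A minimal sketch: the canola oil figure follows from the example above, while the molasses figure is an approximate reference value added only for illustration.

```python
# Weight (kg) = volume (L) x specific gravity, since water weighs 1 kg per litre.
# Canola oil's 0.92 follows from the example above (1 L water + 1 L oil = 1.92 kg);
# the molasses figure is an approximate reference value used for illustration.
SPECIFIC_GRAVITY = {"water": 1.00, "canola oil": 0.92, "molasses": 1.40}

def weight_kg(volume_l, ingredient):
    return volume_l * SPECIFIC_GRAVITY[ingredient]

print(weight_kg(1, "water"))                                          # 1.0 kg
print(round(weight_kg(1, "water") + weight_kg(1, "canola oil"), 2))   # 1.92 kg
```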
In order to convert your existing recipes that only call for volume measurement to weight, you will need to measure each ingredient by volume, weigh it, and then record the amount in your recipe. There are also tools that can help with this conversion.
- Aqua-calc: Online Food Calculator is an online calculator that has an extensive database of foods and can convert from volume to weight in both the metric and U.S. measuring systems.
- Lee Valley Kitchen Calculator is a conversion calculator that can convert between weight and volume. It comes with an attached list of ingredients and their specific gravities. It is, however, a list of only the most common ingredients and will not likely cover everything that a commercial kitchen uses.
1.5: Converting and Adjusting Recipes and Formulas
Recipes often need to be adjusted to meet the needs of different situations. The most common reason to adjust recipes is to change the number of individual portions that the recipe produces. For example, a standard recipe might be written to prepare 25 portions. If a situation arises where 60 portions of the item are needed, the recipe must be properly adjusted.
Other reasons to adjust recipes include changing portion sizes (which may mean changing the batch size of the recipe) and better utilizing available preparation equipment (for example, you need to divide a recipe to make two half batches due to a lack of oven space).
Conversion Factor Method
The most common way to adjust recipes is to use the conversion factor method. This requires only two steps: finding a conversion factor and multiplying the ingredients in the original recipe by that factor.
Finding Conversion Factors
To find the appropriate conversion factor to adjust a recipe, follow these steps:
- Note the yield of the recipe that is to be adjusted. The number of portions is usually included at the top of the recipe (or formulation) or at the bottom of the recipe. This is the information that you HAVE.
- Decide what yield is required. This is the information you NEED.
- Obtain the conversion factor by dividing the required yield (from Step 2) by the old yield (from Step 1). That is, conversion factor = (required yield)/(recipe yield) or conversion factor = what you NEED ÷ what you HAVE.
Example 5
To find the conversion factor needed to adjust a recipe that produces 25 portions to produce 60 portions, these are steps you would take:
- Recipe yield = 25 portions
- Required yield = 60 portions
-
Conversion factor
- = (required yield) ÷ (recipe yield)
- = 60 portions ÷ 25 portions
- = 2.4
If the number of portions and the size of each portion change, you will have to find a conversion factor using a similar approach:
- Determine the total yield of the recipe by multiplying the number of portions and the size of each portion.
- Determine the required yield of the recipe by multiplying the new number of portions and the new size of each portion.
- Find the conversion factor by dividing the required yield (Step 2) by the recipe yield (Step 1). That is, conversion factor = (required yield)/(recipe yield).
Example 6
For example, to find the conversion factor needed to change a recipe that produces 20 portions with each portion weighing 150 g into a recipe that produces 60 portions with each portion containing 120 g, these are the steps you would take:
- Old yield of recipe = 20 portions × 150 g per portion = 3000 g
- Required yield of recipe = 60 portions × 120 g per portion = 7200 g
-
Conversion factor
- = required yield ÷ old yield
- = 7200 ÷ 3000
- = 2.4
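Both examples reduce to a single division. A minimal sketch (the function name is an assumption made for illustration):

```python
def conversion_factor(required_yield, recipe_yield):
    """What you NEED divided by what you HAVE."""
    return required_yield / recipe_yield

print(conversion_factor(60, 25))                 # Example 5: 2.4

old_total = 20 * 150                             # Example 6: 3000 g
new_total = 60 * 120                             # 7200 g
print(conversion_factor(new_total, old_total))   # 2.4
```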
Key Takeaway
To ensure you are finding the conversion factor properly, remember that if you are increasing your amounts, the conversion factor will be greater than 1. If you are reducing your amounts, the factor will be less than 1.
Adjusting Recipes Using Conversion Factors
Now that you have the conversion factor, you can use it to adjust all the ingredients in the recipe. The procedure is to multiply the amount of each ingredient in the original recipe by the conversion factor. Before you begin, there is an important first step:
Before converting a recipe, express the original ingredients by weight whenever possible.
Converting to weight is particularly important for dry ingredients. Most recipes in commercial kitchens express the ingredients by weight, while most recipes intended for home cooks express the ingredients by volume. If the amounts of some ingredients are too small to weigh (such as spices and seasonings), they may be left as volume measures. Liquid ingredients also are sometimes left as volume measures because it is easier to measure a litre of liquid than it is to weigh it. However, a major exception is measuring liquids with a high sugar content, such as honey and syrup; these should always be measured by weight, not volume.
Converting from volume to weight can be a bit tricky and may require the use of tables that provide the approximate weight of different volume measures of commonly used recipe ingredients. Once you have all ingredients in weight, you can then multiply by the conversion factor to adjust the recipe.
When using U.S. or imperial recipes, often you must change the quantities of the original recipe into smaller units. For example, pounds may need to be expressed as ounces, and cups, pints, quarts, and gallons must be converted into fluid ounces.
Converting a U.S. Measuring System Recipe
The following example will show the basic procedure for adjusting a recipe using U.S. measurements.
Example 7
Adjust a standard formulation (Table 13) designed to produce 75 biscuits to have a new yield of 300 biscuits.
| Ingredient | Amount |
|---|---|
| Flour | 3¼ lbs. |
| Baking Powder | 4 oz. |
| Salt | 1 oz. |
| Shortening | 1 lb. |
| Milk | 6 cups |
Solution
-
Find the conversion factor.
- conversion factor = new yield/old yield
- = 300 biscuits ÷ 75 biscuits
- = 4
- Multiply the ingredients by the conversion factor. This process is shown in Table 14.
| Ingredient | Original Amount (U.S.) | Conversion Factor | New Ingredient Amount |
|---|---|---|---|
| Flour | 3¼ lbs. | 4 | 13 lbs. |
| Baking powder | 4 oz. | 4 | 16 oz. (= 1 lb.) |
| Salt | 1 oz. | 4 | 4 oz. |
| Shortening | 1 lb. | 4 | 4 lbs. |
| Milk | 6 cups | 4 | 24 cups (= 6 qt. or 1½ gal.) |
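The scaling step itself is just one multiplication per ingredient. A minimal sketch using the Table 13 amounts, with the weights assumed to have already been expressed in ounces, as recommended above:

```python
def scale_recipe(ingredients, factor):
    """Multiply every ingredient amount by the conversion factor."""
    return {name: amount * factor for name, amount in ingredients.items()}

# Biscuit formulation from Table 13, with the weights expressed in ounces
# (3 1/4 lb. flour = 52 oz.) and the milk left as a volume measure.
original = {"flour_oz": 52, "baking_powder_oz": 4, "salt_oz": 1,
            "shortening_oz": 16, "milk_cups": 6}
print(scale_recipe(original, 4))
# flour 208 oz (= 13 lb.), baking powder 16 oz, salt 4 oz,
# shortening 64 oz (= 4 lb.), milk 24 cups
```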
Converting an Imperial Measuring System Recipe
The process for adjusting an imperial measure recipe is identical to the method outlined above. However, care must be taken with liquids as the number of ounces in an imperial pint, quart, and gallon is different from the number of ounces in a U.S. pint, quart, and gallon. (If you find this confusing, refer back to Table 7 and the discussion on imperial and U.S. measurements.)
Converting a Metric Recipe
The process of adjusting metric recipes is the same as outlined above. The advantage of the metric system becomes evident when adjusting recipes, which is easier with the metric system than it is with the U.S. or imperial system. The relationship between a gram and a kilogram (1000 g = 1 kg) is easier to remember than the relationship between an ounce and a pound or a teaspoon and a cup.
Example 8
Adjust a standard formulation (Table 15) designed to produce 75 biscuits to have a new yield of 150 biscuits.
| Ingredient | Amount |
|---|---|
| Flour | 1.75 kg |
| Baking powder | 50 g |
| Salt | 25 g |
| Shortening | 450 g |
| Milk | 1.25 L |
Solution
-
Find the conversion factor.
- conversion factor = new yield/old yield
- = 150 biscuits ÷ 75 biscuits
- = 2
- Multiply the ingredients by the conversion factor. This process is shown in Table 16.
| Ingredient | Amount | Conversion Factor | New Amount |
|---|---|---|---|
| Flour | 1.75 kg | 2 | 3.5 kg |
| Baking powder | 50 g | 2 | 100 g |
| Salt | 25 g | 2 | 50 g |
| Shortening | 450 g | 2 | 900 g |
| Milk | 1.25 L | 2 | 2.5 L |
Cautions when Converting Recipes
Although recipe conversions are done all the time, several problems can occur. Some of these include the following:
- Substantially increasing the yield of small home cook recipes can be problematic as all the ingredients are usually given in volume measure, which can be inaccurate, and increasing the amounts dramatically magnifies this problem.
- Spices and seasonings must be increased with caution as doubling or tripling the amount to satisfy a conversion factor can have negative consequences. If possible, it is best to under-season and then adjust just before serving.
- Cooking and mixing times can be affected by recipe adjustment if the equipment used to cook or mix is different from the equipment used in the original recipe.
The fine adjustments that have to be made when converting a recipe can only be learned from experience, as there are no hard and fast rules. Generally, if you have recipes that you use often, convert them, test them, and then keep copies of the recipes adjusted for different yields, as shown in Table 17.
Recipes for Different Yields of Cheese Puffs
| Ingredient | Amount |
|---|---|
| Butter | 90 g |
| Milk | 135 mL |
| Water | 135 mL |
| Salt | 5 mL |
| Sifted flour | 150 g |
| Large eggs | 3 |
| Grated cheese | 75 g |
| Cracked pepper | To taste |
| Ingredient | Amount |
|---|---|
| Butter | 180 g |
| Milk | 270 mL |
| Water | 270 mL |
| Salt | 10 mL |
| Sifted flour | 300 g |
| Large eggs | 6 |
| Grated cheese | 150 g |
| Cracked pepper | To taste |
| Ingredient | Amount |
|---|---|
| Butter | 270 g |
| Milk | 405 mL |
| Water | 405 mL |
| Salt | 15 mL |
| Sifted flour | 450 g |
| Large eggs | 9 |
| Grated cheese | 225 g |
| Cracked pepper | To taste |
| Ingredient | Amount |
|---|---|
| Butter | 360 g |
| Milk | 540 mL |
| Water | 540 mL |
| Salt | 20 mL |
| Sifted flour | 600 g |
| Large eggs | 12 |
| Grated cheese | 300 g |
| Cracked pepper | To taste |
Baker’s Percentage
Many professional bread and pastry formulas are given in what is called baker’s percentage . Baker’s percentage gives the weights of each ingredient relative to the amount of flour (Table 18). This makes it very easy to calculate an exact amount of dough for any quantity.
| Ingredient | % | Total | Unit |
|---|---|---|---|
| Flour | 100.0% | 15 | kg |
| Water | 62.0% | 9.3 | kg |
| Salt | 2.0% | 0.3 | kg |
| Sugar | 3.0% | 0.45 | kg |
| Shortening | 1.5% | 0.225 | kg |
| Yeast | 2.5% | 0.375 | kg |
| Total weight: | 171.0% | 25.65 | kg |
To convert a formula using baker’s percentage, there are a few options:
If you know the percentages of the ingredients and amount of flour, you can calculate the other ingredients by multiplying the percentage by the amount of flour to determine the quantities. Table 19 shows that process for 20 kg flour.
| Ingredient | % | Total | Unit |
|---|---|---|---|
| Flour | 100.0% | 20 | kg |
| Water | 62.0% | 12.4 | kg |
| Salt | 2.0% | 0.4 | kg |
| Sugar | 3.0% | 0.6 | kg |
| Shortening | 1.5% | 0.3 | kg |
| Yeast | 2.5% | 0.5 | kg |
| Total weight: | 171.0% | 34.20 | kg |
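The calculation behind Table 19 is one multiplication per ingredient: its percentage times the flour weight. A minimal sketch, with the formula written as a dictionary (an illustrative assumption):

```python
# Baker's percentage: each ingredient weight is its percentage of the flour weight.
FORMULA = {"flour": 100.0, "water": 62.0, "salt": 2.0,
           "sugar": 3.0, "shortening": 1.5, "yeast": 2.5}

def weights_from_flour(flour_kg, formula=FORMULA):
    return {name: flour_kg * pct / 100 for name, pct in formula.items()}

print(weights_from_flour(20))
# flour 20.0, water 12.4, salt 0.4, sugar 0.6, shortening 0.3, yeast 0.5 (kg)
```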
If you know the ingredient amounts, you can find the percentage by dividing the weight of each ingredient by the weight of the flour. Remember, flour is always 100%. For example, the percentage of water is 6.2 ÷ 10 = 0.62, which multiplied by 100 gives 62%. Table 20 shows that process for 10 kg of flour.
| Ingredient | % | Total | Unit |
|---|---|---|---|
| Flour | 100.0% | 10 | kg |
| Water | 62.0% | 6.2 | kg |
| Salt | 2.0% | 0.2 | kg |
| Sugar | 3.0% | 0.3 | kg |
| Shortening | 1.5% | 0.15 | kg |
| Yeast | 2.5% | 0.25 | kg |
Example 9
Use baker’s percentage to find ingredient weights when given the total dough weight.
For instance, you want to make 50 loaves at 500 g each. The weight is 50 × 0.5 kg = 25 kg of dough.
You know the total dough weight is 171% of the weight of the flour.
To find the amount of flour, 100% (flour) is to 171% (total %) as n (unknown) is to 25 (Table 21). That is,
- 100 ÷ 171 = n ÷ 25
- 25 × 100 ÷ 171 = n
- 14.62 = n
| Ingredient | % | Total | Unit |
|---|---|---|---|
| Flour | 100.0% | 14.62 | kg |
| Water | 62.0% | 9.064 | kg |
| Salt | 2.0% | 0.292 | kg |
| Sugar | 3.0% | 0.439 | kg |
| Shortening | 1.5% | 0.219 | kg |
| Yeast | 2.5% | 0.366 | kg |
| Total weight: | 171.0% | 25.00 | kg |
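Example 9's proportion can also be written directly: the flour weight equals the target dough weight times 100, divided by the total percentage. A minimal sketch under the same 171% formula:

```python
TOTAL_PCT = 171.0   # 100 + 62 + 2 + 3 + 1.5 + 2.5, from the formula above

def flour_for_dough(dough_kg, total_pct=TOTAL_PCT):
    """Flour weight = dough weight x 100 / total percentage."""
    return dough_kg * 100 / total_pct

flour = flour_for_dough(50 * 0.5)   # 50 loaves x 0.5 kg each = 25 kg of dough
print(round(flour, 2))              # 14.62 kg; every other ingredient is a % of this
```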
As you can see, both the conversion factor method and the baker’s percentage method give you ways to convert recipes. If you come across a recipe written in baker’s percentage, use baker’s percentage to convert the recipe. If you come across a recipe that is written in standard format, use the conversion factor method.
2: Inventory Control

Learning Objectives

- Describe basic inventory procedures
- Take a basic inventory
- Extend a basic inventory
- Apply ordering and purchasing procedures

2.1: Basic Inventory Procedures
2.2: Purchasing
2.1: Basic Inventory Procedures
A key component in effective kitchen management is inventory control. By knowing what supplies are on hand at a given time, the manager will be able to plan food orders, calculate food costs since the previous inventory, and make menu item changes if needed. By keeping an eye on inventory, it is possible to note potential problems with pilferage and waste.
Managing inventory is like checking a bank account. Just as you are interested in how much money you have in the bank and whether that money is paying you enough in interest, so the manager should be interested in the value of the supplies in the storeroom and in the kitchen.
An inventory is everything that is found within your establishment. Produce, dry stores, pots and pans, uniforms, liquor, linens, or anything that costs money to the business should be counted as part of inventory. Kitchen items should be counted separately from the front of house and bar inventory and so forth.
Regardless of the size of your operation, the principles of inventory control are the same. In larger operations there will be more people and sometimes even whole teams involved with the various steps, and in a small operation all responsibility for managing the inventory may fall on one or two key people.
Effective inventory control can be broken down into a few important steps:
- Set up systems to track and record inventory
- Develop specifications and procedures for ordering and purchasing
- Develop standards and procedures to efficiently receive deliveries
- Determine the frequency and processes for reconciling inventory
- Analyze inventory data and determine any areas for improvement
Setting Up Systems to Track and Record Inventory
One of the reasons you take inventory is to determine food costs and to work out cost percentages. There are several procedures that simplify finding the value of goods in storage. These techniques are based on keeping good records of how much supplies cost and when supplies were purchased.
The temptation in small operations is to treat inventory control casually. Perhaps there are only one or two people doing the purchasing and they are usually aware of the supplies that are on hand. This doesn’t eliminate the need to track purchases against sales to see if you are managing your costs as well as you can.
Almost all inventory control procedures are time consuming. Moreover, such records must be kept up-to-date and done accurately. Trying to save a few hours by cutting back on the time needed to keep inventory records may be money poorly saved.
The simplest method for tracking inventory is using a spreadsheet. A simple spreadsheet might list all of the products that are regularly purchased, with the current prices and the numbers on hand at the last inventory count. The prices can be updated regularly as invoices are processed for payment, and a schedule can be set to count the product on hand.
In large operations, the systems need to be more sophisticated as there are more people involved. Purchases might be made by a separate department, inventory records might be kept by a storeroom clerk, and the tracking and counting of inventory might be tied to a system using scanners and barcodes, which in turn may be linked with your sales system so that there is always a record of what should be in stock.
No matter the depth of detail used, having a system to track inventory gives managers a good idea of supplies on hand and a tool to use to manage costs.
Incoming Inventory
The primary reason for establishing a consistent method for accepting ordered goods is to ensure that the establishment receives exactly what has been ordered. Errors frequently occur, and unless the quantity and quality of the items delivered are carefully checked against what was ordered, substantial losses can take place. When receiving procedures are carefully performed, mistakes that could cost the restaurant time and money are avoided. In addition, an effective receiving method encourages honesty on the part of suppliers and delivery people.
Invoices
The most important document in determining if the goods received are the goods ordered is the invoice . An invoice is an itemized list of the goods or products delivered to a food preparation premise. An invoice shows the quantity, quality, price per kilogram or unit, and, in some cases, the complete extension of the cost chargeable. Only by carefully comparing and checking can you be sure that the information on the invoice tallies with the products received. This comparison may require that items be weighed and/or counted.
Whenever possible, the receiver should check the invoice against the purchase order or purchase request slips. This will ensure that the quantity and price of the goods shipped match those listed on the order form. If the invoice is not checked against the purchase order when the goods arrive, there is the potential that you will be missing products you need or receive products that were not ordered or are in incorrect quantities.
In addition, the quality of the goods should be determined before they are accepted. For example, boxes of fresh produce and frozen foods should be opened and inspected to ensure quality.
When you are satisfied that the delivery is in order, sign the invoice. In most cases, the invoice is in duplicate or triplicate: you keep the original and the delivery driver retains the other copy or copies. Once you have signed, you have relieved the delivery company of its responsibilities and the supplies now belong to your company. You may, therefore, become responsible for any discrepancies between what is on the invoice and what has been delivered. It is good practice to bring any discrepancies or errors to the attention of the driver and have him or her acknowledge the mistake by signing the invoice. If a credit note is issued, that should also be marked on the invoice by the driver.
Note: Do not sign the invoice until you are sure that all discrepancies have been taken care of and recorded on the invoice. Take the signed invoice and give it to whoever is responsible for collecting invoices for the company.
The receiving of deliveries can be time consuming for both the food establishment and the delivery service. Often the delivery people (particularly if they are not the supplier) will not want to wait while these checks are done. In this case, it is important that your company has an understanding with the supplier that faults discovered after the delivery service has left are the supplier’s problems, not yours.
Once the invoices have been signed, put the delivered products in the proper locations. If you are required to track incoming inventory, do so at the same time.
Outgoing Inventory
When a supply leaves the storeroom or cooler, a record must be kept to track where it has gone. In most small operations, the supplies go directly to the kitchen where they are used to produce the menu items. In an ideal world, accurate records of incoming and outgoing supplies are kept, so knowing what is on hand is a simple matter of subtraction. Unfortunately, systems aren’t always that simple.
In a smaller operation, knowing what has arrived and what gets used every day can easily be reconciled by doing a regular count of inventory. In larger operations and hotels, the storage rooms and coolers may be on a different floor than the kitchen, and therefore a system is needed that requires each department and the kitchens to requisition food from the storeroom or purchasing department, much like a small restaurant would do directly from the supplier. In this model, the hotel would purchase all of the food and keep it in a central storage area, and individual departments would then “order” their food from the storerooms.
Requisitions
To control inventory and to determine daily menu costs in a larger operation, it is necessary to set up a requisition procedure where anything transferred from storage to the kitchen is done by a request in writing. The requisition form should include the name and quantity of the items needed by the kitchen. These forms often have space for the storeroom clerk or whoever handles the storeroom inventory to enter the unit price and total cost of each requested item (Figure 1).
In an efficiently run operation, separate requisition forms should be used by serving personnel to replace table supplies such as sugar, salt, and pepper. However, often personnel resist using requisition forms because they find it much easier and quicker to simply enter the storage room and grab what is needed, but this practice leaves no record and makes accurate record keeping impossible. To reduce the possibility of this occurring, the storage area should be secure with only a few people having the right to enter the rooms, storage freezers, or storage refrigerators.
Figure 1: Sample Requisition Form
Date: _____________
Department: Food Service
| Quantity | Description | Unit Cost | Total Cost |
|---|---|---|---|
| 6 #10 cans | Kernel corn | | |
| 25 kg | Sugar | | |
| 20 kg | Ground beef | | |
| 6 each | Pork loins | | |
Charge to:
Catering Dept.
C. Andrews
Chef
Not only does the requisition keep tabs on inventory, it also can be used to determine the dollar value of foods requested by each department and so be used to determine expenses. In a larger operation where purchases may be made from different suppliers at different prices, it may be necessary to tag all staples with their costs and date of arrival. Expensive items such as meats are often tagged with a form that contains information about weight, cost per unit (piece, pound or kilogram), date of purchase, and name of supplier.
Pricing all items is time consuming, but that time will soon be recovered when requisition forms are being filled out or when the stock has to be given a monetary value. In addition, having prices on goods may help to remind staff that waste is costly.
Inventory Record Keeping
There are two basic record keeping methods to track inventory. The first is taking perpetual inventory . A perpetual inventory is simply a running balance of what is on hand. Perpetual inventory is best done by keeping records for each product that is in storage, as shown in Figure 2.
When more of the product is received, the number of cans or items is recorded and added to the inventory on hand; when some of the product is requisitioned, the number going out is recorded and the balance is reduced. In addition, the perpetual inventory form can indicate when the product should be reordered (the reorder point) and how much of the product should ideally be on hand at a given time ( par stock ).
In small operations, a perpetual inventory is usually only kept for expensive items as the time (and cost) of keeping up the records can be substantial.
The second inventory record keeping system is taking a physical inventory . A physical inventory requires that all items in storage be counted periodically. To be an effective control, physical inventory should be taken at least monthly. The inventory records are kept in a spreadsheet or in another system reserved for that purpose.
The inventory sheet (Figure 3) can list the items alphabetically or in the order they will appear on the shelves in the storage areas.
Figure 3: Physical Inventory Form
| Product | Unit | Count | Unit Price | Total Value |
|---|---|---|---|---|
| Lima beans | 6 #10 | 4 1/3 | $23.00 | $99.67 |
| Green beans | 6 #10 | 3 5/6 | 28.95 | 110.98 |
| Flour | 25 kg bag | 3 | 14.85 | 44.55 |
| Rice | 50 kg bag | 1 | 32.50 | 32.50 |
| Total | | | | $287.70 |
In addition to the quantity of items, the inventory usually has room for the unit cost and total value of each item in storage. The total values of the items are added together to give the total dollar value of the inventory. This is also known as extending the inventory. The total value of the inventory is known as the closing inventory for the day the inventory was taken. This amount will also be used as the opening inventory to compare with the next physical inventory. If the inventory is taken on the same day of each month, the figures can be used to accurately determine the monthly food cost.
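Extending an inventory is simply count times unit price, summed over every line. A spreadsheet-style sketch using three of the rows from Figure 3 (the code itself is an illustration, not part of the original form):

```python
# Each line is extended (count x unit price) and the lines are summed
# to give the closing inventory value.
rows = [
    ("Lima beans", 4 + 1/3, 23.00),   # 4 1/3 cases of 6 #10 cans
    ("Flour",      3,       14.85),   # 25 kg bags
    ("Rice",       1,       32.50),   # 50 kg bags
]

total_value = 0.0
for product, count, unit_price in rows:
    line_value = count * unit_price
    total_value += line_value
    print(f"{product:12s} {line_value:8.2f}")
print(f"{'Total':12s} {total_value:8.2f}")   # 176.72 for these three lines
```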
The physical inventory is used to verify the accuracy of the perpetual inventory. For example, if 15 whole beef tenderloins are counted during a physical inventory, but the perpetual inventory suggests that there should be 20 tenderloins on hand, then a control problem exists and you need to find the reason for the variance.
Computerized Inventory Control
Most people today use computerized systems to calculate, track, and extend inventory. These systems enable the restaurant to have a much tighter and more accurate control over the inventory on hand and the costs of that inventory. Having access to information such as ordering history and the best price paid is just one of the benefits of these systems. They can also help the purchaser predict demand levels throughout the year. These programs in many cases are also integrated with the point-of-sale (POS) system used to track sales, and can even remove an item from a computerized inventory list when the waiter registers the sale of any menu item on the restaurant terminal. That is, if a customer orders one chicken dish from the menu, all the items required to make one portion of the chicken are deducted from inventory. This provides management with a constant, up-to-date perpetual inventory of most inventory items.
Smaller operations will use a spreadsheet application to manage inventory, so you should also be familiar with a program like Microsoft Excel if you are responsible for ordering and inventory. The information required for the program to do the calculations properly is available from the invoices received with your supplies. That is, the quantities and prices of the goods you most recently received should be entered into the computer program either by you or by the restaurant’s purchaser. These prices and quantities are automatically used to calculate the cost of the goods on hand. This automated process can save you an enormous amount of time and, if the information entered into the computer is accurate, may also save you money. In any inventory system, there is always a possibility for error, but with computerized assistance, this risk is minimized.
Pricing and Costing for Physical Inventory
The cost of items purchased can vary widely between orders. For example, cans of pineapple might cost $2.25 one week, $2.15 the second week, and $2.60 another week. The daily inventory reports will reflect the changes in price, but unless the individual cans have been marked, it is difficult to decide what to use as a cost on the physical inventory form.
There are several different ways to view the cost of the stock on the shelves if the actual cost of each item is difficult to determine. Most commonly, the last price paid for the product is used to determine the value of the stock on hand. For example, if canned pineapple last cost $2.60 a can and there are 25 cans on hand, the total value of the pineapple is assumed to be $65 (25 x $2.60) even though not all of the cans may have been bought at $2.60 per can.
Another method for costing assumes the stock has rotated properly and is known as the FIFO (first-in first-out) system. Then, if records have been kept up-to-date, it is possible to more accurately determine the value of the stock on hand.
Here is an example showing how the FIFO system works.
Example 10
The daily inventory shows the following:
| Date | Number and value of cans |
|---|---|
| Opening inventory, 1st of month | 15 cans @ $2.15 = $32.25 |
| Received on 8th of month | 24 cans @ $2.25 = $54.00 |
| Received on 15th of month | 24 cans @ $2.15 = $51.60 |
| Received on 23rd of month | 12 cans @ $2.60 = $31.20 |
If the stock has rotated according to FIFO, you should have used all of the opening inventory, all of the product received on the 8th, and some of the product received on the 15th. The 25 remaining cans must consist of the 12 cans received on the 23rd and 13 of the cans received on the 15th. The value of these cans is then
12 cans @ $2.60 = $31.20
13 cans @ $2.15 = $27.95
Total = $59.15
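The FIFO valuation can be checked with a short routine that works backward from the most recent receipts until the on-hand count is covered. A sketch that assumes stock has rotated strictly first-in first-out:

```python
def fifo_value(receipts, cans_on_hand):
    """Value the cans on hand using the most recent receipts first."""
    value = 0.0
    remaining = cans_on_hand
    for qty, price in reversed(receipts):   # newest receipts first
        take = min(qty, remaining)
        value += take * price
        remaining -= take
        if remaining == 0:
            break
    return value

# (quantity, price per can), oldest first, from Example 10
receipts = [(15, 2.15), (24, 2.25), (24, 2.15), (12, 2.60)]
print(round(fifo_value(receipts, 25), 2))   # 12 @ $2.60 + 13 @ $2.15 = 59.15
```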
As you can see, the choice of costing method can have a marked effect on the value of stock on hand. It is always advisable to use the method that best reflects the actual cost of the products. Once a method is adopted, the same method must be used consistently or the statistical data generated will be invalid.
Costing Prepared or Processed Items
When you are building your inventory forms, be sure to calculate the costs of any processed items. For instance, sauces and stocks that you make from raw ingredients need to be costed accurately and recorded on the spreadsheet along with purchased products so that when you are counting your inventory you are able to reflect the value of all supplies on the premises that have not been sold.
(We will discuss more about calculating the costs of products and menu items later in this book.)
Inventory Turnover
When accurate inventory records are kept, it is possible to use the data in the records to determine the inventory turnover rate. The inventory turnover rate shows the number of times in a given period (usually a month) that the inventory is turned into revenue. An inventory turnover of 1.5 means that the inventory turns over about 1.5 times a month, or 18 times a year. In this case, you would have about three weeks of supplies in inventory at any given time (actually 2.89 weeks, which is 52 weeks ÷ 18). Generally, an inventory turnover every one to two weeks (or two to three times per month) is considered normal.
A common method used to determine inventory turnover is to find the average food inventory for a month and divide it into the total food cost for the same month. The total food cost is calculated by adding the daily food purchases (found on the daily receiving reports) to the value of the food inventory at the beginning of the month and subtracting the value of the food inventory at the end of the month.
That is,
average food inventory = (beginning inventory + ending inventory) ÷ 2
cost of food = beginning inventory + purchases − ending inventory
inventory turnover = (cost of food) ÷ (average food inventory)
Example 11
A restaurant has a beginning inventory of $8000 and an ending inventory of $8500. The daily receiving reports show that purchases for the month totalled $12,000. Determine the cost of food and the inventory turnover.
Cost of food = $8000 + $12 000 − $8500 = $11 500
Average food inventory = ($8000 + $8500) ÷ 2 = $8250
Inventory turnover = $11 500 ÷ $8250 = 1.4
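The three formulas translate directly into code. A minimal sketch using Example 11's figures (the function name is an assumption made for illustration):

```python
def inventory_turnover(beginning, ending, purchases):
    cost_of_food = beginning + purchases - ending        # 11,500
    average_inventory = (beginning + ending) / 2         # 8,250
    return cost_of_food / average_inventory

print(round(inventory_turnover(8000, 8500, 12000), 1))   # 1.4
```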
The turnover rate in the example would be considered low and would suggest that the business has invested too much money in inventory. Having a lot of inventory on hand can lead to spoilage, high capital costs, increased storage space requirements, and other costs.
Inventory turnover rates are not exact, for a few reasons. One is that in many food operations, accurate inventory records are usually kept only for more expensive items. Another is that the simple food cost used in the calculation does not truly reflect the actual food cost. (Food costs are discussed in another chapter in this book.) In addition, not all inventory turns over at the same rate. For example, perishables turn over as quickly as they arrive while canned goods turn over more slowly.
Even though turnover rates are not exact, they do give managers at least a rough idea of how much inventory they are keeping on hand.
Image Descriptions
Figure 2 Image description: A sample perpetual inventory form for canned peaches.
| Date | In | Out | Balance |
|---|---|---|---|
| June 16 | None | 3 | 12 |
| June 17 | None | 3 | 9 |
| June 18 | 6 | None | 15 |
| June 19 | None | 2 | 13 |
2.2: Purchasing
The purchasing process is an essential part of every food service operation. All competent cooks should be skilled in buying the appropriate ingredients, in accurate amounts, at the right time, and at the best price.
Every kitchen operation has different purchasing procedures. But there is one rule that should always be followed:
Buy only as much as it is anticipated will be needed until the next delivery.
This will ensure that foods stay fresh and will create a high inventory turnover. All foods deteriorate in time, some more quickly than others. It is the job of the purchaser to ensure that only those quantities that will be used immediately or in the near future are purchased.
Market Sourcing
Sources of supply vary considerably from location to location. Large cities have a greater number and variety of suppliers than do small towns and isolated communities. Purchasers should establish contact with available suppliers such as wholesalers, local producers and packers, retailers, cooperative associations, and food importers. In most instances, the person in charge of buying will contact several suppliers to obtain the necessary foods. Some wholesalers diversify their product lines in order to meet all food-related kitchen needs.
Food products are obtained from various sources of supply. For example, a packing house supplies meat and meat products, while a food wholesaler supplies dry goods. Once business is established with a supplier, all transactions should be well documented and kept readily available on file.
There are two major food categories: perishables and non-perishables.
Perishables
Perishable items include fruits, vegetables, fresh fish and shellfish, fresh meats, poultry, and dairy products. As a rule, perishables are bought frequently to ensure freshness. Frozen foods, such as vegetables, fish and meat products, have a longer lifespan and can be ordered less frequently and stored in a freezer.
Non-perishables
Non-perishable items include dry goods, flour, cereals, and miscellaneous items such as olives, pickles, and other condiments. These can be ordered on a weekly or monthly basis.
Keep in mind that just because something does not go bad does not mean you should buy it in larger quantities than you need. Every item in your inventory is equal to a dollar amount that you could be saving or spending on something else. Consider that a case of 1000 sheets of parchment paper may cost $250. If you have a case and a half sitting in your inventory, but only use a few sheets a day, that is a lot of money sitting in your storeroom.
Factors That Impact Prices
Food products in particular fluctuate in price over the year, due to many factors:
- Seasonality: When food is in season, there is more of it available in the local food supply, bringing prices down. Additionally, foods in season are usually of higher quality and have longer shelf life than those that are out of season and need to be transported long distances to market.
- Weather: Severe weather can have a huge impact on the cost of food. Drought, flooding, and unseasonable frost have all affected major produce-supplying areas of the world in recent years, causing a rise in prices for many items.
- Costs of transportation: If the cost of fuel or transportation rises, so does the cost of food that needs to travel to market.
- Commodity prices: A number of foods are traded on the commodity market, such as meats and grains. These prices fluctuate as buyers who trade in these products in large volumes buy and sell, much like the stock market.
Before purchasing any food items, ask the following questions.
- When is the item to be used?
- Which supplier has the best price and the best quality? Where an item is purchased should be determined by the price and the quality of the available supplies. When ordering supplies, it is advisable to get prices from at least three sources, then purchase from the supplier who quotes the best price for comparable quality.
- When will the item be delivered? Depending on the distance of the food service establishment from the supplier, delivery may take hours or days. Remember, it is extremely difficult to maintain food quality and consistency if you do not know when your order will be delivered. For this reason, menu planning and a running inventory are two of the most important aspects of purchasing procedures.
Specifications
Meat, seafood, poultry, processed fruits and vegetables, and fresh fruits and vegetables can be ordered under different specifications. For example,
- Meats can be ordered by grade, cut, weight/thickness, fat limitation, age, whether fresh or frozen, and type of packaging.
- Seafood can be ordered by type (e.g., fin fish/shellfish), species, market form, condition, grade, place of origin, whether fresh or frozen, count, size, and packaging.
- Poultry can be ordered by type, grade, class (e.g., broiler, fryer), style (e.g., breasts, wings), size, whether fresh or frozen, and packaging.
- Processed fruits and vegetables can be ordered by grade (sometimes), variety, packaging size and type, drained weight, count per case, packing medium, and whether canned or frozen.
- Fresh fruits and vegetables can be ordered by grade (sometimes), variety, size, weight per container, growing area, and count per container.
Figure 4 shows an example of a purchasing specification sheet that might be kept in a commercial kitchen or receiving area.
Figure 4: Purchasing Specifications
| Beef | Grade | Weight, Size, and Cut Specifications |
|---|---|---|
| Prime rib | Grade AA | 7 kg, fully trimmed |
| New York strip | Grade AAA | 6 kg, bone out, fully trimmed, max. 15 cm width, min. 5 cm depth |
| Tenderloin | Grade AAA | 3 kg, fully trimmed to silverside |
| Roast sirloin | Grade A | 7 kg, boneless butt |
| Short loins | Grade AAA | 6 kg, fully trimmed, 5 cm from eye |
| Pork | Grade | Weight, Size, and Cut Specifications |
|---|---|---|
| Pork leg | Fresh—Canada #1 | 6 kg, oven ready, lean |
| Pork loin | Fresh—Canada #1 | 5-6 kg, trimmed, lean |
| Ham | | 6-8 kg, fully cooked, lean, bone in |
| Poultry | Grade | Weight, Size, and Cut Specifications |
|---|---|---|
| Chicken—Frying | Fancy, Eviscerated | 1.5 kg, always fresh |
| Turkey | Fancy, Eviscerated | 9-13 kg |
| Lamb | Grade | Weight, Size, and Cut Specifications |
|---|---|---|
| Legs | Fresh—Canada #1 | 3-5 kg, bone in |
| Lamb loin | | 2-3 kg, trimmed with all fat removed |
| Seafood | Grade | Weight, Size, and Cut Specifications |
|---|---|---|
| Shrimp | Jumbo | 24-30/kg, fresh |
| Oysters | Canada #1 | 35/L |
Contract Buying
Some restaurants and hotels, particularly those belonging to chains, will have contracts in place for the purchasing of all products or for certain items. This may mean that the property can only purchase from a specific supplier, but in return it will have negotiated set pricing for the duration of the contract. This has advantages and disadvantages. On the positive side, the contract price remains stable and the job of managing food costs becomes more consistent since there are no price fluctuations. On the negative side, contract buying takes away the opportunity to compare prices between suppliers and take advantage of specials that may be offered.
Additional Resources
- The CFIA (Canadian Food Inspection Agency) has specifications for food labelling, packaging, and so forth.
-
These books are great resources for purchase specifications:
- The Visual Food Encyclopedia
- The Visual Food Lover’s Guide: includes essential information on how to buy, prepare, and store over 1000 types of food
- Chef’s Book of Formulas, Yields and Sizes
Purchasing Procedures
In most kitchens, purchasing and ordering are done by the chef and sous-chefs, although in larger hotels there may be purchasing departments assigned this responsibility. Most kitchens will have a list of suppliers, contacts, delivery dates and schedules, and order sheets with par stock levels to make purchasing easier. For a special function or event, such as a banquet, it may also be necessary to determine the required supplies for that function alone.
Portion Control Chart
To calculate the quantities of food items to be ordered for any size banquet, a portion control chart must be consulted first. Most establishments will have a portion control chart similar to the one shown in Figure 5. The chart indicates the portions to be used per person for any given menu item.
Figure 5: Portion control chart
| Food Item | Menu Item | Portion Size |
|---|---|---|
| Shrimp | Shrimp cocktail | 80 g (2.82 oz.) |
| Lemon | Shrimp cocktail | 1 wedge (6/lemon) |
| Cocktail sauce | Shrimp cocktail | 60 mL (2.11 oz.) |
| Head lettuce | Tossed salad | 1/4 head |
| Tomato | Tossed salad | 1/2 each |
| Dressing | Tossed salad | 60 mL (2.11 oz.) |
| Prime rib, raw, trimmed ready | Prime rib | 500 g (17.6 oz.) |
| Potato | Baked potato | 1 each (100 count) |
| Green beans | Green beans | 80 g (2.82 oz.) |
| Carrots | Carrots | 80 g (2.82 oz.) |
| Strawberries | Fresh strawberries | 100 g (3.52 oz.) |
| Whipping cream | Berries and cream | 60 mL (2.11 oz.) |
| Coffee | Coffee | 500 g (17.6 oz.) for 75 people |
| Coffee cream | Coffee | 60 mL (2.11 oz.) |
One use for a portion control chart is to estimate the quantity of major ingredients and supplies needed to produce a predicted number of menu servings.
You need to prepare shrimp cocktails and prime rib for a 100-person banquet. Using the portion control chart in Figure 5, you can quickly determine the amounts of the major ingredients needed (Figure 6).
Figure 6: Calculating purchase amounts
| Required Servings | Amount to Order |
|---|---|
| 100 x 80 g shrimp | 8000 g or 8 kg (17.6 lbs.) shrimp |
| 100 x 1 wedge of lemon | 100 wedges = 17 lemons (6 wedges per lemon) |
| 100 x 1/4 head of lettuce | 25 heads lettuce |
| 100 x 500 g prime rib raw oven ready | 50 kg (110 lbs.) prime rib |
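The arithmetic behind Figure 6 is portion size times the number of covers, rounded up where items come in whole units. A minimal sketch of that calculation (illustrative only):

```python
import math

servings = 100   # banquet covers

print(servings * 80 / 1000, "kg shrimp")                      # 8.0 kg
print(math.ceil(servings / 6), "lemons (6 wedges each)")      # 17 lemons
print(math.ceil(servings * 0.25), "heads of lettuce")         # 25 heads
print(servings * 500 / 1000, "kg prime rib, raw oven ready")  # 50.0 kg
```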
Purchase Order Chart with Par Levels
The primary purpose for using a purchasing standard is to ensure that sufficient quantities of all food are on hand to meet daily requirements. To establish and maintain these standards, food inventory must become a daily routine. Having set par levels (the amount you should have on hand to get through to the next order) will help in this regard.
There are three main things you need to know:
- Amount required (par level)
- Amount on hand
- Amount to order
To find the amount to order, subtract the amount on hand from the amount required (Figure 7). In some cases, you may have to order a minimum amount based on the package size, so will need to round your quantity up (such as the whole tub of garlic and full cases of mushrooms, apples, and lettuce in Figure 7).
Figure 7: Purchase order chart
| Meats | Amount Required (Par Level) | Amount on Hand | Amount to Order | Actual Order |
|---|---|---|---|---|
| | 10 kg | 2 kg | 8 kg | 8 kg |
| | 20 kg | 5 kg | 15 kg | 15 kg |
| | 10 kg | – | 10 kg | 10 kg |
| | 5 kg | 500 g | 4.5 kg | 4.5 kg |
| | 10 kg | 3 kg | 7 kg | 7 kg |
| Fish | Amount Required (Par Level) | Amount on Hand | Amount to Order | Actual Order |
|---|---|---|---|---|
| | 25 kg | 5 kg | 20 kg | 20 kg |
| Vegetables | Amount Required (Par Level) | Amount on Hand | Amount to Order | Actual Order |
|---|---|---|---|---|
| | 2 kg tub | 250 g | 1.750 kg | 2 kg tub |
| | 5 kg case | 500 g | 4.5 kg | 5 kg case |
| | 2 cases (24/case) | 12 (1/2 case) | 1 1/2 cases | 2 cases |
| Fruits | Amount Required (Par Level) | Amount on Hand | Amount to Order | Actual Order |
|---|---|---|---|---|
| | 2 cases | 1/2 case | 1 1/2 cases | 2 cases |
| | 10 kg | – | 10 kg | |
| | 1 case | 2 cases | – | – |
Integrating these par levels into your regular ordering sheets or your ordering system will make it very easy to manage inventory coming in.
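Par-level ordering is a subtraction followed by a round-up to whatever pack size the item is sold in. A minimal sketch; how pack sizes are encoded on an ordering sheet is an assumption made for illustration:

```python
import math

def amount_to_order(par, on_hand, pack_size=None):
    """Order the shortfall, rounded up to whole packs when a pack size applies."""
    needed = max(par - on_hand, 0)
    if pack_size:
        return math.ceil(needed / pack_size) * pack_size
    return needed

print(amount_to_order(10, 2))                  # 8 (kg), ordered as is
print(amount_to_order(2, 0.25, pack_size=2))   # 2 (the whole 2 kg tub of garlic)
print(amount_to_order(48, 12, pack_size=24))   # 48 (two full 24-count cases)
```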
More and more suppliers are moving to online ordering systems, which have current prices, case sizes, and often your purchase history available to you when placing an order. Online ordering can often be more convenient as the person placing the order does not have to make calls into an order desk during regular office hours.
3: Food Costing

Learning Objectives

- Describe food cost controls
- Perform yield and cost calculations
- Cost and price menu items
- Describe overall food costs
- Describe the principles of menu engineering

3.1: Controlling Food Costs
3.2: Yield Testing
3.3: Cooking Loss Test
3.4: Monthly Food Costs
3.5: The Principles of Menu Engineering
3.1: Controlling Food Costs
Food service establishments are businesses. In order to stay in business, everyone involved with the enterprise should have at least a basic idea of how costs are determined and how such costs have an impact on an operating budget.
Food costs are controlled by five standards to which all employees and managers must adhere:
- Standard purchase specifications
- Standard recipes
- Standard yields
- Standard portion sizes
- Standard portion costs
To calculate the cost of each item, you need to understand the relationship between standardized recipes, standard portions, and yield tests. All of these play a role in calculating the cost of each item on your menu.
After goods are ordered, there should be no surprises when the goods arrive. The more specific the order, the less the chance of receiving supplies that are too high in price, too poor in quality, or too many in number.
Specifications can include brand names, grades of meat, product size, type of packaging, container size, fat content, count per kilogram, special trimming, and so on. The specifications should be specific, realistic, and easy to verify.
Precise specifications can:
- Reduce purchasing costs as higher quality products need not be accepted
- Ensure constant quality in menu items
- Allow for accurate competitive bidding among suppliers and so reduce costs
Specifications usually do not include general delivery procedures or purchase price, because delivery arrangements and prices can change quickly. Specifications themselves should be well thought out and are usually not subject to quick change.
Standardized Recipes
A standardized recipe is one that holds no surprises. A standardized recipe will produce a product that is close to identical in taste and yield every time it is made, no matter who follows the directions. A standardized recipe usually includes:
- A list of all ingredients including spices and herbs
- Exact quantities of each ingredient (with the exception of spices that may be added to taste)
- Specific directions for the order of operations and types of operations (e.g., blend, fold, mix, sauté)
- The size and number of portions the recipe will produce
Standard Yields
The yield of a recipe is the number of portions it will produce. Standard yields for high-cost ingredients such as meat are determined by calculating the cost per cooked portion. For example, a 5 kg roast might be purchased for $17 a kilogram. The cooked roast is to be served in 250 g portions as part of a roast beef dinner. After trimming and cooking, the roast will weigh significantly less than 5 kg. By running a yield test, you can determine the cost per portion, the cost per unit of usable weight, the standard yield, and the yield percentage.
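As a rough illustration of the arithmetic a yield test produces, the sketch below works through the roast example. The 60 percent cooked yield is an assumed figure for illustration only; an actual yield test would measure it.

```python
# Sketch of yield-test arithmetic for a 5 kg roast bought at $17/kg.
# The 60% cooked yield is an assumed value; a real yield test would measure it.
purchased_weight_kg = 5.0
price_per_kg = 17.00
assumed_yield = 0.60           # hypothetical: 40% lost to trimming and cooking
portion_size_kg = 0.250

total_cost = purchased_weight_kg * price_per_kg          # $85.00
cooked_weight_kg = purchased_weight_kg * assumed_yield   # 3.0 kg of servable meat
portions = int(cooked_weight_kg / portion_size_kg)       # 12 portions of 250 g
cost_per_portion = total_cost / portions                 # about $7.08

print(f"Cost per cooked 250 g portion: ${cost_per_portion:.2f}")
```

Notice that the cost per cooked portion (about $7.08 here) is well above the raw cost of 250 g of meat ($4.25), which is exactly why standard yields matter when costing a menu.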
Standard Portions
A standard recipe includes the size of the portions that will make up a serving of the recipe. Controlling portion size has two advantages in food management: portion costs for the item will be consistent until ingredient or labour costs increase, and customers will receive consistent quantities each time they order a given plate or drink.
Standard portions mean that every plate of a given dish that leaves the kitchen will be almost identical in weight, count, or volume. Only by controlling portions is it possible to control food costs. If one order of bacon and eggs goes out with six rashers of bacon and another goes out with three rashers, it is impossible to determine the actual cost of the menu item.
Adhering to the principles of standard portions is crucial to keeping food costs in line. Without portion control, there is no consistency. This not only could have drastic effects on your food costs (having no real constant costs to budget for) but also on your customers. Customers appreciate consistency. They expect that the food you prepare will taste good, be presented properly, and be the same portion size every time they order it. Consider how the customer would feel if the portion size fluctuated with the cook’s mood. A cook’s bad mood might mean a smaller portion or, if the cook was in a good mood because the work week was over, the portion might be very large. It may be hard to grasp the importance of consistency with one single portion, but consider if fast-food outlets did not have portion control. Their costs as well as their ordering and inventory systems would be incredibly inaccurate, all of which would impact negatively on their profit margin.
Strict portion control has several side benefits beyond keeping costs under control. First, customers are more satisfied when they can see that the portion they have is very similar to the portions of the same dish they can see around them. Second, servers are quite happy because they know that if they pick up a dish from the kitchen, it will contain the same portions as another server’s plate of the same order.
Simple methods of portion control include weighing meat before it is served, using the same size of juice glass whenever juice is served, counting items such as shrimp, and portioning with scoops and ladles that hold a known volume. Another method is using convenience products. These products are usually received frozen and are ready to cook. Portions are consistent in size and presentation and are easily costed on a per-unit basis, which can be helpful when determining standard portion costs.
Note: Using convenience products is usually more costly than preparing the item in-house. However, some chefs and managers feel that using premade convenience products is easier than hiring and training qualified staff. Keep in mind that if the quality of the convenience item is not comparable to an in-house product, the reputation of the restaurant may suffer.

Standard portions are assured if the food operation provides, and requires staff to use, tools such as scales, measured ladles, and standard-size scoops. Many operations use a management portion control record for menu items, similar to the one shown in Figure 8. The control record is posted in the kitchen so cooks and those who plate the dishes know what constitutes standard portions. Some operations also post photographs of each item in the kitchen area to remind workers what the final product should look like.
Figure 8: Portion control record
| Item | Purchased Size | Yield % | Cooked Yield | Portion Size | No. of Portions |
|---|---|---|---|---|---|
| Baked ham | 6-7 kg | 50% | 3.0-3.5 kg | | |
| Lunch | | | | 50 g | 60-70 |
| Dinner | | | | 85 g | 35-41 |
| Prime rib | 9-12 kg | 40% | 3.6-4.8 kg | 150 g | 24-32 |
| Fillet of sole | 500 g | 100% | 500.0 g | | |
| Lunch | | | | 50 g | 10 |
| Dinner | | | | 85 g | 6 |
| Potatoes: | 50 kg | | | | |
| | | 75% | Peeled – 37.5 kg | 100 g | 375 |
| | | 56% | Peeled – 28.0 kg | 100 g | 280 |
| Daily veg | 5 kg | | | | |
| Green beans | | 80% | Trimmed – 4 kg | 50 g | 80 |
| Carrots | | 80% | Peeled – 4 kg | 50 g | 80 |
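The numbers in Figure 8 follow from two steps: cooked yield = purchased size × yield %, and number of portions = cooked yield ÷ portion size. Below is a short sketch of that arithmetic, using the prime rib row as the worked example.

```python
# Sketch of the arithmetic behind Figure 8:
#   cooked yield (g) = purchased size (kg) * yield % * 1000
#   number of portions = cooked yield / portion size, rounded down
def portions_from_purchase(purchased_kg: float, yield_pct: float, portion_g: float) -> int:
    cooked_g = purchased_kg * yield_pct * 1000
    return int(cooked_g // portion_g)

# Prime rib row: 9-12 kg purchased, 40% yield, 150 g dinner portions -> 24-32 portions
low = portions_from_purchase(9.0, 0.40, 150)    # 24
high = portions_from_purchase(12.0, 0.40, 150)  # 32
print(f"Prime rib: {low}-{high} portions of 150 g")
```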
Standard Portion Costs
A standard recipe served in standard portions has a standard portion cost. A standard portion cost is simply the cost of the ingredients (and sometimes labour) found in a standard recipe divided by the number of portions produced by the recipe. Standard portion costs change when food costs change, which means that standard portion costs should be computed and verified regularly, particularly in times of high inflation. If market conditions are fairly constant, computing standard portion costs need not be done more than every few months.
Details about recipe costs are not usually found on a standard recipe document but on a special recipe detail and cost sheet or database that lists the cost per unit (kilogram, pound, millilitre, ounce, etc.) and the cost per amount of each ingredient used in the recipe or formula.
The standard portion cost can be quickly computed if portions and recipes are standardized: simply determine the cost of each ingredient used in the recipe, plus the cost of any ingredients used for accompaniment or garnish.
The ingredients in a standard recipe are often put on a recipe detail sheet (Figure 9). The recipe detail sheet differs from the standard recipe in that room is provided for putting the cost of each ingredient next to the ingredient. Recipe detail sheets often have the cost per portion included as part of their information, and need to be updated if ingredient costs change substantially. They can also be built in a POS system database or spreadsheet program that is linked to your inventory to allow for the updating of recipe costs as ingredient costs change.
Figure 9: Menu item – Seafood Newburg
Yield: 10 portions
Portion size: 125 g of seafood
Selling price: $12.99
Cost/portion: $4.07
Food cost %: 31.3%
| Ingredients | Quantity | Units | Cost/Unit | Extension |
|---|---|---|---|---|
| Lobster meat | 500 g | kg | $38.00 | $19.00 |
| Scallops | 250 g | kg | $25.00 | $6.25 |
| Shrimps | 250 g | kg | $14.00 | $3.50 |
| Sole | 250 g | kg | $8.50 | $2.13 |
| Cream, heavy | 250 mL | L | $4.00 | $1.00 |
| Fish Velouté | 750 mL | L | | $1.00 |
| Butter | 250 g | 500 g | $2.85 | $1.43 |
| Pepper and salt | ||||
| Paprika | 5 g | | | $0.15 |
| Sherry | 250 mL | 750 mL | $12.00 | $4.00 |
| Egg yolks | 6 | 12 | $2.00 | $1.00 |
| Patty shells | 10 | each | $0.12 | $1.20 |
| Total | $40.66 |
Procedure: Quarter the scallops, dice the lobster meat, halve the shrimps, and chop the sole before sautéing well in melted butter. Add sherry and simmer for a few minutes. Add the fish velouté sauce and paprika and continue to simmer. Combine the egg yolks and the heavy cream before adding them slowly to the simmering pan. Season to taste with salt and white pepper. Serve in patty shells.
Note that the portion cost and selling price used in Figure 9 are for the Seafood Newburg alone (a true à la carte price) and not the cost of all accompaniments found on the plate when the dish is served.
For example, the cost of bread and butter, vegetables, and even garnishes such as a wedge of lemon and a sprig of parsley must be added to the total cost to determine the appropriate selling price for the Seafood Newburg.
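The arithmetic behind a recipe detail sheet like Figure 9 is easy to reproduce in a spreadsheet or a few lines of code: each extension is quantity used ÷ purchase unit size × cost per unit, the extensions are summed, and the total is divided by the number of portions. The sketch below recreates the Seafood Newburg costing; the data structure and names are illustrative choices, not part of any particular POS system, and the velouté and paprika are entered as flat extensions just as they appear on the sheet.

```python
# Sketch: recipe detail sheet arithmetic for the Seafood Newburg in Figure 9.
# Each entry is (quantity used, purchase unit size, cost per purchase unit),
# all in matching units (kg, L, or count).
ingredients = {
    "lobster meat": (0.500, 1.0, 38.00),    # kg used, priced per kg
    "scallops":     (0.250, 1.0, 25.00),
    "shrimps":      (0.250, 1.0, 14.00),
    "sole":         (0.250, 1.0, 8.50),
    "heavy cream":  (0.250, 1.0, 4.00),     # L used, priced per L
    "butter":       (0.250, 0.500, 2.85),   # kg used, priced per 500 g block
    "sherry":       (0.250, 0.750, 12.00),  # L used, priced per 750 mL bottle
    "egg yolks":    (6, 12, 2.00),          # yolks used, priced per dozen
    "patty shells": (10, 1, 0.12),          # priced each
}
flat_extensions = {"fish velouté": 1.00, "paprika": 0.15}  # entered directly, as on the sheet

total = sum(qty / unit * cost for qty, unit, cost in ingredients.values())
total += sum(flat_extensions.values())       # about $40.65; the sheet's $40.66 comes
                                             # from rounding each line to the cent
portions = 10
selling_price = 12.99
cost_per_portion = total / portions                       # about $4.06-4.07
food_cost_pct = cost_per_portion / selling_price * 100    # about 31.3%

print(f"Total recipe cost: ${total:.2f}")
print(f"Cost per portion:  ${cost_per_portion:.2f}")
print(f"Food cost %:       {food_cost_pct:.1f}%")
```

Linking a sheet like this to inventory prices, as described above, means the total and the food cost percentage update automatically when an ingredient cost changes.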
Costing Individual Items on a Plate
If you need to determine the total cost of a plate that has multiple components, rather than a recipe, you can follow the procedure in the example below.
Example 11
Standard order of bacon and eggs: the plate contains two eggs, four strips of bacon, toast, and hash browns.
The cost of ingredients used for accompaniment and garnish can be determined by using the standard portion cost formula, which is the purchase price of a container (often called a unit) divided by the number of portions in the container. That is,
standard portion cost = unit cost ÷ portions in the unit
An example is a carton of eggs. If eggs cost $2.00 a dozen and a standard portion in a menu breakfast item is two eggs, the standard portion cost can be found.
Recall the equation:
standard portion cost = unit cost ÷ portions in the unit
Now, find the portions in the unit.
portions in the unit = number in unit ÷ number in a portion
= 12 ÷ 2
= 6
That is, there are six 2-egg portions in a dozen eggs.
Substitute the known quantities into the equation.
standard portion cost = unit cost ÷ portions in unit
= $2.00 ÷ 6
= $0.33
You could get nearly the same answer by calculating the value of each egg in the dozen ($2.00 ÷ 12 ≈ $0.17) and then multiplying the cost per egg by the number of eggs needed ($0.17 × 2 = $0.34); the one-cent difference comes from rounding the per-egg cost before multiplying. For this order of bacon and eggs, the standard portion of two eggs is costed at $0.34.
You can find the standard portion cost of the bacon in the same way. If a 500 g package of bacon contains 20 rashers and costs $3.75, the standard portion cost of a portion consisting of four rashers can be found quickly:
portions in the unit = 20 ÷ 4
= 5
standard portion cost = unit cost ÷ portions in unit
= $3.75 ÷ 5
= $0.75
The bacon and eggs on the plate would have a standard portion cost of $1.09. You could determine the cost of hash browns, toast, jam, and whatever else is on the plate in the same manner.
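Both calculations above apply the same standard portion cost formula, so they can be wrapped in one small helper. The sketch below is illustrative only; note that summing the unrounded portion costs gives about $1.08, while the $1.09 quoted above comes from using the rounded $0.34 egg cost.

```python
# Sketch of the standard portion cost formula:
#   portions in the unit = number in unit / number in a portion
#   standard portion cost = unit cost / portions in the unit
def standard_portion_cost(unit_cost: float, number_in_unit: float, number_in_portion: float) -> float:
    portions_in_unit = number_in_unit / number_in_portion
    return unit_cost / portions_in_unit

eggs = standard_portion_cost(2.00, 12, 2)    # about $0.33 per 2-egg portion
bacon = standard_portion_cost(3.75, 20, 4)   # $0.75 per 4-rasher portion

print(f"Eggs:  ${eggs:.2f}")
print(f"Bacon: ${bacon:.2f}")
print(f"Bacon and eggs: ${eggs + bacon:.2f}")  # about $1.08 before rounding
```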
Often, restaurants will serve the same accompaniments with several dishes. In order to make the costing of the entire plate easier, they may assign a “plate cost,” which would include the average cost of the standard starch and vegetable accompaniments. This makes the process of pricing daily specials or menu items that change frequently easier, as you only need to calculate the cost of the main dish and any specific sauces and garnishes, and then add the basic plate cost to the total to determine the total cost of the plate.
Figures 10 and 11 provide an example for calculating the basic plate cost and the cost of daily features.
Figure 10
| Item | Cost |
|---|---|
| Mashed potatoes, one serving | $0.50 |
| Mixed vegetables, one serving | $0.75 |
| Demi-glace, one serving | $0.30 |
| Herb garnish | $0.20 |
| Total basic plate cost | $1.75 |
Figure 11
| Day | Feature | Feature Cost per Portion | Basic Plate Cost | Total Cost |
|---|---|---|---|---|
| Monday | Roast beef | $5.00 | + $1.75 | = $6.75 |
| Tuesday | Pork chop | $3.75 | + $1.75 | = $5.50 |
| Wednesday | Half roast chicken | $4.00 | + $1.75 | = $5.75 |
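A minimal sketch of the plate-cost arithmetic behind Figures 10 and 11: sum the accompaniment costs once to get the basic plate cost, then add it to each day's feature cost. The dictionaries below simply restate the figures above.

```python
# Sketch of the basic plate cost (Figure 10) plus daily feature costs (Figure 11).
basic_plate = {
    "mashed potatoes, one serving": 0.50,
    "mixed vegetables, one serving": 0.75,
    "demi-glace, one serving": 0.30,
    "herb garnish": 0.20,
}
basic_plate_cost = sum(basic_plate.values())   # $1.75

daily_features = {
    "Monday": ("Roast beef", 5.00),
    "Tuesday": ("Pork chop", 3.75),
    "Wednesday": ("Half roast chicken", 4.00),
}

for day, (feature, feature_cost) in daily_features.items():
    total = feature_cost + basic_plate_cost
    print(f"{day}: {feature} plate cost ${total:.2f}")   # $6.75, $5.50, $5.75
```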